OpenTSDB and Grafana

1. Background

   Monitor the real-time status and historical behavior of Redis in order to understand how the Redis service is running.

2. Purpose

With this monitoring in place, you can watch the running status of the Redis service in real time, and understand Redis trends through historical data, so that appropriate action can be taken. That is the purpose of monitoring.
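As a sketch of what feeding Redis metrics into OpenTSDB can look like in practice — the metric names (redis.used_memory, redis.connected_clients), the host tag, and the TSD address are illustrative assumptions, not part of the setup below:

```shell
#!/bin/sh
# Sketch of a minimal Redis collector. Assumes redis-cli is installed and a
# TSD listens on localhost:4242 -- adjust host/port for your environment.

# Build one OpenTSDB "put" line: put <metric> <timestamp> <value> <tag=value ...>
format_put() {
  metric=$1; ts=$2; value=$3; shift 3
  echo "put $metric $ts $value $*"
}

collect() {
  ts=$(date +%s)
  # Pull a couple of fields out of `redis-cli info`
  mem=$(redis-cli info | awk -F: '/^used_memory:/ {print $2}' | tr -d '\r')
  clients=$(redis-cli info | awk -F: '/^connected_clients:/ {print $2}' | tr -d '\r')
  format_put redis.used_memory "$ts" "$mem" host=nameNode
  format_put redis.connected_clients "$ts" "$clients" host=nameNode
}

# Pipe the put lines into the TSD's telnet-style interface, e.g. from cron:
# collect | nc localhost 4242
```

Run periodically (for example from cron), this turns `redis-cli info` fields into OpenTSDB data points.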

3. Monitoring environment construction

(3.1) Modify the hostname

 (3.1.1) In /etc/hosts

 

192.168.165.130 nameNode #Add ip and hostname 

 

 

 (3.1.2) In /etc/sysconfig/network

 

NETWORKING=yes
HOSTNAME=nameNode  # set the hostname, then restart the machine

 

 

(3.2) Install jdk-7u79-linux-x64.rpm

 

rpm -ivh jdk-7u79-linux-x64.rpm 

 Modify the /etc/profile file

 

 

JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH

 
     

 

If the version information appears when you run the command java -version, the Java configuration is complete.

 (3.3) Hadoop pseudo-distributed installation and configuration

 

tar zxvf hadoop-2.5.1-x64.tar.gz
mv hadoop-2.5.1 hadoop
mv hadoop /usr/

 

 

 

(3.3.1) Configure HADOOP_HOME

In /etc/profile

 

JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin

 

 

(3.3.2) In etc/hadoop/core-site.xml

 

<configuration>
    <property>      
	<name>hadoop.tmp.dir</name>      
	<value>/usr/hadoop/tmp</value>      
	<description>A base for other temporary directories.</description>  
    </property>  
    <property>      
        <name>fs.defaultFS</name>      
	<value>hdfs://nameNode:9000</value>  
    </property>
</configuration>

 

 

(3.3.3) in etc/hadoop/hdfs-site.xml

 

<configuration>    
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/hadoop/hdfs/name</value>
    </property>
    <property>        
	<name>dfs.datanode.data.dir</name>        
        <value>/usr/hadoop/hdfs/data</value>    
    </property>    
    <property>        
	<name>dfs.replication</name>        
	<value>1</value>    
    </property>
</configuration>
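The directories referenced above must exist before the NameNode is formatted. A minimal sketch, using the paths configured in hdfs-site.xml and core-site.xml (run as root on the target machine):

```shell
# Create the NameNode, DataNode and temp directories configured above
mkdir -p /usr/hadoop/hdfs/name /usr/hadoop/hdfs/data /usr/hadoop/tmp
```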

 

 

(3.3.4) In etc/hadoop/mapred-site.xml

 

<configuration>
  <property>
     <name>mapreduce.framework.name</name>   
     <value>yarn</value>
  </property>
</configuration>

 

 

(3.3.5) In etc/hadoop/yarn-site.xml

 

<configuration>
  <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
  </property>
  <property>
      <description>The address of the applications manager interface in the RM.</description>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.165.130:18040</value>
  </property>
  <property>
      <description>The address of the scheduler interface.</description>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>192.168.165.130:18030</value>
  </property>
  <property>
      <description>The address of the RM web application.</description>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>192.168.165.130:18088</value>
  </property>
  <property>
      <description>The address of the resource tracker interface.</description>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>192.168.165.130:8025</value>
  </property>
</configuration>

 

 

(3.3.6) In etc/hadoop/hadoop-env.sh

Modify the JAVA_HOME path:

 

export JAVA_HOME=/usr/java/jdk1.7.0_79

 

 

3.4 Install HBase

(3.4.1) Execute under /usr/local/hc

 

tar zxvf hbase-1.0.1.1-bin.tar.gz
mv hbase-1.0.1.1 hbase
mv hbase /usr/

 

 

(3.4.2) Modify /hbase/conf/hbase-env.sh

 

export JAVA_HOME=/usr/java/jdk1.7.0_79     # your own JDK installation path
export HBASE_CLASSPATH=/usr/hadoop/conf    # point at Hadoop's conf directory so HBase can find Hadoop
export HBASE_MANAGES_ZK=true

 

 

(3.4.3) Modify /hbase/conf/hbase-site.xml

 

<configuration>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://nameNode:9000/hbase</value>
          </property>
          <property>
                <name>dfs.replication</name>
                <value>1</value>
          </property>
</configuration>

 

 

3.5 Start hadoop and hbase

(3.5.1) The namenode needs to be formatted for the first startup.

 

bin/hadoop namenode -format

 

 

(3.5.2) Start hadoop

 

./start-dfs.sh
./start-yarn.sh

 

 You can verify in a browser at http://192.168.165.130:50070/ ; if the HDFS web interface appears, the startup was successful.
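A quick command-line sanity check is to list the running Java daemons (the exact process list depends on your configuration):

```shell
# List running JVM processes; for this pseudo-distributed setup you would
# typically expect NameNode, DataNode, SecondaryNameNode, ResourceManager
# and NodeManager to appear.
jps
```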

 

(3.5.3) Start hbase

In /usr/hbase/bin

 

./start-hbase.sh 

 You can enter the HBase console with ./hbase shell and run console commands to operate HBase.
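A few standard HBase shell commands for verifying the installation (shown here as a transcript; run them inside the shell started by ./hbase shell):

```shell
./hbase shell
# Inside the shell:
#   status    -- show cluster status
#   list      -- list all tables
#   version   -- show the HBase version
#   exit      -- leave the shell
```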

 

*After HBase was upgraded to version 1.0.0, the default ports changed: 16020 is now the default port for both the HMaster service and the HRegionServer service, which causes a port conflict when both run on the same machine.

This problem can be worked around by starting the region server with the separate local region server startup script:

bin/local-regionservers.sh start 1

 Stop HBase with ./stop-hbase.sh

 

3.6 Install OpenTSDB

(3.6.1) What is OpenTSDB

OpenTSDB uses HBase to store all time series data (without downsampling), forming a distributed, scalable time series database. It supports second-level collection of all metrics and permanent storage, can be used for capacity planning, and is easily integrated into an existing alerting system. OpenTSDB can collect metrics from large-scale clusters (including the network devices, operating systems, and applications in the cluster), then store, index, and serve them, making the data easier to understand, for example through web-based graphical interfaces.

 

For operations engineers, OpenTSDB provides real-time status information about infrastructure and services, and reveals software and hardware errors, performance changes, and performance bottlenecks in the cluster. For managers, OpenTSDB can measure system SLAs, clarify interactions between complex systems, and show resource consumption; this overall picture of cluster operation helps with budgeting and coordinating cluster resources. For developers, OpenTSDB highlights the cluster's main performance bottlenecks and frequently encountered errors, so that they can focus on solving the important problems.

(3.6.2) Basic Concepts

Metric. The name of a measurable unit. A metric does not include a value or a time; it is just a label. A metric combined with a value and a timestamp is called a data point. Metric name components are joined with dots, and no spaces are allowed, for example:

hours.worked

webserver.downloads

accumulation.snow

Tags. A metric should describe what is being measured, but in OpenTSDB it should not be defined too narrowly. It is usually better to use tags to distinguish metrics that share the same dimensions. A tag consists of a tagk and a tagv: the former represents a group (the tag name), the latter a specific item (the tag value).

Time Series. The collection of data points for a metric with a particular set of tags.

Timestamp. The absolute time at which a value for a given metric was recorded.

Value . A Value represents the actual value of a metric.

UID. In OpenTSDB, each metric, tagk, or tagv is assigned a unique identifier (UID) when it is created; combined, these form the UID sequence of a time series, its TSUID. In OpenTSDB's storage there is a counter, starting from 0, for each of metric, tagk, and tagv; each time a new metric, tagk, or tagv arrives, the corresponding counter is incremented by 1. UIDs are assigned automatically when a data point is written to the TSD; you can also assign UIDs manually, provided auto metric is set to true. By default, UIDs are encoded as 3 bytes, so each UID type can have at most 16,777,215 UIDs; you can also modify the source code to use 4 bytes. There are several ways to display a UID; most commonly, when accessed through the HTTP API, the 3-byte UID is encoded as a hexadecimal string.

*Precautions

a. Keep the number of metrics, tag names, and tag values small.

b. Use the same tag names for each metric.

c. Consider the queries the system will commonly run, and choose metrics and tags accordingly.

d. Keep the number of tags per metric within 5, and never more than 8.

(3.6.3) OpenTSDB download and installation

OpenTSDB depends on the JDK and Gnuplot. Gnuplot needs to be installed in advance; the minimum supported version is 4.2 and the maximum is 4.4. Run the following commands to install:

yum install automake
yum install gnuplot autoconf

 Next download and install openTSDB

 

git clone git://github.com/OpenTSDB/opentsdb.git

cd opentsdb

./build.sh

 

Then edit the configuration file (opentsdb.conf) and set at least:

#TSDB communication port
tsd.network.port = 4242
#Save the data to the HBase table
tsd.storage.hbase.data_table = tsdb
#ZooKeeper Quorum
tsd.storage.hbase.zk_quorum = localhost

 

 

(3.6.4) Initialize table creation

The table-creation script is provided at opentsdb/src/create_table.sh. If this is the first time OpenTSDB runs against your HBase instance, you first need to create the necessary HBase tables:

Go to the opentsdb folder

 

env COMPRESSION=none HBASE_HOME=/usr/hbase ./src/create_table.sh

tsdtmp=${TMPDIR-'/tmp'}/tsd  # For best performance, make sure
mkdir -p "$tsdtmp"           # your temporary directory uses tmpfs
./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir="$tsdtmp"

At this point you can access the TSD's web interface at http://127.0.0.1:4242 (assuming it is running on your host).
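You can also check from the command line that the TSD is up, using its HTTP API:

```shell
# Query the TSD's version endpoint; a JSON blob with version and build
# information indicates the daemon is running and reachable.
curl http://127.0.0.1:4242/version
```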

 

(3.6.5) Create your first metric

First create the metric; the command will print its assigned UID.

 

./tsdb mkmetric metric.name.1 metric.name.2   # placeholder metric names
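Once a metric exists, data points can be written through the TSD's telnet-style interface and read back through the HTTP API. The metric name and host tag below are examples, not names from this setup:

```shell
# Write one data point.
# Format: put <metric> <unix-timestamp> <value> <tag=value ...>
echo "put redis.used_memory $(date +%s) 1048576 host=nameNode" | nc -w 3 localhost 4242

# Read it back through the HTTP API (last hour, summed across tags):
curl 'http://localhost:4242/api/query?start=1h-ago&m=sum:redis.used_memory'
```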

 

3.7 Install Grafana

(3.7.1) Download grafana-2.6.0-1.x86_64.rpm

 

The address is: https://grafanarel.s3.amazonaws.com/builds/grafana-2.6.0-1.x86_64.rpm

service grafana-server start 

 

 

 

After startup, open http://localhost:3000 in the browser (the default port is 3000). The default user name is admin and the password is admin. Click Login.

(3.7.2) Interface introduction

 

The main page is divided into two parts: on the left is the menu panel, and on the right is the dynamic chart display area. Dashboards holds all of the data display boards; each dashboard can contain many dynamic charts.



 

Data Sources is the data source management interface for grafana connections, which will be introduced later.

(3.7.3) Display data 

To display data, the first step is to have data. If you already have a database (Grafana natively supports InfluxDB, Graphite, and OpenTSDB), you can set it as the data source directly; otherwise you need to install a database yourself. Here we use OpenTSDB as the data source.

 

(3.7.4) Configure data source

Click Data Sources on the left side of the main page to enter the data source configuration interface. Click Add new to enter the data source editing interface.



 

  (Add data source interface)



 

  (Edit data source interface)

Edit data source

Name: The custom name of the data source; it is used to reference this source elsewhere.

Default: Whether this is the default data source. With multiple data sources, the default one is selected automatically.

Type: The type of the data source; select it according to the type and version of the database you installed.

Http settings

 

Url: The connection address of the database, in the format http://IP:Port . IP is the address exposed by the database; the default port for OpenTSDB is 4242.
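As an alternative to the web form, a data source can also be created through Grafana's HTTP API. This is a sketch assuming the default admin/admin credentials from above; the data source name is an example:

```shell
# Create an OpenTSDB data source via Grafana's /api/datasources endpoint
curl -u admin:admin -H "Content-Type: application/json" \
  -X POST http://localhost:3000/api/datasources \
  -d '{
        "name":      "opentsdb-redis",
        "type":      "opentsdb",
        "url":       "http://192.168.165.130:4242",
        "access":    "proxy",
        "isDefault": true
      }'
```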



 

Now that the data source has been configured successfully, let's create a dashboard to display the data.



 

Click Dashboards - Home - New and the dashboard editing interface appears. Click the settings button on the right (settings) to enter the dashboard settings interface.



 

Configure the query: select the metric created with mkmetric, and use tag filters to filter out the desired data.

 

 

 

 
