Detailed steps for quickly building a system monitoring stack with Telegraf + InfluxDB + Grafana

Part 1: Installing and deploying Telegraf

Telegraf is the data collection tool in this stack. It has a small memory footprint, and developers can easily add support for other services through its plugin system.

In a platform monitoring system, Telegraf can be used to gather information from the various running components, so there is no need to hand-write timed collection scripts, which greatly lowers the effort of obtaining data. Telegraf is also very simple to configure; anyone with a basic knowledge of Linux can get started quickly. The data Telegraf collects is time-series data, and its structure carries the timing information, so once the collected data is written to InfluxDB, all kinds of analysis and calculation can be performed on it.

Step 1 Download the RPM file

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.8.3-1.x86_64.rpm

Step 2 Install the downloaded RPM file with yum

yum localinstall telegraf-1.8.3-1.x86_64.rpm

Step 3 Start the service

service telegraf start
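To double-check that the agent actually collects metrics, Telegraf can be run once in test mode against its configuration file; this is only a quick sanity check and prints the gathered metrics to standard output without writing them to any output.

telegraf --config /etc/telegraf/telegraf.conf --test    ---- run a single collection pass and print the metrics
service telegraf status    ---- check that the service is running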

Additional information:

1. Which data items are collected, and where the collected data is saved after collection, are both defined in the conf file. Users configure the collection items by modifying this configuration file; the default configuration file is located at /etc/telegraf/telegraf.conf.

2. Telegraf has four types of plugins:

  • Input plugins (Inputs): collect various time-series metrics; there are plugins for all kinds of system information and application information.
  • Processor plugins (Processors): apply simple processing to the metric data as it streams through, such as adding, deleting, or modifying a tag on all metrics; they only act on the current metric.
  • Aggregator plugins (Aggregators): unlike processor plugins, the object they process is all of the data flowing through the plugin within a period of time (so each aggregator plugin only handles the data of its time window), for example taking the maximum, minimum, or average.
  • Output plugins (Outputs): send the collected, processed, and aggregated data to the storage system, which can be many different places, such as a file, InfluxDB, or various message-queue services.

3. The Input Plugins section at https://github.com/influxdata/telegraf lists the monitoring-item configuration for each system, application, and service; these can be added directly to the existing conf file.

The Output Plugins section is where the storage addresses for the collected data are configured.

4. Data can be saved to several types of databases at the same time. Below, the collected data is saved to InfluxDB; a configuration sketch is given after this list.

5. If necessary (depending on the server's role / monitoring needs), regenerate the telegraf.conf file, for example with cpu as the monitored input and influxdb as the data output. (Note that the telegraf service by default still reads the configuration file under /etc/telegraf/.)

telegraf --input-filter cpu --output-filter influxdb config > telegraf.conf
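As a rough sketch of what the relevant parts of such a telegraf.conf look like (the InfluxDB URL and the database name telegraf below are assumptions for this example, not values from the original article), the input and output sections might be:

# Input plugin: collect CPU metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true

# Output plugin: write the collected metrics to InfluxDB
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]   # address of the InfluxDB instance (assumed local)
  database = "telegraf"              # target database name (assumed)

After editing the file, restart the service (service telegraf restart) so the new configuration takes effect.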

Part 2: Installing and deploying InfluxDB

InfluxDB is the data storage tool in this stack. InfluxDB is a time-series database with good performance for storing time-stamped data such as logs and sensor readings, and it can easily handle high write and high query loads (a very common scenario for data collection and data visualization).

InfluxDB has three characteristics:

  • Time series (Time Series): flexible use of time-related functions (e.g. maximum, minimum, sum, etc.);

  • Metrics (Metrics): real-time calculation over large amounts of data;

  • Events (Events): support for arbitrary event data; in other words, operations can be performed on data from any kind of event.
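To make the first characteristic concrete, once Telegraf is writing CPU metrics into InfluxDB (assuming the cpu measurement and telegraf database from the earlier sketch), a time-windowed aggregate can be computed with a single InfluxQL query, for example:

SELECT MEAN("usage_idle") FROM "cpu" WHERE time > now() - 1h GROUP BY time(10m)

This returns the average idle-CPU percentage for each 10-minute window of the last hour.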

The specific installation process is as follows:

Step 1 Download the InfluxDB RPM file

 
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.7.6.x86_64.rpm

 

Step 2 Install the downloaded file

 
yum localinstall influxdb-1.7.6.x86_64.rpm

  

Step 3 Start the service

 
systemctl start influxdb    ---- start the service
systemctl status influxdb   ---- check the service status

  

Step 4 Verify by logging in
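The original step presumably shows a screenshot of this login; as a sketch, logging in with the influx command-line client and creating a database for Telegraf to write into (the name telegraf is an assumption carried over from the earlier configuration) looks roughly like this:

influx    ---- connects to the local instance on port 8086 by default
> CREATE DATABASE telegraf
> SHOW DATABASES
> exit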

Additional information:

1. The default influxdb.conf that is generated is located at /etc/influxdb/influxdb.conf.

2. Several default data-file paths should be noted (and adjusted if necessary):

  • meta (/var/lib/influxdb/meta): "Controls the parameters for the Raft consensus group that stores metadata about the InfluxDB cluster." In short, this is where database metadata is stored.
  • data (/var/lib/influxdb/data): "The directory where the TSM storage engine stores TSM files." This is where the data is ultimately stored, in files ending in .tsm.
  • wal (/var/lib/influxdb/wal): "The directory where the TSM storage engine stores WAL files." This is where the write-ahead log files are kept.
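If these directories need to be adjusted, they are set in the corresponding sections of influxdb.conf; the values below simply repeat the defaults as an illustration.

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"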

3. Commonly used commands:

  • show databases: list all databases
  • use XXXX: switch to the given database
  • show measurements: list all the tables (measurements) in the current database
  • select * from "XXXXX": query the data in the specified table; note that the table name is enclosed in double quotes
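A short session inside the influx shell that strings these commands together against the data written by Telegraf (the database and measurement names are assumptions from the earlier examples) might look like:

> show databases
> use telegraf
> show measurements
> select * from "cpu" limit 5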

4. How measurements, tags, fields, and points correspond to a conventional relational database:

  • measurement: a table
  • tags: indexed columns
  • fields: unindexed columns
  • points: a row of data
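To illustrate the mapping, a single point written by the Telegraf cpu input, expressed in InfluxDB line protocol, looks roughly like this (the tag and field names come from the cpu plugin, while the values and timestamp are invented for illustration):

cpu,cpu=cpu0,host=server01 usage_idle=98.2,usage_user=1.1 1563456789000000000

Here cpu is the measurement (table), cpu and host are tags (indexed columns), usage_idle and usage_user are fields (unindexed columns), and the whole line with its trailing timestamp is one point (one row of data).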

Part 3: Installing and deploying Grafana

Grafana is the data presentation (data visualization) tool in this stack. Grafana is an open-source, cross-platform metric analysis and visualization tool; it can query the collected data, display it visually, and send timely notifications.

It has six main features:

1. Display: fast and flexible client-side charts and panel plugins, with many different ways of visualizing metrics and logs; the official library offers a rich set of dashboard plugins, such as heat maps, line charts, graphs, and many other display options;

2. Data sources: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, KairosDB, and the like;

3. Alert notifications: visually define alert rules for the most important metrics; Grafana continuously evaluates them and sends notifications via Slack, PagerDuty, and so on when the data reaches a threshold;

4. Mixed display: mix different data sources in the same chart, where each query can specify its own data source, even a custom data source;

5. Annotations: annotate charts with rich events from different data sources; hovering over an event shows its full metadata and tags;

6. Filtering: ad-hoc filters allow new key/value filters to be created dynamically; they are automatically applied to all queries against that data source.

Step 1 Download the RPM installation package

 
wget https://dl.grafana.com/oss/release/grafana-6.2.4-1.x86_64.rpm

  

Step 2 Installation

 
yum localinstall grafana-6.2.4-1.x86_64.rpm

 

Step 3 Start the service

 
systemctl start grafana-server.service    ---- start the service
systemctl status grafana-server.service   ---- check the service status

Step 4 Verify

After installation, the default port is 3000. You can log in directly from a browser: http://172.XXX.XXX.XXX:3000
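If a browser is not immediately available, the installation can also be checked from the shell; Grafana exposes a health endpoint that returns a small JSON document when the service is up (localhost below is an assumption; use the server's actual address when checking remotely). The default web login is admin / admin, and Grafana asks for a new password on first login.

curl http://localhost:3000/api/health    ---- should return JSON containing the database status and version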

Step 5 Display the collected monitoring data in Grafana

(1) Set up the data source.

 

 Note that InfluxDB must be selected as the data source type.

(2) Optionally create a Folder to group similar dashboards together.

(3) Set up a new monitoring item; a sample query is sketched after this list.

 In the Format As field, select [Time Series]; do not choose [Table].
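As a sketch of such a monitoring item, a panel backed by the InfluxDB data source could use a query like the one below against the cpu measurement written by Telegraf (the measurement and field names are assumptions carried over from the earlier examples); with Format As set to Time Series, Grafana plots the result as a line chart.

SELECT mean("usage_idle") FROM "cpu" WHERE $timeFilter GROUP BY time($__interval) fill(null)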
