Source code analysis and customization of mysqld_exporter (monitoring multiple database instances with a single mysqld_exporter)

mysqld_exporter is the exporter officially provided by Prometheus for monitoring the status of a running MySQL server. For details, see: https://github.com/prometheus/mysqld_exporter .

 

1. Configuration

Let us look at how it is configured. The configuration falls into two parts: one is the connection information of the monitored MySQL target; the other is the parameters that control what the exporter scrapes.

 

The first part is the connection information:

There are two ways to set it. The first is via an environment variable, for example:

export DATA_SOURCE_NAME='user:password@(hostname:3306)/'
./mysqld_exporter <flags>

The other way is via a configuration file. The file is converted by the function parseMycnf() into the same format as the environment-variable setting. The resulting DSN is then passed to Go's database library to establish the database connection.

Of the two settings, the environment variable takes priority: when it is present (length greater than 0), the configuration file is not parsed.

 

Next, the parameters controlling the exporter's scrape scope:

Sets are used here to represent the scrape scope. First, the exporter records a default set of scrapers in a constant; call it set A.

The exporter also allows the scrape scope to be set via flags at startup; call it set B.

When set B is absent, set A takes effect; when set B is present, set B takes effect and set A is ignored.

When Prometheus scrapes the exporter, the request may also carry collect[] parameters specifying a scrape scope; call it set C.

When set C is absent, the final scope is A or B (whichever is in effect); when set C is present, the final scope is the intersection of C with A or B (whichever is in effect).
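The rule for combining the three sets can be sketched like this. The function name effectiveScope and the scraper names are our own for illustration; the real exporter works with scraper objects rather than strings.

```go
package main

import "fmt"

// effectiveScope reproduces the scope rule described above:
// defaults is set A, flags is set B (which replaces A when non-empty),
// and collectParams is set C from the request's collect[] parameters.
// The final scope is (B or A) intersected with C when C is present.
func effectiveScope(defaults, flags, collectParams []string) []string {
	base := defaults // set A
	if len(flags) > 0 {
		base = flags // set B overrides A entirely
	}
	if len(collectParams) == 0 {
		return base
	}
	want := make(map[string]bool, len(collectParams))
	for _, c := range collectParams {
		want[c] = true
	}
	var out []string
	for _, s := range base {
		if want[s] { // keep only scrapers requested via collect[]
			out = append(out, s)
		}
	}
	return out
}

func main() {
	a := []string{"global_status", "slave_status", "info_schema.tables"}
	fmt.Println(effectiveScope(a, nil, nil))                       // A alone
	fmt.Println(effectiveScope(a, []string{"global_status"}, nil)) // B overrides A
	fmt.Println(effectiveScope(a, nil, []string{"slave_status"}))  // A ∩ C
}
```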

 

2. How it works

The exporter's data collection is implemented mainly through Collectors.

First, the route is registered. Note lines 277-278 of mysqld_exporter.go:

    handlerFunc := newHandler(collector.NewMetrics(), enabledScrapers)
    http.Handle(*metricPath, promhttp.InstrumentMetricHandler(prometheus.DefaultRegisterer, handlerFunc))

As can be seen, the main handler is the function returned by newHandler() at line 162. Line 164 sets up the default scrapers; line 165 obtains the collect[] parameters carried by the Prometheus request. Lines 196-208 process collect[], taking the intersection of the requested scrapers and the enabled scrapers.

Lines 210-211 register the collector with Prometheus. The collector's entry point is the New() function at line 85 of /collector/exporter.go. New() returns a struct called Exporter. This struct implements Prometheus's collector interface, so its member function Collect() at line 117 is where data collection happens.

Collect() calls the scrape() function at line 126. After performing some database initialization, scrape() loops over all the scrapers at line 160 and invokes each scraper's Scrape() function in its own goroutine, collecting the target data.
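The fan-out pattern used by scrape() can be sketched as below. Scraper and scrapeAll here are simplified stand-ins for the real collector.Scraper interface, whose Scrape() actually receives a *sql.DB and a Prometheus metrics channel.

```go
package main

import (
	"fmt"
	"sync"
)

// Scraper is a simplified stand-in for mysqld_exporter's
// collector.Scraper interface.
type Scraper interface {
	Name() string
	Scrape(ch chan<- string)
}

type statusScraper struct{ name string }

func (s statusScraper) Name() string            { return s.name }
func (s statusScraper) Scrape(ch chan<- string) { ch <- s.name + ": ok" }

// scrapeAll mirrors the pattern in Exporter.scrape(): one goroutine per
// enabled scraper, a WaitGroup to await completion, and a shared channel
// receiving the collected metrics.
func scrapeAll(scrapers []Scraper) []string {
	ch := make(chan string, len(scrapers))
	var wg sync.WaitGroup
	for _, s := range scrapers {
		wg.Add(1)
		go func(s Scraper) {
			defer wg.Done()
			s.Scrape(ch)
		}(s)
	}
	wg.Wait()
	close(ch)
	var out []string
	for m := range ch {
		out = append(out, m)
	}
	return out
}

func main() {
	got := scrapeAll([]Scraper{statusScraper{"global_status"}, statusScraper{"slave_status"}})
	fmt.Println(len(got), "metrics collected")
}
```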

In summary, mysqld_exporter connects to the database and collects data only when Prometheus accesses its metrics endpoint. With multiple scrapers, the exporter collects data concurrently in multiple goroutines. (The number of concurrent connections depends on the connection limit of the MySQL account the exporter uses.)

 

3. Customization

A single mysqld_exporter consumes several tens of megabytes of memory. In practice, the fact that one exporter instance can monitor only a single MySQL instance is a pain point of this exporter.

Given the collection behavior described in Section 2, the exporter does almost nothing while its metrics endpoint is not being accessed, so in terms of performance cost, monitoring multiple databases with a single exporter is not a big problem.

(Of course, an obvious question: are requests for multiple databases handled serially or in parallel? If in parallel, with a separate goroutine per scraper per database, performance may suffer once there are too many goroutines. This deserves further discussion, but the approach below sidesteps the problem.)

How do we make the exporter monitor multiple database instances? A straightforward idea: when Prometheus accesses the metrics endpoint, it passes a parameter carrying the address and port of the target database, such as "localhost:3306".

Then, when handling the Prometheus request (i.e., in the newHandler mentioned earlier), if an instance parameter is present, we substitute the instance information into the configured database connection information. This way, a request parameter selects which database instance Prometheus monitors.
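A minimal sketch of this substitution, under our own assumptions: the parameter name "instance", the function dsnForInstance, and the DSN template are all choices made for this illustration, not part of upstream mysqld_exporter.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// dsnForInstance substitutes the host:port from the request's "instance"
// parameter into a base DSN of the form "user:pass@(host:port)/".
func dsnForInstance(baseDSN, instance string) string {
	if instance == "" {
		return baseDSN
	}
	// Replace the address between "(" and ")" with the requested instance.
	open := strings.IndexByte(baseDSN, '(')
	end := strings.IndexByte(baseDSN, ')')
	if open < 0 || end < open {
		return baseDSN
	}
	return baseDSN[:open+1] + instance + baseDSN[end:]
}

func main() {
	base := "exporter:secret@(localhost:3306)/"
	// A hypothetical handler: the real change would live inside newHandler.
	h := func(w http.ResponseWriter, r *http.Request) {
		dsn := dsnForInstance(base, r.URL.Query().Get("instance"))
		fmt.Fprintln(w, "would scrape:", dsn)
	}
	req := httptest.NewRequest("GET", "/metrics?instance=localhost:3308", nil)
	rec := httptest.NewRecorder()
	h(rec, req)
	fmt.Print(rec.Body.String())
}
```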

Normally, a Prometheus configuration looks like this:

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
    params:
      "collect[]":
        - ***

 

But each access to the metrics endpoint fetches the monitoring data of only one database instance. How do we bring these data together?

This is where Prometheus's relabel configuration comes in. (For details, see the documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config )

Here we should note the difference between relabeling and metric relabeling: relabel_configs take effect before Prometheus accesses the metrics endpoint, while metric_relabel_configs take effect after the data has been received.

relabel_configs provide the following two special labels:

The first is __address__. Normally, when we configure a scrape target in Prometheus, the target is the address being monitored. In the relabel stage, the target is automatically assigned to the __address__ label, and after relabeling Prometheus uses __address__ as the address of the metrics endpoint.

Therefore, in the relabel stage we can simply replace the __address__ label, which lets us redefine the address Prometheus uses to fetch data.

The second is __param_<name>. In the relabel stage we can set such a label so that the corresponding parameter and its value are carried when the metrics endpoint is accessed.

For example:

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets:
        - localhost:3306
        - localhost:3308
    params:
      "collect[]":
        - ***
    relabel_configs:
      - source_labels: ['__address__']
        target_label: __param_instance
      - target_label: __address__
        replacement: localhost:9104

Suppose we run two local MySQL instances on ports 3306 and 3308, and our customized exporter on port 9104.

Looking at the relabel rules: in the relabel stage each target is automatically placed in __address__; we copy __address__ into __param_instance, so the original target is carried as the instance parameter when the metrics endpoint is accessed. Then __address__ itself is replaced with localhost:9104, the address of the actual endpoint.

In this way, by adding a parameter to the exporter's metrics endpoint and combining it with Prometheus's relabel configuration, we monitor multiple database instances with a single mysqld_exporter.

 

If deeper customization is needed, such as specifying the SQL statements used to collect data, mysqld_exporter is not a good fit; achieving that would require implementing a separate Collector, which carries a higher development cost.

For the custom-SQL requirement, sql_exporter can be used instead. For more information, see https://github.com/free/sql_exporter .


Origin www.cnblogs.com/wangzhao765/p/11247830.html