Use the Elastic Stack to monitor and tune Golang applications

Golang is favored by more and more developers because of its simple syntax, fast ramp-up, and convenient deployment. Once a Golang program has been developed, we need to care about how it runs. This article introduces how to use the Elastic Stack to analyze the memory usage of a Golang program, which makes long-term monitoring, tuning, and diagnosis convenient and can even uncover potential problems such as memory leaks.
 
The Elastic Stack is actually a collection of open source software, including Elasticsearch, Logstash, Kibana, and Beats; Beats in turn includes Filebeat, Packetbeat, Winlogbeat, Metricbeat, and the new Heartbeat. That is quite a few, and each beat does something different, but it doesn't matter here: today we mainly use Elasticsearch, Metricbeat, and Kibana.
 
Metricbeat is a collector designed specifically to fetch internal runtime metrics from servers or application services. It is also written in Golang, so the deployment package is small (only about 10 MB) and has no dependencies on the target server's environment. In addition to monitoring the resource usage of the server itself, it supports common application servers and services. The current support list is as follows:

  • Apache Module
  • Couchbase Module
  • Docker Module
  • HAProxy Module
  • Kafka Module
  • MongoDB Module
  • MySQL Module
  • Nginx Module
  • PostgreSQL Module
  • Prometheus Module
  • Redis Module
  • System Module
  • ZooKeeper Module

Of course, your application may not be in the list above. That doesn't matter: Metricbeat is extensible, and you can easily implement a new module. The Golang module used in this article is an extension module I recently added to Metricbeat; it has been merged into Metricbeat's master branch and is expected to be released in version 6.0. If you want to know how to extend such a module, you can check the code path and PR address.
 
The above may not sound attractive enough, so let's first look at Kibana's visual analysis of the data collected by Metricbeat's Golang module:
 

[Figure: Kibana dashboard visualizing the Golang heap summary, memory statistics, and GC metrics collected by Metricbeat]


 
A brief interpretation of the chart above:
The top row is the summary of the Golang heap, which gives a general picture of memory usage and GC behavior. System is the memory the Golang program has requested from the operating system, which can be understood as the memory occupied by the process (note: not the process's virtual memory). Bytes allocated is the memory currently allocated on the heap, i.e. memory that Golang can use directly. GC limit means that when Golang's heap allocation reaches this value, GC is triggered; the value changes after every GC. GC cycles is the number of GC runs during the monitoring period.
 
The three charts in the middle row are statistics for heap memory, process memory, and objects. Heap Allocated is the size of objects that are in use plus objects that are no longer used but not yet reclaimed. Heap Inuse is, obviously, the size of the active objects. Heap Idle is memory that has been allocated but is currently idle.

The two charts at the bottom are statistics for GC time and GC count. CPUFraction is the percentage of the process's CPU time spent on GC; the larger the value, the more frequent GC is and the more time is wasted on it. Although the trend in the figure above looks steep, the values only range between 0.41% and 0.52%, which seems fine. If the GC fraction reached single-digit percentages or more, the program would definitely need further optimization.
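For reference, these dashboard fields correspond to counters in Go's runtime.MemStats, which is also what expvar exposes. A minimal sketch that reads the stats directly from the runtime and prints the same numbers:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)

    fmt.Println("System (Sys):               ", m.Sys)           // memory obtained from the OS
    fmt.Println("Bytes allocated (HeapAlloc):", m.HeapAlloc)     // heap memory currently allocated
    fmt.Println("GC limit (NextGC):          ", m.NextGC)        // heap size that triggers the next GC
    fmt.Println("GC cycles (NumGC):          ", m.NumGC)         // number of completed GC cycles
    fmt.Println("Heap Inuse:                 ", m.HeapInuse)     // heap spans holding live objects
    fmt.Println("Heap Idle:                  ", m.HeapIdle)      // heap spans that are idle
    fmt.Println("GC CPU fraction:            ", m.GCCPUFraction) // fraction of CPU time spent in GC
}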
 
With this information we know the Golang program's memory usage and allocation as well as how GC behaves. To check for a memory leak, we can look at whether the trends of memory usage and heap allocation are stable; if GC limit and Bytes allocated keep rising, there is almost certainly a memory leak. Combined with historical data, we can also analyze how different versions/commits affect memory usage and GC.

Next, let's see how to use it. First you need to enable Golang's expvar service. expvar (https://golang.org/pkg/expvar/) is a standard package provided by Golang that exposes internal variables or statistics.
Using it is very simple: just import the package in your Golang program and it will automatically register itself with the existing HTTP service, as follows:

import _ "expvar"

If your Golang program does not already run an HTTP service, you can start one with code like the following, listening here on port 6060:

package main

import (
    "expvar"
    "fmt"
    "log"
    "net/http"
)

// metricsHandler writes all variables registered with expvar as a single JSON object.
func metricsHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json; charset=utf-8")

    first := true
    report := func(key string, value interface{}) {
        if !first {
            fmt.Fprintf(w, ",\n")
        }
        first = false
        if str, ok := value.(string); ok {
            fmt.Fprintf(w, "%q: %q", key, str)
        } else {
            fmt.Fprintf(w, "%q: %v", key, value)
        }
    }

    fmt.Fprintf(w, "{\n")
    expvar.Do(func(kv expvar.KeyValue) {
        report(kv.Key, kv.Value)
    })
    fmt.Fprintf(w, "\n}\n")
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/debug/vars", metricsHandler)
    log.Fatal(http.ListenAndServe("localhost:6060", mux))
}

The default registered path is /debug/vars. After the program is compiled and started, you can access these internal variables exposed by expvar in JSON format at http://localhost:6060/debug/vars. Golang's runtime.MemStats information is provided by default, which is exactly the data source analyzed above. Of course, you can also register your own variables; more on that later.
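To quickly verify that the endpoint is working, you can fetch it from the command line, for example:

curl http://localhost:6060/debug/vars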
 
OK, our Golang program is now running and exposes its runtime memory usage through expvar. Next we use Metricbeat to fetch this information and store it in Elasticsearch.
 
Installing Metricbeat is actually very simple: download the package for your platform and unpack it (download address: https://www.elastic.co/downloads/beats/metricbeat). Before starting Metricbeat, modify the configuration file metricbeat.yml:

metricbeat.modules:
  - module: golang
    metricsets: ["heap"]
    enabled: true
    period: 10s
    hosts: ["localhost:6060"]
    heap.path: "/debug/vars"

The parameters above enable the Golang monitoring module and fetch the memory data returned by the configured path every 10 seconds. In the same configuration file we also set the output to a local Elasticsearch:

output.elasticsearch:
  hosts: ["localhost:9200"]


Now start Metricbeat:

./metricbeat -e -v

Now there should be data in Elasticsearch. Of course, remember to make sure Elasticsearch and Kibana are available. You can flexibly customize visualizations in Kibana based on the data; Timelion is recommended for analysis. For convenience, you can also directly import the provided sample dashboards, which gives the effect shown in the first screenshot above.
Please refer to this document on how to import the sample dashboards: https://www.elastic.co/guide/e ... .html

In addition to monitoring the existing memory information, if you have internal business metrics that you want to expose, that is also possible through expvar. A simple example is as follows:

var innerInt int64 = 1024
pubInt := expvar.NewInt("your_metric_key")
pubInt.Set(innerInt)
pubInt.Add(2)
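Putting it together, a minimal sketch of a program that exposes a hypothetical business counter through expvar (the metric name requests_total and the port are just examples) might look like this:

package main

import (
    "expvar"
    "log"
    "net/http"
)

// requests is a custom expvar counter; Metricbeat's expvar metricset can pick it up.
var requests = expvar.NewInt("requests_total")

func handler(w http.ResponseWriter, r *http.Request) {
    requests.Add(1) // increment the business metric on every request
    w.Write([]byte("ok"))
}

func main() {
    http.HandleFunc("/", handler)
    // The expvar import also registers /debug/vars on the default mux,
    // so the counter shows up there alongside memstats.
    log.Fatal(http.ListenAndServe("localhost:6060", nil))
}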

Metricbeat itself also exposes a lot of internal runtime information, so Metricbeat can monitor itself.
First, start Metricbeat with the parameter that sets the pprof monitoring address, as follows:

./metricbeat -httpprof="127.0.0.1:6060" -e -v

In this way, we can access Metricbeat's internal runtime information through http://127.0.0.1:6060/debug/vars, as follows:

{
"output.events.acked": 1088,
"output.write.bytes": 1027455,
"output.write.errors": 0,
"output.messages.dropped": 0,
"output.elasticsearch.publishEvents.call.count": 24,
"output.elasticsearch.read.bytes": 12215,
"output.elasticsearch.read.errors": 0,
"output.elasticsearch.write.bytes": 1027455,
"output.elasticsearch.write.errors": 0,
"output.elasticsearch.events.acked": 1088,
"output.elasticsearch.events.not_acked": 0,
"output.kafka.events.acked": 0,
"output.kafka.events.not_acked": 0,
"output.kafka.publishEvents.call.count": 0,
"output.logstash.write.errors": 0,
"output.logstash.write.bytes": 0,
"output.logstash.events.acked": 0,
"output.logstash.events.not_acked": 0,
"output.logstash.publishEvents.call.count": 0,
"output.logstash.read.bytes": 0,
"output.logstash.read.errors": 0,
"output.redis.events.acked": 0,
"output.redis.events.not_acked": 0,
"output.redis.read.bytes": 0,
"output.redis.read.errors": 0,
"output.redis.write.bytes": 0,
"output.redis.write.errors": 0,
"beat.memstats.memory_total": 155721720,
"beat.memstats.memory_alloc": 3632728,
"beat.memstats.gc_next": 6052800,
"cmdline": ["./metricbeat","-httpprof=127.0.0.1:6060","-e","-v"],
"fetches": {"system-cpu": {"events": 4, "failures": 0, "success": 4}, "system-filesystem": {"events": 20, "failures": 0, "success": 4}, "system-fsstat": {"events": 4, "failures": 0, "success": 4}, "system-load": {"events": 4, "failures": 0, "success": 4}, "system-memory": {"events": 4, "failures": 0, "success": 4}, "system-network": {"events": 44, "failures": 0, "success": 4}, "system-process": {"events": 1008, "failures": 0, "success": 4}},
"libbeat.config.module.running": 0,
"libbeat.config.module.starts": 0,
"libbeat.config.module.stops": 0,
"libbeat.config.reloads": 0,
"memstats": {"Alloc":3637704,"TotalAlloc":155
... ...

From the output above you can see how the Elasticsearch output module is doing; for example, the output.elasticsearch.events.acked counter indicates the number of events sent to Elasticsearch that were acknowledged.
 
Now we need to modify Metricbeat's configuration file. The Golang module has two metricsets, which can be understood as two types of monitored metrics. Besides heap, we now add the expvar metricset, which covers the custom metrics. The configuration is modified as follows:

- module: golang
  metricsets: ["heap","expvar"]
  enabled: true
  period: 1s
  hosts: ["localhost:6060"]
  heap.path: "/debug/vars"
  expvar:
    namespace: "metricbeat"
    path: "/debug/vars"

The namespace parameter above defines a namespace for the custom metrics, mainly for ease of management. Here we are collecting Metricbeat's own information, so the namespace is set to metricbeat.
 
After restarting Metricbeat, it should start receiving the new data; let's go to Kibana.
 
Suppose we care about the two metrics output.elasticsearch.events.acked and output.elasticsearch.events.not_acked. We can simply define a chart in Kibana to see the success and failure trends of the events Metricbeat sends to Elasticsearch.
Timelion expression:

.es("metricbeat*",metric="max:golang.metricbeat.output.elasticsearch.events.acked").derivative().label("Elasticsearch Success"),.es("metricbeat*",metric="max:golang.metricbeat.output.elasticsearch.events.not_acked").derivative().label("Elasticsearch Failed")

The effect is as follows:
 

[Figure: Timelion chart of Elasticsearch events acked vs. not_acked]


As can be seen from the figure above, the events sent to Elasticsearch are very stable and none were lost. As for Metricbeat's own memory usage, we open the imported dashboard to view it:
 

[Figure: Metricbeat memory usage on the imported Golang dashboard]



That is about it for using Metricbeat to monitor Golang applications. The above showed how to monitor Golang's memory usage and custom business metrics exposed via expvar, and how to analyze them quickly with the Elastic Stack. I hope you find it useful.

Finally, this Golang module has not been released yet; it is expected to ship with Beats 6.0. If you are interested in trying it early, you can download the source code and build it yourself.

 
