Table of contents
1. Common command monitoring
2. Weave Scope
3. Prometheus monitoring platform
    1. Deploy the data collector cadvisor
    2. Deploy Prometheus
    3. Deploy the visualization platform Grafana
    4. Enter the background console
1. Common command monitoring
docker ps
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30d9a0e764a3 busybox "sh" 12 seconds ago Up 11 seconds busybox2
0d44a42e10dc busybox "sh" 17 seconds ago Up 15 seconds busybox1
Field meanings
CONTAINER ID: the ID of the container
IMAGE: the image the container was created from
COMMAND: the command the container is running
CREATED: when the container was created
STATUS: how long the container has been running
PORTS: the mapped ports
NAMES: the name of the container
docker top
View the processes in the specified container
[root@localhost ~]# docker top busybox1
UID PID PPID C STIME TTY TIME CMD
root 2335 2314 0 11:16 pts/0 00:00:00 sh
Options
The options are the same as for the Linux `ps` command:
a   display all processes on the current terminal, including those of other users
u   user-oriented format, showing process status
x   display all processes, not limited to a terminal
-A  show all processes
-e  same effect as -A
-f  display the UID, PPID, C and STIME fields
View detailed process information for a container
[root@localhost ~]# docker top busybox1 aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2335 0.0 0.0 1320 252 pts/0 Ss+ 11:16 0:00 sh
docker stats
Dynamically display containers' resource usage.
View statistics for all containers
[root@localhost ~]# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
30d9a0e764a3 busybox2 0.00% 56KiB / 3.686GiB 0.00% 648B / 0B 0B / 0B 1
0d44a42e10dc busybox1 0.00% 56KiB / 3.686GiB 0.00% 1.09kB / 0B 1.18MB / 0B 1
View live statistics for a single container
[root@localhost ~]# docker stats busybox1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0d44a42e10dc busybox1 0.00% 56KiB / 3.686GiB 0.00% 1.09kB / 0B 1.18MB / 0B 1
Field meanings
CONTAINER ID: the container ID
NAME: the container name
CPU %: percentage of CPU in use
MEM USAGE / LIMIT: memory the container is using, and the total it is allowed to use
MEM %: percentage of memory in use
NET I/O: data received and sent through the container's network interface
BLOCK I/O: data read from and written to block devices on the host
PIDS: the number of processes or threads the container has created
Options
-a           show all containers (by default only running ones are shown)
--no-stream  disable streaming statistics and pull only the first result
--no-trunc   do not truncate output
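The `docker stats` columns are separated by runs of whitespace, which makes them easy to pick apart in a script. A minimal Python sketch (the dictionary keys are my own labels, not anything Docker defines):

```python
import re

def parse_stats_line(line):
    """Split one `docker stats --no-stream` data row into its columns.

    Columns are separated by runs of two or more spaces, so values that
    contain a single space (like "56KiB / 3.686GiB") stay intact.
    """
    cols = re.split(r"\s{2,}", line.strip())
    keys = ["container_id", "name", "cpu_pct", "mem_usage_limit",
            "mem_pct", "net_io", "block_io", "pids"]
    return dict(zip(keys, cols))

row = parse_stats_line(
    "0d44a42e10dc   busybox1   0.00%   56KiB / 3.686GiB   0.00%   "
    "1.09kB / 0B   1.18MB / 0B   1"
)
print(row["name"], row["mem_usage_limit"])  # busybox1 56KiB / 3.686GiB
```

Splitting on two-or-more spaces (rather than any whitespace) is what keeps compound values such as `56KiB / 3.686GiB` in one field.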
2. Weave Scope
The standout feature of Weave Scope is that it automatically generates a map of your Docker containers, letting you intuitively understand, monitor, and control them.
1. Download
wget https://github.com/weaveworks/scope/releases/download/v1.13.2/scope
2. Install
[root@localhost ~]# chmod +x scope
[root@localhost ~]# ./scope launch
Unable to find image 'weaveworks/scope:1.13.2' locally
1.13.2: Pulling from weaveworks/scope
ba3557a56b15: Pull complete
3ac4c0e9800c: Pull complete
d052e74a4dae: Pull complete
aacb9bf49f73: Pull complete
06841e6f61a9: Pull complete
ee99b95c7732: Pull complete
dd0e726a9a15: Pull complete
05cb5f9d0d32: Pull complete
e956cf3e716a: Pull complete
Digest: sha256:8591bb11d72f784f784ac8414660759d40b7c0d8819011660c1cc94271480a83
Status: Downloaded newer image for weaveworks/scope:1.13.2
458ddccb286a03b96e523e41d149ee102f6007cd55b4be179334675e5e7c311e
Scope probe started
Weave Scope is listening at the following URL(s):
* http://192.168.2.5:4040/
3. Access the web UI
http://192.168.2.5:4040/
3. Prometheus monitoring platform
1. Deploy the data collector cadvisor
[root@localhost ~]# docker run -v /:/rootfs:ro -v /var/run/:/var/run/:rw -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro -p 8080:8080 -d --name cadvisor google/cadvisor
Visit: http://192.168.2.5:8080/containers/
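cAdvisor reports CPU consumption as a cumulative counter (nanoseconds of CPU time used), so a percentage figure is derived from the difference between two samples. A minimal sketch of that arithmetic (the sample values below are invented for illustration):

```python
# cAdvisor exposes cumulative per-container CPU time in nanoseconds.
# To turn two samples into a CPU percentage (as docker stats or the
# Prometheus rate() function do), divide the CPU-time delta by the
# wall-clock interval between the samples.

def cpu_percent(cpu_ns_prev, cpu_ns_cur, interval_s):
    """CPU usage over the interval, as a percentage of one core."""
    return (cpu_ns_cur - cpu_ns_prev) / (interval_s * 1e9) * 100

# e.g. 150 ms of CPU time consumed over a 1-second sampling window:
print(cpu_percent(1_000_000_000, 1_150_000_000, 1.0))  # 15.0
```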
2. Deploy Prometheus
[root@localhost ~]# docker run -d -p 9100:9100 \
-v "/proc:/host/proc" \
-v "/sys:/host/sys" \
-v "/:/rootfs" \
--net=host \
prom/node-exporter \
--path.procfs /host/proc \
--path.sysfs /host/sys \
--collector.filesystem.ignored-mount-points '^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/devicemapper|rootfs/var/lib/docker/aufs)($|/)'
Unable to find image 'prom/node-exporter:latest' locally
latest: Pulling from prom/node-exporter
aa2a8d90b84c: Pull complete
b45d31ee2d7f: Pull complete
b5db1e299295: Pull complete
Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd
Status: Downloaded newer image for prom/node-exporter:latest
WARNING: Published ports are discarded when using host network mode
795214fa2248b18fee6e6600bb567493db4be265b5c79c9445eb96020aab3578
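The value passed to `--collector.filesystem.ignored-mount-points` is an anchored regular expression: any mount point it matches is skipped by the filesystem collector. For this particular pattern, Python's `re` module behaves the same as node-exporter's Go regexp engine, so it can be sanity-checked quickly:

```python
import re

# The exclusion pattern from the node-exporter command above.
# ^/(...)  anchors at the root; ($|/) requires the match to end at the
# end of the path or at a path separator, so /proc matches but /procx
# would not.
pattern = re.compile(
    r"^/(sys|proc|dev|host|etc"
    r"|rootfs/var/lib/docker/containers"
    r"|rootfs/var/lib/docker/overlay2"
    r"|rootfs/run/docker/netns"
    r"|rootfs/var/lib/docker/devicemapper"
    r"|rootfs/var/lib/docker/aufs)($|/)"
)

for mp in ["/proc", "/sys/fs/cgroup", "/rootfs/var/lib/docker/overlay2/abc"]:
    print(mp, bool(pattern.search(mp)))   # all three are excluded (True)

print("/home", bool(pattern.search("/home")))  # kept (False)
```

Note that when the pattern is given on a shell command line, single quotes (or an escaped `$`) are needed so the shell does not expand `$` sequences inside it.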
Write a Prometheus monitoring configuration file
[root@localhost ~]# vi prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090','localhost:8080','localhost:9100']
Main configuration file content
static_configs:
  - targets: ['localhost:9090','localhost:8080','localhost:9100']
Note: fill in the IP address and port of each cadvisor/exporter instance you need to monitor.
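Each `targets` entry is a `host:port` pair; with the defaults noted in the config comments (scheme `http`, metrics_path `/metrics`), Prometheus scrapes `http://<target>/metrics` every `scrape_interval`. A quick sketch of the scrape URLs this config produces:

```python
targets = ['localhost:9090', 'localhost:8080', 'localhost:9100']

# With scheme and metrics_path left at their defaults, each target is
# scraped at http://<host:port>/metrics every scrape_interval (15s here).
urls = [f"http://{t}/metrics" for t in targets]
for u in urls:
    print(u)
```

Here `localhost:9090` is Prometheus itself, `localhost:8080` is cadvisor, and `localhost:9100` is node-exporter, matching the ports published earlier.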
3. Deploy the visualization platform Grafana
[root@localhost ~]# docker run -d -i -p 3000:3000 \
-e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
-e "GF_SECURITY_ADMIN_PASSWORD=secret" \
--net=host \
grafana/grafana
4. Enter the background console
http://192.168.2.5:3000/
Default username: admin. The password is the value of GF_SECURITY_ADMIN_PASSWORD (here: secret). Note: if no password is specified, the default is admin.
1. Add Prometheus module
2. Add docker container monitoring template
Docker container template: 193
Linux host monitoring template: 9276
monitoring results