PLG log system (Docker)

overview

The PLG log stack is lightweight, simple to deploy, needs little configuration, and wastes few resources. The stack is composed of Promtail + Loki + Grafana.
Promtail

Promtail is the agent that collects logs and ships them to Loki; it is the counterpart of Logstash in ELK.
Promtail discovers log files stored on disk and forwards them, together with a set of labels, to Loki. It is usually deployed on every machine or container whose applications need monitoring. Its main jobs are discovering targets, attaching labels to log streams, and pushing the logs to Loki. At present, Promtail can tail logs from two sources: local log files and the systemd journal (AMD64 architecture only).

Loki

Loki is the main server, responsible for storing logs and serving queries; it is the counterpart of Elasticsearch in ELK.
Loki is a set of components that together form a fully featured logging stack. Unlike most log systems, Loki indexes only the labels attached to each log stream, not the content of the log messages themselves. Because only this small set of labels is indexed, Loki is cheaper to run and can be orders of magnitude more efficient.
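The label-only index is what this efficiency claim rests on: a query first narrows down streams by their indexed labels, and only then scans the matching compressed chunks for message content. A sketch in LogQL (the label values here mirror the promtail config later in this article and are otherwise illustrative):

```logql
# Only the label set is indexed, so stream selection is cheap:
{job="obclog-161-*log", filename="/161/obc/app.log"}

# The message text itself is never indexed; a content filter like this
# is a grep-style scan over the chunks of the selected streams only:
{job="obclog-161-*log"} |= "timeout"
```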

Grafana

Grafana provides the user interface; it is the counterpart of Kibana in ELK.
Grafana is open-source visualization and analysis software that lets users query, visualize, alert on, and explore monitoring metrics. It mainly provides dashboards for time-series data and supports dozens of data sources.

processing flow

(figure: PLG log processing flow)

service deployment

  • Server deployment
  1. Create the docker-compose file
    vim /data/plg/docker-compose.yml
version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:2.6.1
    container_name: PLG_loki
    restart: unless-stopped
    ports:
      - "3100:3100"
    volumes:
      - /data/plg/etc/loki:/etc/loki
      - /data/plg/data/loki:/loki
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki

  promtail:
    image: grafana/promtail:2.6.1
    container_name: PLG_promtail
    restart: unless-stopped
    volumes:
      - /data/cc/logs/:/161/                 # map the host log directory into the container
      - /data/plg/etc/promtail:/etc/promtail
    command: -config.file=/etc/promtail/config.yml
    networks:
      - loki

  grafana:
    image: grafana/grafana:latest
    container_name: PLG_grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      #- /data/plg/etc/grafana/grafana.ini:/etc/grafana/grafana.ini
      - /data/plg/data/grafana/:/var/lib/grafana/
      - /data/plg/logs/grafana:/var/log/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin            # admin user name
      - GF_SECURITY_ADMIN_PASSWORD=Weihu12345   # admin password (defaults to admin/admin if unset)
    networks:
      - loki
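The compose file bind-mounts several host paths that should exist before the first `up`. A small prep sketch (the `PLG_BASE` variable is my addition, defaulting to the article's /data/plg layout):

```shell
#!/bin/sh
# Create the host directories that the compose file bind-mounts.
PLG_BASE="${PLG_BASE:-/data/plg}"
mkdir -p "$PLG_BASE/etc/loki" "$PLG_BASE/etc/promtail" \
         "$PLG_BASE/data/loki" "$PLG_BASE/data/grafana" \
         "$PLG_BASE/logs/grafana"
# Grafana runs as uid 472 inside its container; if it cannot write
# /var/lib/grafana, run as root: chown -R 472:472 "$PLG_BASE/data/grafana"
```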
  2. Write the configuration files
  • loki

vim /data/plg/etc/loki/local-config.yaml

auth_enabled: false       # multi-tenant authentication (disabled here)

server:
  http_listen_port: 3100    # HTTP listen port

common:
  path_prefix: /loki   # data directory prefix
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1    # number of replicas; 1 is fine for testing, or when the logs need no extra durability
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
limits_config:
  # This section raises the ingestion limits; the defaults are small and easy to exceed.
  # Note that exceeding them only drops log lines; it does not crash Promtail or Loki.
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 40        # per-user ingestion rate limit, MB per second (default 4)
  ingestion_burst_size_mb: 20  # per-user ingestion burst size, MB (default 6)
chunk_store_config:
  #max_look_back_period: 168h   # maximum look-back window for queries; applies to instant queries only
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: true # enable retention-based deletion (default false)
  retention_period: 120h  # retention period; must be a multiple of the 24h index period

# Alertmanager address for the ruler
ruler:
  alertmanager_url: http://192.168.0.161:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
#  reporting_enabled: false
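Since the config above points the ruler at an Alertmanager and sets `rules_directory` to /loki/rules, dropping a rule file under that directory turns log queries into alerts. A hypothetical example (with `auth_enabled: false` Loki uses the tenant name `fake`, so on the host this file would live at /data/plg/data/loki/rules/fake/rules.yml; the job label and threshold are illustrative):

```yaml
groups:
  - name: obc-log-alerts
    rules:
      - alert: HighErrorRate
        # fire when more than 10 "ERROR" lines per second appear over 5 minutes
        expr: sum(rate({job="obclog-161-*log"} |= "ERROR" [5m])) > 10
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: High error rate in obc logs
```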
  • promtail

vim /data/plg/etc/promtail/config.yml

server:
  http_listen_port: 9080    # HTTP listen port
  grpc_listen_port: 0

positions:
  filename: /etc/promtail/positions.yaml     # file where Promtail records how far it has read

clients:
  - url: http://192.168.0.161:3100/loki/api/v1/push     # Loki push endpoint

scrape_configs:
# add one job entry per log directory
- job_name: obc-161-*log           # job name, comparable to an index in ELK
  static_configs:
  - targets:
      - localhost                  # host on which the logs are stored
    labels:
      job: obclog-161-*log         # label attached to the stream
      __path__: /161/obc/*.log     # the glob does not descend into subdirectories; sibling directories can be matched, e.g. /data/{A,B,C}/*.log
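As the comment in the config notes, each additional log directory gets its own job entry. A hypothetical second job (names and paths illustrative; the directory would be bind-mounted into the container the same way /161/ is in the compose file):

```yaml
- job_name: app2-log
  static_configs:
  - targets:
      - localhost
    labels:
      job: app2log
      host: 192.168.0.161          # extra labels are allowed and become queryable in Loki
      __path__: /162/app2/*.log
```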

3. Start the service

docker-compose -f /data/plg/docker-compose.yml up -d

4. Add Loki as a data source in Grafana

  • Add the Loki data source
    (figures: Grafana data-source configuration screens)
  • Search logs
    (figure: log query results in Grafana)
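In Grafana's Explore view, the Loki data source is queried with LogQL. A few queries that would match the promtail job configured above (the "ERROR" filter string is illustrative):

```logql
{job="obclog-161-*log"}                                  # all lines from the job
{job="obclog-161-*log"} |= "ERROR"                       # only lines containing ERROR
sum(rate({job="obclog-161-*log"} |= "ERROR" [5m]))       # error rate as a metric, chartable in a dashboard
```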

Origin blog.csdn.net/weixin_49566876/article/details/129745925