Spring Boot + Kafka + ELK Integration in One Pass


Introduction

Storing logs directly in files makes them hard to search. With the ELK stack we can query them quickly.

Environment

  • CentOS 7
  • Logstash
  • Kibana
  • Elasticsearch
  • Kafka

Steps

Install ELK

  1. Install ELK with Docker Compose, using the following docker-compose.yml:
version: "3.2"
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
        environment:
            - "discovery.type=single-node"
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
            - bootstrap.memory_lock=true
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - data01:/usr/share/elasticsearch/data
        container_name: elasticsearch
        hostname: elasticsearch
        restart: always
        ports:
            - "9200:9200"
            - "9300:9300"
        networks:
            - elk
    logstash:
        image: docker.elastic.co/logstash/logstash:7.5.2
        container_name: logstash
        hostname: logstash
        restart: always
        privileged: true
        volumes:
            - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
            - /etc/logstash/conf.d/:/usr/share/logstash/conf.d/
        ports:
            - 9600:9600
            - 5044:5044
        networks:
            - elk
        depends_on:
            - elasticsearch
    kibana:
        image: docker.elastic.co/kibana/kibana:7.5.2
        environment:
            I18N_LOCALE: zh-CN
        container_name: kibana
        hostname: kibana
        restart: always
        ports:
            - "5601:5601"
        networks:
            - elk
        depends_on:
            - elasticsearch

volumes:
    data01:
        driver: local

networks:
    elk:
        driver: bridge

Note that Logstash's configuration files are mounted from the host's /etc/logstash directory.

Install Kafka

Kafka (plus ZooKeeper and a kafka-manager web UI) runs from a second docker-compose.yml:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper   ## image
    ports:
      - "2181:2181"                 ## port exposed to the host
  kafka:
    image: wurstmeister/kafka       ## image
    volumes:
      - /etc/localtime:/etc/localtime ## mount so the container clock matches the host
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.56.124   ## change to your host IP
      KAFKA_ZOOKEEPER_CONNECT: 192.168.56.124:2181 ## Kafka depends on ZooKeeper
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.56.124:9092
  kafka-manager:
    image: sheepkiller/kafka-manager ## open-source web UI for managing Kafka clusters
    environment:
      ZK_HOSTS: 192.168.56.124      ## change to your host IP
    ports:
      - "9000:9000"

Configure Logstash

  1. Create the Logstash configuration directory and files:
mkdir -p /etc/logstash/
cd /etc/logstash
mkdir conf.d
touch logstash.yml
cd conf.d
touch kafka.conf

Note: the pipeline file must end in .conf, because logstash.yml below only loads /usr/share/logstash/conf.d/*.conf; a file named kafka.yml would be ignored.

Configure logstash.yml:

path.config: /usr/share/logstash/conf.d/*.conf
path.logs: /var/log/logstash

Configure kafka.conf:

input {
  kafka {
    bootstrap_servers => "192.168.56.124:9092"
    topics_pattern => ".*"
    group_id => "logstash2_servivce"
    consumer_threads => 10
    auto_offset_reset => "earliest"
    decorate_events => "true"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts  => "192.168.56.124:9200"
    action => "index"
    index  => "%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}"
  }
}
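The index setting above combines the Kafka topic name (available in [@metadata][kafka][topic] because decorate_events is enabled) with the event date, producing one Elasticsearch index per topic per day. As a rough illustration in plain Python (the topic name app-log is made up):

```python
from datetime import date

def index_name(topic: str, day: date) -> str:
    # Mirrors Logstash's "%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}" sprintf
    # pattern: the topic name followed by the event date.
    return f"{topic}-{day.strftime('%Y.%m.%d')}"

print(index_name("app-log", date(2021, 2, 25)))  # app-log-2021.02.25
```

Daily per-topic indices keep each index small and make it easy to expire old logs by dropping whole indices.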

Start

From the directory containing each docker-compose.yml, run the startup command:

docker-compose up -d

docker ps


Spring Boot Project

You can view the source code directly in the demo project.
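The demo's logging setup is not reproduced here, but a common way to ship Spring Boot logs to Kafka is a Logback Kafka appender. A minimal sketch, assuming the logback-kafka-appender and logstash-logback-encoder dependencies are on the classpath (the topic name app-log and the broker address are placeholders to adjust):

```xml
<!-- logback-spring.xml (sketch): requires com.github.danielwegener:logback-kafka-appender
     and net.logstash.logback:logstash-logback-encoder -->
<configuration>
  <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- Emit each log event as one JSON line, which the Logstash json filter can parse -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    <topic>app-log</topic>
    <producerConfig>bootstrap.servers=192.168.56.124:9092</producerConfig>
  </appender>
  <root level="INFO">
    <appender-ref ref="KAFKA"/>
  </root>
</configuration>
```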

Test

  1. Call http://localhost:8080/hello?name=aa several times.
  2. Configure Kibana.
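For the json filter in the Logstash pipeline to work, each Kafka record's message field must contain a JSON string, which the filter expands into top-level event fields. Roughly, in plain Python (the log fields shown are hypothetical):

```python
import json

# A JSON log line as a JSON log encoder might emit it (field names hypothetical).
message = '{"@timestamp":"2021-02-25T12:00:00.000Z","level":"INFO","message":"hello aa"}'

# The json filter with source => "message" effectively does this,
# promoting each key to a top-level field on the event:
event = json.loads(message)

print(event["level"])    # INFO
print(event["message"])  # hello aa
```

If the message field is not valid JSON, Logstash keeps the raw string and tags the event with _jsonparsefailure instead.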

In Kibana, create the index pattern, then view the log data.


Reposted from blog.csdn.net/qq_37362891/article/details/114074883