Kafka as an ELK cache

In large-scale clusters, ELK can fall behind when collecting a high volume of logs, and a single collection point carries a risk of downtime. A single-node Redis instance can serve as a buffer, but a Kafka cluster offers better high availability. This article walks through configuring a Kafka cluster as a cache (message buffer) for ELK.

Installing and configuring the Kafka cluster

I. Prepare the environment

1. Prepare the servers
Hostname  IP address
db01      10.0.0.200
db02      10.0.0.201
db03      10.0.0.202
# cat /etc/redhat-release   # CentOS 7.6 is used here
CentOS Linux release 7.6.1810 (Core) 
2. Download the installation packages

Before installing Kafka, ZooKeeper must be configured on each server.

# (Run the following on all three servers)
# Create the working directory
mkdir /kafka 
cd /kafka

# Kafka download page
http://kafka.apache.org/downloads

# Download Kafka
wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.3.1/kafka_2.11-2.3.1.tgz

# ZooKeeper download page
http://zookeeper.apache.org/releases.html

# Download ZooKeeper
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

# Have the JDK package ready
jdk-8u151-linux-x64.tar.gz
3. Configure the Java environment
# Extract the tarball
tar xf jdk-8u151-linux-x64.tar.gz -C /opt

# Create a symlink
ln -s /opt/jdk1.8.0_151/ /opt/jdk

# Configure environment variables
# Append the following three lines to /etc/profile:
# tail -3 /etc/profile 
export JAVA_HOME=/opt/jdk
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin

# Verify
source /etc/profile 

java -version
java version "1.8.0_151"

II. Install the ZooKeeper service

1. Install and configure ZooKeeper
# Extract ZooKeeper
tar xf zookeeper-3.4.14.tar.gz -C /opt/

# Create a symlink
ln -s /opt/zookeeper-3.4.14 /opt/zookeeper

# Create the data directory
mkdir -p /data/zookeeper

# Edit the configuration file
cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg

vim /opt/zookeeper/conf/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.0.0.200:2888:3888
server.2=10.0.0.201:2888:3888
server.3=10.0.0.202:2888:3888

# Copy the config to the other two nodes
scp /opt/zookeeper/conf/zoo.cfg 10.0.0.201:/opt/zookeeper/conf/
  
scp /opt/zookeeper/conf/zoo.cfg 10.0.0.202:/opt/zookeeper/conf/

# Create the myid file (a different value on each node)
echo 1 > /data/zookeeper/myid  # on db01
echo 2 > /data/zookeeper/myid  # on db02
echo 3 > /data/zookeeper/myid  # on db03
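The three echo commands above must each run on a different node with a different value. As a small convenience, the id can also be derived from the hostname itself; this is a sketch assuming the db01/db02/db03 naming used in this article, and `node_id` is a hypothetical helper:

```shell
# node_id: extract the trailing digits of a hostname, e.g. db01 -> 1
# (hypothetical helper; assumes the db0N naming shown above)
node_id() { echo "$1" | grep -o '[0-9]*$' | sed 's/^0*//'; }

# On each node, write its own id into the myid file, e.g.:
#   mkdir -p /data/zookeeper
#   node_id "$(hostname)" > /data/zookeeper/myid
node_id db01   # prints 1
```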
2. Start ZooKeeper
# Start ZooKeeper on all three nodes
/opt/zookeeper/bin/zkServer.sh start

# Check the status
# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
# Mode: follower  (follower node)
# Mode: leader    (leader node)

# Add the bin directory to PATH so the commands are easier to use
echo "export PATH=/opt/zookeeper/bin/:$PATH" >> /etc/profile
source /etc/profile

# Test
zkCli.sh -server 10.0.0.202:2181  # you can connect to any node

create /test dhc  # create a znode

get /test   # read its value

set /test ctt  # change its value
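Besides zkCli.sh, ZooKeeper 3.4 answers "four-letter word" commands on the client port, which gives a quick scriptable health check. A sketch, assuming `nc` (netcat) is installed and the ensemble is up:

```shell
# "ruok" should return "imok" from a healthy ZooKeeper node
echo ruok | nc 10.0.0.200 2181

# "stat" prints the version, connection count, and this node's Mode
# (leader or follower), the same information zkServer.sh status shows
echo stat | nc 10.0.0.200 2181
```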

III. Install and test Kafka

1. Install Kafka
# Run the following on all three servers; server1 (db01) is shown as the example

# Extract
tar xf kafka_2.11-2.3.1.tgz -C /opt/
cd /opt

# Create a symlink
ln -s kafka_2.11-2.3.1 kafka
cd kafka

# Edit the configuration file

vim config/server.properties  # only the changed settings are shown; leave the rest as-is

broker.id=1   # matches this node's ZooKeeper myid
listeners=PLAINTEXT://10.0.0.200:9092  # use this node's own IP address
log.retention.hours=24   # keep log data for 24 hours
zookeeper.connect=10.0.0.200:2181,10.0.0.201:2181,10.0.0.202:2181  # the ZooKeeper ensemble


# Test startup in the foreground
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties  # test only

# Startup succeeded if the last line looks like this:
INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
 
# Stop it with Ctrl+C, then run it again in the background
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

# Verify the process is running
ps all | grep kafka

# Watch the logs
tail -f /opt/kafka/logs/server.log

# Create topic "test" on node1
# bin/kafka-topics.sh --create --bootstrap-server 10.0.0.200:9092 --replication-factor 1 --partitions 1 --topic test

# Create topic "test1" on node2
# bin/kafka-topics.sh --create --bootstrap-server 10.0.0.201:9092 --replication-factor 1 --partitions 1 --topic test1

# List the topics from node3; if both appear, the cluster works
# bin/kafka-topics.sh --list --bootstrap-server 10.0.0.202:9092
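Beyond listing topics, Kafka ships console producer and consumer tools that allow an end-to-end smoke test across nodes. A sketch against the `test` topic created above (requires the running cluster; run the two commands in separate terminals):

```shell
# On node1, produce messages to topic "test"
# (type lines of text, then Ctrl+C to exit)
bin/kafka-console-producer.sh --broker-list 10.0.0.200:9092 --topic test

# On node2, consume the same topic from the beginning;
# the lines typed above should appear, proving cross-node replication of data
bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.201:9092 --topic test --from-beginning
```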


# For a detailed walkthrough, see the official quickstart:
http://kafka.apache.org/quickstart
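With the Kafka cluster verified, the last piece of "Kafka as an ELK cache" is wiring the log pipeline through it: a shipper such as Filebeat writes logs into a Kafka topic, and Logstash consumes that topic before indexing into Elasticsearch. The following is a minimal sketch, not from the original setup; the topic name `elk-log`, the log path, and the Elasticsearch address are assumptions to adapt to your environment.

```conf
# filebeat.yml (shipper -> Kafka); path and topic are example values
filebeat.inputs:
- type: log
  paths:
    - /var/log/messages
output.kafka:
  hosts: ["10.0.0.200:9092", "10.0.0.201:9092", "10.0.0.202:9092"]
  topic: "elk-log"

# logstash.conf (Kafka -> Elasticsearch); ES address is an example value
input {
  kafka {
    bootstrap_servers => "10.0.0.200:9092,10.0.0.201:9092,10.0.0.202:9092"
    topics => ["elk-log"]
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.200:9200"]
    index => "elk-log-%{+YYYY.MM.dd}"
  }
}
```

With this layout, a Kafka outage pause only delays indexing: Filebeat keeps its own registry, and Logstash resumes from the consumer offset, which is exactly the buffering behavior the introduction describes.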

Origin www.cnblogs.com/dinghc/p/12027418.html