Deploying Kafka on a Linux machine (Filebeat + ELK stack)

 

Single-node Kafka deployment for a Filebeat + ELK stack.

 

Preparation:

Kafka download page: http://kafka.apache.org/downloads.html

Download kafka_2.12-2.10.0.0.tgz from there (the Kafka package already bundles ZooKeeper, so no separate download is needed).

 

First, install and configure the JDK (after downloading the JDK, set the environment variables)

JAVA_HOME=/opt/jdk1.8.0_131

CLASSPATH=.:$JAVA_HOME/lib/tools.jar

PATH=$JAVA_HOME/bin:$PATH

export JAVA_HOME CLASSPATH PATH
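These variables are typically appended to /etc/profile (or the user's ~/.bash_profile) and then reloaded; a minimal sketch, assuming the JDK was unpacked to /opt/jdk1.8.0_131:

# vi /etc/profile      # append the four lines above

# source /etc/profile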

 

$ java -version

java version "1.8.0_131"

Java(TM) SE Runtime Environment (build 1.8.0_131-b11)

Java HotSpot(TM) Server VM (build 25.131-b11, mixed mode)

Alternatively, specify the JDK for Kafka directly in bin/kafka-run-class.sh:

$ vi bin/kafka-run-class.sh

JAVA_HOME=/opt/jdk1.8.0_131

Second, install Kafka

1. Install glibc

# yum -y install glibc.i686

2. Extract kafka_2.12-2.10.0.0.tgz.
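The archive can be unpacked in place, for example:

$ tar -zxvf kafka_2.12-2.10.0.0.tgz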

Configure ZooKeeper:

$ cd kafka_2.12-2.10.0.0

$vi config/zookeeper.properties

dataDir=/data/soft/kafka/data

dataLogDir=/data/soft/kafka/log

clientPort=2181

maxClientCnxns=100

tickTime=2000

initLimit=10
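It is safest to create the data directories referenced above before starting ZooKeeper (they must be writable by the user running it):

$ mkdir -p /data/soft/kafka/data /data/soft/kafka/log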

 

After configuration, start ZooKeeper directly in the foreground:

$bin/zookeeper-server-start.sh config/zookeeper.properties

 

If it starts without errors, you can restart it in the background:

$nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
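A quick way to confirm ZooKeeper is answering (a sketch assuming nc/netcat is installed; the bundled ZooKeeper replies imok to the ruok command):

$ echo ruok | nc 127.0.0.1 2181

imok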

Then configure Kafka:

$ vi config/server.properties

broker.id=0

listeners=PLAINTEXT://0.0.0.0:9092

advertised.listeners=PLAINTEXT://server20.srv:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/log/kafka

num.partitions=2

num.recovery.threads.per.data.dir=1

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181

zookeeper.connection.timeout.ms=6000
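Make sure the log.dirs path exists and is writable by the user that runs Kafka, for example:

$ mkdir -p /data/log/kafka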

 

Start kafka:

$ bin/kafka-server-start.sh config/server.properties

 

If it starts without errors, you can restart it in the background:

$nohup bin/kafka-server-start.sh config/server.properties &

Check the startup: by default, port 2181 (ZooKeeper) and port 9092 (Kafka) should be listening.
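For example, assuming the ss tool from iproute2 is available:

$ ss -lntp | egrep '2181|9092'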

3. Test that Kafka works:

(1) Create a topic

$bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

 

(2) List the created topics

$ bin/kafka-topics.sh --list --zookeeper localhost:2181

test

 

(3) Produce a test message (simulating a message sent by a client)

$bin/kafka-console-producer.sh --broker-list 192.168.53.20:9092 --topic test

> Hello world        # type the message and press Enter

 

(4) Consume the test message (simulating a client receiving messages)

$bin/kafka-console-consumer.sh --bootstrap-server 192.168.53.20:9092 --topic test --from-beginning

 

Hello world        # if the message is received, Kafka is deployed correctly

 

(5) Delete the topic

$bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test

 

If all of the above steps succeed, the single-node Kafka installation is complete.

Third, configure Filebeat

Add the following to the filebeat.yml file and comment out the original Logstash output.

#------------------- Kafka output ---------------------

output.kafka:

  hosts: ["server20.srv:9092"]

  topic: 'kafka_logstash'
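Before restarting Filebeat, the modified configuration and the connectivity to Kafka can be checked (a sketch assuming a package install with the configuration at /etc/filebeat/filebeat.yml):

$ filebeat test config -c /etc/filebeat/filebeat.yml

$ filebeat test output -c /etc/filebeat/filebeat.yml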

 

Fourth, configure Logstash

Add the following to the logstash.conf file and comment out the original input { beats ... } block.

  input {

    kafka {

      codec => "json"

      bootstrap_servers => "server20.srv:9092"

      topics => ["kafka_logstash"]

      group_id => "kafka-consumer-group"

      decorate_events => true

      auto_offset_reset => "latest"

    }

  }
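The edited pipeline can be validated before restarting Logstash (a sketch; the binary and configuration paths depend on how Logstash was installed):

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit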

Configure the Kafka host entry on the Logstash server:

$ cat /etc/hosts

122.9.10.106    server20.srv    8bet-kafka
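A quick reachability check from the Logstash server to the broker (assuming nc is installed):

$ nc -zv server20.srv 9092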

Fifth, Kafka configuration file reference

$ cat config/server.properties | egrep -v '^$|#'

broker.id=0

listeners=PLAINTEXT://0.0.0.0:9092

advertised.listeners=PLAINTEXT://server20.srv:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/log/kafka

num.partitions=2

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

 

$cat config/zookeeper.properties | egrep -v '^$|#'

dataDir=/data/soft/kafka/data

dataLogDir=/data/soft/kafka/zookeeper_log

clientPort=2181

maxClientCnxns=100

tickTime=2000

initLimit=10

$cat config/producer.properties | egrep -v '^$|#'

bootstrap.servers=localhost:9092

compression.type=none

$cat config/consumer.properties | egrep -v '^$|#'

bootstrap.servers=localhost:9092

group.id=kafka-consumer-group

Sixth, after the configuration is complete, test that messages are flowing by consuming from the kafka_logstash topic that Filebeat writes to; if messages are received normally, the setup is working.

$bin/kafka-console-consumer.sh --bootstrap-server server20.srv:9092 --topic kafka_logstash --from-beginning

 

 
