1. Install and configure Kafka


Upload the release archive to /usr/local/ with rz:

kafka_2.11-2.1.0.tgz

Extract it: tar xzvf kafka_2.11-2.1.0.tgz

Rename the extracted directory: mv kafka_2.11-2.1.0 kafka
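The extract-and-rename steps above can be sketched as a script. A scratch directory and a stand-in tarball are used here so the sketch is safe to run anywhere; on the real host you would run the tar and mv commands directly in /usr/local against the uploaded archive.

```shell
# Sketch of the extract-and-rename step. A stand-in tarball is built in a
# scratch directory; on the real host, work in /usr/local with the real
# kafka_2.11-2.1.0.tgz instead.
workdir=$(mktemp -d)
cd "$workdir"
mkdir kafka_2.11-2.1.0                      # stand-in for the real release tree
tar czf kafka_2.11-2.1.0.tgz kafka_2.11-2.1.0
rm -r kafka_2.11-2.1.0                      # leave only the tarball, as after rz
tar xzvf kafka_2.11-2.1.0.tgz               # the extract command from the text
mv kafka_2.11-2.1.0 kafka                   # rename so KAFKA_HOME below resolves
[ -d kafka ] && echo "kafka directory ready"
```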

Add environment variables: vi /etc/profile

export KAFKA_HOME=/usr/local/kafka

Apply the change: source /etc/profile
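The environment-variable step can be scripted as below. Note the PATH line is an assumption, not from the original text: the kafka-*.sh commands later in this post are invoked by bare name, which only works if $KAFKA_HOME/bin is on PATH. A temp file stands in for /etc/profile so the sketch is safe to run.

```shell
# Sketch: append the Kafka variables to a profile file. A temp file stands in
# for /etc/profile here. The PATH line is an assumption (not in the original
# text): the bare kafka-*.sh commands below need $KAFKA_HOME/bin on PATH.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin
EOF
grep -q '^export KAFKA_HOME=/usr/local/kafka$' "$profile" && echo "KAFKA_HOME written"
```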

Start ZooKeeper: zkServer.sh start

cd kafka/

cd config/


(Edit server.properties here; alternatively, delete it and upload a replacement file.)

 

Delete the existing contents of server.properties and replace them with the following:

broker.id=1

listeners=PLAINTEXT://192.168.16.100:9092

#advertised.listeners=PLAINTEXT://your.host.name:9092

#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/var/kafka/log

num.partitions=1

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

# The number of messages to accept before forcing a flush of data to disk

#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush

#log.flush.interval.ms=1000

# The minimum age of a log file to be eligible for deletion due to age

log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining

# segments drop below log.retention.bytes. Functions independently of log.retention.hours.

#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according

# to the retention policies

log.retention.check.interval.ms=300000

zookeeper.connect=192.168.16.100:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

delete.topic.enable=true
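One easy-to-miss prerequisite in the configuration above: log.dirs points at /var/kafka/log, and the broker needs that directory to exist and be writable (some Kafka versions create it themselves, but creating it up front avoids permission surprises). A sketch, with a temp path standing in for the real one:

```shell
# Sketch: pre-create the data directory from log.dirs. A temp path stands in
# here for the real /var/kafka/log used in the configuration above.
base=$(mktemp -d)
log_dir="$base/var/kafka/log"
mkdir -p "$log_dir"
[ -d "$log_dir" ] && [ -w "$log_dir" ] && echo "log dir ready"
```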


2. Start Kafka

kafka-server-start.sh /usr/local/kafka/config/server.properties
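The broker takes a few seconds to come up. One way to confirm it is listening is to poll the port from the listeners setting. A sketch, assuming bash (the /dev/tcp redirection is a bash feature); host and port are taken from the configuration above.

```shell
# Sketch: poll a TCP port until it accepts connections or the timeout expires.
# Uses bash's /dev/tcp; returns 0 once the port is open, 1 on timeout.
wait_for_port() {
  host=$1; port=$2; timeout=$3; i=0
  while [ "$i" -lt "$timeout" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
# usage on the Kafka host (host/port from the listeners setting above):
# wait_for_port 192.168.16.100 9092 30 && echo "broker is up"
```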

3. Clone a session and create a topic

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

List topics:

kafka-topics.sh --list --zookeeper localhost:2181

4. Send messages

kafka-console-producer.sh --broker-list 192.168.16.100:9092 --topic test

Once the command is running, type messages at the prompt, for example the Chinese text 丁香花 ("lilac").

5. Clone another session and start a consumer (several consumers can be started this way)

kafka-console-consumer.sh --bootstrap-server 192.168.16.100:9092 --topic test --from-beginning

Delete a topic:

kafka-topics.sh --delete --topic test --zookeeper localhost:2181



Origin www.cnblogs.com/dasiji/p/11246831.html