Build a Kafka test environment, Windows + Linux

1. First, the Linux environment, stand-alone version

1. Download and install Kafka. Kafka ships with a built-in ZooKeeper which you can use for testing, but for a production environment it is best to use a separately configured ZooKeeper cluster.

wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.10.1.1/kafka_2.10-0.10.1.1.tgz
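After the download finishes, extract the archive and enter it; a minimal sketch, assuming the tarball extracts to a directory of the same name:

> tar -xzf kafka_2.10-0.10.1.1.tgz

> cd kafka_2.10-0.10.1.1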

Go to the config directory

The file we mainly care about is server.properties; we can find it in this directory:

 

There are many files here, including zookeeper.properties; you can use it to start the ZooKeeper that ships with Kafka, but an independent ZooKeeper cluster is recommended.

-rw-r--r-- 1 root root  906 12月 16 02:04 connect-console-sink.properties

-rw-r--r-- 1 root root  909 12月 16 02:04 connect-console-source.properties

-rw-r--r-- 1 root root 2760 12月 16 02:04 connect-distributed.properties

-rw-r--r-- 1 root root  883 12月 16 02:04 connect-file-sink.properties

-rw-r--r-- 1 root root  881 12月 16 02:04 connect-file-source.properties

-rw-r--r-- 1 root root 1074 12月 16 02:04 connect-log4j.properties

-rw-r--r-- 1 root root 2061 12月 16 02:04 connect-standalone.properties

-rw-r--r-- 1 root root 1199 12月 16 02:04 consumer.properties

-rw-r--r-- 1 root root 4369 12月 16 02:04 log4j.properties

-rw-r--r-- 1 root root 1900 12月 16 02:04 producer.properties

-rw-r--r-- 1 root root 5336 12月 16 02:04 server.properties

-rw-r--r-- 1 root root 1032 12月 16 02:04 tools-log4j.properties

-rw-r--r-- 1 root root 1023 12月 16 02:04 zookeeper.properties

 

Property explanations (server.properties):

broker.id=0 #The unique identifier of this broker in the cluster, analogous to ZooKeeper's myid

port=19092 #The port on which Kafka serves clients; the default is 9092

host.name=192.168.7.100 #Commented out by default; version 0.8.1 had a bug involving DNS resolution that could cause failures

num.network.threads=3 #The number of threads the broker uses for network processing

num.io.threads=8 #The number of threads the broker uses for disk I/O

log.dirs=/opt/kafka/kafkalogs/ #The directory where message logs are stored. This can be a comma-separated list of directories; num.io.threads above should not be smaller than the number of directories. If multiple directories are configured, a newly created topic's partitions are placed in whichever directory currently holds the fewest partitions.

socket.send.buffer.bytes=102400 #Send buffer size. Data is not sent one message at a time; it is first held in the buffer and sent once it reaches a certain size, which improves performance.

socket.receive.buffer.bytes=102400 #Kafka receive buffer size; when the buffered data reaches a certain size it is flushed to disk

socket.request.max.bytes=104857600 #The maximum size of a request sent to Kafka (fetching or producing messages). This value must not exceed the JVM heap size.

num.partitions=1 #The default number of partitions per topic (1 partition by default)

log.retention.hours=168 #The default maximum retention time for messages: 168 hours, i.e. 7 days

message.max.bytes=5242880 #The maximum message size: 5 MB

default.replication.factor=2 #The number of replicas Kafka keeps for each message; if one replica fails, another can continue to serve

replica.fetch.max.bytes=5242880 #The maximum number of bytes fetched per request when replicating messages

log.segment.bytes=1073741824 #Kafka appends messages to log segment files; when a segment exceeds this size, Kafka rolls over to a new file

log.retention.check.interval.ms=300000 #Every 300000 ms, check the log directories against the retention setting above (log.retention.hours=168) and delete any expired messages

log.cleaner.enable=false #Whether to enable log compaction; usually not needed, but enabling it can improve performance

zookeeper.connect=192.168.7.100:12181,192.168.7.101:12181,192.168.7.107:1218 #The ZooKeeper connection string (comma-separated host:port list)

 

For this stand-alone test environment, none of these need to be modified.
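To see which settings are actually active in the shipped file (ignoring comments and blank lines), a quick sketch:

> grep -v '^#' config/server.properties | grep -v '^$'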

 

2. Start the services. ZooKeeper must be started before Kafka.

> bin/zookeeper-server-start.sh config/zookeeper.properties &

 

#Start Kafka in the background

>bin/kafka-server-start.sh config/server.properties &  
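If the broker should keep running after the terminal closes, a common variant is to use nohup and redirect the output; a sketch, with an illustrative log path:

nohup bin/kafka-server-start.sh config/server.properties > /tmp/kafka.log 2>&1 &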

 

#Stop the Kafka server

bin/kafka-server-stop.sh  

 

3. Check whether the services have started

#jps

20348 Jps

4233 QuorumPeerMain

18991 Kafka
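Besides jps, you can confirm that both services are listening on their ports; a sketch, assuming the default ports 2181 and 9092:

# netstat -tlnp | grep -E '2181|9092'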

 

4. Create a topic and verify that it was created successfully

#Create Topic

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

#explain

--replication-factor 1 #keep 1 replica

--partitions 1 #create 1 partition

--topic test #the topic name is test

#View topic

> bin/kafka-topics.sh --list --zookeeper localhost:2181

test
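To also inspect the partition and replica assignment of the new topic, the same CLI has a describe mode; a sketch:

> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test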

 

 

Create a producer (publisher) on one server

#Start a console producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

 

Create a consumer (subscriber) on another server

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
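A quick end-to-end check: anything typed (or piped) into the producer should appear in the consumer window; a sketch, with an example message:

> echo "hello kafka" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

The consumer started above should then print: hello kafka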

 

The following describes ZooKeeper installation, configuration, and common commands

1. Zookeeper stand-alone installation and configuration

Download the zookeeper binary installation package

http://www.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

Or install ZooKeeper online:

#wget http://www.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

#tar zxvf zookeeper-3.4.6.tar.gz

# cd zookeeper-3.4.6

# mkdir data

# chmod 777 data

# cd conf

# cp zoo_sample.cfg zoo.cfg

# vi zoo.cfg 

---------------

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit = 10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/root/zookeeper-3.4.6/data #This is what I modified

# the port at which the clients will connect

clientPort=2181

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

# autopurge.purgeInterval = 1

-------------------

Configuration instructions:

tickTime: the time interval used to maintain heartbeats between ZooKeeper servers, or between a client and a server; a heartbeat is sent every tickTime milliseconds.

dataDir: as the name suggests, the directory where ZooKeeper saves its data; by default ZooKeeper also writes its transaction logs to this directory.

clientPort: the port on which clients connect to the ZooKeeper server; ZooKeeper listens on this port and accepts client requests.

 

Start zookeeper

Once these configuration items are set, you can start ZooKeeper:

# cd /root/zookeeper-3.4.6/bin

# ./zkServer.sh start #Start

# netstat -at|grep 2181 #View the zookeeper port

# netstat -nat #View port information

# jps #View the names of the started services

# ./zkServer.sh status #View status

# ./zkServer.sh stop #Stop
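To inspect the running ZooKeeper instance interactively, the bundled client can be used; a sketch (the /brokers nodes only exist if Kafka is connected to this ZooKeeper instance):

# ./zkCli.sh -server 127.0.0.1:2181

Then, inside the client shell:

ls /              (list the root znodes)

ls /brokers/ids   (broker ids registered by Kafka)

quit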

So far, the zookeeper stand-alone environment has been built. For the cluster environment, see: http://www.cnblogs.com/linjiqin/p/5861599.html

 

2. Windows environment

1. Official documentation http://kafka.apache.org/quickstart

Open cmd and go to the extracted Kafka directory, e.g. E:\Download\soft\kafka_2.10-0.10.1.1

Start zookeeper first, then start kafka

bin\windows\zookeeper-server-start.bat config\zookeeper.properties

 

Open another cmd window

bin\windows\kafka-server-start.bat config\server.properties
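On Windows you can likewise confirm that both services are listening; a sketch, assuming the default ports:

netstat -ano | findstr "2181 9092"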

 

Create a topic

 

bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

#explain

--replication-factor 1 #keep 1 replica

--partitions 1 #create 1 partition

--topic test #the topic name is test

 

#View topic

bin\windows\kafka-topics.bat --list --zookeeper localhost:2181

 

Create a producer (publisher) on one server

#Start a console producer

bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test

Press Enter and type messages to publish, e.g. aaa, bbb, etc.

 

Create a consumer (subscriber) on another server

bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

 

 

 
