Big data in practice: log collection using Flume and Kafka together

Experiment: log collection with Flume + Kafka

First, open a terminal and enter: sudo service ssh restart

This restarts the SSH service. Next, enter the following command to start the ZooKeeper service: zkServer.sh start
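To confirm ZooKeeper is up before moving on, you can ask it for its status (assuming zkServer.sh is on the PATH, as in the start command above):

zkServer.sh status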

Enter: cd /home/user/bigdata/apache-flume-1.9.0-bin

This changes into the Flume directory. Then enter: bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name agent1 to start Flume.
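The original post does not show the contents of conf/flume-conf.properties. Judging from the later steps (a telnet connection to port 44445 and a Kafka consumer reading topic1 from localhost:9092), a minimal sketch of what it likely contains is:

# Hypothetical reconstruction of conf/flume-conf.properties for agent1
agent1.sources = r1
agent1.channels = c1
agent1.sinks = k1

# Netcat source listening on the port the telnet step connects to
agent1.sources.r1.type = netcat
agent1.sources.r1.bind = localhost
agent1.sources.r1.port = 44445
agent1.sources.r1.channels = c1

# In-memory channel buffering events between source and sink
# (capacity values are illustrative, not from the original config)
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

# Kafka sink writing events to topic1 on the local broker
agent1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.k1.kafka.bootstrap.servers = localhost:9092
agent1.sinks.k1.kafka.topic = topic1
agent1.sinks.k1.channel = c1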

Next, open a second terminal and add the hostname to /etc/hosts so that it resolves locally (Kafka resolves the machine's hostname at startup). Enter: echo "127.0.0.1 $HOSTNAME" | sudo tee -a /etc/hosts

Then change into the Kafka directory and start the broker. Input:

cd /home/user/bigdata/kafka_2.11-1.0.0
nohup bin/kafka-server-start.sh config/server.properties > ~/bigdata/kafka_2.11-1.0.0/logs/server.log 2>&1 &
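Because the broker is started with nohup, startup errors go to the log file rather than the screen; they can be watched with:

tail -f ~/bigdata/kafka_2.11-1.0.0/logs/server.log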

This starts Kafka in the background. Enter jps to view the running Java processes; if everything is up, the list should include Kafka (the broker), QuorumPeerMain (ZooKeeper), and Application (the Flume agent).

Input:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic1

This creates the Kafka topic topic1.
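To verify that the topic exists, it can be described (in Kafka 1.0.0, topic administration still goes through ZooKeeper):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topic1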

Continue by entering: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1

This opens a Kafka console consumer, which prints the messages delivered to the topic.
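As an optional sanity check, the broker can also be tested independently of Flume with the console producer (in Kafka 1.0.0 the producer takes --broker-list rather than --bootstrap-server); anything typed into it should show up in the consumer terminal:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic1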

Then open a third terminal and enter: telnet localhost 44445

Then type anything into the telnet session; the text you type will appear in the Kafka consumer terminal in the other window.
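If telnet is not installed, netcat can be used to talk to the same Flume netcat source:

nc localhost 44445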

A summary of the experiment will follow later.
