flume+kafka

System version: Ubuntu 16.04 LTS

Kernel: 4.4.0-42-generic
JDK: 1.8.0_101

 

zookeeper:3.4.9

flume:1.6.0

kafka:2.11-0.10.0.1

 

Installation directory: /opt/bigdata/

One: zookeeper

Download the tar package and extract it to /opt/bigdata/zookeeper-3.4.9

Create a configuration file: sudo cp zoo_sample.cfg zoo.cfg

Modify dataDir to dataDir=/opt/bigdata/zookeeper-3.4.9/data

Start zookeeper: sudo bin/zkServer.sh start

Check zookeeper status: sudo bin/zkServer.sh status

cxh@ubuntu:/opt/bigdata/zookeeper-3.4.9$ sudo bin/zkServer.sh status
[sudo] password for cxh:
ZooKeeper JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: standalone
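For reference, after these edits conf/zoo.cfg for this standalone setup contains (the remaining values are the defaults from zoo_sample.cfg):

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/bigdata/zookeeper-3.4.9/data
clientPort=2181
```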

The startup log output is in zookeeper.out

 

Two: flume

Download and extract to /opt/bigdata/flume-1.6.0

Create configuration file: cp flume-conf.properties.template flume-conf.properties

Revise:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
#a1.sources.r1.type = netcat
#a1.sources.r1.bind = localhost
#a1.sources.r1.port = 44444

# Read the file from the beginning and keep following it
a1.sources.r1.command = tail -n +1 -F /opt/bigdata/flume-1.6.0/logs/flume.log

# Describe the sink (overridden by the Kafka sink settings further down,
# since the later value for the same key wins)
#a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1
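One detail worth noting in the source command: `tail -n +1` means "start from line 1 of the file" (as opposed to `-n 1`, which prints only the last line), so the whole file is replayed when the agent starts. A quick check with a throwaway file:

```shell
# Build a small sample file
printf 'event1\nevent2\nevent3\n' > /tmp/tail-demo.log
# -n +1 replays the file from the first line
tail -n +1 /tmp/tail-demo.log   # prints all three lines
# -n 1 (no plus sign) shows only the last line
tail -n 1 /tmp/tail-demo.log    # prints only: event3
```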

Start flume:

sudo bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name a1 -Dflume.root.logger=INFO,console
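Once the agent is up, anything appended to the tailed file should flow through the channel to the Kafka sink. The follow behavior of the exec source's command can be simulated locally without Flume (throwaway /tmp paths, plain tail standing in for the agent):

```shell
log=/tmp/flume-sim.log
out=/tmp/flume-sim.out
: > "$log"
# -F keeps following the file, even across truncation/rotation
tail -n +1 -F "$log" > "$out" 2>/dev/null &
tailpid=$!
echo "hello flume" >> "$log"   # a new "event" arrives
sleep 1                        # give tail a moment to pick it up
kill "$tailpid"
cat "$out"                     # prints: hello flume
```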

 

Three: kafka

Download and extract to /opt/bigdata/kafka_2.11-0.10.0.1

Modify the configuration file: config/server.properties

listeners=PLAINTEXT://localhost:9092

 

log.dirs=/opt/bigdata/kafka_2.11-0.10.0.1/logs
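Besides the two lines above, the settings that matter for this single-node setup are the broker id and the ZooKeeper address; their defaults already match the standalone ZooKeeper started earlier:

```properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=/opt/bigdata/kafka_2.11-0.10.0.1/logs
zookeeper.connect=localhost:2181
```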

Start the service: sudo bin/kafka-server-start.sh config/server.properties &
View topics: bin/kafka-topics.sh --list --zookeeper localhost:2181
Create the test topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
View again: sudo bin/kafka-topics.sh --list --zookeeper localhost:2181
Simulate a producer: sudo bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Start a consumer: sudo bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
