zookeeper & kafka pseudo-cluster setup experiment on a virtual machine

Environment:
     CentOS 5.5 Server, 32-bit
     VM memory 4096 MB (at first I left the default 1024 MB; with several kafka brokers running, the JVMs GC'd constantly and the applications became unusable)
     CPU 1*1
     Disk 20 GB

1. Download the software
   
     zookeeper: http://www.apache.org/dyn/closer.cgi/zookeeper/
     wget  http://apache.fayea.com/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz

     kafka:http://kafka.apache.org/downloads
     wget http://mirrors.hust.edu.cn/apache/kafka/0.10.1.1/kafka_2.11-0.10.1.1.tgz

     JDK:jdk-8u121-linux-i586.rpm
     wget http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-i586.rpm?AuthParam=1484725145_218b02b9ed050daba89d99daced369e0
     

2. Create users
    
     groupadd dev
     useradd -G dev zookeeper
     passwd zookeeper
     useradd -G dev kafka
     passwd kafka
     

3. Install the software
    
     JDK
     rpm -ivh jdk-8u121-linux-i586.rpm
     
     zookeeper
     tar -xvf zookeeper-3.4.9.tar.gz

     kafka
     tar -xvf kafka_2.11-0.10.1.1.tgz
     

4. Update the configuration
    
     zookeeper
     cd zookeeper/conf/
     mv zoo_sample.cfg zoo.cfg
     vi zoo.cfg 
     Modify the following lines:
          dataDir=/home/zookeeper/data/zookeeper/z01
          clientPort=2181 #this is a pseudo-cluster on one VM, so ports must not conflict; give each node a different clientPort
          #localhost works, but it is better to use the IP assigned to the VM. The first port is for communication between nodes, the second for leader election; since this is a pseudo-cluster, these ports must not conflict either. In server.x, x identifies the service node.
          server.1=localhost:2280:2281
          server.2=localhost:2282:2283
          server.3=localhost:2284:2285
     Copy the zookeeper directory into three different directories to form a three-node pseudo-cluster.
     In each node's configured dataDir, create a myid file containing 1, 2 or 3 respectively, identifying which node that service is.
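The two steps above (three per-node configs plus a matching myid each) can be sketched as a small script. The base path and the minimal set of zoo.cfg keys below are illustrative assumptions, not the article's exact layout; adapt them to your install.

```shell
#!/bin/sh
# Sketch: generate a zoo.cfg and myid for each of the three pseudo-cluster nodes.
# BASE is an illustrative location; the article uses /home/zookeeper/data/zookeeper.
BASE="${ZK_BASE:-./zookeeper-pseudo}"
i=1
for port in 2181 2182 2183; do
  dir="$BASE/z0$i"
  mkdir -p "$dir"
  cat > "$dir/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$dir
clientPort=$port
server.1=localhost:2280:2281
server.2=localhost:2282:2283
server.3=localhost:2284:2285
EOF
  # myid must contain the x from this node's own server.x line
  echo "$i" > "$dir/myid"
  i=$((i + 1))
done
```

Each node then gets a distinct clientPort while sharing the same server.1/2/3 membership list, which is exactly what the pseudo-cluster needs.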

     kafka
     cd kafka/config
     #make one copy per broker you need
     cp server.properties server-1.properties
     cp server.properties server-2.properties
     cp server.properties server-3.properties
     cp server.properties server-4.properties
     Edit each server-x.properties:
     broker.id=1 #1/2/3/4 in order
     listeners=PLAINTEXT://192.168.88.129:9091 #use the IP assigned to the VM; since this is a pseudo-cluster, adjust the port per broker to avoid conflicts
     log.dirs=/home/kafka/data/k01/logs #pseudo-cluster: use a separate directory per broker to avoid conflicts
     zookeeper.connect=localhost:2181 #zookeeper address and port, matching the zookeeper configuration above (any one of the three nodes will do). Since zookeeper and kafka share the same VM in this pseudo-cluster, localhost works, but the VM's assigned IP is recommended.
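The per-broker edits above can be automated with sed. This is a minimal sketch: the config directory is an assumption, and the stand-in server.properties template below exists only so the script is self-contained (the real kafka file has many more keys); the IP 192.168.88.129 follows this article.

```shell
#!/bin/sh
# Sketch: derive server-1..4.properties from a common template.
CONF="${KAFKA_CONF:-./kafka-conf}"
mkdir -p "$CONF"
# Stand-in for kafka's real server.properties, created only if missing.
[ -f "$CONF/server.properties" ] || cat > "$CONF/server.properties" <<'EOF'
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
EOF
for n in 1 2 3 4; do
  sed -e "s/^broker\.id=.*/broker.id=$n/" \
      -e "s|^listeners=.*|listeners=PLAINTEXT://192.168.88.129:909$n|" \
      -e "s|^log\.dirs=.*|log.dirs=/home/kafka/data/k0$n/logs|" \
      "$CONF/server.properties" > "$CONF/server-$n.properties"
done
```

Broker n then listens on port 909n and logs under its own k0n directory, so none of the four brokers conflict on the single VM.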
     

5. Console verification on the server
     #start zookeeper: enter the bin directory of each of the three nodes
          cd /home/zookeeper/app/zookeeper/z01/bin
          ./zkServer.sh start
          cd /home/zookeeper/app/zookeeper/z02/bin
          ./zkServer.sh start
          cd /home/zookeeper/app/zookeeper/z03/bin
          ./zkServer.sh start

     #start kafka: one broker per properties file (paths assume the layout above)
          cd /home/kafka/app/kafka/bin
          ./kafka-server-start.sh -daemon ../config/server-1.properties
          ./kafka-server-start.sh -daemon ../config/server-2.properties
          ./kafka-server-start.sh -daemon ../config/server-3.properties
          ./kafka-server-start.sh -daemon ../config/server-4.properties

#create a topic with 5 partitions, each replicated to 3 brokers (the replication factor cannot exceed the total number of brokers); run from kafka/bin, any of the three zookeeper addresses will do
./kafka-topics.sh --create --zookeeper 192.168.88.129:2181 --replication-factor 3 --partitions 5 --topic test

#describe the topic; run from kafka/bin, any zookeeper address will do
./kafka-topics.sh --describe --zookeeper 192.168.88.129:2181 --topic test

#list topics; any zookeeper address will do
./kafka-topics.sh --list --zookeeper 192.168.88.129:2181

#producer; any of the kafka broker addresses will do
./kafka-console-producer.sh --broker-list 192.168.88.129:9091 --topic test

#consumer; any kafka broker address will do. bootstrap-server means this address is only used to discover all the brokers, not that the consumer is limited to this broker
./kafka-console-consumer.sh --bootstrap-server 192.168.88.129:9091 --topic test --from-beginning

6. Java client access
     pom.xml (remember to add logback/log4j; logs are very helpful for spotting problems)
     
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.study</groupId>
    <artifactId>kafka-client</artifactId>
    <version>0.1.0</version>
   
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.1.0</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.1.8</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.1.8</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.22</version>
        </dependency>
    </dependencies>
              
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Producer code example
    
import java.util.Properties;
import org.apache.kafka.clients.producer.*;

public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "192.168.88.129:9092");//initial address, used to discover the other brokers
              props.put("acks", "all");//a message is committed only after all in-sync followers have acknowledged it
              props.put("retries", 0);//no retries
              props.put("batch.size", 16384);
              props.put("linger.ms", 100);
              props.put("buffer.memory", 33554432);
              props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              
              String topic = "test";
              Producer<String, String> producer = new KafkaProducer<String,String>(props);
              for(int i = 0; i < 10; i++){
                     ProducerRecord<String, String> rec = new ProducerRecord<String, String>(topic, "Key88&:"+i, "Value88&:"+i);//the key is used to compute the partition, by default from a hash of the key
                     System.out.println(rec);
                     producer.send(rec, new Callback(){
                           @Override
                           public void onCompletion(RecordMetadata metadata, Exception exception) {
                                  if(exception != null){
                                         exception.printStackTrace();
                       }
                        System.out.println("Sent to server -----Offset: " + metadata.offset() + "-----Topic:" + metadata.topic() + "-----partition:" + metadata.partition());
                           }
                           
                     });
              }
              producer.close();
       }

Consumer code example

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "192.168.88.129:9091");//initial address, used to discover the other brokers
              props.put("group.id", "consumer-for-test");//consumer group
              props.put("enable.auto.commit", "true");//commit offsets automatically
              props.put("auto.commit.interval.ms", "1000");
              props.put("session.timeout.ms", "10000");//if you set session.timeout.ms, it must be comfortably larger than the heartbeat interval (heartbeat.interval.ms, default 3000); otherwise leave it at the default
              props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
              props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
              KafkaConsumer<String, String> consumer = new KafkaConsumer<String,String>(props);
              consumer.subscribe(Arrays.asList("test"));//topics to subscribe to
              while (true) {
                     ConsumerRecords<String, String> records = consumer.poll(100);
                     for (ConsumerRecord<String, String> record : records)
                           System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
              }
       }

During the experiment I killed some of the kafka brokers. Whenever a partition was left without a leader, both the producer and the consumer stopped responding; after manually restarting brokers so that every partition had a leader again, both resumed working normally.

Remember to open the relevant ports in the firewall.
#open firewall ports
/sbin/iptables -I INPUT -p tcp --dport 2181 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 2182 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 2183 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 9091 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 9092 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 9093 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 9094 -j ACCEPT

/etc/rc.d/init.d/iptables save
/etc/init.d/iptables restart
#check firewall status
/etc/init.d/iptables status

For other Producer and Consumer examples, see:
          http://kafka.apache.org/0101/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html
          http://kafka.apache.org/0101/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html

Reposted from yangbb.iteye.com/blog/2354186