Kafka and ZooKeeper: download links, installation steps, and consuming from Java

kafka download address: http://archive.apache.org/dist/kafka/3.0.1/kafka_2.13-3.0.1.tgz

zookeeper download address: http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/

installation method:

tar -xf zookeeper-3.4.6.tar.gz -C /usr/src   # extract to a directory of your choice (adjust the version number to match the archive you downloaded)

Configure environment variables

vim /etc/profile   

Add the following to the bottom of the file (ZOOKEEPER_HOME must be the full path of the extracted directory; the REDIS_HOME, TWEMPROXY_HOME, and JAVA_HOME entries in PATH come from the author's existing environment):

export ZOOKEEPER_HOME=/usr/src/zookeeper-3.4.6
export PATH=$PATH:$REDIS_HOME/bin:$TWEMPROXY_HOME/sbin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin

source /etc/profile   # reload the file so the new environment variables take effect

cd /usr/src/zookeeper-3.4.6

cd conf

cp zoo_sample.cfg zoo.cfg   # make a copy of zoo.cfg; ZooKeeper loads zoo.cfg by default at startup

Then configure the data storage location in zoo.cfg.

The meaning of each setting in the file:

1. tickTime: the heartbeat interval, and the base time unit used for all other time calculations; default is 2000 milliseconds.

2. initLimit: used in clusters; the maximum time the leader waits when a follower first connects and syncs with it. If the handshake does not complete in time, the follower is abandoned by the leader.
Default is 10, expressed as a multiple of tickTime: 10 * 2000 ms = 20 s.

3. syncLimit: the leader/follower synchronization limit (for clusters); the maximum length of time, in multiples of tickTime, allowed for request/response messages (the heartbeat mechanism) between the leader and a follower.
If a follower still cannot respond within this window, it is dropped.
Default is 5, again expressed as a multiple of tickTime: 5 * 2000 ms = 10 s.

4. dataDir: the persistence directory; it must be configured, and should not be placed under a temporary directory such as /tmp.
For example: /usr/local/zookeeper/dataDir

5. dataLogDir: the transaction log directory.
For example: /usr/local/zookeeper/dataLogDir

6. clientPort: the port clients connect on; default is 2181.
In a pseudo-distributed setup, each of the three server instances we build uses a different port (e.g. assigned sequentially).

7. server.A=B:C:D (cluster parameter; not used in this walkthrough, so the cluster setup is not covered further)

A: a number indicating which server this is;

B: the server's IP address (in a pseudo-cluster, the whole cluster runs on one machine, like installing several ZooKeeper instances on one computer: the IPs are identical and only the ports differ);

C: the port used for leader/follower data exchange;

D: the leader-election port. If the cluster's leader goes down, this port is used by the servers to communicate with each other while electing a new leader. In a pseudo-cluster configuration, B is the same for every instance, so the different ZooKeeper instances cannot share a communication port and must each be given different port numbers.
For example: server.2=192.168.31.215:2888:3888
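
Putting the settings above together, a minimal standalone zoo.cfg might look like the following sketch (the paths and port follow the examples above; adjust them to your installation):

```
# base time unit in ms; other timeouts are multiples of this
tickTime=2000
# leader waits up to initLimit * tickTime for a follower's initial sync
initLimit=10
# leader/follower request-response window, in tickTime units
syncLimit=5
# required persistence directory (do not use a temp directory)
dataDir=/usr/local/zookeeper/dataDir
# transaction log directory
dataLogDir=/usr/local/zookeeper/dataLogDir
# client connection port
clientPort=2181
```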

 Start: zkServer.sh start

Check status: zkServer.sh status

Consuming Kafka from Java. The main parameters are: broker IP and port, consumer group id, and topic.

<!-- Required dependencies -->
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.0.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
</dependency>
import java.time.Duration;
import java.util.ArrayList;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerDemo {

    public static void readMsg() {
        // pass the configuration via Properties
        Properties prop = new Properties();
        // connect to the Kafka cluster (the broker's ip:port)
        prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.230.63.211:9092");
        // consumer group id
        prop.put(ConsumerConfig.GROUP_ID_CONFIG, "mrs.shgcp.cmcc.com");
        // key deserializer
        prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // value deserializer
        prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // enable automatic offset commits
        prop.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        // auto-commit interval in milliseconds
        prop.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        // where to start when the group has no committed offset
        prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // instantiate the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);
        // choose the topic and its partition
        TopicPartition tp = new TopicPartition("com.cmcc.***", 0);
        // assign() takes a Collection, so wrap the partition in a list
        ArrayList<TopicPartition> list = new ArrayList<>();
        list.add(tp);
        // either assign() or subscribe() can be used to consume a topic
        consumer.assign(list);
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            records.forEach(record ->
                    System.out.println(record.topic() + ":" + record.partition() + ":" + record.key() + ":" + record.value()));
        }
    }

    public static void main(String[] args) {
        readMsg();
    }
}
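
As a quick sanity check before running the Java code, roughly the same settings can be put into a properties file and passed to the kafka-console-consumer.sh tool that ships with the Kafka distribution. This is a sketch: the file name consumer.properties is arbitrary, and the topic name is masked in the original, so substitute your own.

```
bootstrap.servers=10.230.63.211:9092
group.id=mrs.shgcp.cmcc.com
enable.auto.commit=true
auto.commit.interval.ms=1000
auto.offset.reset=earliest
```

Then, from the Kafka installation directory:

bin/kafka-console-consumer.sh --bootstrap-server 10.230.63.211:9092 --consumer.config consumer.properties --topic <your-topic> --from-beginning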


Origin blog.csdn.net/qq_37889636/article/details/127512391