Testing load balancing across multiple Kafka consumers in the same consumer group (Spring integration)

Copyright: 不短不长八字刚好@wzy, https://blog.csdn.net/qq_38089964/article/details/83104238

This setup uses a 3-node cluster for both ZooKeeper and Kafka, so everything keeps running even if one machine dies. In a cluster environment, each Kafka broker's server.properties must be configured with the ZooKeeper cluster addresses and the other cluster settings; the most important setting here is num.partitions=3, so that each partition lives on a different machine and produced messages are spread across every broker.
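
For reference, here is a minimal sketch of the relevant server.properties entries on one of the three brokers. The num.partitions=3 setting and the three hosts come from this post; the broker id, listener address, log directory, and the ZooKeeper port 2181 are illustrative assumptions:

# server.properties on broker 1 (illustrative values)
broker.id=1
listeners=PLAINTEXT://192.168.60.131:9092
log.dirs=/tmp/kafka-logs
# default partition count for auto-created topics: one partition per broker
num.partitions=3
# ZooKeeper ensemble shared by all three brokers (port 2181 assumed)
zookeeper.connect=192.168.60.131:2181,192.168.60.132:2181,192.168.60.133:2181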

This is exactly where I hit a pitfall: the cluster was set up and the topic was created with 3 partitions and 2 replicas, yet messages were only being delivered to a single machine, and only that one broker had log files for the topic.

Admittedly, this still achieves load balancing across multiple consumers (they all just pull messages from the one machine that received them), but a key property of a cluster is fault tolerance: in this situation, once that single machine dies, nothing runs at all, let alone load balancing.
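
For reference, a sketch of creating and checking the topic with Kafka's command-line tool; it assumes the ZooKeeper ensemble listens on port 2181 on the same hosts. The --describe output shows which broker leads each partition, which is an easy way to confirm the partitions really are spread across the cluster:

# create test1 with 3 partitions and a replication factor of 2
bin/kafka-topics.sh --create --zookeeper 192.168.60.131:2181 --partitions 3 --replication-factor 2 --topic test1
# check that the three partition leaders are spread over the brokers
bin/kafka-topics.sh --describe --zookeeper 192.168.60.131:2181 --topic test1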

Central configuration file (application.xml):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd">
  <!-- Resolve ${...} placeholders (such as bootstrap.servers) from config.properties -->
  <bean
    class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <list>
        <value>classpath:config.properties</value>
      </list>
    </property>
  </bean>
  <!-- Two consumers in the same consumer group plus one producer -->
  <import resource="kafka/consumer1.xml"/>
  <import resource="kafka/consumer2.xml"/>
  <import resource="kafka/producer.xml"/>
</beans>

Consumer 1:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd">


  <!-- Consumer properties -->
  <bean id="consumerFactory1" class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
    <constructor-arg>
      <map>
        <entry key="bootstrap.servers" value="${bootstrap.servers}"/>
        <entry key="group.id" value="1"/>
        <entry key="enable.auto.commit" value="true"/>
        <entry key="auto.commit.interval.ms" value="1000"/>
        <entry key="session.timeout.ms" value="15000"/>
        <entry key="key.deserializer" value="org.apache.kafka.common.serialization.LongDeserializer"/>
        <entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
      </map>
    </constructor-arg>
  </bean>

  <!-- The class that actually consumes the messages -->
  <bean id="messageListernerConsumerService1" class="cn.wzy.JobConsumer1"/>

  <!-- Listener container configuration: topic and message listener -->
  <bean id="containerProperties1" class="org.springframework.kafka.listener.config.ContainerProperties">
    <constructor-arg value="test1"/>
    <property name="messageListener" ref="messageListernerConsumerService1"/>
  </bean>

  <bean id="messageListenerContainer1" class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
        init-method="doStart">
    <constructor-arg ref="consumerFactory1"/>
    <constructor-arg ref="containerProperties1"/>
  </bean>

</beans>

Consumer 2: exactly the same configuration, only the message-handling class differs:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd">


  <!-- Consumer properties -->
  <bean id="consumerFactory2" class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
    <constructor-arg>
      <map>
        <entry key="bootstrap.servers" value="${bootstrap.servers}"/>
        <entry key="group.id" value="1"/>
        <entry key="enable.auto.commit" value="true"/>
        <entry key="auto.commit.interval.ms" value="1000"/>
        <entry key="session.timeout.ms" value="15000"/>
        <entry key="key.deserializer" value="org.apache.kafka.common.serialization.LongDeserializer"/>
        <entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
      </map>
    </constructor-arg>
  </bean>

  <!-- The class that actually consumes the messages -->
  <bean id="messageListernerConsumerService2" class="cn.wzy.JobConsumer2"/>

  <!-- Listener container configuration: topic and message listener -->
  <bean id="containerProperties2" class="org.springframework.kafka.listener.config.ContainerProperties">
    <constructor-arg value="test1"/>
    <property name="messageListener" ref="messageListernerConsumerService2"/>
  </bean>

  <bean id="messageListenerContainer2" class="org.springframework.kafka.listener.KafkaMessageListenerContainer"
        init-method="doStart">
    <constructor-arg ref="consumerFactory2"/>
    <constructor-arg ref="containerProperties2"/>
  </bean>

</beans>

The handler classes simply print each message they receive:

package cn.wzy;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

public class JobConsumer1 implements MessageListener<Long, String> {

  @Override
  public void onMessage(ConsumerRecord<Long, String> record) {
    System.out.println("==JobConsumer1 received:" + record.key() + " : " + record.value());
  }
}
package cn.wzy;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

public class JobConsumer2 implements MessageListener<Long, String> {

  @Override
  public void onMessage(ConsumerRecord<Long, String> record) {
    System.out.println("==JobConsumer2 received:" + record.key() + " : " + record.value());
  }
}

Producer:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- Producer properties -->
  <bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
    <constructor-arg>
      <map>
        <entry key="bootstrap.servers" value="${bootstrap.servers}"/>
        <entry key="group.id" value="0"/>
        <entry key="retries" value="10"/>
        <entry key="batch.size" value="16384"/>
        <entry key="linger.ms" value="1"/>
        <entry key="buffer.memory" value="33554432"/>
        <entry key="key.serializer" value="org.apache.kafka.common.serialization.LongSerializer"/>
        <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
      </map>
    </constructor-arg>
  </bean>
  <!-- Listener invoked with the result of each send -->
  <bean id="listener" class="cn.wzy.Listener"/>
  <!-- KafkaTemplate bean: inject it and call its send methods to publish messages -->
  <bean id="KafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
    <constructor-arg ref="producerFactory"/>
    <constructor-arg name="autoFlush" value="true"/>
    <property name="defaultTopic" value="test1"/>
    <property name="producerListener" ref="listener"/>
  </bean>

</beans>
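
The cn.wzy.Listener bean referenced above is not shown in the original post. Below is a minimal sketch, assuming spring-kafka 1.x, where extending ProducerListenerAdapter lets you override just the callbacks you need (isInterestedInSuccess must return true for the success callback to fire):

package cn.wzy;

import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.kafka.support.ProducerListenerAdapter;

// Hypothetical sketch of the producer listener wired into producer.xml;
// the original post does not include this class.
public class Listener extends ProducerListenerAdapter<Long, String> {

  @Override
  public boolean isInterestedInSuccess() {
    // ask KafkaTemplate to deliver success callbacks as well as errors
    return true;
  }

  @Override
  public void onSuccess(String topic, Integer partition, Long key, String value,
      RecordMetadata recordMetadata) {
    System.out.println("==sent to " + topic + "-" + recordMetadata.partition()
        + " key=" + key + " value=" + value);
  }

  @Override
  public void onError(String topic, Integer partition, Long key, String value,
      Exception exception) {
    System.err.println("==send failed, key=" + key + " : " + exception.getMessage());
  }
}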

config.properties:

bootstrap.servers=192.168.60.131:9092,192.168.60.132:9092,192.168.60.133:9092

Test class:

package cn.wzy;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import java.util.Random;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:application.xml")
public class SendTest {


  @Autowired
  private KafkaTemplate<Long, String> kafkaTemplate;

  @Test
  public void test() throws InterruptedException {
    // send 49 keyed records to the default topic (test1)
    for (long i = 1; i < 50; i++) {
      kafkaTemplate.sendDefault(i, "message : hello world");
    }
    // give the listener containers time to consume before the Spring context shuts down
    Thread.sleep(5000);
  }
}

Result: both JobConsumer1 and JobConsumer2 print the messages they receive; the 49 records are split between the two consumers according to the partitions each one is assigned within group 1.
