[Kafka] A hands-on producer and consumer system from a real project

In real projects you will find that, whether the architecture is microservices or DDD, the design principles aim at the same goal of low coupling and high cohesion, and whether the broker is RabbitMQ or Kafka, a message queue is always used to decouple the systems. In my getting-started series I already introduced their background and usage models in detail; since a recent project happens to use both RabbitMQ and Kafka, I simply built a small producer/consumer model around them.

Producer side

On the producer side, the producer keeps producing messages and sending them to the Kafka cluster, routed by topic and partition.
The outer wrapper method we use when sending a Kafka message is shown below. Under the hood, three things have to be supplied: the Kafka topic, a tenantId used to compute the partition, and the message to be sent.

    public static bool SendKafkaExportData(
        string appName,
        int tenantId,
        int userId,
        string metaObjName,
        string viewName,
        string exportFileName,
        SearchCondition condition,
        string version = null,
        int total = -1,
        ExportFileType fileType = ExportFileType.Xlsx,
        string applicationContext = null,
        string msgTemplate = null)
    {
        // Validate the required arguments before building the message
        Common.HelperObjects.ArgumentHelper.AssertNotEmpty(appName, nameof(appName));
        Common.HelperObjects.ArgumentHelper.AssertNotEmpty(metaObjName, nameof(metaObjName));
        Common.HelperObjects.ArgumentHelper.AssertNotEmpty(viewName, nameof(viewName));
        Common.HelperObjects.ArgumentHelper.AssertNotEmpty(exportFileName, nameof(exportFileName));
        Common.HelperObjects.ArgumentHelper.AssertPositive(tenantId, nameof(tenantId));
        Common.HelperObjects.ArgumentHelper.AssertPositive(userId, nameof(userId));
        Common.HelperObjects.ArgumentHelper.AssertNotNull<SearchCondition>(condition, nameof(condition));
        bool flag = true;
        try
        {
            // Build the export request model and send it to the "TMLSent" topic, partitioned by tenantId
            ExportRequestDataModel exportRequestData = ExportRequestDataModel.GetExportRequestData(appName, tenantId, userId, metaObjName, viewName, exportFileName, condition, version, total, fileType, applicationContext, msgTemplate);
            long num = KafkaProducer.Send<ExportRequestDataModel>("TMLSent", tenantId, exportRequestData);
            ExportRequestDataModel.logger.Debug(string.Format("{0}-{1}-{2} sent Kafka message {3} successfully", appName, tenantId, userId, num));
        }
        catch (Exception ex)
        {
            ExportRequestDataModel.logger.Error(string.Format("{0}-{1}-{2} failed to send Kafka message", appName, tenantId, userId), ex);
            flag = false;
        }
        return flag;
    }
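
A call site then looks roughly like this; the argument values below are invented purely for illustration:

    // Hypothetical caller; all values are made up for illustration.
    SearchCondition condition = new SearchCondition();  // built however the caller normally builds it

    bool ok = SendKafkaExportData(
        appName: "CRM",
        tenantId: 1001,
        userId: 42,
        metaObjName: "customer",
        viewName: "customer_default_view",
        exportFileName: "customers-201911",
        condition: condition);

    // The method swallows and logs exceptions, so the caller only branches on success/failure.
    if (!ok)
    {
        // e.g. surface "export request failed" to the user
    }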

The core call inside it is long num = KafkaProducer.Send<ExportRequestDataModel>("TMLSent", tenantId, exportRequestData);. Its implementation is as follows: it serializes the message into a binary array before handing it to Kafka:

    /// <summary>Send a message to a topic.</summary>
    /// <param name="topic">The name of the topic to send the message to.</param>
    /// <param name="tenant">The id of the tenant the message belongs to.</param>
    /// <param name="value">The message content.</param>
    /// <returns>The offset of the message.</returns>
    public static long Send<T>(string topic, int tenant, T value) where T : IBinarySerializable
    {
        ArgumentHelper.AssertNotEmpty(topic, nameof(topic));
        ArgumentHelper.AssertPositive(tenant, nameof(tenant));
        // Serialize the payload to a big-endian byte array, then delegate to the raw overload
        return KafkaProducer.Send(topic, tenant, (object)value == null ? null : BigEndianEncoder.Encode<T>(value));
    }
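
For reference, the consumer side we will see later decodes with the matching BigEndianEncoder.Decode<T> call, so the serialization round trip is simply the following (a sketch, assuming an exportRequestData instance as above):

    // Sketch of the encode/decode pair used by producer and consumer in this post
    byte[] payload = BigEndianEncoder.Encode<ExportRequestDataModel>(exportRequestData);
    ExportRequestDataModel decoded = BigEndianEncoder.Decode<ExportRequestDataModel>(payload);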

The raw transmission mechanism is as follows: given the target topic, the tenantId used to compute the partition, and the serialized binary message, it sends directly, retrying once on transient connection failures and on partition-leader changes:

    /// <summary>Send a message to a topic.</summary>
    /// <param name="topic">The name of the topic to send the message to.</param>
    /// <param name="tenant">The id of the tenant the message belongs to.</param>
    /// <param name="value">The message content.</param>
    /// <returns>The offset of the message.</returns>
    public static long Send(string topic, int tenant, byte[] value)
    {
        ArgumentHelper.AssertNotEmpty(topic, nameof(topic));
        ArgumentHelper.AssertPositive(tenant, nameof(tenant));
        try
        {
            return KafkaProtocol.Produce(topic, tenant, value);
        }
        catch (ConnectionPoolException)
        {
            // The pooled connection went bad; retry once on a fresh connection
            return KafkaProtocol.Produce(topic, tenant, value);
        }
        catch (KafkaException ex)
        {
            // The partition leader moved; retry once so the refreshed metadata is used
            if (ex.Error == ErrorCode.NotLeaderForPartition || ex.Error == ErrorCode.LeaderNotAvailable)
                return KafkaProtocol.Produce(topic, tenant, value);
            throw;
        }
    }

The core produce method at the bottom of the stack is:

    public static long Produce(string topic, int tenant, byte[] value)
    {
        TopicConfig topicConfig = BaseConfig<KafkaMapping>.Instance.GetTopicConfig(topic);
        int num = tenant % KafkaProtocol.GetTopicPartitionCount(topic);      // compute the partition from the tenant id
        int partitionLeader = KafkaProtocol.GetPartitionLeader(topic, num);  // look up the leader broker of that partition
        try
        {
            // Open a session to the partition leader
            using (KafkaSession kafkaSession = new KafkaSession(topicConfig.Cluster, partitionLeader))
            {
                Message message = new Message(value, TimeUtil.CurrentTimestamp);
                // Address the message set to the target topic and partition
                ProduceRequest request = new ProduceRequest(new Dictionary<TopicAndPartition, MessageSet>()
                {
                    {
                        new TopicAndPartition(topic, num),
                        new MessageSet(topicConfig.Codecs, new List<Message>() { message })
                    }
                });
                // Send the request, check the response for errors and return the new offset
                ProduceResponse produceResponse = kafkaSession.Issue<ProduceRequest, ProduceResponse>(request);
                KafkaProtocol.CheckErrorCode(produceResponse.Error, topic, new int?(num), new int?(tenant));
                return produceResponse.Offset;
            }
        }
        catch (Exception)
        {
            // On any failure, refresh the cached partition metadata before rethrowing
            KafkaProtocol.RefreshPartitionMetadata(topic);
            throw;
        }
    }

This way the message we need to deliver is sent to the given topic and to the corresponding partition. Different partitions may be stored on different brokers, and since the partition is derived as tenantId % partitionCount, data belonging to the same tenant always lands in the same partition; we never have to write any message routing of our own.
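
As a quick illustration of that routing rule, here is a minimal sketch; the partition count of 8 is made up, the real one comes from the topic metadata via GetTopicPartitionCount:

    using System;

    class PartitionRoutingDemo
    {
        static void Main()
        {
            const int partitionCount = 8;  // hypothetical; really fetched per topic

            // The mapping is deterministic, so the same tenant always hits the same
            // partition, which keeps one tenant's messages in order.
            foreach (int tenantId in new[] { 1, 2, 9, 10 })
                Console.WriteLine($"tenant {tenantId} -> partition {tenantId % partitionCount}");
            // Output: tenants 1 and 9 share partition 1; tenants 2 and 10 share partition 2.
        }
    }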

Consumer side

On the consumer side, a consumer group consumes the partitions. One thing to note: within a group, the number of consumers should not exceed the number of partitions, because inside a group each partition can be bound to at most one consumer. A consumer may consume several partitions, but a partition is consumed by only one consumer of the group (which guarantees that the messages of one partition are never competed over inside the group). Consequently, if a group has more consumers than partitions, the surplus consumers will not receive any messages at all; the sketch below illustrates this rule.
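
A minimal sketch of that assignment rule, with a made-up partition count and group membership (the real assignment is performed by the group coordinator in the Kafka client, not by application code):

    using System;
    using System.Collections.Generic;

    class GroupAssignmentDemo
    {
        static void Main()
        {
            int partitionCount = 4;                                // hypothetical
            string[] consumers = { "c0", "c1", "c2", "c3", "c4" }; // one consumer more than partitions

            // Each partition is bound to exactly one consumer of the group.
            var assignment = new Dictionary<string, List<int>>();
            foreach (string c in consumers)
                assignment[c] = new List<int>();
            for (int p = 0; p < partitionCount; p++)
                assignment[consumers[p % consumers.Length]].Add(p);

            foreach (var kv in assignment)
                Console.WriteLine($"{kv.Key}: {(kv.Value.Count > 0 ? string.Join(",", kv.Value) : "idle, receives nothing")}");
        }
    }
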
On the consumer side the machine needs to warm up and start the message consumption service, and of course there must also be a way to shut that service down again. Opening the service means starting message reception and starting the message-processing threads; closing it, by the same logic, means stopping reception and shutting the processing threads down.

    /// <summary>
    /// Service that receives the export messages
    /// </summary>
    public class ReceiveMsgProvider : IReceiveMsgProvider
    {
        #region Logging, constructor and singleton

        protected static readonly LogWrapper Logger = new LogWrapper();

        private ReceiveMsgProvider()
        {
        }

        public static ReceiveMsgProvider Instance { get; } = new ReceiveMsgProvider();

        #endregion Logging, constructor and singleton

        #region Start the message receiving service

        public bool _ActivateService()
        {
            // Warm up
            Cloud.Plugins.Helper.ESBProxy.WarmUp();

            // Start the message receiving service
            StartMessageService();

            // Start processing the messages in the ExportQueue queues
            ExportConsumer.Instance.BeginImportData();

            Logger.Debug("_ActivateService was called.");

            return true;
        }

        protected void StartMessageService()
        {
            try
            {
                // Start consuming messages
                ExportConsumer.Instance.Start();
            }
            catch (Exception ex)
            {
                Logger.Error(ex);
            }
        }

        #endregion Start the message receiving service

        #region Stop the message receiving service

        public bool _UnActivateService()
        {
            // Stop the message receiving service
            StopMessageService();

            // Stop the threads that process the queues
            ExportConsumer.CloseQueueThreads();

            Logger.Debug("_UnActivateService was called.");
            return true;
        }

        protected void StopMessageService()
        {
            try
            {
                // Stop consuming messages
                ExportConsumer.Instance.Stop();
            }
            catch (Exception ex)
            {
                Logger.Error(ex);
            }
        }

        #endregion Stop the message receiving service
    }
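
In the host process these two entry points are then called at start-up and shutdown, roughly like this (a sketch of hypothetical hosting code, not shown in the original project):

    // Hypothetical host wiring, e.g. a Windows service or ESB plugin lifecycle
    public class ExportServiceHost
    {
        public void OnStart()
        {
            // Warms up, starts the Kafka group consumer and the queue worker threads
            ReceiveMsgProvider.Instance._ActivateService();
        }

        public void OnStop()
        {
            // Stops consuming and shuts the queue worker threads down
            ReceiveMsgProvider.Instance._UnActivateService();
        }
    }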

The core start and stop methods of the message consumption service look like this:


    /// <summary>
    /// ESB service entry point: start
    /// </summary>
    public void Start()
    {
        _logging.Debug("ESB service entry point: start");
        // OnMessage is the callback that carries the actual consumption logic
        _consumer = new KafkaGroupConsumer(ExportKafkaConst.ExportKafkaConsumerGroup, ExportKafkaConst.ExportKafkaTopic, OnMessage);
        _consumer.Start();
    }

    /// <summary>
    /// ESB service entry point: stop
    /// </summary>
    public void Stop()
    {
        _logging.Debug("ESB service entry point: stop");
        if (_consumer != null && _consumer.IsRunning)
        {
            _consumer.Stop();
        }
    }

Of course, once this production and consumption pipeline is set up, the most important part is actually receiving the messages and consuming them:

    /// <summary>
    /// Receive an export message and put it into the buffer queue
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    public bool OnMessage(Message context)
    {
        logger.Debug(string.Format("Received message: {0}", Newtonsoft.Json.JsonConvert.SerializeObject(context.Value)));
        ExportRequestDataModel data = null;
        try
        {
            // Decode the message body
            data = BigEndianEncoder.Decode<ExportRequestDataModel>(context.Value);
            if (data == null || data.MsgType != ExportMessageType.Export)
            {
                // Nothing to process
                return true;
            }
            else
            {
                logger.Debug(string.Format("Successfully decoded data: {0}", Newtonsoft.Json.JsonConvert.SerializeObject(data)));
            }

            // Enqueue the request into the export buffer queue, keyed by tenant
            bool enQueueResult = ExportQueue.En_Queue(data, ApplicationContext.Current.TenantId);

            if (!enQueueResult)
            {
                logger.Error("Export data En_Queue failed: userId:" + ApplicationContext.Current.UserId + " tenantId:" + ApplicationContext.Current.TenantId);
            }
        }
        catch (Exception ex)
        {
            var contextInfo = JsonConvert.SerializeObject(context);
            logger.Error($"Failed to write export message to the queue. Received message: {contextInfo}, exception: " + ex);
        }
        return true;
    }

After the messages have been enqueued one by one, the next step is of course to start a number of threads to consume them. (Note that OnMessage always returns true, even when processing fails: a bad message is logged and skipped rather than blocking the partition forever.)

    /// <summary>
    /// Start working through the data in the ExportQueue queues
    /// </summary>
    public void BeginImportData()
    {
        // Initialize the thread list
        ExportQueue.InstanceExportThreadsList();
        int count = ExportQueue.ExportThreadsList.Count;

        for (int i = 0; i < count; i++)
        {
            logger.Debug("Starting thread th_" + i);
            ExportQueue.ExportThreadsList[i] = new System.Threading.Thread(new System.Threading.ParameterizedThreadStart(DealExportDataInQueue));
            ExportQueue.ExportThreadsList[i].Start(i);
        }
        // One extra thread handles the small-task queue
        ExportQueue.ExportSmallThread = new System.Threading.Thread(new System.Threading.ParameterizedThreadStart(DealExportDataInQueue));
        ExportQueue.ExportSmallThread.Start(ExportQueue.SmallIndex);
    }
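
The post does not show DealExportDataInQueue itself. Conceptually, each worker thread runs a dequeue-and-process loop over the queue whose index it was started with; a rough sketch under that assumption is below. The shutdown flag, the dequeue helper and ProcessExport are illustrative names, not the project's real API.

    // Illustrative sketch only; ExportQueue's real members may differ.
    private void DealExportDataInQueue(object state)
    {
        int queueIndex = (int)state;  // the index passed via Thread.Start(i)

        while (!ExportQueue.IsShuttingDown)  // hypothetical flag set by CloseQueueThreads()
        {
            ExportRequestDataModel data;
            if (ExportQueue.TryDequeue(queueIndex, out data))  // hypothetical dequeue helper
            {
                try
                {
                    ProcessExport(data);  // hypothetical: run the actual export for this request
                }
                catch (Exception ex)
                {
                    logger.Error("Export processing failed", ex);
                }
            }
            else
            {
                System.Threading.Thread.Sleep(100);  // queue empty; back off briefly
            }
        }
    }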

To be honest, I still only have a superficial understanding of some of the implementation details of this producer and consumer system built on Kafka, but the overall flow, from production through transmission to consumption, now runs end to end; the higher-order parts I expect to understand after more in-depth study.
