RabbitMQ Introduction and Usage (2): Work Queues

Work queues

This is Part 2.

In this part, we create a work queue that distributes time-consuming tasks among multiple workers.

The main idea behind work queues is to avoid executing a resource-intensive task immediately and having to wait for it to complete. Instead, we package the task as a message and send it to the queue; a worker process running in the background picks the task up and eventually executes it. When you run many worker nodes, the tasks are shared between them.

This concept is especially useful in web applications, where it is impossible to handle a complex task within the short window of an HTTP request.

Here we send special strings and simulate time-consuming work with time.sleep(); the number of dots in a message represents the complexity of the task. For example, Hello.. represents a task that takes two seconds to process.

We make some modifications to the original producer code; the file is named send_work.py:

import pika
import sys
connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))  # default port 5672, which can be omitted

# Create a channel; messages are sent through this channel
channel = connection.channel()
# Declare the queue on the channel
channel.queue_declare(queue='hello')
message = ''.join(sys.argv[1:]) or "Hello World"
# A message is never sent directly to a queue; it always goes through an exchange
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body=message)  # routing_key is the queue name, body is the content to send
print("[x] Sent %r" % message)
connection.close()  # close the connection; the channel is closed with it
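
Each run of send_work.py enqueues one task, so you can run it several times with messages containing different numbers of dots (for example, python send_work.py Task.. for a two-second task). For illustration, here is a minimal sketch that enqueues a batch of such tasks on the same hello queue in one go:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

# Enqueue five tasks; the number of dots controls how long each task takes to process
for i in range(1, 6):
    message = "Task %d %s" % (i, '.' * i)
    channel.basic_publish(exchange='', routing_key='hello', body=message)
    print("[x] Sent %r" % message)

connection.close()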

We also make some modifications to the original consumer code; the file is named receive_work.py:

# receiving (the consumer)
import pika
import time
# Create a connection
connection = pika.BlockingConnection(
    pika.ConnectionParameters('localhost'))  # default port 5672, which can be omitted
# Create a channel; messages are received through this channel
channel = connection.channel()

# Bind this consumer to the queue named hello; the producer uses the same queue name.
# Why declare the hello queue again?
# If you are sure it has already been declared you can skip this, but you do not know
# which program will run first, so the queue is declared in both places.
channel.queue_declare(queue='hello')

# Callback that receives the message body
def callback(ch, method, properties, body):  # these four parameters are the standard signature
    # ch is the channel object; method and properties carry information about the message
    print("Inspect the callback arguments:", ch, method, properties)
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate work: one second per dot in the message
    print("[x] Done")

# Consume messages
channel.basic_consume(
    queue='hello',                 # which queue to receive messages from
    on_message_callback=callback,  # call callback to handle each received message
    auto_ack=True                  # with auto_ack=True, a message is lost if the worker dies
                                   # after receiving it; usually this is omitted so that
                                   # RabbitMQ redelivers unacknowledged messages to another consumer
)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()  # loop forever, listening for messages; press CTRL+C to stop

Open two consoles for the consumer and one console for the producer, then run them.

By default, RabbitMQ sends each message to the next consumer in sequence, so on average every consumer gets the same number of messages. This way of distributing messages is called round-robin (even distribution).
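For example, if you publish the five tasks Task 1 . through Task 5 ..... while two workers are running, the first worker would typically receive tasks 1, 3 and 5 and the second worker tasks 2 and 4, assuming both workers were started before the messages were published.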

Message acknowledgment (message confirmation)

Completing a task can take several seconds. You might wonder what happens if a consumer starts a long task and dies having completed it only partly. With our current code, once RabbitMQ delivers a message to the consumer, it immediately marks it for deletion. In that case, if we kill a worker node, we lose the message it was just processing, and we also lose all the messages that were dispatched to this particular worker but not yet handled.

Usually we do not want to lose any messages just because one node dies; we want those messages to be handed to another surviving node for processing.

To make sure a message is never lost, RabbitMQ supports message acknowledgments: after a consumer has received and processed a specific message, it sends back an ack to tell RabbitMQ that the message can be safely deleted.

If a consumer dies (its channel is closed, its TCP connection is lost) without sending an ack, RabbitMQ understands that the message was not fully processed and will re-queue it and deliver it to another consumer. This ensures that no message is lost, even if a consumer occasionally dies.

There are no message timeouts: RabbitMQ redelivers a message only after the consumer dies, even if processing that message takes a very long time.
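
The code above still uses auto_ack=True; as a minimal sketch of how manual acknowledgments look in pika, the consumer can leave auto_ack at its default of False and acknowledge each message explicitly after the work is done:

import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))  # simulate the time-consuming work
    print("[x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # tell RabbitMQ the message can be deleted

channel.basic_consume(queue='hello',
                      on_message_callback=callback)  # auto_ack defaults to False
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

If this worker is killed before basic_ack is sent, RabbitMQ re-queues the message and delivers it to another consumer, which is the behavior described above.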

