Connecting Java and Python with RabbitMQ

Recently I wrote a crawler project in Python. For convenience, I built a control terminal in Java and used RabbitMQ to tie the two together.

First, the Java code. Both the producer and the consumer use the singleton pattern, and the consumer starts consuming automatically when Tomcat starts.
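The original post doesn't show how that startup hook is wired. A minimal sketch, assuming a Servlet 3.0 ServletContextListener (the listener class below is my own illustration, not part of the original project), would simply touch the consumer singleton when the web app deploys:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Hypothetical bootstrap: runs when Tomcat starts the web app and forces the
// consumer singleton to be created, which opens the connection and starts
// consuming the "pythonjava" queue.
@WebListener
public class RabbitBootstrapListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        ScrapyRabbitCon.getRabbit(); // creating the singleton starts consumption
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // cleanup would go here if the singletons exposed their connections
    }
}

With that in place, here are the two classes. Not much more to say; on to the code: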

import java.io.IOException;
import java.util.concurrent.TimeoutException;
import javax.swing.JOptionPane;
import com.rabbitmq.client.*;

// Consumer: receives messages from the Python side on the "pythonjava" queue
public class ScrapyRabbitCon {

    // Queue name
    private final static String QUEUE_NAME = "pythonjava";
    private static ScrapyRabbitCon rabbitmq;

    public static ScrapyRabbitCon getRabbit() {
        if (rabbitmq == null) {
            try {
                rabbitmq = new ScrapyRabbitCon();
            } catch (IOException | TimeoutException e) {
                e.printStackTrace();
            }
        }
        return rabbitmq;
    }

    private ScrapyRabbitCon() throws IOException, TimeoutException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setPort(5672);
        // factory.setConnectionTimeout(2);
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                String message = new String(body, "UTF-8");
                // Show the received message in a Swing popup
                JOptionPane.showMessageDialog(null, message, "ERROR", JOptionPane.ERROR_MESSAGE);
                System.out.println(message);
            }
        };
        channel.basicConsume(QUEUE_NAME, true, consumer); // auto-ack
    }
}

import java.io.IOException;
import java.util.concurrent.TimeoutException;
import com.rabbitmq.client.*;
// JSONObject comes from whichever JSON library the project uses
// (the original post doesn't say; org.json and fastjson both fit this usage).

// Producer: sends JSON commands to the Python side on the "javapython" queue
public class ScrapyRabbitPro {

    // Queue name
    private final static String QUEUE_NAME = "javapython";
    private Channel channel;
    private static ScrapyRabbitPro sendRabbit;

    public static ScrapyRabbitPro getSendRabbit() {
        if (sendRabbit == null) {
            try {
                sendRabbit = new ScrapyRabbitPro();
            } catch (IOException | TimeoutException e) {
                e.printStackTrace();
            }
        }
        return sendRabbit;
    }

    private ScrapyRabbitPro() throws IOException, TimeoutException {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setPort(5672);
        // factory.setConnectionTimeout(2);
        Connection connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
    }

    public void send(JSONObject message) {
        try {
            channel.basicPublish("", QUEUE_NAME, null, message.toString().getBytes("UTF-8"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Producer sent '" + message + "'");
    }
}

The above is the Java side of the RabbitMQ integration. The producer wraps everything behind a send method; calling it publishes the given JSON message, which the consumer on the Python side then picks up.
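As a usage sketch: the field names "taskid" and "method" are exactly what the Python callback below reads, while the task id value and the JSONObject flavor (I'm assuming org.json-style put calls, since the post never names its JSON library) are illustrative:

// Hypothetical call site somewhere in the Java control end.
JSONObject cmd = new JSONObject();
cmd.put("taskid", "task-001");  // illustrative task id
cmd.put("method", "start");     // "start" or "stop", as handled by the Python callback
ScrapyRabbitPro.getSendRabbit().send(cmd);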

import json
import threading

import pika

# get_taskinfo, get_taskstatus, writeconf, go and ki are crawler helpers
# defined elsewhere in the project.


def callback(ch, method, properties, body):  # callback that receives messages from the Java producer
    global TASKINFO, TASKSTATUS
    body = body.decode('utf-8')
    js = json.loads(body)
    taskid = js.get("taskid")
    TASKINFO = get_taskinfo(taskid)
    TASKSTATUS = get_taskstatus(taskid)
    mq = get_or_save_mq("pythonjava")  # producer pointing back at the Java side
    if js.get("method") == 'start':
        writeconf(taskid)
        t1 = threading.Thread(target=go, args=(mq,))  # run the crawler in a background thread
        t1.start()
    if js.get("method") == 'stop':
        t2 = threading.Thread(target=ki, args=(mq,))  # kill the running crawler
        t2.start()


credentials = pika.PlainCredentials('guest', 'guest')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='javapython')

# pika 0.x signature; with pika >= 1.0 this becomes
# channel.basic_consume(queue='javapython', on_message_callback=callback, auto_ack=True)
channel.basic_consume(callback,
                      queue='javapython',
                      no_ack=True)
print('[Consumer] waiting for msg.')
channel.start_consuming()  # loop, waiting for messages

The above is the Python consumer. For now it lives in a standalone script that serves as the entry point of the whole crawler: it listens for commands from the Java control end and steers the crawler accordingly. The start and stop handlers run in separate threads so the callback returns quickly and pika's blocking I/O loop is not held up.

MQ_DICT = {}  # cache of producers, keyed by queue name


def get_or_save_mq(queue_name):
    # Return the cached producer for this queue, creating it on first use
    mq = MQ_DICT.get(queue_name)
    if mq:
        return mq
    else:
        mq = InitMq(queue_name)
        MQ_DICT[queue_name] = mq
        return mq


class InitMq:
    def __init__(self, uuid):
        queue = uuid
        print("*********** initializing MQ driver *************")
        credentials = pika.PlainCredentials('guest', 'guest')
        connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', 5672, '/', credentials))
        self.channel = connection.channel()
        self.channel.queue_declare(queue=queue)
        self.routing_key = queue

    def send_data(self, body):
        # Publish to the default exchange; the routing key is the queue name
        self.channel.basic_publish(exchange='', routing_key=self.routing_key, body=body.encode('utf-8'))

The above is the Python producer. It sends the error and status messages generated by the crawler back to the Java control end, where the consumer shown earlier pops them up in a Swing dialog.

Origin: blog.csdn.net/mrliqifeng/article/details/86772325