A brief introduction to the Disruptor and its application

Foreword

Work has been busy recently. In our project I noticed that many people had each implemented their own task-processing mechanism for their data, which feels a bit chaotic and also makes ongoing maintenance harder for others. So I thought of unifying all of that logic under a single data-processing pattern: the producer / buffer queue / consumer model.

The following demonstrates basic usage of the Disruptor. To use it, add the following dependency:

<dependency>
  <groupId>com.lmax</groupId>
  <artifactId>disruptor</artifactId>
  <version>3.4.2</version>
</dependency>

Terminology

  • Ring Buffer

    The ring buffer. Prior to version 3.0 it was considered the key member of the Disruptor. From 3.0 onwards, the RingBuffer is only responsible for storing and updating the event data that moves through the Disruptor. In some advanced use cases it can be completely replaced by a user's custom implementation.

  • Sequence

    The Disruptor uses Sequences as a means to identify where a particular component has got to. Each consumer (EventProcessor) maintains a Sequence, as does the Disruptor itself. Most of the concurrent code relies on the movement of these Sequence values, so a Sequence supports many of the features of an AtomicLong. In fact, the only real difference between the two is that a Sequence contains extra functionality to prevent false sharing between Sequences and other values.

  • Sequencer

    The Sequencer is the real core of the Disruptor. The two implementations of this interface (single producer and multi producer) implement all of the concurrent algorithms for fast, correct passing of data between producers and consumers.

  • Sequence Barrier

    A SequenceBarrier is produced by the Sequencer and contains references to the Sequencer's published Sequence and the Sequences of any dependent consumers. It contains the logic that determines whether there are any events available for a consumer to process.

  • Wait Strategy

    The WaitStrategy determines how a consumer waits for events published by producers (in the Disruptor, messages are carried as Events).

  • Event

    The unit of data passed from producer to consumer. There is no single code representation of an Event; it is defined entirely by the user.

  • Event Processor

    An EventProcessor holds the Sequence of a specific consumer (Consumer) and provides the event loop that calls into the user's event-handling implementation.

  • BatchEventProcessor

    BatchEventProcessor contains an efficient implementation of the event loop and calls back into a supplied implementation of the EventHandler interface.

  • EventHandler

    The event-handling interface defined by the Disruptor. It is implemented by the user to process events and is the actual implementation of a Consumer.

  • Producer

    The producer. This is simply a general term for the user code that publishes events to the Disruptor; the Disruptor does not define a specific interface or type for it.
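
To make these terms concrete, here is a minimal, hand-wired sketch that bypasses the DSL and connects a RingBuffer, SequenceBarrier, WaitStrategy, EventHandler and BatchEventProcessor directly. It reuses the Data event defined later in this article; the class name and handler logic are placeholders for illustration only.

package com.disruptor;

import com.lmax.disruptor.BatchEventProcessor;
import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.SequenceBarrier;

public class RawWiringSketch {

    public static void main(String[] args) throws Exception {
        // The RingBuffer (backed by a single-producer Sequencer) pre-allocates Data events
        // and uses a WaitStrategy to decide how consumers wait for new events.
        RingBuffer<Data> ringBuffer =
                RingBuffer.createSingleProducer(Data::new, 1024, new BlockingWaitStrategy());

        // The SequenceBarrier is created from the Sequencer and tells consumers
        // which events are available for processing.
        SequenceBarrier barrier = ringBuffer.newBarrier();

        // The EventHandler is the user-supplied consumer logic.
        EventHandler<Data> handler =
                (event, sequence, endOfBatch) -> System.out.println("value = " + event.getValue());

        // The BatchEventProcessor is an EventProcessor: it owns a Sequence and runs the
        // event loop that calls back into the handler.
        BatchEventProcessor<Data> processor =
                new BatchEventProcessor<>(ringBuffer, barrier, handler);

        // The processor's Sequence gates the producer so it cannot overwrite unprocessed slots.
        ringBuffer.addGatingSequences(processor.getSequence());
        new Thread(processor).start();

        // Producer side: claim a sequence, fill the pre-allocated slot, publish it.
        long seq = ringBuffer.next();
        try {
            ringBuffer.get(seq).setValue(42L);
        } finally {
            ringBuffer.publish(seq);
        }

        Thread.sleep(100);  // give the consumer a moment before the demo exits
        processor.halt();
    }
}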

Architecture diagram

Basic usage of the Disruptor

1 Define the event

An event is the data type that is exchanged through the Disruptor.

package com.disruptor;

public class Data {

    private long value;

    public long getValue() {
        return value;
    }

    public void setValue(long value) {
        this.value = value;
    }
}

2 Define the event factory

The event factory defines how to instantiate the event defined in step 1. The Disruptor uses the EventFactory to pre-allocate Event instances in the RingBuffer.

An Event instance is used as a data slot. Before publishing, the publisher first obtains an Event instance from the RingBuffer, fills it with data, and then publishes it back to the RingBuffer; finally, the Consumer obtains the Event instance and reads the data from it.

package com.disruptor;

import com.lmax.disruptor.EventFactory;

public class DataFactory implements EventFactory<Data> {

    @Override
    public Data newInstance() {
        return new Data();
    }
}
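
Incidentally, since EventFactory declares only the single newInstance() method, the factory can also be supplied as a constructor reference when building the Disruptor. This is just an optional shorthand for the DataFactory above, sketched as a fragment (assuming the usual Disruptor and Executors imports):

// Equivalent to DataFactory: EventFactory<Data> is a functional interface,
// so a reference to Data's no-arg constructor can serve as the factory.
Disruptor<Data> disruptor = new Disruptor<>(Data::new, 1024, Executors.defaultThreadFactory());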

3 Define the producer

package com.disruptor;

import com.lmax.disruptor.RingBuffer;

import java.nio.ByteBuffer;

public class Producer {

    private final RingBuffer<Data> ringBuffer;

    public Producer(RingBuffer<Data> ringBuffer) {
        this.ringBuffer = ringBuffer;
    }

    public void pushData(ByteBuffer byteBuffer) {
        long sequence = ringBuffer.next();

        try {
            Data event = ringBuffer.get(sequence);
            event.setValue(byteBuffer.getLong(0));
        } finally {
            ringBuffer.publish(sequence);
        }
    }
}
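
As an aside, the next()/get()/publish() pattern above can also be expressed through RingBuffer.publishEvent with an EventTranslator, which claims the slot, fills it, and publishes it in one call (the same mechanism used in the bonus section below). A minimal sketch of an equivalent producer, with the class name chosen only for illustration:

package com.disruptor;

import com.lmax.disruptor.EventTranslatorOneArg;
import com.lmax.disruptor.RingBuffer;

import java.nio.ByteBuffer;

public class TranslatorProducer {

    // Copies the long at position 0 of the buffer into the pre-allocated event slot.
    private static final EventTranslatorOneArg<Data, ByteBuffer> TRANSLATOR =
            (event, sequence, buffer) -> event.setValue(buffer.getLong(0));

    private final RingBuffer<Data> ringBuffer;

    public TranslatorProducer(RingBuffer<Data> ringBuffer) {
        this.ringBuffer = ringBuffer;
    }

    public void pushData(ByteBuffer byteBuffer) {
        // publishEvent claims the next sequence, applies the translator, and publishes it.
        ringBuffer.publishEvent(TRANSLATOR, byteBuffer);
    }
}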

4 Define the consumer

package com.disruptor;

import com.lmax.disruptor.WorkHandler;

import java.text.MessageFormat;


public class Consumer implements WorkHandler<Data> {

    @Override
    public void onEvent(Data data) throws Exception {
        long result = data.getValue() + 1;

        System.out.println(MessageFormat.format("Data process : {0} + 1 = {1}", data.getValue(), result));
    }
}

5 Start the Disruptor

  • Test demo
package com.disruptor;

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;

import java.nio.ByteBuffer;
import java.util.concurrent.ThreadFactory;


public class Main {

    private static final int NUMS = 10;

    private static final int SUM = 1000000;

    public static void main(String[] args) {
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        long start = System.currentTimeMillis();

        DataFactory factory = new DataFactory();

        int bufferSize = 1024;

        Disruptor<Data> disruptor = new Disruptor<Data>(factory, bufferSize, new ThreadFactory() {
            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r);
            }
        });

        Consumer[] consumers = new Consumer[NUMS];
        for (int i = 0; i < NUMS; i++) {
            consumers[i] = new Consumer();
        }

        disruptor.handleEventsWithWorkerPool(consumers);
        disruptor.start();

        RingBuffer<Data> ringBuffer = disruptor.getRingBuffer();
        Producer producer = new Producer(ringBuffer);

        ByteBuffer bb = ByteBuffer.allocate(8);
        for (long i = 0; i < SUM; i++) {
            bb.putLong(0, i);
            producer.pushData(bb);
            System.out.println("Success producer data : " + i);
        }
        long end = System.currentTimeMillis();

        disruptor.shutdown();

        System.out.println("Total time : " + (end - start));
    }
}
  • Results (partial output)
Data process : 999,987 + 1 = 999,988
Success producer data : 999995
Data process : 999,990 + 1 = 999,991
Data process : 999,989 + 1 = 999,990
Data process : 999,991 + 1 = 999,992
Data process : 999,992 + 1 = 999,993
Data process : 999,993 + 1 = 999,994
Data process : 999,995 + 1 = 999,996
Success producer data : 999996
Success producer data : 999997
Success producer data : 999998
Success producer data : 999999
Data process : 999,994 + 1 = 999,995
Data process : 999,996 + 1 = 999,997
Data process : 999,997 + 1 = 999,998
Data process : 999,998 + 1 = 999,999
Data process : 999,999 + 1 = 1,000,000
Total time : 14202

As the output shows, consumption happens while production is still in progress.

Bonus

1 Event translator class

package com.mm.demo.disruptor.translator;

import com.lmax.disruptor.EventTranslatorOneArg;
import com.mm.demo.disruptor.entity.Data;

import java.text.MessageFormat;

public class DataEventTranslator implements EventTranslatorOneArg<Data, Long> {

    @Override
    public void translateTo(Data event, long sequence, Long arg0) {
        System.out.println(MessageFormat.format("DataEventTranslator arg0 = {0}, seq = {1}", arg0, sequence));
        event.setValue(arg0);
    }
}

2 Consumers

2.1 Consumer demo 1

The consumer adds 1 to the event's value each time.

package com.mm.demo.disruptor.handler;

import com.lmax.disruptor.EventHandler;
import com.mm.demo.disruptor.entity.Data;

import java.text.MessageFormat;

public class D1DataEventHandler implements EventHandler<Data> {

    @Override
    public void onEvent(Data event, long sequence, boolean endOfBatch) throws Exception {
        long result = event.getValue() + 1;
        String name = Thread.currentThread().getName();
        System.out.println(MessageFormat.format("consumer " + name + ": {0} + 1 = {1}", event.getValue(), result));
    }

}

Here an EventHandler is used; a WorkHandler could be used as well. The difference between EventHandler and WorkHandler is that the former is not pooled (every handler receives every event), while the latter is used in a worker pool (each event is handled by exactly one worker), as sketched below.
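
The difference shows up in how the handlers are registered on the Disruptor. A minimal fragment, assuming a Disruptor<Data> named disruptor, EventHandlers like D1DataEventHandler/D2DataEventHandler, and WorkHandlers like the Consumer class from the first demo:

// Broadcast: every registered EventHandler receives every event.
disruptor.handleEventsWith(new D1DataEventHandler(), new D2DataEventHandler());

// Worker pool: each event is handled by exactly one WorkHandler in the pool.
disruptor.handleEventsWithWorkerPool(new Consumer(), new Consumer());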

2.2 Consumer demo 2

package com.mm.demo.disruptor.handler;

import com.lmax.disruptor.EventHandler;
import com.mm.demo.disruptor.entity.Data;

import java.text.MessageFormat;


public class D2DataEventHandler implements EventHandler<Data> {

    @Override
    public void onEvent(Data event, long sequence, boolean endOfBatch) throws Exception {
        long result = event.getValue() + 2;
        System.out.println(MessageFormat.format("consumer 2: {0} + 2 = {1}", event.getValue(), result));
    }
}

2.3 Serial processing

Consumer 1 finishes executing before Consumer 2 runs.

package com.mm.demo.disruptor.process;

import com.lmax.disruptor.dsl.Disruptor;
import com.mm.demo.disruptor.entity.Data;
import com.mm.demo.disruptor.handler.D1DataEventHandler;
import com.mm.demo.disruptor.handler.D2DataEventHandler;

/**
 * Serial processing: handlers run one after the other.
 * @DateT: 2020-01-07
 */
public class Serial {

    public static void serial(Disruptor<Data> disruptor) {
        disruptor.handleEventsWith(new D1DataEventHandler()).then(new D2DataEventHandler());
        disruptor.start();
    }
}

2.4 Parallel processing

Consumer 1 and Consumer 2 execute at the same time.

package com.mm.demo.disruptor.process;

import com.lmax.disruptor.dsl.Disruptor;
import com.mm.demo.disruptor.entity.Data;
import com.mm.demo.disruptor.handler.D1DataEventHandler;
import com.mm.demo.disruptor.handler.D2DataEventHandler;

/**
 * Parallel execution: handlers run at the same time.
 * @DateT: 2020-01-07
 */
public class Parallel {

    public static void parallel(Disruptor<Data> dataDisruptor) {
        dataDisruptor.handleEventsWith(new D1DataEventHandler(), new D2DataEventHandler());
        dataDisruptor.start();
    }
}

2.5 Test class

package com.mm.demo.disruptor;

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.mm.demo.disruptor.entity.Data;
import com.mm.demo.disruptor.handler.D1DataEventHandler;
import com.mm.demo.disruptor.process.Parallel;
import com.mm.demo.disruptor.process.Serial;
import com.mm.demo.disruptor.translator.DataEventTranslator;

import java.util.concurrent.Executors;


public class Main {

    private static final int BUFFER = 1024 * 1024;

    public static void main(String[] args) {

        DataFactory factory = new DataFactory();

        Disruptor<Data> disruptor = new Disruptor<Data>(factory, BUFFER, Executors.defaultThreadFactory(), ProducerType.MULTI, new BlockingWaitStrategy());

      
        Serial.serial(disruptor);
//        Parallel.parallel(disruptor);

        RingBuffer<Data> ringBuffer = disruptor.getRingBuffer();
        for (int i = 0; i < 2; i++) {
            ringBuffer.publishEvent(new DataEventTranslator(), (long)i);
        }
        disruptor.shutdown();
    }
}

Summary

The examples above only demonstrate the serial and parallel arrangements; other processing topologies can be built by combining them (this requires creating multiple EventHandler event handlers).
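
For example, two handlers could process each event in parallel and a third could run only after both have finished with it. A minimal fragment, assuming a Disruptor<Data> named disruptor; D3DataEventHandler is a hypothetical third handler:

// D1 and D2 see each event in parallel; D3 runs only after both have processed it.
disruptor.handleEventsWith(new D1DataEventHandler(), new D2DataEventHandler())
         .then(new D3DataEventHandler());  // D3DataEventHandler: hypothetical third handler
disruptor.start();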

Supplementary notes on wait strategies

  • BlockingWaitStrategy: the least performant strategy, but it uses the least CPU and gives the most consistent behavior across different deployment environments.
  • SleepingWaitStrategy: performance and CPU usage are similar to BlockingWaitStrategy, but it has the smallest impact on producer threads, making it suitable for asynchronous data-processing scenarios.
  • YieldingWaitStrategy: the best-performing strategy, suitable for low-latency scenarios. Recommended when extremely high performance is required and the number of event-handling threads is smaller than the number of CPU cores.
  • BusySpinWaitStrategy: low latency, but it occupies more CPU resources.
  • PhasedBackoffWaitStrategy: a combination of the strategies above; latency is higher, but it uses less CPU.
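
The wait strategy is selected when the Disruptor is constructed, together with the producer type, as in the test class above. A minimal fragment that swaps in YieldingWaitStrategy (assuming the Data, DataFactory, and import setup from the earlier examples):

// Same construction as the test class above, but with the low-latency YieldingWaitStrategy.
Disruptor<Data> disruptor = new Disruptor<>(
        new DataFactory(),
        1024 * 1024,
        Executors.defaultThreadFactory(),
        ProducerType.MULTI,
        new YieldingWaitStrategy());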

References

This article draws on the Disruptor source code and parts of the documentation on GitHub.

Demo source code

github


  • Writing is not easy; please credit the source when reposting. If you like this article, you can follow my official account for more.
  • Contact: [email protected]
  • QQ: 95472323
  • WeChat: ffj2000
