Reviewing Java Concurrency with Zhu Ye (5): Concurrent Containers and Synchronizers

In this section we will first go over some of the concurrent containers in java.util.concurrent, and then take a quick look at the various synchronizers.

ConcurrentHashMap and ConcurrentSkipListMap performance

First, let's benchmark ConcurrentHashMap against ConcurrentSkipListMap. The former is, roughly speaking, the concurrent counterpart of HashMap; the latter is a skip-list implementation whose entries are kept sorted by key (a custom Comparator can also be supplied).

In this example we don't just benchmark raw reads and writes of the Map; instead we implement the most common multi-threaded Map scenario: counting key frequencies. The key space is 10,000 keys and we loop 100 million times (so each key ends up with a value of roughly 10,000), with 10 threads operating on the Map concurrently:

@Slf4j
public class ConcurrentMapTest {

    int loopCount = 100000000;
    int threadCount = 10;
    int itemCount = 10000;

    @Test
    public void test() throws InterruptedException {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start("hashmap");
        normal();
        stopWatch.stop();
        stopWatch.start("concurrentHashMap");
        concurrent();
        stopWatch.stop();
        stopWatch.start("concurrentSkipListMap");
        concurrentSkipListMap();
        stopWatch.stop();
        log.info(stopWatch.prettyPrint());
    }

    private void normal() throws InterruptedException {
        HashMap<String, Long> freqs = new HashMap<>();
        ForkJoinPool forkJoinPool = new ForkJoinPool(threadCount);
        forkJoinPool.execute(() -> IntStream.rangeClosed(1, loopCount).parallel().forEach(i -> {
                    String key = "item" + ThreadLocalRandom.current().nextInt(itemCount);
                    synchronized (freqs) {
                        if (freqs.containsKey(key)) {
                            freqs.put(key, freqs.get(key) + 1);
                        } else {
                            freqs.put(key, 1L);
                        }
                    }
                }
        ));
        forkJoinPool.shutdown();
        forkJoinPool.awaitTermination(1, TimeUnit.HOURS);
        //log.debug("normal:{}", freqs);

    }

    private void concurrent() throws InterruptedException {
        ConcurrentHashMap<String, LongAdder> freqs = new ConcurrentHashMap<>(itemCount);
        ForkJoinPool forkJoinPool = new ForkJoinPool(threadCount);
        forkJoinPool.execute(() -> IntStream.rangeClosed(1, loopCount).parallel().forEach(i -> {
                    String key = "item" + ThreadLocalRandom.current().nextInt(itemCount);
                    freqs.computeIfAbsent(key, k -> new LongAdder()).increment();
                }
        ));
        forkJoinPool.shutdown();
        forkJoinPool.awaitTermination(1, TimeUnit.HOURS);
        //log.debug("concurrentHashMap:{}", freqs);
    }

    private void concurrentSkipListMap() throws InterruptedException {
        ConcurrentSkipListMap<String, LongAdder> freqs = new ConcurrentSkipListMap<>();
        ForkJoinPool forkJoinPool = new ForkJoinPool(threadCount);
        forkJoinPool.execute(() -> IntStream.rangeClosed(1, loopCount).parallel().forEach(i -> {
                    String key = "item" + ThreadLocalRandom.current().nextInt(itemCount);
                    freqs.computeIfAbsent(key, k -> new LongAdder()).increment();
                }
        ));
        forkJoinPool.shutdown();
        forkJoinPool.awaitTermination(1, TimeUnit.HOURS);
        //log.debug("concurrentSkipListMap:{}", freqs);
    }
}

The three implementations are:

  • For the plain version (normal), we lock the whole HashMap around every read-modify-write
  • For ConcurrentHashMap, we make clever use of computeIfAbsent(): checking whether the key exists, computing the value, and putting it into the map are collapsed into one atomic call that hands us a LongAdder. Since LongAdder is itself thread-safe, we can call increment() on it directly, so one line of code achieves what takes five lines in the locked version (a merge()-based alternative is sketched right after this list)
  • The ConcurrentSkipListMap version works the same way
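If all you need is a count, there is also merge(), a standard Map/ConcurrentHashMap method that collapses the whole read-modify-write into one atomic call. A minimal sketch (the class name and the small key space are just for illustration); under heavy write contention the LongAdder version above usually scales better, because here all increments for one key compete on a single Long value:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Sketch: counting with merge() instead of computeIfAbsent() + LongAdder.
// merge() is atomic: if the key is absent it stores 1L, otherwise it applies
// Long::sum to the existing value and the new one.
public class MergeCountSketch {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Long> freqs = new ConcurrentHashMap<>();
        for (int i = 0; i < 1000; i++) {
            String key = "item" + ThreadLocalRandom.current().nextInt(10);
            freqs.merge(key, 1L, Long::sum);
        }
        System.out.println(freqs);
    }
}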

Back to the benchmark, the results are as follows:

(Screenshot: StopWatch output comparing hashmap, concurrentHashMap and concurrentSkipListMap.)

As you can see, the word-frequency counting implemented with ConcurrentHashMap vastly outperforms the locked version. Note that ConcurrentSkipListMap's containsKey, get, put, remove and similar operations have O(log n) time complexity; that, plus the cost of keeping entries ordered, explains the gap between it and ConcurrentHashMap.

If we print the final contents of the ConcurrentSkipListMap, they look roughly like this:

(Screenshot: the map's contents, with entries sorted by key.)
As you can see, the entries are ordered by key.
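Because ConcurrentSkipListMap implements ConcurrentNavigableMap, the ordering is usable, not just visible: you can navigate by key. A minimal sketch (the keys and values are made up for illustration):

import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListNavigationSketch {
    public static void main(String[] args) {
        ConcurrentSkipListMap<String, Long> freqs = new ConcurrentSkipListMap<>();
        freqs.put("item1", 10L);
        freqs.put("item10", 20L);
        freqs.put("item2", 30L);

        // Keys are ordered lexicographically by default: item1, item10, item2.
        System.out.println(freqs.firstKey());            // item1
        System.out.println(freqs.lastKey());             // item2
        // headMap/tailMap/subMap return live, ordered views.
        System.out.println(freqs.headMap("item2"));      // {item1=10, item10=20}
        // ceilingKey/floorKey navigate relative to an arbitrary key.
        System.out.println(freqs.ceilingKey("item11"));  // item2
    }
}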

ConcurrentHashMap's atomic operation methods

In this section we compare computeIfAbsent() and putIfAbsent(). The difference between these two methods is easy to get wrong, which leads to subtle bugs.

  • The first difference is performance: computeIfAbsent() takes a function, and if the key already exists the function is never executed, whereas putIfAbsent() requires the value to be passed in directly. So if obtaining the value is expensive, computeIfAbsent() performs better
  • The second difference is the return value: computeIfAbsent() returns the value that ends up in the map, i.e. the newly computed value if the key was absent, or the existing value if it was already present; putIfAbsent() returns the previous value, which is null if the key was absent

Let's write a program to test this:

@Slf4j
public class PutIfAbsentTest {

    @Test
    public void test() {
        ConcurrentHashMap<String, String> concurrentHashMap = new ConcurrentHashMap<>();
        log.info("Start");
        log.info("putIfAbsent:{}", concurrentHashMap.putIfAbsent("test1", getValue()));
        log.info("computeIfAbsent:{}", concurrentHashMap.computeIfAbsent("test1", k -> getValue()));
        log.info("putIfAbsent again:{}", concurrentHashMap.putIfAbsent("test2", getValue()));
        log.info("computeIfAbsent again:{}", concurrentHashMap.computeIfAbsent("test2", k -> getValue()));
    }

    private String getValue() {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return UUID.randomUUID().toString();
    }
}

Obtaining the value takes 1 second here. From the run you can see that every putIfAbsent() call costs 1 second, because its value argument is evaluated no matter what, while computeIfAbsent() returns immediately once the key already exists; you can also see that for an absent key putIfAbsent() returns null, whereas computeIfAbsent() returns the computed value:

(Screenshot: log output showing the timing and return values of the four calls.)

So choose the appropriate method according to what you actually need.

ThreadLocalRandom misuse

We used ThreadLocalRandom in the previous examples, so let me briefly mention a possible misuse of it:

@Slf4j
public class ThreadLocalRandomMisuse {
    @Test
    public void test() throws InterruptedException {
        ThreadLocalRandom threadLocalRandom = ThreadLocalRandom.current();
        IntStream.rangeClosed(1, 5)
                .mapToObj(i -> new Thread(() -> log.info("wrong:{}", threadLocalRandom.nextInt())))
                .forEach(Thread::start);
        IntStream.rangeClosed(1, 5)
                .mapToObj(i -> new Thread(() -> log.info("ok:{}", ThreadLocalRandom.current().nextInt())))
                .forEach(Thread::start);
        TimeUnit.SECONDS.sleep(1);
    }
}

In one sentence: always call ThreadLocalRandom.current().nextInt() fresh, instead of caching the result of ThreadLocalRandom.current() and sharing it across threads as in the first half of this example. Comparing the two outputs, you can see that the five "wrong" threads all get the same random number, because the per-thread seed is only initialized inside current(), so threads that reuse an instance obtained on another thread never get their own seed set up:

(Screenshot: the five "wrong" threads log the same number, while the five "ok" threads log different numbers.)

ConcurrentHashMap parallel reduction test

ConcurrentHashMap offers some more advanced methods for parallel aggregation. Let's write a program to compare plain iteration against reduceEntriesToLong() for computing the average of all the values in a ConcurrentHashMap, both in how the code reads and in how it performs:

@Slf4j
public class ConcurrentHashMapReduceTest {

    int loopCount = 100;
    int itemCount = 10000000;

    @Test
    public void test() {
        ConcurrentHashMap<String, Long> concurrentHashMap = LongStream.rangeClosed(1, itemCount)
                .boxed()
                .collect(Collectors.toMap(i -> "item" + i, Function.identity(),(o1, o2) -> o1, ConcurrentHashMap::new));
        StopWatch stopWatch = new StopWatch();
        stopWatch.start("normal");
        normal(concurrentHashMap);
        stopWatch.stop();
        stopWatch.start("concurrent with parallelismThreshold=1");
        concurrent(concurrentHashMap, 1);
        stopWatch.stop();
        stopWatch.start("concurrent with parallelismThreshold=max long");
        concurrent(concurrentHashMap, Long.MAX_VALUE);
        stopWatch.stop();
        log.info(stopWatch.prettyPrint());
    }

    private void normal(ConcurrentHashMap<String, Long> map) {
        IntStream.rangeClosed(1, loopCount).forEach(__ -> {
            long sum = 0L;
            for (Map.Entry<String, Long> item : map.entrySet()) {
                sum += item.getValue();
            }
            double average = sum / map.size();
            Assert.assertEquals(itemCount / 2, average, 0);
        });
    }

    private void concurrent(ConcurrentHashMap<String, Long> map, long parallelismThreshold) {
        IntStream.rangeClosed(1, loopCount).forEach(__ -> {
            double average = map.reduceEntriesToLong(parallelismThreshold, Map.Entry::getValue, 0, Long::sum) / map.size();
            Assert.assertEquals(itemCount / 2, average, 0);
        });
    }
}

Execution results are as follows:

(Screenshot: StopWatch output for the normal loop and the two reduceEntriesToLong() runs.)
As you can see, for a relatively large map the parallel reduction performs noticeably better. Note that parallelismThreshold is not a degree of parallelism (not the parallelism of ForkJoinPool(int parallelism)); it is the element-count threshold at which the operation goes parallel. Passing Long.MAX_VALUE effectively disables parallelism, while passing 1 makes full use of the ForkJoinPool.

Of course, we only demonstrated reduceEntriesToLong(); ConcurrentHashMap has dozens of reduceXXX() variants for parallel reduction over keys, values and entries.
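For reference, here is a minimal sketch of two more of these bulk methods, reduceValuesToLong() and search(), on a small made-up map (a threshold of 1 just means "parallelize eagerly"):

import java.util.concurrent.ConcurrentHashMap;

public class BulkOperationSketch {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Long> map = new ConcurrentHashMap<>();
        for (long i = 1; i <= 100; i++) {
            map.put("item" + i, i);
        }
        // Parallel sum over values only (no Entry objects involved).
        long sum = map.reduceValuesToLong(1, Long::longValue, 0, Long::sum);
        // Parallel search: returns the first non-null result found and stops early.
        String found = map.search(1, (k, v) -> v == 42 ? k : null);
        System.out.println(sum + " " + found); // 5050 item42
    }
}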

ConcurrentHashMap misuse

As mentioned in an earlier article, ConcurrentHashMap cannot make a sequence of operations on the map atomic (apart from single calls such as the computeIfAbsent() and putIfAbsent() discussed above). In the example below we start with a ConcurrentHashMap of size 9,990, and multiple threads each compute how far it is from 10,000 and then try to fill that gap:

@Test
public void test() throws InterruptedException {
    int limit = 10000;
    ConcurrentHashMap<String, Long> concurrentHashMap = LongStream.rangeClosed(1, limit - 10)
            .boxed()
            .collect(Collectors.toConcurrentMap(i -> UUID.randomUUID().toString(), Function.identity(),
                    (o1, o2) -> o1, ConcurrentHashMap::new));
    log.info("init size:{}", concurrentHashMap.size());

    ExecutorService executorService = Executors.newFixedThreadPool(10);
    for (int __ = 0; __ < 10; __++) {
        executorService.execute(() -> {
            int gap = limit - concurrentHashMap.size();
            log.debug("gap:{}", gap);
            concurrentHashMap.putAll(LongStream.rangeClosed(1, gap)
                    .boxed()
                    .collect(Collectors.toMap(i -> UUID.randomUUID().toString(), Function.identity())));
        });
    }
    executorService.shutdown();
    executorService.awaitTermination(1, TimeUnit.HOURS);

    log.info("finish size:{}", concurrentHashMap.size());
}

This code has obvious problems:

  • First, aggregate methods such as size() and containsValue() are only accurate when there are no concurrent updates; otherwise they should be treated as estimates for statistics and monitoring, and must not be used to drive program logic
  • Second, even if size() were accurate, other threads may add data between computing the gap and filling it. The putAll() call itself is thread-safe, but the compute-the-gap-then-fill-it sequence is not atomic; using ConcurrentHashMap does not mean you never need a lock

Output:

(Screenshot: the logged gap values and the final size of 10040.)

You can see that some threads even computed a negative gap, and the final size is 10,040, which is 40 more than the expected limit. One way to make this logic correct is sketched below.
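A minimal sketch of one possible fix, assuming we accept serializing this particular operation: put the whole "read the size, compute the gap, fill the gap" sequence into one critical section guarded by an external lock (the class, method and lock names are mine):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class GapFillSketch {
    private static final Object lock = new Object();

    // The whole check-and-fill sequence is one critical section,
    // so no two threads can both observe and fill the same gap.
    static void fillGap(ConcurrentHashMap<String, Long> map, int limit) {
        synchronized (lock) {
            int gap = limit - map.size();
            if (gap <= 0) {
                return;
            }
            map.putAll(LongStream.rangeClosed(1, gap)
                    .boxed()
                    .collect(Collectors.toMap(i -> UUID.randomUUID().toString(), Function.identity())));
        }
    }
}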

Another point, not exactly a misuse but worth mentioning: ConcurrentHashMap does not allow null keys or values, while HashMap does. Why is that? The screenshot below is a reply from Doug Lea, the author of ConcurrentHashMap:

(Screenshot: Doug Lea's explanation of why ConcurrentHashMap disallows null keys and values.)

The gist is that if get(key) returns null, you cannot tell whether the key is absent or the value is null. In the non-concurrent case you can follow up with containsKey(key) to tell the difference, but in the concurrent case the map may already have been modified by the time you check.
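A tiny illustration of the ambiguity, using a plain HashMap (which does allow null values):

import java.util.HashMap;
import java.util.Map;

public class NullAmbiguitySketch {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", null);
        // Both calls return null, for different reasons: "a" is present with a null value,
        // "b" is absent. With a HashMap you can disambiguate via containsKey(); in a
        // concurrent map that extra check could already be stale by the time you act on it.
        System.out.println(map.get("a"));         // null
        System.out.println(map.get("b"));         // null
        System.out.println(map.containsKey("a")); // true
        System.out.println(map.containsKey("b")); // false
    }
}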

CopyOnWriteArrayList test

CopyOnWrite targets scenarios with very few modifications and very high read concurrency: whenever there is a change, the whole backing array is copied. The copy is expensive, but it lets the vast majority of reads proceed without any lock. Let's benchmark it, starting with writes: we compare CopyOnWriteArrayList, an ArrayList locked manually, and an ArrayList wrapped with Collections.synchronizedList():

@Test
public void testWrite() {
    List<Integer> copyOnWriteArrayList = new CopyOnWriteArrayList<>();
    List<Integer> arrayList = new ArrayList<>();
    List<Integer> synchronizedList = Collections.synchronizedList(new ArrayList<>());
    StopWatch stopWatch = new StopWatch();
    int loopCount = 100000;
    stopWatch.start("copyOnWriteArrayList");
    IntStream.rangeClosed(1, loopCount).parallel().forEach(__ -> copyOnWriteArrayList.add(ThreadLocalRandom.current().nextInt(loopCount)));
    stopWatch.stop();
    stopWatch.start("arrayList");
    IntStream.rangeClosed(1, loopCount).parallel().forEach(__ -> {
        synchronized (arrayList) {
            arrayList.add(ThreadLocalRandom.current().nextInt(loopCount));
        }
    });
    stopWatch.stop();
    stopWatch.start("synchronizedList");
    IntStream.range(0, loopCount).parallel().forEach(__ -> synchronizedList.add(ThreadLocalRandom.current().nextInt(loopCount)));
    stopWatch.stop();
    log.info(stopWatch.prettyPrint());
}

100,000 operations is not that many; the results are as follows:

(Screenshot: StopWatch output for the three write tests.)
As you can see, modifying a CopyOnWriteArrayList is quite expensive, because every write copies the entire backing array.
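The write cost is easy to see from how copy-on-write addition works conceptually. The following is a simplified sketch of the idea, not the actual JDK source (which also differs between Java versions):

import java.util.Arrays;

// Conceptual sketch of copy-on-write semantics, not the JDK implementation.
public class CopyOnWriteSketch<E> {
    private volatile Object[] array = new Object[0];

    public synchronized boolean add(E e) {
        Object[] current = array;
        // Every single add copies the whole backing array...
        Object[] copy = Arrays.copyOf(current, current.length + 1);
        copy[current.length] = e;
        array = copy; // ...then publishes the new array with a volatile write.
        return true;
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) array[index]; // reads never lock: they just read the current snapshot
    }
}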

Now let's look at reads. We use a helper method to fill each list with 10 million elements, and then perform 100 million random reads:

private void addAll(List<Integer> list) {
    list.addAll(IntStream.rangeClosed(1, 10000000).boxed().collect(Collectors.toList()));
}

@Test
public void testRead() {
    List<Integer> copyOnWriteArrayList = new CopyOnWriteArrayList<>();
    List<Integer> arrayList = new ArrayList<>();
    List<Integer> synchronizedList = Collections.synchronizedList(new ArrayList<>());
    addAll(copyOnWriteArrayList);
    addAll(arrayList);
    addAll(synchronizedList);
    StopWatch stopWatch = new StopWatch();
    int loopCount = 100000000;
    int count = arrayList.size();
    stopWatch.start("copyOnWriteArrayList");
    IntStream.rangeClosed(1, loopCount).parallel().forEach(__ -> copyOnWriteArrayList.get(ThreadLocalRandom.current().nextInt(count)));
    stopWatch.stop();
    stopWatch.start("arrayList");
    IntStream.rangeClosed(1, loopCount).parallel().forEach(__ -> {
        synchronized (arrayList) {
            arrayList.get(ThreadLocalRandom.current().nextInt(count));
        }
    });
    stopWatch.stop();
    stopWatch.start("synchronizedList");
    IntStream.range(0, loopCount).parallel().forEach(__ -> synchronizedList.get(ThreadLocalRandom.current().nextInt(count)));
    stopWatch.stop();
    log.info(stopWatch.prettyPrint());
}

Execution results are as follows:

(Screenshot: StopWatch output for the three read tests.)
Sure enough, CopyOnWriteArrayList's read performance is very strong: reads take no lock at all, so it sustains however much read concurrency you throw at it.

Having covered most of the concurrent containers, let's now look at five synchronizers.

CountDownLatch test

CountDownLatch has appeared N times in the previous articles and is the most frequently used of the five synchronizers. Its common scenarios are:

  • Waiting for N threads to finish
  • As in the earlier performance-testing examples, using two CountDownLatches: one for all worker threads to wait for a start signal from the main thread so they begin together, and one for the main thread to wait until all workers are done (a sketch of this pattern follows right after this list)
  • Converting asynchronous operations into synchronous ones. Many RPC frameworks built on asynchronous network communication (such as Netty) use CountDownLatch for this; RocketMQ's Remoting module is a good example, shown after the sketch below
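Before the RocketMQ example, here is a minimal sketch of the second pattern above, the two-CountDownLatch test harness (all names here are mine):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StartFinishLatchSketch {
    public static void main(String[] args) throws InterruptedException {
        int threadCount = 10;
        CountDownLatch start = new CountDownLatch(1);            // main thread fires the starting gun
        CountDownLatch finish = new CountDownLatch(threadCount); // main thread waits for all workers
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        for (int i = 0; i < threadCount; i++) {
            pool.execute(() -> {
                try {
                    start.await();               // all workers block here until main says go
                    // ... the work to be measured goes here ...
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    finish.countDown();          // report completion
                }
            });
        }
        long begin = System.currentTimeMillis();
        start.countDown();                       // release all workers at once
        finish.await();                          // wait until every worker is done
        System.out.println("took " + (System.currentTimeMillis() - begin) + "ms");
        pool.shutdown();
    }
}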

(Screenshot: the asynchronous-to-synchronous call path in RocketMQ's Remoting module.)

Here is the relevant part of the ResponseFuture implementation:

public class ResponseFuture {
    private final int opaque;
    private final Channel processChannel;
    private final long timeoutMillis;
    private final InvokeCallback invokeCallback;
    private final long beginTimestamp = System.currentTimeMillis();
    private final CountDownLatch countDownLatch = new CountDownLatch(1);
    private final SemaphoreReleaseOnlyOnce once;
    private final AtomicBoolean executeCallbackOnlyOnce = new AtomicBoolean(false);
    private volatile RemotingCommand responseCommand;
    private volatile boolean sendRequestOK = true;
    private volatile Throwable cause;

...  
    public RemotingCommand waitResponse(final long timeoutMillis) throws InterruptedException {
        this.countDownLatch.await(timeoutMillis, TimeUnit.MILLISECONDS);
        return this.responseCommand;
    }

    public void putResponse(final RemotingCommand responseCommand) {
        this.responseCommand = responseCommand;
        this.countDownLatch.countDown();
    }
...
}

After sending a network request, the caller waits on the CountDownLatch; when the response arrives, the receiving side stores it and counts the latch down, and the waiting request thread can then pick up the data and continue.
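A minimal, self-contained sketch of the same idea (class and method names here are illustrative, not RocketMQ's API): the requesting thread blocks on a one-shot latch with a timeout, and whichever thread receives the response fills in the result and releases it.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of an async-to-sync result holder.
public class SimpleResponseFuture<T> {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile T response;

    // Called by the I/O or callback thread when the network response arrives.
    public void putResponse(T response) {
        this.response = response;
        latch.countDown();
    }

    // Called by the requesting thread; blocks until the response arrives or the timeout expires.
    public T waitResponse(long timeoutMillis) throws InterruptedException {
        latch.await(timeoutMillis, TimeUnit.MILLISECONDS);
        return response; // may be null if the wait timed out
    }
}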

Semaphore test

Semaphore can be used to limit concurrency. Suppose we need to cap the number of game players online at the same time. We first define a Player class whose admission to the game is throttled by the Semaphore passed in. In the code we also use the AtomicInteger, AtomicLong and LongAdder we studied earlier to count the total number of players, the longest time spent waiting, and the total time spent waiting (from which we derive the average).

@Slf4j
public class Player implements Runnable {

    private static AtomicInteger totalPlayer = new AtomicInteger();
    private static AtomicLong longestWait = new AtomicLong();
    private static LongAdder totalWait = new LongAdder();
    private String playerName;
    private Semaphore semaphore;
    private LocalDateTime enterTime;

    public Player(String playerName, Semaphore semaphore) {
        this.playerName = playerName;
        this.semaphore = semaphore;
    }

    public static void result() {
        log.info("totalPlayer:{},longestWait:{}ms,averageWait:{}ms", totalPlayer.get(), longestWait.get(), totalWait.doubleValue() / totalPlayer.get());
    }

    @Override
    public void run() {
        try {
            enterTime = LocalDateTime.now();
            semaphore.acquire();
            totalPlayer.incrementAndGet();
            TimeUnit.MILLISECONDS.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            semaphore.release();
            long ms = Duration.between(enterTime, LocalDateTime.now()).toMillis();
            longestWait.accumulateAndGet(ms, Math::max);
            totalWait.add(ms);
            //log.debug("Player:{} finished, took:{}ms", playerName, ms);
        }
    }
}

The main test code is as follows:

@Test
public void test() throws InterruptedException {
    Semaphore semaphore = new Semaphore(10, false);
    ExecutorService threadPool = Executors.newFixedThreadPool(100);
    IntStream.rangeClosed(1, 10000).forEach(i -> threadPool.execute(new Player("Player" + i, semaphore)));
    threadPool.shutdown();
    threadPool.awaitTermination(1, TimeUnit.HOURS);
    Player.result();
}

We limit concurrent players to 10 with non-fair acquisition, use a fixed pool of 100 threads, and have 10,000 players in total who want to play. After the run finishes, the program outputs:

(Screenshot: results with the non-fair semaphore.)
Now try fair mode, i.e. constructing the semaphore with new Semaphore(10, true):
(Screenshot: results with the fair semaphore.)
You can clearly see that with fair mode enabled the longest wait is no longer as extreme, while the average wait becomes slightly longer than before, which matches expectations.

CyclicBarrier test

CyclicBarrier lets a group of threads wait for one another: all participants wait at a meeting point, proceed together once everyone has arrived, and can then repeat the cycle. It also lets the last thread to reach the barrier perform a "post-processing" action on behalf of everyone; that action can either be passed in when constructing the CyclicBarrier or implemented by checking the return value of await() (a sketch of the constructor form follows; the example below uses the await() return value).
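A minimal sketch of the constructor form, where the barrier action runs once per trip, in the last thread to arrive (the player count and messages are made up):

import java.util.concurrent.CyclicBarrier;
import java.util.stream.IntStream;

public class BarrierActionSketch {
    public static void main(String[] args) {
        // The Runnable passed to the constructor is the "post-processing" step.
        CyclicBarrier barrier = new CyclicBarrier(5, () -> System.out.println("All 5 arrived, let's play"));
        IntStream.rangeClosed(1, 5).forEach(i -> new Thread(() -> {
            try {
                System.out.println("Player " + i + " arrived");
                barrier.await(); // blocks until all 5 parties have called await()
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start());
    }
}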

In this example we implement a simple scenario: a play can only start once all five actors are in place, and there are three plays in total. We use CyclicBarrier to wait until all the actors have arrived; once they have, the play takes 2 seconds.

@Slf4j
public class CyclicBarrierTest {
    @Test
    public void test() throws InterruptedException {

        int playerCount = 5;
        int playCount = 3;
        CyclicBarrier cyclicBarrier = new CyclicBarrier(playerCount);
        List<Thread> threads = IntStream.rangeClosed(1, playerCount).mapToObj(player->new Thread(()-> IntStream.rangeClosed(1, playCount).forEach(play->{
            try {
                TimeUnit.MILLISECONDS.sleep(ThreadLocalRandom.current().nextInt(100));
                log.debug("Player {} arrived for play {}", player, play);
                if (cyclicBarrier.await() ==0) {
                    log.info("Total players {} arrived, let's play {}", cyclicBarrier.getParties(),play);
                    TimeUnit.SECONDS.sleep(2);
                    log.info("Play {} finished",play);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }))).collect(Collectors.toList());

        threads.forEach(Thread::start);
        for (Thread thread : threads) {
            thread.join();
        }
    }
}

The check if (cyclicBarrier.await() == 0) lets the last actor to reach the barrier do the post-processing, i.e. break through the barrier and run the play. Let's verify from the output that the show really runs three times, and that each play only starts after all the actors are in place:

10:35:43.333 [Thread-4] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 5 arrived for play 1
10:35:43.333 [Thread-1] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 2 arrived for play 1
10:35:43.333 [Thread-3] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 4 arrived for play 1
10:35:43.367 [Thread-2] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 3 arrived for play 1
10:35:43.376 [Thread-0] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 1 arrived for play 1
10:35:43.377 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Total players 5 arrived, let's play 1
10:35:43.378 [Thread-2] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 3 arrived for play 2
10:35:43.432 [Thread-3] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 4 arrived for play 2
10:35:43.434 [Thread-1] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 2 arrived for play 2
10:35:43.473 [Thread-4] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 5 arrived for play 2
10:35:45.382 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Play 1 finished
10:35:45.390 [Thread-0] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 1 arrived for play 2
10:35:45.390 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Total players 5 arrived, let's play 2
10:35:45.437 [Thread-3] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 4 arrived for play 3
10:35:45.443 [Thread-4] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 5 arrived for play 3
10:35:45.445 [Thread-2] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 3 arrived for play 3
10:35:45.467 [Thread-1] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 2 arrived for play 3
10:35:47.395 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Play 2 finished
10:35:47.472 [Thread-0] DEBUG me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Player 1 arrived for play 3
10:35:47.473 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Total players 5 arrived, let's play 3
10:35:49.477 [Thread-0] INFO me.josephzhu.javaconcurrenttest.concurrent.synchronizers.CyclicBarrierTest - Play 3 finished

You can see from this output that each play is performed on the thread of the last actor to arrive (Player 1 here). One noteworthy point: while that thread is performing, the other actors have already moved on and entered the wait state for the next round; do not mistakenly assume that CyclicBarrier blocks all threads until the post-processing is done before letting them continue into the next cycle. They simply wait for him again at the next await() once his show is finished.

Phaser test

Phaser is similar to CyclicBarrier but more flexible: the number of participants can be adjusted dynamically rather than fixed up front. A party registers itself with register(), and signals that it has arrived and waits for the others with arriveAndAwaitAdvance(); the barrier trips once all registered parties have arrived.

For example, in the code below we run every task passed in for a number of iterations. The Phaser terminates when onAdvance() returns true, i.e. when the phase count reaches the number of iterations or no parties remain registered. We first register the main thread as a participant, then register one participant per task and run each task in a new thread: after each run the task arrives at the barrier, and it loops as long as the Phaser has not terminated. The main thread loops as well: in each phase it waits for the other threads to finish their tasks (reach the barrier), does its post-processing, and then arrives itself to open the next phase.

@Slf4j
public class PhaserTest {

    AtomicInteger atomicInteger = new AtomicInteger();

    @Test
    public void test() throws InterruptedException {
        int iterations = 10;
        int tasks = 100;
        runTasks(IntStream.rangeClosed(1, tasks)
                .mapToObj(index -> new Thread(() -> {
                    try {
                        TimeUnit.SECONDS.sleep(1);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    atomicInteger.incrementAndGet();
                }))
                .collect(Collectors.toList()), iterations);
        Assert.assertEquals(tasks * iterations, atomicInteger.get());
    }

    private void runTasks(List<Runnable> tasks, int iterations) {
        Phaser phaser = new Phaser() {
            protected boolean onAdvance(int phase, int registeredParties) {
                return phase >= iterations - 1 || registeredParties == 0;
            }
        };
        phaser.register();
        for (Runnable task : tasks) {
            phaser.register();
            new Thread(() -> {
                do {
                    task.run();
                    phaser.arriveAndAwaitAdvance();
                } while (!phaser.isTerminated());
            }).start();
        }
        while (!phaser.isTerminated()) {
            doPostOperation(phaser);
            phaser.arriveAndAwaitAdvance();
        }
        doPostOperation(phaser);
    }

    private void doPostOperation(Phaser phaser) {
        while (phaser.getArrivedParties() < 100) {
            try {
                TimeUnit.MILLISECONDS.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        log.info("phase:{},registered:{},unarrived:{},arrived:{},result:{}",
                phaser.getPhase(),
                phaser.getRegisteredParties(),
                phaser.getUnarrivedParties(),
                phaser.getArrivedParties(), atomicInteger.get());
    }
}

With 10 iterations of 100 tasks each, let's run it and see:

(Screenshot: per-phase log lines showing the phase number, registered and arrived parties, and the running counter.)

You can see that at the end of each phase the main thread does not immediately arrive at the barrier; it first runs its own post-processing work, and only then arrives to break through the barrier.

Exchanger test

Exchanger lets two threads exchange data at a meeting point. Let's write a piece of code to test it. Below, a producer thread keeps sending data, sleeping a random amount of time after each send; by using an Exchanger, the consumer thread gets each item as soon as the producer hands it over, without our having to use a blocking queue:

@Slf4j
public class ExchangerTest {

    @Test
    public void test() throws InterruptedException {
        Random random = new Random();
        Exchanger<Integer> exchanger = new Exchanger<>();
        int count = 10;
        Executors.newFixedThreadPool(1, new ThreadFactoryImpl("producer"))
                .execute(() -> {
                    try {
                        for (int i = 0; i < count; i++) {
                            log.info("sent:{}", i);
                            exchanger.exchange(i);
                            TimeUnit.MILLISECONDS.sleep(random.nextInt(1000));
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                });

        ExecutorService executorService = Executors.newFixedThreadPool(1, new ThreadFactoryImpl("consumer"));
        executorService.execute(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    int data = exchanger.exchange(null);
                    log.info("got:{}", data);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.HOURS);
    }
}

The output is as follows:

(Screenshot: interleaved "sent" and "got" log lines from the producer and consumer threads.)

Summary

I won't summarize the concurrent containers at length. ConcurrentHashMap is genuinely useful and extremely common, but be sure to understand exactly which thread-safety guarantees it does and does not give; business code that misuses ConcurrentHashMap on the assumption that "it's concurrent, so nothing can go wrong" is very common.

Finally, using a stage performance as an analogy, let's summarize the several synchronizers:

  • Semaphore limits the number of audience members watching the show at the same time; when someone leaves, a new person can come in
  • CountDownLatch: the show cannot start until the entire cast and crew have arrived, and once it has run it is over (one-shot)
  • CyclicBarrier: the show starts once the cast and crew are all in place; the last person to arrive acts as the director and runs the show while everyone else waits, and after the show finishes everyone gathers again for the next one
  • Phaser: the list of cast and crew can change from show to show, but each show still only starts once everyone currently registered has arrived

As always, the code is on my GitHub; feel free to clone it and experiment, and likes are welcome.

You are also welcome to follow my WeChat official account.


Origin: juejin.im/post/5d3443a451882536e6368672