"Java Concurrency in Practice" reading notes

from p90: The Executor interface provides a means to decouple task submission and task execution. It is based on the producer-consumer model:

Executor is based on the producer consumer pattern, where activities that submit tasks are the producers (producing units of work to be done) and the threads that execute tasks are the consumers (consuming those units of work).

The value of decoupling submission from execution is that it lets you easily specify, and subsequently change without great difficulty, the execution policy for a given class of tasks.

Separating the specification of execution policy from task submission makes it practical to select an execution policy at deployment time that is matched to the available hardware.
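A sketch of this decoupling (the class and names here are my own, not from the book): the submitting side depends only on the Executor interface, so whoever constructs the TaskRunner chooses the execution policy, and it can be swapped without touching the submission code.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// The producer side only sees Executor, never the policy behind it.
class TaskRunner {
    private final Executor exec;
    TaskRunner(Executor exec) { this.exec = exec; }
    void submit(Runnable task) { exec.execute(task); } // task submission
}

public class ExecutorPolicyDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();

        // Policy 1: execute in the calling thread (no new threads at all)
        TaskRunner sameThread = new TaskRunner(Runnable::run);
        sameThread.submit(counter::incrementAndGet);

        // Policy 2: a bounded pool of worker threads (the consumers)
        ExecutorService pool = Executors.newFixedThreadPool(2);
        TaskRunner pooled = new TaskRunner(pool);
        pooled.submit(counter::incrementAndGet);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println("tasks run: " + counter.get()); // prints 2
    }
}
```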

Thread pool: the thread pool actually uses a BlockingQueue&lt;Runnable&gt; to store tasks, where each Runnable is a submitted task.
p92:

A thread pool is tightly bound to a work queue holding tasks waiting to be executed. Worker threads have a simple life: request the next task from the work queue, execute it, and go back to waiting for another task.

When the program uses Executors.newFixedThreadPool:

… uses an Executor with a bounded pool of worker threads. Submitting a task with execute adds the task to the work queue, and the worker threads repeatedly dequeue tasks from the work queue and execute them.
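The worker life cycle described above can be sketched with a toy pool (my own simplification; the real ThreadPoolExecutor adds sizing, keep-alive, and shutdown logic on top of this loop):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Toy pool: worker threads loop forever, taking Runnables from a shared
// BlockingQueue<Runnable> and running them, exactly as the passage describes.
class TinyPool {
    private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();

    TinyPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true)
                        workQueue.take().run(); // block until a task is available
                } catch (InterruptedException exit) { /* let the worker die */ }
            });
            worker.setDaemon(true); // so the demo JVM can exit
            worker.start();
        }
    }

    void execute(Runnable task) { workQueue.add(task); } // the producer side

    public static void main(String[] args) throws InterruptedException {
        TinyPool pool = new TinyPool(2);
        CountDownLatch done = new CountDownLatch(3);
        for (int i = 0; i < 3; i++)
            pool.execute(done::countDown);
        done.await(); // all three tasks were dequeued and executed
        System.out.println("all tasks done");
    }
}
```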

p103 interrupt:

Blocking library methods like Thread.sleep and Object.wait try to detect when a thread has been interrupted and return early. They respond to interruption by clearing the interrupted status and throwing InterruptedException

Experiment code:

import java.math.BigInteger;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

class PrimeProducer extends Thread {
    private final BlockingQueue<BigInteger> queue;

    PrimeProducer(BlockingQueue<BigInteger> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            BigInteger p = BigInteger.ONE;
            while (!Thread.currentThread().isInterrupted())
//          while (true)
                queue.put(p = p.nextProbablePrime());
        } catch (InterruptedException consumed) {
            /* Allow thread to exit */
            System.out.println(queue);
            System.out.println("interrupted");
        }
    }

    public void cancel() { interrupt(); }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<BigInteger> queue = new LinkedBlockingDeque<>();
//      BlockingQueue<BigInteger> queue = new ArrayBlockingQueue<>(18);
        PrimeProducer t = new PrimeProducer(queue);
        t.start();
        Thread.sleep(10);
        t.cancel();
        System.out.println("main" + t.queue);
    }
}

What I don’t understand: when I use LinkedBlockingDeque together with the while (true) loop — that is, I don’t actively check the interrupt status with isInterrupted(), but instead rely on the blocking put method to detect the interrupt and throw InterruptedException — the program does not throw the exception as I expected. Why is that? Under the same conditions, either LinkedBlockingQueue or ArrayBlockingQueue works fine.
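A plausible answer, based on the OpenJDK sources I have read (worth verifying against your own JDK version): LinkedBlockingDeque.putLast acquires its lock with lock.lock(), which ignores interruption, and only reaches the interruptible notFull.await() when the deque is full. Since the default capacity is Integer.MAX_VALUE, put never blocks and never observes the interrupt status, so the producer keeps running. LinkedBlockingQueue and ArrayBlockingQueue, by contrast, acquire their locks with lockInterruptibly(), which throws InterruptedException immediately if the thread is already interrupted, even when there is plenty of room. A self-contained check of this difference (class name is mine):

```java
import java.math.BigInteger;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.LinkedBlockingQueue;

public class PutInterruptDemo {
    // Interrupt ourselves, then call put on an EMPTY (never-blocking) queue
    // and report whether put noticed the interrupt.
    static boolean putThrows(BlockingQueue<BigInteger> q) {
        Thread.currentThread().interrupt();  // set the interrupt status
        try {
            q.put(BigInteger.ONE);           // queue is empty, so put never parks
            return false;                    // interrupt went unnoticed
        } catch (InterruptedException e) {
            return true;                     // interrupt detected on lock entry
        } finally {
            Thread.interrupted();            // clear the status for the next call
        }
    }

    public static void main(String[] args) {
        // On the JDKs I checked: the deque returns false, the queue returns true.
        System.out.println("LinkedBlockingDeque noticed: " + putThrows(new LinkedBlockingDeque<>()));
        System.out.println("LinkedBlockingQueue noticed: " + putThrows(new LinkedBlockingQueue<>()));
    }
}
```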

p122 thread pool maximumPoolSize parameters :

The core size is the target size; the implementation attempts to maintain the pool at this size even when there are no tasks to execute, and will not create more threads than this unless the work queue is full. The maximum pool size is the upper bound on how many pool threads can be active at once. A thread that has been idle for longer than the keep alive time becomes a candidate for reaping and can be terminated if the current pool size exceeds the core size.

The meaning of the thread pool's parameters is basically explained above. My only question: when the core threads are all busy and the task queue is full (assuming maximumPoolSize is greater than corePoolSize), and a task is submitted at that moment, a new thread is created — but which task does that thread run? A task already waiting in the BlockingQueue, or the newly submitted task? Because of the following sentence in the book, I leaned toward the former, since FIFO ordering could only be preserved by running the already-queued tasks first. So I ran an experiment.

Using a FIFO queue like LinkedBlockingQueue or ArrayBlockingQueue causes tasks to be started in the order in which they arrived.

import java.util.Date;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SdhThreadPoolExecutor {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 4,
                360L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(2));
        for (int i = 0; i < 6; i++) {
            try {
                Thread.sleep(1000L);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            int finalI = i;
            pool.submit(() -> {
                System.out.println("task " + finalI + " start @" + new Date().getTime());
                try {
                    Thread.sleep(10000L);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("task " + finalI + " end @" + new Date().getTime());
            });
        }
    }
}

Output result:
task 0 start @1610080727040
task 1 start @1610080728041
task 4 start @1610080731042
task 5 start @1610080732043
task 0 end @1610080737041
task 2 start @1610080737041
task 1 end @1610080738041
task 3 start @1610080738041
task 4 end @1610080741043
task 5 end @1610080742043
task 2 end @1610080747041
task 3 end @1610080748042

The result was beyond my expectation. It turns out that when the core threads are busy and the task queue is full (assuming maximumPoolSize is greater than corePoolSize), and a task is submitted, the newly created thread is used directly for that new task, not for a task already waiting in the queue.
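This matches the decision order documented in ThreadPoolExecutor's Javadoc: if fewer than corePoolSize threads are running, a new thread is created for the submitted task; otherwise the task is offered to the queue; only if the queue is full (and the pool is below maximumPoolSize) is a new thread created, and it runs the just-submitted task. A latch-based version of the experiment makes this deterministic instead of timing-dependent (class and task names are mine):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WhoGetsTheNewThread {
    static List<String> runExperiment() throws InterruptedException {
        List<String> started = new CopyOnWriteArrayList<>();
        CountDownLatch release = new CountDownLatch(1);   // holds t0 and t2 in place
        CountDownLatch t2Started = new CountDownLatch(1);

        // core = 1, max = 2, queue capacity = 1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));

        pool.execute(() -> { started.add("t0"); awaitQuietly(release); }); // core thread
        pool.execute(() -> started.add("t1"));                            // sits in the queue
        pool.execute(() -> {                                              // queue full -> new thread
            started.add("t2");
            t2Started.countDown();
            awaitQuietly(release);
        });

        t2Started.await();   // t2 is already running while t1 is still queued
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return started;
    }

    static void awaitQuietly(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("start order: " + runExperiment()); // t2 always before t1
    }
}
```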

p153 Is this the definition of CPU-intensive?:

When the performance of an activity is limited by availability of a particular resource, we say it is bound by that resource: CPU bound, database bound, etc.

The (Chinese) translation here seems inconsistent: "CPU-intensive" appears later in the book, so "CPU bound" here should be rendered the same way.

p159 Memory fence:

The visibility guarantees provided by synchronized and volatile may entail using special instructions called memory barriers that can flush or invalidate caches, flush hardware write buffers, and stall execution pipelines. Memory barriers may also have indirect performance consequences because they inhibit other compiler optimizations; most operations cannot be reordered with memory barriers.
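The guarantee itself (as opposed to its cost) shows up in the classic stop-flag idiom. Removing volatile here would make it legal for the JIT to hoist the read out of the loop and spin forever; whether that actually happens is JVM-dependent. A small sketch (class name is mine):

```java
public class StopFlag {
    private static volatile boolean stop = false;

    static boolean workerStops() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { }   // volatile read on every iteration
        });
        worker.start();
        Thread.sleep(50);
        stop = true;            // volatile write: guaranteed visible to the worker
        worker.join(5000);      // should return almost immediately
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + workerStops()); // prints true
    }
}
```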

Escape analysis:

More sophisticated JVMs can use escape analysis to identify when a local object reference is never published to the heap and is therefore thread local.

Lock elision; lock coarsening
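A sketch of the kind of code these optimizations target (whether a given JVM actually elides the StringBuffer's locks or scalar-replaces the allocation is implementation-dependent; this only illustrates the pattern):

```java
public class EscapeDemo {
    // sb never escapes this method: only the immutable String is published,
    // so the JIT may treat sb as thread-local, elide its monitor locks
    // (StringBuffer methods are synchronized), and even skip the allocation.
    static String format(int a, int b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a).append('+').append(b).append('=').append(a + b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format(2, 3)); // prints 2+3=5
    }
}
```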

Lock striping, used by ConcurrentHashMap up through JDK 1.7
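A minimal sketch of the idea (loosely modeled on JCiP's StripedMap example, not the actual JDK 1.7 ConcurrentHashMap segments): N locks each guard their own subset of the data, so threads touching keys in different stripes never contend.

```java
import java.util.HashMap;

// Simplified lock striping: 16 locks, each guarding its own bucket map.
class StripedMap<K, V> {
    private static final int N_STRIPES = 16;
    private final Object[] locks = new Object[N_STRIPES];
    private final HashMap<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    StripedMap() {
        buckets = new HashMap[N_STRIPES];
        for (int i = 0; i < N_STRIPES; i++) {
            locks[i] = new Object();
            buckets[i] = new HashMap<>();
        }
    }

    private int stripeFor(Object key) {
        return Math.floorMod(key.hashCode(), N_STRIPES); // non-negative stripe index
    }

    public V put(K key, V value) {
        int s = stripeFor(key);
        synchronized (locks[s]) { return buckets[s].put(key, value); }
    }

    public V get(K key) {
        int s = stripeFor(key);
        synchronized (locks[s]) { return buckets[s].get(key); }
    }
}
```

Operations on different stripes proceed in parallel; only whole-map operations (size, clear) would need all 16 locks.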

A section on p165 is useful for tuning, for example:

I/O bound. You can determine whether an application is disk bound using iostat or perfmon, and whether it is bandwidth limited by monitoring traffic levels on your network.

p200, my summary: because of the following three possibilities, Object.wait should always be called inside a loop:

  1. Spurious wakeup:

wait is even allowed to return “spuriously” not in response to any thread calling notify

  2. The thread is woken up, but the condition predicate the loop tests still isn't true:

maybe it hasn’t been true at all since you called wait. You don’t know why another thread called notify or notifyAll; maybe it was because another condition predicate associated with the same condition queue became true.

Because multiple threads could be waiting on the same condition queue for different condition predicates, using notify instead of notifyAll can be dangerous, primarily because single notification is prone to a problem akin to missed signals.

  3. The condition predicate is true when the thread wakes up, but by the time it reacquires the lock and resumes, the predicate has become false again:

A thread waking up from wait gets no special priority in reacquiring the lock; it contends for the lock just like any other thread attempting to enter a synchronized block.

It might have been true at the time the notifying thread called notifyAll, but could have become false again by the time you reacquire the lock. Other threads may have acquired the lock and changed the object’s state between when your thread was awakened and when wait reacquired the lock.
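All three cases are handled by the standard condition-wait idiom: call wait only while holding the lock, inside a loop that retests the predicate, and use notifyAll after changing the state. A minimal example (the class is mine, following JCiP's bounded-buffer pattern):

```java
public class BoundedCounter {
    private int count = 0;
    private final int max;

    BoundedCounter(int max) { this.max = max; }

    public synchronized void increment() throws InterruptedException {
        while (count == max)   // loop: retest the predicate after every wakeup
            wait();            // releases the lock while waiting
        count++;
        notifyAll();           // state changed; wake all waiters so they retest
    }

    public synchronized void decrement() throws InterruptedException {
        while (count == 0)
            wait();
        count--;
        notifyAll();
    }

    public synchronized int get() { return count; }
}
```

An `if` instead of the `while` would be wrong under all three scenarios above, since the predicate must be rechecked after the lock is reacquired.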


Origin blog.csdn.net/qq_23204557/article/details/112117173