[Reading Notes] Chapter 1: Entering the Parallel World-"Practical Java High Concurrency Programming"

Chapter 1: Entering the Parallel World

1.1 The history of parallel computing

1.2 Concept

  • Synchronous: once a synchronous method call starts, the caller must wait until the call returns before continuing with subsequent work;

  • Asynchronous: an asynchronous method call usually executes in another thread, so the whole process does not hinder the caller's ongoing work.

  • Concurrency: multiple tasks execute alternately (but to an external observer they appear to run "simultaneously");

  • Parallelism: multiple tasks are truly carried out at the same time.

  • Critical section: code that accesses common resources (shared data); only one thread may use the critical section at a time.

  • Blocking: a thread is suspended while waiting for other threads to release critical-section resources; while it waits, it is blocked;

  • Non-blocking: the opposite of blocking; no single thread can indefinitely hinder the others, and every thread keeps attempting to make progress.

  • Deadlock: multiple threads each hold resources while waiting for the others to release theirs, so none can proceed;

  • Starvation: one or more threads cannot obtain the resources they need, for whatever reason, and therefore can never execute;

  • Livelock: threads actively release their resources to one another, so the resources keep bouncing between threads and no thread ever obtains the complete set it needs.
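The synchronous/asynchronous distinction above can be sketched in a few lines. This is a minimal illustration (the class and method names are made up): the caller blocks on the plain call, but can keep working while a worker thread runs the asynchronous version.

```java
public class SyncVsAsync {
    // A plain method: the caller waits for the return value.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) throws InterruptedException {
        // Synchronous: the caller blocks until square() returns.
        int syncResult = square(6);

        // Asynchronous: the work runs on another thread; the caller is free
        // to do other work and only joins when it needs the result.
        final int[] asyncResult = new int[1];
        Thread worker = new Thread(() -> asyncResult[0] = square(7));
        worker.start();
        // ... the caller could do unrelated work here ...
        worker.join(); // wait for the asynchronous result

        System.out.println(syncResult + " " + asyncResult[0]); // prints "36 49"
    }
}
```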

1.3 Concurrency level

The levels of concurrency, from weakest to strongest, are: blocking, starvation-free, obstruction-free, lock-free, and wait-free.
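The lock-free level can be sketched with a compare-and-set (CAS) retry loop. This hypothetical counter uses `java.util.concurrent.atomic.AtomicInteger`: an individual thread may have to retry, but some thread always makes progress, and no thread is ever suspended on a lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: instead of blocking on a lock, read the current
    // value and retry compareAndSet until no other thread interfered.
    public int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter counter = new LockFreeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // No increments are lost despite four threads racing.
        System.out.println(counter.value.get()); // prints "4000"
    }
}
```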

1.4 Two important laws about parallelism

1.4.1 Amdahl's Law

Speedup definition:

Speedup = System time before optimization / System time after optimization
Speedup = system time before optimization / system time after optimization = 1 / (F + (1 - F)/n)

where F is the proportion of the program that must execute serially and n is the number of processors.

Conclusion: according to Amdahl's law, when a multi-core CPU is used to optimize a system, the effect depends on the number of CPUs and on the proportion of the program that must run serially: the more CPUs and the lower the serial ratio, the better the result. Merely increasing the number of CPUs without reducing the program's serial ratio cannot push system performance past the limit of 1/F.
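The conclusion can be checked numerically. This small sketch (the method name is illustrative) plugs values into Amdahl's formula and shows that with a 50% serial fraction, even a thousand CPUs cannot get past a 2x speedup:

```java
public class Amdahl {
    // Amdahl's law: speedup = 1 / (F + (1 - F) / n)
    // F = serial fraction of the program, n = number of processors.
    static double speedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", speedup(0.5, 2));    // 2 CPUs, 50% serial
        System.out.printf("%.2f%n", speedup(0.5, 1000)); // 1000 CPUs: capped near 1/F = 2
        System.out.printf("%.2f%n", speedup(0.1, 10));   // 10 CPUs, 10% serial
    }
}
```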

1.4.2 Gustafson's Law

Gustafson's law also tries to explain the relationship between the number of processors, the serialization ratio, and the speedup, but it looks at the problem from a different perspective than Amdahl's law.
Speedup = n - F(n - 1), where n is the number of processors and F is the serial fraction.
Conclusion: from Gustafson's law, if the serialization ratio is small and the parallelization ratio is large, the speedup approaches the number of processors. As long as you keep adding processors, you can obtain ever faster speeds.
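Gustafson's formula, Speedup = n - F(n - 1), can be evaluated the same way (method name illustrative). Unlike Amdahl's bound, the speedup here keeps growing as n grows:

```java
public class Gustafson {
    // Gustafson's law: speedup = n - F * (n - 1)
    // F = serial fraction, n = number of processors.
    static double speedup(double f, int n) {
        return n - f * (n - 1);
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", speedup(0.1, 10));  // 10 CPUs, 10% serial
        System.out.printf("%.2f%n", speedup(0.1, 100)); // keeps growing with n
        System.out.printf("%.2f%n", speedup(0.0, 8));   // fully parallel: speedup = n
    }
}
```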

1.4.3 Are the two formulas contradictory

Amdahl's law and Gustafson's law reach different conclusions because they look at the same objective facts from different angles; their focus differs.

  • Amdahl's law emphasizes: when the serial ratio is fixed, the speedup has an upper limit of 1/F; no matter how many CPUs participate in the computation, that limit cannot be exceeded!
  • Gustafson's law is concerned with: if the proportion of code that can be parallelized is large enough, the speedup can grow linearly with the number of CPUs.

1.5 The Java Memory Model (JMM)

Concepts

  • Atomicity: an operation that cannot be interrupted; it either completes entirely or does not happen at all;
  • Visibility: when one thread modifies the value of a shared variable, whether other threads can immediately see the modification;
  • Ordering: instructions may be reordered when the program executes, so the order in which they actually run may differ from the original program order.
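Visibility can be demonstrated with a `volatile` flag. In this sketch (class and field names are made up), the `volatile` keyword guarantees the reader thread sees the write; without it, the reader could legally spin forever on a stale cached value:

```java
public class VisibilityDemo {
    // volatile guarantees that a write by one thread is visible to others.
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the write to 'stop' becomes visible
            }
            System.out.println("stopped");
        });
        reader.start();

        Thread.sleep(100);
        stop = true;   // volatile write: guaranteed visible to the reader
        reader.join(); // terminates because the reader sees the write
    }
}
```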

1.5.4 Happens-Before Rules

Rules that instruction reordering must not violate:

  • Program order rule: within a single thread, execution respects the semantics of program order;
  • Volatile rule: a write to a volatile variable happens-before any subsequent read of it, which guarantees the visibility of volatile variables;
  • Lock rule: an unlock must happen-before any subsequent lock of the same monitor;
  • Transitivity: if A happens-before B and B happens-before C, then A happens-before C;
  • A thread's start() method happens-before every action of that thread;
  • All operations of a thread happen-before another thread detects its termination (e.g. via Thread.join() returning);
  • A thread's interruption (interrupt()) happens-before the interrupted thread's code detects the interrupt;
  • The completion of an object's constructor happens-before its finalize() method.

These rules ensure that instruction reordering does not break the original semantics of the program.
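The program-order, volatile, and transitivity rules combine in the classic flag-publication idiom. This sketch (thread and field names are made up) shows why the reader is guaranteed to see `data = 42` even though `data` itself is not volatile:

```java
public class HappensBefore {
    static int data = 0;                  // plain, non-volatile field
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;     // (1) program order: happens-before (2)
            ready = true;  // (2) volatile write: happens-before the read at (3)
        });
        Thread reader = new Thread(() -> {
            while (!ready) {
                // (3) spin on the volatile read
            }
            // Transitivity: (1) -> (2) -> (3), so data is guaranteed to be 42.
            System.out.println(data);
        });
        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}
```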

Origin blog.csdn.net/qq_43424037/article/details/113651621