Common Terms in Concurrent Programming

Original article; please cite the source when reproducing: https://www.cnblogs.com/agilestyle/p/11409772.html

 

The field of concurrent programming can be abstracted into three core problems: division of labor, synchronization, and mutual exclusion.

 

Concurrency: two or more events occur within the same time interval; macroscopically they appear to happen at the same time, but microscopically they happen alternately. For example: going out with girl A in the morning, and then going out with girl B later that same morning.

Parallelism: two or more events occur at exactly the same moment. For example: going out with both girls at the same time that morning.

 

Program: refers to a sequence of instructions.

Process Control Block (PCB): a data structure the system creates for each running program to record the various pieces of information that describe the process (e.g., where the program code is stored).

 

Process entity: consists of three parts, the program segment, the data segment, and the PCB; it is static.

Process: the execution of a process entity; it is the system's independent unit of resource allocation and scheduling, and it is dynamic.

Thread: the basic execution unit of the CPU and the smallest unit of the program's execution flow. After threads are introduced, the process is the basic unit of resource allocation and the thread is the basic unit of scheduling.

 

Mutual exclusion: at any one time, only one thread is allowed to access the shared variable.

Synchronization: how threads communicate and cooperate with one another.

 

Critical section: a piece of code that must be executed under mutual exclusion.
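
A minimal sketch of a critical section in Java (the `Counter` class and its fields are illustrative names, not code from the original post): the `synchronized` block is the critical section, and the mutex `lock` guarantees that only one thread executes it at a time.

```java
public class Counter {
    private final Object lock = new Object();
    private long value = 0;

    public void increment() {
        // Critical section: the mutex `lock` admits only one thread at a time.
        synchronized (lock) {
            value++;
        }
    }

    public long get() {
        synchronized (lock) {
            return value;
        }
    }
}
```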

 

Visibility: when one thread modifies a shared variable, other threads can see the modification immediately.
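
A small sketch of a visibility problem and its fix, assuming a Java `volatile` flag (the class and field names are my own for illustration): without `volatile`, the worker thread might never observe the write made by the main thread.

```java
public class VisibilityDemo {
    // Declared volatile so the write by the main thread becomes
    // visible to the worker thread; without it the worker may loop forever.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the main thread's write is visible
            }
            System.out.println("Worker saw stop == true and exits");
        });
        worker.start();

        Thread.sleep(100);
        stop = true;   // visible to the worker because the field is volatile
        worker.join();
    }
}
```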

 

Atomicity: the property that one or more operations are executed by the CPU without being interrupted. Strictly speaking, the essence of atomicity is not indivisibility; indivisibility is only its external appearance. The essence is a consistency requirement across multiple resources: the intermediate states of the operation must not be visible to the outside. So solving an atomicity problem means guaranteeing that intermediate states are not externally visible.
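
A sketch contrasting a non-atomic update with an atomic one in Java (`AtomicityDemo` is my own example, not code from the original post): `plain++` compiles to a read-add-write sequence whose intermediate state can interleave with another thread, while `AtomicLong.incrementAndGet()` performs the whole update as a single atomic step.

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicityDemo {
    private static long plain = 0;                       // plain++ is read-modify-write, not atomic
    private static final AtomicLong atomic = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // three steps: read, add, write -> updates can be lost
                atomic.incrementAndGet(); // one atomic operation
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println("plain  = " + plain);         // usually less than 200000
        System.out.println("atomic = " + atomic.get());  // always 200000
    }
}
```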

 

Ordering: a program is written to execute in the order of its code, but to improve performance the compiler sometimes changes the order of the statements.

 

Happens-Before: the result of an earlier operation is visible to later operations. "A Happens-Before B" means that the result of event A is visible to event B, regardless of whether A and B occur in the same thread. Happens-Before constrains compiler optimization: the compiler is still allowed to optimize, but its optimizations must respect the Happens-Before rules.
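
A minimal sketch of the volatile Happens-Before rule in Java (the class and the `payload`/`ready` fields are illustrative names): the write to the volatile `ready` happens-before the later read of it, so by transitivity the ordinary write to `payload` is also guaranteed to be visible to the reader.

```java
public class HappensBeforeDemo {
    private static int payload = 0;
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            payload = 42;   // (A) ordinary write; A happens-before B (program order)
            ready = true;   // (B) volatile write
        });

        Thread reader = new Thread(() -> {
            while (!ready) { }   // (C) volatile read; B happens-before C (volatile rule)
            // By transitivity A happens-before C, so payload is guaranteed to be 42 here.
            System.out.println("payload = " + payload);
        });

        writer.start();
        reader.start();
    }
}
```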

 

Wait-notify mechanism: in a complete wait-notify mechanism, a thread first acquires the mutex lock; when the condition it needs is not satisfied, it releases the mutex and enters the waiting state; when the condition is satisfied, the waiting thread is notified and reacquires the mutex.
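
A sketch of the wait-notify mechanism using Java's intrinsic lock (`BoundedBuffer` and its capacity are made up for illustration): `wait()` releases the monitor and blocks until another thread changes the condition and calls `notifyAll()`.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedBuffer {
    private final Queue<Integer> queue = new ArrayDeque<>();
    private final int capacity = 10;

    public synchronized void put(int item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();          // condition not met: release the mutex and wait
        }
        queue.add(item);
        notifyAll();         // condition changed: wake up waiting threads
    }

    public synchronized int take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();          // release the mutex and wait for an element
        }
        int item = queue.remove();
        notifyAll();         // a slot has been freed
        return item;
    }
}
```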

 

Thread safety: when called from multiple threads, the program still behaves correctly, executing as we expect.

 

Data race: multiple threads access the same data at the same time and at least one of them writes it; if no protective measures are taken, concurrency bugs result.

 

Race condition: the result of execution depends on the order in which the threads happen to execute.
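
A sketch of a classic check-then-act race condition in Java (the `cache` map and method names are mine, not from the original): two threads can both pass the `containsKey` check before either one puts, so the final value depends on thread ordering; making the check and the act a single critical section fixes it.

```java
import java.util.HashMap;
import java.util.Map;

public class RaceConditionDemo {
    private final Map<String, Integer> cache = new HashMap<>();

    // Broken: another thread may run between the check and the act,
    // so both threads insert and the winner depends on scheduling.
    public void putIfAbsentBroken(String key, int value) {
        if (!cache.containsKey(key)) {   // check
            cache.put(key, value);       // act
        }
    }

    // Fixed: check and act are performed as one critical section.
    public synchronized void putIfAbsentSafe(String key, int value) {
        if (!cache.containsKey(key)) {
            cache.put(key, value);
        }
    }
}
```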

 

Deadlock: a group of threads compete for resources and wait on one another, leading to "permanent" blocking (see the sketch after the next two definitions).

Livelock: the threads are not blocked, yet they still cannot make progress.

Starvation: a thread cannot make progress because it is unable to obtain the resources it needs.
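
A minimal deadlock sketch in Java (lock and class names are illustrative): two threads acquire the same two locks in opposite orders, so each ends up holding one lock while waiting forever for the other. Having every thread acquire the locks in the same global order avoids the cycle.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 takes lockA then lockB; thread 2 takes lockB then lockA.
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100);
                synchronized (lockB) {
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) {
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
        // Fix: acquire lockA and lockB in the same order in every thread.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```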

 

Throughput: refers to the number of requests that can be processed per unit of time. The higher the throughput, the better the performance.

Latency: the time from sending a request to receiving its response. The smaller the latency, the better the performance.

Concurrency (load): the number of requests that can be processed at the same time. Generally, as the concurrency level rises, latency rises too, so the latency metric is usually stated for a given concurrency level. For example: at a concurrency of 1000, the latency is 50 ms.

In the field of concurrent programming, improving performance is essentially improving hardware utilization; more concretely, improving I/O utilization and CPU utilization.

 

Monitor: a mechanism that manages shared variables and the operations on them so that they support concurrent access. Translated into Java terms, it means managing a class's member variables and member methods so that the class is thread-safe.
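
A sketch of the monitor idea in Java (`MonitorCounter` is my own illustrative class): marking the methods `synchronized` makes them use the object's intrinsic monitor, so the member variable is only ever touched by one thread at a time and the class is thread-safe.

```java
// A "monitor" in Java terms: the class manages its member variable and the
// methods that operate on it, so instances are thread-safe.
public class MonitorCounter {
    private long count = 0;

    // synchronized methods acquire the object's intrinsic lock (its monitor),
    // so at most one thread at a time executes any of them on the same object.
    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }
}
```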

 

Generic thread life cycle: initial state, runnable state, running state, sleeping state, terminated state.

A thread in the Java language has six states in total: NEW (initial), RUNNABLE (runnable/running), BLOCKED (blocked), WAITING (waiting without a time limit), TIMED_WAITING (waiting with a time limit), TERMINATED (terminated).
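
A small sketch that prints a few of these states via `Thread.getState()` (the `sleep` timings are only illustrative, and the printed states assume typical scheduling):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(1000);       // thread is TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) { }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep()
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}
```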

 

Call stack: the stack, located through the CPU's stack register, where the parameters and return addresses of method calls are found.

 

Stack frame: every method call has its own independent space on the call stack; each stack frame holds the parameters and return address that its method needs. When a method is called, a new stack frame is created and pushed onto the call stack; when the method returns, the corresponding stack frame is automatically popped. In other words, a stack frame lives and dies with its method. The scope of a local variable is the inside of its method, so local variables are also placed on the call stack and live and die with the method. If a variable needs to outlive the method, it must be created on the heap.

 

Thread confinement: two threads can call the same method with different parameters at the same time; each thread has its own independent call stack, and local variables live in each thread's own call stack and are not shared, so naturally they cause no concurrency problems.
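
A small sketch of this kind of stack confinement (the class and method are my own illustration): the local variable `sum` lives in the calling thread's stack frame, so two threads running `sumUpTo` concurrently with different arguments never interfere.

```java
public class ThreadConfinementDemo {
    // `sum` is a local variable: it lives in the calling thread's own stack
    // frame, so concurrent calls never share it.
    static long sumUpTo(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        new Thread(() -> System.out.println("1..100  -> " + sumUpTo(100))).start();
        new Thread(() -> System.out.println("1..1000 -> " + sumUpTo(1000))).start();
    }
}
```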

 
