[Essential foundations of concurrent programming: processes -- 2019-08-06 16:02:14]

Original link: http://106.13.73.98/__/10/

Table of Contents

1. Operating system background

2. What is a process

3. Process scheduling

4. Parallelism and concurrency

5. Synchronous, asynchronous, blocking, non-blocking

6. Process creation and termination


 

1. Operating system background

As the name implies, a process is a program in the middle of being executed; it is an abstraction of a running program. The concept of the process originated in the operating system. It is a core concept of the operating system, one of the oldest and most important abstractions the operating system provides, and everything else in the operating system revolves around it.

So to really understand processes, you should first understand the operating system (see the separate introduction to operating systems).

PS: Even with only one CPU (as in early computers), the machine must still be able to support (pseudo-)concurrency. Multiprogramming (time-division multiplexing + space-division multiplexing + hardware-supported isolation) turns a single CPU into multiple virtual CPUs. Without the process abstraction, modern computing would not exist.

  • Necessary theoretical background

1. The role of the operating system:
1) Hide the messy, complicated hardware interfaces and provide good, clean abstractions.
2) Manage and schedule processes, so that multiple processes competing for the hardware do so in an orderly way.

2. Multiprogramming (the "multi-channel" technique)
1) Background: achieving concurrency on a single CPU.
2) Space-division multiplexing: several programs reside in memory at the same time.
3) Time-division multiplexing: programs take turns using time slices of the CPU. Key point: a process is switched out when it hits I/O, and also when it has held the CPU for too long; the core idea is that its state is saved before the switch, so that the next time it is switched back in, it resumes from exactly where it left off.

PS: Modern multi-core CPUs generally work the same way; multiprogramming is applied on each core. For example, with four cores, if a program running on core 1 hits a blocking I/O operation, it waits until the I/O finishes and is then re-scheduled. At that point it may be scheduled onto any of the four cores; the exact choice is decided by the operating system's scheduling algorithm.


2. What is a process

A process consists of three parts: the code segment, the data segment, and the PCB (Process Control Block).

A process is a running activity of a program on a computer over some data set. It is the basic unit of resource allocation and scheduling in the system and the foundation of the operating system's structure. In early, process-oriented computer architectures, the process was the basic execution entity of a program; in contemporary, thread-oriented architectures, the process is the container for threads. A program is a description of instructions, data, and their organization; the process is the concrete instance of the program in execution.

Narrow definition: a process is an instance of a running program.

Broad definition: a process is an independent activity, with its own functionality, of a program running over some data set. It is the basic unit of dynamic execution in the operating system; in traditional operating systems, the process is both the basic unit of resource allocation and the basic unit of execution.

  • The concept of a process

First, a process is an entity. Each process has its own address space, which in general includes a text region, a data region, and a stack region. The text region stores the code executed by the processor; the data region stores variables and dynamically allocated memory used while the process executes; the stack region stores instructions and local variables for active procedure calls.

Second, a process is a "program in execution". A program by itself is a lifeless entity; only when the processor gives it life (when the operating system executes it) does it become an active entity, and we call that entity a process.

The process is the most basic and important concept in the operating system. It appeared after multiprogramming systems were introduced, in order to characterize the dynamic behavior inside the system and to describe the activity rules of concurrently running programs; all operating system designs based on multiprogramming are built on the process.

The operating system introduces the concept of the process because: from a theoretical point of view, it is an abstraction of a running program; from an implementation point of view, it is a data structure whose purpose is to capture clearly the dynamic behavior of the system and to manage and schedule the programs that enter main memory to run.

  • Process characteristics

1. Dynamic: the essence of a process is one execution of a program; in a multiprogramming system, processes are created dynamically and die dynamically.

2. Concurrency: any process can execute concurrently with other processes.

3. Independence: a process is a basic unit that can run on its own, and it is also the independent unit of resource allocation and scheduling in the system.

4. Asynchrony: because processes constrain one another, a process executes intermittently, i.e. processes advance at independent, unpredictable speeds.

5. Structure: a process consists of three parts: the program, the data, and the process control block.

Several different processes can contain the same program: the same program running over different data sets constitutes different processes and can produce different results, but the program itself cannot change while it is executing.

  • The difference between a process and a program

1. A program is an ordered collection of instructions and data; by itself it has no notion of "running" and is a static concept.
2. A process is one execution of a program on a processor; it is a dynamic concept.
3. A program can be kept around indefinitely as a piece of software, whereas a process has a finite life cycle.
4. A program is permanent; a process is transient.

Note: running the same program twice creates at least two processes in the operating system. This is why we can usually run several copies of the same piece of software at the same time, each doing different things, without them getting mixed up.


3. Process scheduling

To run multiple processes alternately, the operating system must schedule them. This scheduling is not done at random; it follows certain rules, hence process scheduling algorithms.

  • First-come, first-served (FCFS) scheduling

First-come, first-served (FCFS) is a very simple scheduling algorithm that can be used both for job scheduling and for process scheduling. FCFS favors long jobs (processes) and penalizes short ones; consequently it suits CPU-bound jobs and works poorly for I/O-bound jobs (processes).
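As a rough illustration (the burst times are hypothetical, not from the original article), the sketch below computes waiting times for jobs served strictly in arrival order; it shows how short jobs suffer when queued behind a long one.

```python
# Minimal FCFS sketch with hypothetical burst times.
def fcfs_waiting_times(bursts):
    """Waiting time of each job when jobs are served strictly in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)      # a job waits for every job that arrived before it
        elapsed += burst
    return waits

bursts = [24, 3, 3]                # one long CPU-bound job followed by two short ones
print(fcfs_waiting_times(bursts))                      # [0, 24, 27]
print(sum(fcfs_waiting_times(bursts)) / len(bursts))   # average wait: 17.0
```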

  • Shortest-job-first scheduling

Shortest-job-first / shortest-process-first (SJ/PF) scheduling picks the shortest job or process next; it too can be used for job scheduling or for process scheduling. However, it is unfavorable to long jobs, it cannot guarantee that urgent jobs (processes) are handled in time, and the "length" of a job is only an estimate supplied in advance.
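For contrast with the FCFS sketch above, here is the same hypothetical calculation with the jobs served shortest first; the average wait drops sharply.

```python
# Minimal shortest-job-first sketch with the same hypothetical bursts.
def sjf_waiting_times(bursts):
    """Waiting time of each job when the shortest job is always served first."""
    waits, elapsed = [], 0
    for burst in sorted(bursts):   # serve in ascending order of (estimated) length
        waits.append(elapsed)
        elapsed += burst
    return waits

print(sjf_waiting_times([24, 3, 3]))              # [0, 3, 6]
print(sum(sjf_waiting_times([24, 3, 3])) / 3)     # average wait: 3.0, versus 17.0 under FCFS
```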

  • Round-robin scheduling

The basic idea of round-robin (RR) scheduling is to make each process's waiting time in the ready queue proportional to the service time it receives. In the round-robin method, CPU time is divided into fixed-size time slices (tens to hundreds of milliseconds). If a process selected by the scheduler uses up its allotted time slice without finishing its task, it releases the CPU and moves to the end of the ready queue to wait for the next round of scheduling; meanwhile, the scheduler picks the process currently at the head of the ready queue.

Clearly, round-robin can only be used to allocate resources that are preemptible, i.e. resources that can be taken away at any moment and reassigned to another process. The CPU is a preemptible resource, but resources such as printers are not, and since job scheduling allocates all of the system's hardware resources other than the CPU, including non-preemptible ones, job scheduling does not use the round-robin method.

In round-robin scheduling, the choice of time-slice length is important. First, the length directly affects system overhead and response time: if the slice is too short, the scheduler preempts the processor more often, the number of process context switches rises sharply, and system overhead grows. Conversely, if the slice is too long, for example long enough that the longest process in the ready queue can run to completion within one slice, round-robin degenerates into first-come, first-served. The slice length should therefore be chosen according to the system's response-time requirements and the maximum number of processes allowed in the ready queue.
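As a rough, simplified sketch (hypothetical job lengths, all jobs ready at time 0), the following simulates round-robin with a fixed quantum and records which job runs in each slice; unfinished jobs go back to the end of the ready queue exactly as described above.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin over jobs that are all ready at time 0.
    Returns (job_id, start, end) for each slice, in the order the slices run."""
    ready = deque(enumerate(bursts))          # (job id, remaining time), in arrival order
    timeline, clock = [], 0
    while ready:
        job, remaining = ready.popleft()
        run = min(quantum, remaining)
        timeline.append((job, clock, clock + run))
        clock += run
        if remaining > run:                        # slice used up but job unfinished:
            ready.append((job, remaining - run))   # back to the end of the ready queue
    return timeline

print(round_robin([5, 3, 8], quantum=2))
```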

In round-robin scheduling, a process joins the ready queue in three situations:
1. Its time slice is used up but the process has not finished; it returns to the end of the ready queue to wait for its next turn and continue executing.
2. Its time slice is not yet used up, but it blocks because of an I/O request or because of a synchronization or mutual-exclusion relationship with another process; once the blocking condition is cleared, it returns to the ready queue.
3. A newly created process joins the ready queue.

If these processes are treated differently and given different priorities and time slices, the system's service quality and efficiency can intuitively be improved further. For example, processes can be placed into different ready queues according to the reason they were blocked before becoming ready and according to the process type; each queue is ordered by FCFS, processes in different queues have different priorities, and processes in the same queue share the same priority. In this way, whenever a process finishes its time slice, wakes up from blocking, or is newly created, it enters the appropriate ready queue.

  • Multilevel feedback queue

Each of the process scheduling algorithms described above has certain limitations. For instance, shortest-process-first scheduling only takes care of short processes and neglects long ones, and if the length of a process is not specified in advance, neither shortest-process-first nor preemptive scheduling based on process length can be used at all.

The multilevel feedback queue scheduling algorithm does not need to know in advance how much time each process will require, and it can satisfy the needs of many kinds of processes, so it is currently regarded as one of the better process scheduling algorithms. It works as follows (a simplified simulation sketch follows the list):

1. Set up several ready queues and give each queue a different priority: the first queue has the highest priority, the second the next highest, and the remaining queues have successively lower priority. The time slice given to processes also differs per queue: the lower a queue's priority, the longer its time slice. For example, the second queue's time slice is twice as long as the first queue's, and in general queue i+1's time slice is twice as long as queue i's.

2. When a new process enters memory, it is first placed at the end of the first queue and waits to be scheduled according to FCFS. When its turn comes, if it can finish within its time slice, it leaves the system; if it has not finished when the slice expires, the scheduler moves it to the end of the second queue, where it again waits to be scheduled under FCFS. If it still has not finished after running for one slice in the second queue, it is moved to the end of the third queue, and so on. When a long job (process) has dropped from the first queue all the way down to the n-th queue, it runs in the n-th queue under round-robin.

3. The scheduler only schedules processes in the second queue when the first queue is empty; more generally, it only runs processes in queue i when queues 1 through i-1 are all empty. If the processor is serving a process from queue i and a new process arrives in a higher-priority queue (any of queues 1 through i-1), the new process preempts the running one: the scheduler puts the running process back at the end of queue i and hands the processor to the newly arrived higher-priority process.
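The sketch below is a deliberately simplified model of the mechanism described above (hypothetical burst times, all jobs arriving at time 0, so the preemption-by-new-arrivals rule in point 3 is not modelled): an unfinished job drops one level, each level doubles the time slice, and the lowest level runs plain round-robin.

```python
from collections import deque

def mlfq(bursts, base_quantum=1, levels=3):
    """Simplified multilevel feedback queue: all jobs arrive at time 0; a job that
    does not finish within its slice drops one level; the lowest level is round-robin."""
    queues = [deque() for _ in range(levels)]
    for job, burst in enumerate(bursts):
        queues[0].append((job, burst))                       # every new job starts in the top queue
    history, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest-priority non-empty queue
        job, remaining = queues[level].popleft()
        quantum = base_quantum * 2 ** level                  # each level doubles the time slice
        run = min(quantum, remaining)
        clock += run
        history.append((job, level, clock))
        if remaining > run:                                  # unfinished: demote one level
            queues[min(level + 1, levels - 1)].append((job, remaining - run))
    return history

print(mlfq([7, 2, 5]))
```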


4. Parallelism and concurrency

Parallel: executing at the same time, like a race in which both runners keep moving forward (resources are sufficient, e.g. three threads on a quad-core CPU).
Concurrent: resources are limited and tasks take turns using them, like a narrow stretch of road (a single-core CPU) that only one person can pass at a time: A walks a little and lets B go, B walks a little and lets A go, alternating back and forth, with the aim of improving overall efficiency.

The difference between parallelism and concurrency:
Parallelism is the microscopic view: at one precise instant, different programs are genuinely executing at the same time, which requires multiple processors.
Concurrency is the macroscopic view: over a period of time, the tasks appear to run simultaneously, for example a server handling multiple sessions.
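A small sketch of the contrast, assuming a multi-core machine (the function names and numbers are illustrative): the same CPU-bound tasks run once sequentially in a single process and once across a pool of worker processes, which may genuinely run in parallel if enough cores are available.

```python
import time
from multiprocessing import Pool

def busy(n):
    """A CPU-bound task: sum the first n integers."""
    return sum(range(n))

if __name__ == "__main__":
    jobs = [10_000_000] * 4

    start = time.perf_counter()
    [busy(n) for n in jobs]            # one process: the tasks take turns on the CPU (concurrent at best)
    print("sequential:", time.perf_counter() - start)

    start = time.perf_counter()
    with Pool(4) as pool:              # four worker processes: may run in parallel on four cores
        pool.map(busy, jobs)
    print("pool of 4: ", time.perf_counter() - start)
```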


5. Synchronous, asynchronous, blocking, non-blocking

  • Process states
![Insert picture description here](http://106.13.73.98/media/ai/2019-03/0442793f-4fba-4bc4-9c80-83f1e829cb40.png)

Before looking at the other concepts, we first need to understand a few process states. While a program runs, scheduling is controlled by the operating system's algorithm, so the program may pass through several states: ready, running, and blocked.

1. Ready state: when a process has been allocated all the resources it needs except the CPU, so that it could execute as soon as it gets a processor, it is in the ready state.

2. Running state: when a process has obtained a processor and its program is currently executing on that processor, it is in the running state.

3. Blocked state: when a running process cannot continue because it is waiting for some event to occur, it gives up the processor and becomes blocked. Many kinds of events can block a process, for example waiting for an I/O operation to complete, an unsatisfied buffer request, or waiting for a signal.

![Insert picture description here](http://106.13.73.98/media/ai/2019-03/b7d527d7-923e-48f3-8af7-047fe86fd7d0.png)
  • Synchronous and asynchronous

Synchronous: completing one task depends on another task, and the depending task counts as complete only after the task it depends on has finished. This is a reliable task sequence: either both succeed or both fail, and the two tasks remain in a consistent state.

Asynchronous: there is no need to wait for the depended-on task to finish; the depending task is simply told what work the depended-on task will do and then carries on immediately, and it counts as complete as soon as its own work is done. Whether the depended-on task ultimately finishes cannot be determined by the task that depends on it, so this is an unreliable task sequence.

For example, when you go to the bank to handle some business, there are two possible approaches:

The first: stand in the queue.
The second: take a slip of paper with your number on it, and wait for the counter to call your number when your turn comes.

The first (waiting in the queue) is waiting for a synchronous message notification: you yourself have to keep watching for your turn to transact.
The second (waiting to be called) is waiting for an asynchronous message notification. In asynchronous message handling, the party waiting for the notification (here, you, waiting to do your business) usually registers a callback mechanism, and the party that triggers the message (here, the counter clerk) uses some mechanism (here, calling out the number written on the slip of paper) to notify the waiting party when the event occurs.

  • Blocking and non-blocking

Blocking and non-blocking describe the state of a program (or thread) while it waits for a message notification, regardless of whether that notification is synchronous or asynchronous. In other words, blocking and non-blocking are mainly about the program's (thread's) state while it waits for the message notification.

Continuing the example above, whether you queue up or take a number, if during the wait you can do nothing except wait for the notification, then the mechanism is blocking; in a program this shows up as the program being stuck at a function call, unable to continue.

Conversely, if you chat and send messages while waiting at the bank for these notifications, that state is non-blocking: you are not stuck waiting for the notification, you do your own things while keeping an ear out for the counter's call.

Note: the synchronous non-blocking combination is actually inefficient. Imagine chatting while repeatedly checking whether the queue has reached you yet: if chatting and checking the queue are treated as two operations of one program, the program has to switch back and forth between them, which is clearly inefficient. The asynchronous non-blocking combination does not have this problem, because chatting is your business while notifying you is the counter's (the message trigger mechanism's) business, so the program does not have to switch back and forth between two different operations.

  • Combining synchronous/asynchronous with blocking/non-blocking

1. Synchronous blocking

The least efficient form. In the example above, you concentrate entirely on queuing and do nothing else.

2. Asynchronous blocking

An asynchronous operation can still be blocked: it is not blocked while processing the message, but blocked while waiting for the message notification.

In the example above, the person waiting at the bank uses an asynchronous way of waiting for the message to be triggered (waiting for the counter to call), namely taking a numbered slip. Now suppose that during this time you cannot leave the bank to do anything else; then clearly you are blocked while waiting for this operation.

3. Synchronous non-blocking

This is actually inefficient.

Take the example above: imagine chatting while repeatedly checking whether the queue has reached you yet. If chatting and checking the queue are treated as two operations of one program, the program has to switch back and forth between the two, which is clearly inefficient.

4. Asynchronous non-blocking

More efficient!

Chatting is your (the ready process's) business, and notifying you is the counter's (the message trigger mechanism's) business, so the program does not switch back and forth between the two operations.

For example, suppose at this moment you suddenly want to go out for a smoke, so you tell the lobby manager: when my number comes up, please call out and let me know. You are then not blocked on the waiting operation at all; this is the natural asynchronous + non-blocking way.
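A compact sketch of the two extremes using Python's standard concurrent.futures module (the "bank counter" here is a worker thread; the names and sleep time are illustrative): a synchronous blocking wait versus registering a callback and carrying on.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def serve_customer():
    time.sleep(1)                       # the counter takes a while to handle the business
    return "business done"

with ThreadPoolExecutor() as bank:
    # Synchronous + blocking: submit the request and wait for the result, doing nothing else.
    print("sync result:", bank.submit(serve_customer).result())

    # Asynchronous + non-blocking: register a callback (the "call my number" notice)
    # and keep doing our own thing while the counter works.
    ticket = bank.submit(serve_customer)
    ticket.add_done_callback(lambda fut: print("notified:", fut.result()))
    print("chatting while waiting...")
```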


6. Process creation and termination

  • Creating processes

All hardware needs an operating system to manage it, and wherever there is an operating system there is the concept of a process, so there must be some way to create processes. Some operating systems are designed for a single application, such as the controller in an air conditioner: once the air conditioner is switched on, all of its processes already exist.

General-purpose systems (which can run many applications, such as Windows), however, need the ability to create and destroy processes while the system is running.

There are four ways a new process can be created:

1. System initialization

2. A running process spawns a child process

3. A user's interactive request (e.g., the user opens a browser)

4. Initialization of a batch job (only applies to batch systems on mainframes)

Whichever way a process is created, an already-existing process must execute the system call that creates the new process.

  • Creating processes on UNIX and Windows

1. UNIX: the call is fork. fork creates a child that is an exact copy of the parent process; the two have the same memory image, the same environment strings, and the same open files (in the shell interpreter process, executing a command creates a child process).

2. Windows: the call is CreateProcess. CreateProcess both creates the new process and is responsible for loading the correct program into it.

What they have in common: after creation, the parent and child have separate address spaces (multiprogramming requires memory isolation between processes at the physical level), and any modification one process makes in its own address space does not affect the other.

Where they differ: on UNIX, the child's initial address space is a copy of the parent's (the child and parent can share read-only memory regions), whereas on Windows the child's and parent's address spaces are different from the very start.
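A small cross-platform sketch of the "separate address spaces" point using Python's multiprocessing module, which relies on fork on UNIX and on a CreateProcess-based spawn on Windows (the variable names are illustrative): the child modifies its own copy of the data, and the parent's copy is untouched.

```python
from multiprocessing import Process

data = ["written by the parent"]

def child():
    data.append("written by the child")    # changes only the child's copy of the list
    print("in child :", data)

if __name__ == "__main__":
    p = Process(target=child)              # fork on UNIX, CreateProcess-based spawn on Windows
    p.start()
    p.join()
    print("in parent:", data)              # the parent's list is unchanged
```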

  • Terminating processes

1. Normal exit: voluntary, e.g., the user closes an interactive page, or the program finishes its work and calls the exit system call (exit on Linux, ExitProcess on Windows).

2. Exit on error: voluntary, e.g., running sh test.py when test.py does not exist.

3. Fatal error: involuntary, e.g., executing an illegal instruction, referencing a memory address that does not exist, or an I/O error; in some cases the exception can be caught.

4. Killed by another process: involuntary, e.g., kill -9.
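A small sketch of how these endings look from the parent's point of view, using child Python processes (assumes a POSIX system for the kill case, since SIGKILL does not exist on Windows): a zero return code for a normal exit, a non-zero code for an error exit, and a negative code when the child is killed by a signal.

```python
import subprocess
import sys

# Normal exit: the child finishes its work and exits with status 0.
ok = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
print("normal exit, returncode =", ok.returncode)        # 0

# Error exit: the child exits voluntarily with a non-zero status.
err = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])
print("error exit,  returncode =", err.returncode)       # 1

# Killed by another process (here the child kills itself with SIGKILL, like `kill -9`).
killed = subprocess.run(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)
print("killed,      returncode =", killed.returncode)    # -9: terminated by signal 9
```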


Origin www.cnblogs.com/gqy02/p/11309603.html