2.1.1 The Concept and Characteristics of Processes

Source: Wang 2020 Computer Operating System

Note: device allocation is handled automatically by the kernel and does not require creating a new process.

Typical events that cause one process to create another process:

1) A user logs in
2) Job scheduling
3) Providing a service
4) A request from an application program


1. The concept of a process
In a multiprogramming environment that allows several programs to execute concurrently, programs lose their closed nature and take on intermittent execution and irreproducible results. The concept of a process was therefore introduced so that concurrently executing programs can be better described and controlled, and so that the operating system can realize concurrency and sharing.
So that each concurrently executing program (together with its data) can run independently, it must be given a dedicated data structure called the process control block (Process Control Block, PCB). The system uses the PCB to describe the basic situation and running state of a process and, on that basis, to control and manage it. Accordingly, the process image (process entity) consists of three parts: the program section, the related data section, and the PCB. Creating a process is, in essence, creating the PCB of its process image, and terminating a process is, in essence, deleting its PCB. The process image is static, whereas the process itself is dynamic; the PCB is the sole sign that a process exists.
Typical definitions of a process:
1) A process is an execution of a program.
2) A process is the activity that takes place when a program and its data execute sequentially on a processor.
3) A process is the running activity, with independent functionality, of a program over a set of data; it is an independent unit of resource allocation and scheduling in the system.
2. Characteristics of a process
1) Dynamic. A process is created, runs, and is eventually terminated; this dynamism is its most basic characteristic.
2) Concurrent. Processes were introduced precisely so that a program can execute concurrently with the programs of other processes, improving resource utilization.
3) Independent. A process entity is a basic unit that can run independently, obtain resources independently, and accept scheduling independently.
4) Asynchronous. Processes advance at independent and unpredictable speeds; this asynchrony can make results irreproducible, so the operating system must provide process synchronization mechanisms.
5) Structured. Every process is given a PCB that describes it. Structurally, a process entity consists of three parts: the program section, the data section, and the PCB.
2.1.2 Process States and Transitions
1) Running state: the process is executing on a processor. In a uniprocessor environment, at most one process is running at any moment.
2) Ready state: the process is ready to run; it has obtained everything it needs except the processor and can execute as soon as a processor is assigned to it. (Entering the ready state is passive behaviour on the part of the process.)
3) Blocked state, also called the waiting state: the process has suspended execution while waiting for some event, such as a resource becoming available or the completion of an input/output operation. Even if the processor is idle, the process cannot run. (Entering the blocked state is active behaviour on the part of the process.)
4) Created (new) state: the process is being created and is not yet ready to run. Creating a process usually takes several steps: first a blank PCB is requested and filled in with information used to control and manage the process, then the system allocates the resources the process needs to run, and finally the process is moved to the ready state.
5) Terminated (exit) state: the process is disappearing from the system, either because it finished normally or because it was interrupted or stopped running for some other reason. When a process is to end its run, the system first sets it to the terminated state and then carries out the further work of releasing and reclaiming its resources.
The ready state means the process lacks only the processor: as soon as it obtains processor time it executes immediately. The waiting (blocked) state means the process needs some resource other than the processor, or is waiting for an event.


Ready → running: a process in the ready state is scheduled and obtains the processor (it is assigned a time slice), so it changes from the ready state to the running state.
Running → ready: a running process whose time slice is exhausted has to give up the processor and changes from the running state to the ready state. In addition, in a preemptive operating system, when a higher-priority process becomes ready, the scheduler moves the currently executing process to the ready state so that the higher-priority process can execute.
Running → blocked: when a process requests the use or allocation of some resource, or waits for some event to occur, it changes from the running state to the blocked state. The process issues such a request as a system call, a special way for a user-mode program to invoke services of the operating-system kernel.
Blocked → ready: when the event a blocked process was waiting for occurs, for example when an I/O operation finishes or an interruption ends, the interrupt handler changes the corresponding process from the blocked state to the ready state.
The change from running to blocked is an active behaviour of the process itself, whereas the change from blocked to ready is passive and requires the cooperation of other, related processes.
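The five states and four transitions above can be captured in a small data structure. The following is a minimal illustrative C sketch (not from the source text); the state names and the can_transition helper are hypothetical, chosen only to mirror the transitions just described.

```c
#include <stdbool.h>

/* The five basic process states described above. */
typedef enum {
    STATE_NEW,        /* created state          */
    STATE_READY,      /* ready state            */
    STATE_RUNNING,    /* running state          */
    STATE_BLOCKED,    /* blocked/waiting state  */
    STATE_TERMINATED  /* terminated state       */
} proc_state;

/* Returns true only for the legal arcs discussed above:
 * ready -> running   (dispatched, given a time slice)
 * running -> ready   (time slice exhausted or preempted)
 * running -> blocked (waiting for a resource or event)
 * blocked -> ready   (the awaited event occurs)         */
bool can_transition(proc_state from, proc_state to)
{
    switch (from) {
    case STATE_NEW:     return to == STATE_READY;
    case STATE_READY:   return to == STATE_RUNNING;
    case STATE_RUNNING: return to == STATE_READY ||
                               to == STATE_BLOCKED ||
                               to == STATE_TERMINATED;
    case STATE_BLOCKED: return to == STATE_READY;
    default:            return false;
    }
}
```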
2.1.3 Process Control
The main function of process control is to manage all processes in the system effectively: creating new processes, terminating existing processes, and carrying out transitions between process states. In the operating system, the program segments used for process control are usually called primitives. The defining characteristic of a primitive is that it must not be interrupted while executing; it is an indivisible basic unit of operation.
1. Creating a process
A process is allowed to create another process. The creating process is called the parent process, and the created process is called the child process. A child process can inherit resources owned by its parent. When a child process is terminated, the resources it obtained from its parent should be returned to the parent. In addition, when a parent process is terminated, all of its child processes must be terminated as well.
The operating system creates a new process as follows (the create primitive); a user-level analogue is sketched after the list:
1) Assign a unique process identification number to the new process and request a blank PCB. If no PCB can be obtained, creation fails.
2) Allocate resources to the process: allocate the memory needed for the new process's program, data, and user stack (reflected in the PCB). Note that if resources are insufficient at this point, creation does not fail; the process simply enters a waiting (blocked) state until the resources become available.
3) Initialize the PCB, including the identification information, the processor state information, and the processor control information, and set the priority of the process.
4) If the ready queue can accept the new process, insert it into the ready queue, where it waits to be scheduled and run.
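On POSIX systems, user programs reach this creation path through fork(). The sketch below is my own illustration (not part of the source): the parent creates a child, the child inherits a copy of the parent's resources, and the parent later waits for it, matching the parent/child relationship described above.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* kernel allocates a new PCB and PID */
    if (pid < 0) {
        perror("fork");              /* creation failed (e.g. no free PCB) */
        return 1;
    }
    if (pid == 0) {
        /* Child: inherits open files, environment, etc. from the parent. */
        printf("child  pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);
    }
    /* Parent: waits so the child's resources can be reclaimed. */
    waitpid(pid, NULL, 0);
    printf("parent pid=%d created child %d\n", getpid(), pid);
    return 0;
}
```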
2. Terminating a process
The events that cause a process to terminate are: normal completion, meaning the process has finished its task and is ready to stop running; abnormal termination, meaning that something unexpected happened while the process was running and the program cannot continue, such as an out-of-bounds storage access, a protection fault, an illegal instruction, a privileged-instruction error, or an I/O failure; and outside intervention, meaning the process stops running at the request of the outside world, for example intervention by the operating system or the operator, a request from the parent process, or termination of the parent process.
The system terminates a process as follows (the terminate primitive); a user-level analogue follows the list:
1) Using the identifier of the process to be terminated, retrieve its PCB and read the process state from it.
2) If the process to be terminated is currently executing, stop its execution immediately and allocate the processor to another process.
3) If the process has child processes, terminate all of its children as well.
4) Return all resources owned by the process either to its parent process or to the operating system.
5) Remove the PCB from the queue (list) it is in.
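As an illustrative user-level counterpart to the termination path (again my own sketch, not from the source), a POSIX parent can observe whether a child ended normally or abnormally and reclaim it by waiting:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        abort();                     /* child ends abnormally (SIGABRT)       */

    int status;
    waitpid(pid, &status, 0);        /* parent reclaims the child's resources */
    if (WIFEXITED(status))
        printf("normal end, exit code %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("abnormal end, killed by signal %d\n", WTERMSIG(status));
    return 0;
}
```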
3. Blocking and waking up a process
While a process is executing, an expected event may not yet have occurred: a request for a system resource fails, the process must wait for some operation to complete, new data has not yet arrived, or there is no new work to do. In such cases the process itself invokes the blocking primitive (Block) and changes from the running state to the blocked state. Blocking is therefore an active behaviour of the process itself, and only a process in the running state (one holding the CPU) can change into the blocked state. The blocking primitive executes as follows:
1) Find the PCB corresponding to the identification number of the process to be blocked.
2) If the process is running, save its context, change its state to blocked, and stop its execution.
3) Insert the PCB into the waiting queue for the corresponding event.
When the event a blocked process is waiting for occurs, a related process invokes the wakeup primitive (Wakeup) to wake the process waiting for that event. The wakeup primitive executes as follows:
1) Find the PCB of the corresponding process in the waiting queue of that event.
2) Remove it from the waiting queue and set its state to ready.
3) Insert the PCB into the ready queue, where it waits to be scheduled.
The Block and Wakeup primitives have opposite effects and must be used in pairs. The Block primitive is invoked by the process that blocks itself, whereas the Wakeup primitive is invoked by a cooperating process or some other related process. A user-level analogue is sketched below.
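The Block/Wakeup pair behaves much like a condition variable: the waiting side suspends itself, and a cooperating side wakes it when the expected event occurs. The pthreads sketch below is a user-level analogue of my own (the event_ready flag and the two function names are hypothetical), not the kernel primitives themselves.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool event_ready = false;

/* Analogue of Block: the caller suspends itself until the event occurs. */
void wait_for_event(void)
{
    pthread_mutex_lock(&lock);
    while (!event_ready)                  /* re-check after every wakeup   */
        pthread_cond_wait(&cond, &lock);  /* releases the lock and blocks  */
    pthread_mutex_unlock(&lock);
}

/* Analogue of Wakeup: a cooperating thread signals the waiter. */
void signal_event(void)
{
    pthread_mutex_lock(&lock);
    event_ready = true;
    pthread_cond_signal(&cond);           /* waiter moves back to "ready"  */
    pthread_mutex_unlock(&lock);
}
```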
4. Process switching
Process switching means that the processor switches from running one process to running another; during this switch the running environment of the process changes substantially. A process switch proceeds as follows (a conceptual sketch follows the list):
1) Save the processor context, including the program counter and the other registers.
2) Update the PCB information.
3) Move the process's PCB into the appropriate queue, such as the ready queue or the blocked queue for some event.
4) Select another process to execute and update its PCB.
5) Update the memory-management data structures.
6) Restore the processor context.
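To make the six steps concrete, here is a purely conceptual C sketch (entirely my own; the pcb type and the helper functions are hypothetical stand-ins for kernel internals, not a real kernel API):

```c
/* Conceptual sketch only: types and helpers are hypothetical stand-ins. */
enum pstate { P_READY, P_RUNNING, P_BLOCKED };

struct context { unsigned long pc, sp, regs[16]; };

struct pcb {
    int            pid;
    enum pstate    state;
    struct context ctx;                  /* saved processor context */
};

extern void save_context(struct context *c);            /* hypothetical */
extern void restore_context(const struct context *c);   /* hypothetical */
extern void enqueue_ready(struct pcb *p);                /* hypothetical */
extern struct pcb *pick_next(void);                      /* hypothetical scheduler */
extern void switch_address_space(struct pcb *p);         /* hypothetical */

void context_switch(struct pcb *current)
{
    save_context(&current->ctx);     /* 1. save the processor context        */
    current->state = P_READY;        /* 2. update the outgoing process's PCB */
    enqueue_ready(current);          /* 3. move its PCB to the right queue   */

    struct pcb *next = pick_next();  /* 4. select another process to run     */
    next->state = P_RUNNING;         /*    ... and update its PCB            */
    switch_address_space(next);      /* 5. update memory-management state    */
    restore_context(&next->ctx);     /* 6. restore the new process's context */
}
```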
Process switching is different from processor mode switching. During a mode switch, the same process may logically still be running on the processor. If a process enters kernel mode because of an interrupt or exception and later returns to user mode to continue the program that was interrupted, the operating system only needs to restore the CPU context that was saved when the kernel was entered, and the environment of the current process does not change. In a process switch, however, the currently running process changes, and the environment information of the current process must change as well.

Difference between switching and scheduling: scheduling decides which process the processor is allocated to, so it is a decision-making activity; switching is the act of actually carrying out that allocation, so it is an execution activity. In general, the scheduling decision is made first, and the process switch is then performed.


2.1.4 Process Organization
A process is the basic unit of resource allocation and independent operation in the operating system. It generally consists of the following three parts:
1. The process control block
When a process is created, the operating system creates a PCB for it; the PCB then resides permanently in memory, can be accessed at any time, and is deleted when the process ends. The PCB is part of the process entity and is the sole sign that the process exists.


The PCB mainly contains process description information, process control and management information, a resource allocation list, and processor-related information. The main contents of each part are as follows:
1) Process description information
Process identifier: identifies the process; every process has a unique identifier.
User identifier: the user to whom the process belongs, used mainly for sharing and protection.
2) Process control and management information
Current process state: describes the state of the process, used as the basis for processor allocation and scheduling.
Process priority: describes the process's priority for obtaining the processor; a higher-priority process gets the processor first.
3) Resource allocation list: describes the state of the address space or virtual address space, the list of open files, and the input/output devices in use.
4) Processor-related information: mainly the values of the processor's registers. When the process is switched out, this processor state information must be saved in the corresponding PCB so that the process can resume execution from the breakpoint when it runs again.
A system usually contains many processes: some are ready, some are blocked, and the reasons for blocking differ. To make scheduling and management convenient, the PCBs of these processes must be organized in a suitable way. Two common organizations are linking and indexing. In the linking organization, the PCBs of processes in the same state are linked into a queue, and different states correspond to different queues; the PCBs of blocked processes may even be placed in several blocking queues according to the reason for blocking. In the indexing organization, processes in the same state are organized into an index table whose entries point to the corresponding PCBs; different states correspond to different index tables, such as a ready index table and a blocked index table.
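As a minimal sketch of what such a PCB and the linked-queue organization might look like in C (the field names are illustrative, not the layout of any particular kernel):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative PCB layout following the four groups of fields above. */
struct pcb {
    /* 1) process description information */
    int      pid;                  /* unique process identifier           */
    int      uid;                  /* owning user, for sharing/protection */

    /* 2) control and management information */
    int      state;                /* basis for scheduling decisions      */
    int      priority;             /* higher priority gets the CPU first  */

    /* 3) resource allocation list */
    void    *addr_space;           /* description of the address space    */
    int      open_files[16];       /* open files and I/O devices in use   */

    /* 4) processor-related information */
    uint64_t pc;                   /* program counter at the last switch  */
    uint64_t regs[16];             /* register values at the last switch  */

    struct pcb *next;              /* link field for the state queues     */
};

/* Linked-queue organization: one queue head per state (blocked processes
 * could further be split into one queue per blocking reason). */
struct pcb *ready_queue   = NULL;
struct pcb *blocked_queue = NULL;
```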
2. The program section
The program section is the program code that can be scheduled by the process scheduler and executed by the CPU. A program can be shared by several processes, that is, several processes may run the same program.
3. The data section
The data section of a process contains the raw data to be processed by the corresponding program, or the intermediate or final results produced while the program runs.
2.1.5 Interprocess Communication
Interprocess communication is the exchange of information between processes. PV operations are a low-level communication mechanism; advanced (high-level) communication refers to mechanisms that transfer large amounts of data with high efficiency. There are three main classes of advanced communication methods.
1. Shared memory
The communicating processes share a directly accessible region of memory and exchange information by writing to and reading from this shared region. When writing to or reading from the shared region, synchronization and mutual exclusion tools (such as the P and V operations) are needed to control the accesses. Shared memory comes in two forms: the low-level form is sharing based on shared data structures, and the high-level form is sharing based on a shared storage area. The operating system is only responsible for providing the shared memory and the synchronization and mutual exclusion tools; the exchange of data is arranged by the users themselves with read/write instructions.
The address spaces of user processes are normally independent of each other, and while running, a process generally cannot access the address space of another process. For two user processes to share memory they must use special system calls, whereas the threads within one process naturally share that process's address space.
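A minimal POSIX shared-memory sketch (my own illustration; the object name "/demo_shm" and the 4 KiB size are arbitrary, and error handling is omitted). Another process that maps the same name sees the same bytes; as the text notes, mutual exclusion, e.g. a semaphore, remains the user's responsibility.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create (or open) a named shared-memory object and set its size. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* Map it into this process's address space. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    /* A real program would take a semaphore (P operation) here
     * before touching the shared region. */
    strcpy(shared, "hello from the writer process");
    printf("wrote: %s\n", shared);

    munmap(shared, 4096);
    close(fd);
    shm_unlink("/demo_shm");       /* remove the object when done */
    return 0;
}
```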
2. Message passing
In a message-passing system, data is exchanged between processes in units of formatted messages. If the communicating processes have no directly accessible shared space, they must use the message-passing mechanism provided by the operating system. The system supplies two primitives, send message and receive message, which processes use to exchange data.
1) Direct communication: the sending process sends the message directly to the receiving process and hangs it on the receiving process's message buffer queue; the receiving process takes messages from that queue.
2) Indirect communication: the sending process sends the message to some intermediate entity, and the receiving process obtains the message from that intermediate entity. Such an intermediate entity is usually called a mailbox, and this kind of communication is also called mailbox communication. It is widely used in computer networks, where the corresponding communication system is called an electronic mail system. A POSIX-level sketch follows.
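As an illustrative counterpart, POSIX message queues provide exactly this pair of send/receive primitives over a named intermediate entity; the queue name "/demo_mq" and the sizes below are arbitrary choices of mine, and error handling is omitted.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };

    /* Indirect communication: both sides name the intermediate entity
     * (the queue), not each other. */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);      /* send-message primitive     */

    char buf[64];                              /* must hold mq_msgsize bytes */
    mq_receive(mq, buf, sizeof(buf), NULL);    /* receive-message primitive  */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}
```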
3. Pipe communication
Pipe communication is a special form of message passing. A "pipe" is a shared file, also called a pipe file, that connects a reading process and a writing process so that the two can communicate. The process that provides input to the pipe (the writing process) sends large amounts of data into the pipe as a character stream; the process that receives the pipe's output (the reading process) reads data from the pipe. To coordinate the two sides, the pipe mechanism must provide three kinds of coordination: mutual exclusion, synchronization, and a way to determine whether the other side exists.
Reading data from a pipe is a one-shot operation: once data has been read, it is discarded from the pipe to free space for more data to be written. A pipe supports only half-duplex communication, that is, data flows in only one direction at a time. For a parent and child to communicate interactively in both directions, two pipes must be set up.

The processes must access the pipe with mutual exclusion; with a pair of pipes, what process 1 writes can be read by process 2, and what process 2 writes can be read by process 1.
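A minimal parent/child pipe sketch (illustrative only) that matches the half-duplex behaviour described above: the parent writes and the child reads; a second pipe would be needed for replies in the other direction.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    pipe(fds);                        /* fds[0]: read end, fds[1]: write end */

    if (fork() == 0) {
        /* Child: reads from the pipe. */
        close(fds[1]);
        char buf[32];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        _exit(0);
    }

    /* Parent: writes into the pipe; data flows in one direction only. */
    close(fds[0]);
    write(fds[1], "ping", strlen("ping"));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```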
2.1.6 Threads and Multithreading Models
1. Basic concepts of threads
Processes were introduced to allow programs to execute concurrently under multiprogramming, in order to improve resource utilization and system throughput and to increase the degree of concurrency. Threads were introduced to reduce the time and space overhead paid for concurrent execution, and thus to improve the concurrency performance of the system.
The most direct way to understand a thread is as a "lightweight process". A thread is a basic CPU execution unit and the smallest unit of program execution flow; it consists of a thread ID, a program counter, a register set, and a stack. A thread is an entity within a process and is the basic unit that the system schedules and dispatches independently. A thread does not own system resources of its own; it has only a few resources that are essential for running, but it shares all the resources of its process with the other threads belonging to the same process. One thread can create and destroy another thread, and multiple threads in the same process can execute concurrently. Because threads constrain one another, a thread also shows intermittency during execution. Like a process, a thread has three basic states: ready, blocked, and running.
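A minimal pthreads sketch (my own illustration) of the point above: the two threads belong to one process and therefore see the same global counter, which is exactly why the mutex is needed.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared by all threads        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* threads share the process's  */
        counter++;                             /* data, so updates need mutual */
        pthread_mutex_unlock(&lock);           /* exclusion                    */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* two threads, one process  */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* 200000 with the lock held */
    return 0;
}
```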
2. Comparison of threads and processes
1) Scheduling. In a traditional operating system, the process is the basic unit that both owns resources and is scheduled independently. In an operating system with threads, the thread is the basic unit of independent scheduling, while the process remains the basic unit of resource ownership. Within the same process, a thread switch does not cause a process switch; switching between threads in different processes, for example from a thread in one process to a thread in another, does cause a process switch.
2) Resource ownership. Whether in a traditional operating system or in one with threads, the process is the basic unit of resource ownership. A thread does not own system resources (apart from a few essentials), but it can access the resources of the process it belongs to.
3) Concurrency. In an operating system with threads, not only can processes execute concurrently, but multiple threads can also execute concurrently, which gives the operating system better concurrency and improves system throughput.
4) Overhead. Creating or terminating a process costs more than creating or terminating a thread, because the system must allocate or reclaim resources for the process; likewise, switching between processes costs more than switching between threads of the same process.
5) Address space and other resources. The address spaces of processes are independent of one another; the threads of the same process share that process's resources, and the threads of one process are not visible to other processes.
6) Communication. Interprocess communication (IPC) needs synchronization and mutual exclusion mechanisms to ensure data consistency, whereas threads can communicate simply by reading and writing the data section of their process directly.
3. Thread properties
In a multithreaded operating system, the thread is the basic unit of independent running (and of scheduling). The main attributes of threads are as follows:
1) A thread is a lightweight entity that does not own system resources, but each thread has a unique identifier and a thread control block, which records the registers, the stack, and other state of the thread's execution context.
2) Different threads can execute the same program; that is, when the same service program is invoked by different users, the operating system creates a different thread for each of them.
3) The threads in the same process share the resources owned by that process.
4) A thread is the independent unit of processor scheduling, and multiple threads can execute concurrently. In a single-CPU computer system, the threads occupy the CPU in turn; in a multi-CPU computer system, threads can occupy different CPUs at the same time, and if there are enough CPUs for the threads of a process, the processing time of the process is shortened.
5) A thread's life cycle begins when it is created and ends when it terminates; during its life cycle a thread passes through states such as blocked, ready, and running.
4. Thread implementation
Threads can be implemented in two ways: user-level threads (ULT) and kernel-level threads (KLT). Kernel-level threads are also called kernel-supported threads.
With user-level threads, all work related to thread management is done by the application, and the kernel is unaware that the threads exist. An application can be designed as a multithreaded program by using a thread library. Usually an application starts from a single thread and begins running in that thread; at any point during its run it can create a new thread running in the same process by calling a spawn routine in the thread library.
With kernel-level threads, all thread-management work is done by the kernel; the application has no thread-management code of its own, only a programming interface to the kernel-level threads. The kernel maintains context information for the process and for each thread inside it, and thread scheduling is completed by the kernel.
Some systems use a combined approach: thread creation is done entirely in user space, and thread scheduling and synchronization are also carried out by the application, while the many user-level threads of an application are mapped onto some number (less than or equal to the number of user-level threads) of kernel-level threads.

A kernel-level thread is the basic unit to which the processor is allocated by the scheduler.
5. Multithreading models
Some systems support both user-level and kernel-level threads, which gives rise to different multithreading models, that is, different ways of connecting user-level threads to kernel-level threads.
1) Many-to-one model. Multiple user-level threads are mapped to a single kernel-level thread, and thread management is done in user space. In this model, user-level threads are invisible to the operating system.
Advantage: thread management is done in user space and is therefore efficient.
Disadvantages: when one thread blocks while using a kernel service, the whole process blocks; and the threads cannot run in parallel on multiple processors.
2) One-to-one model. Each user-level thread is mapped to one kernel-level thread.
Advantage: when one thread blocks, another thread is allowed to continue running, so concurrency is strong.
Disadvantage: every user-level thread created needs a corresponding kernel-level thread, so the cost of creating threads is relatively high, which affects application performance.
3) Many-to-many model. n user-level threads are mapped to m kernel-level threads, where m ≤ n.
Features: a compromise between the many-to-one and one-to-one models. It overcomes the low concurrency of the many-to-one model and also overcomes the drawback of the one-to-one model, in which one user process occupies too many kernel-level threads at too high a cost, while retaining the advantages of both.

 

Each thread contains the state it needs to execute its program on the CPU independently, and the threads of the same process share that process's address space.
Origin: blog.csdn.net/PriestessofBirth/article/details/104799990