Operating System ---- Summary of Frequently Tested Topics in Campus-Recruitment Written Tests and Interviews (Continued)

1. The five basic states of a process

(1) The five-state model
New (created) state: when a process is created, the system allocates a blank PCB, fills in the control and management information, and completes resource allocation. If creation cannot be completed, for example because the required resources are unavailable, the process cannot yet be scheduled to run; at this point it is in the new (created) state.
Ready state: a ready process has already been allocated all required resources except the CPU; as soon as it is given the CPU it can run immediately.
Running (execution) state: a process in the ready state enters the running state once it is scheduled.
Blocked state: a running process that must wait for some event (an I/O request, a failed request for a buffer, etc.) temporarily cannot continue and becomes blocked. Once the event occurs, it returns to the ready state and waits to be scheduled again.
Terminated state: the process has finished, an error has occurred, or the system has terminated it; it enters the terminated state and can no longer run.
[Figure: the five-state process model and its transitions]
(2) The figure above shows the events that cause each state transition. The possible transitions are:
Null -> New : a new process is created to execute a program. Possible triggering events include: a new batch job, an interactive logon (an end user logs on to the system), creation by the operating system to provide a service, or spawning by an existing process.

New -> Ready : the operating system is ready to accept another process and moves a process from the new state to the ready state.

Ready -> Running : when a new process needs to be selected to run, the operating system's scheduler (dispatcher) picks a process in the ready state according to some scheduling algorithm.

Running -> Exit : causes of termination include: normal completion; exceeding a time limit; the system being unable to satisfy the process's memory needs; the process attempting an out-of-bounds memory access that is not permitted; arithmetic errors (such as division by zero, or a number larger than the hardware can represent); the parent process being terminated (the operating system may automatically terminate all descendants of a terminated process); or the parent process requesting the termination of a descendant.

Running -> Ready : the most common cause is that the running process has used up the maximum period of uninterrupted execution it is allowed, so it releases the processor to other processes and returns to the ready state. Another possible cause is preemption: a higher-priority process in the ready state seizes the processor, and the running process is interrupted and moved back to the ready state.

Running -> Blocked : if a process must wait for some requested event, for example a resource it cannot obtain immediately (such as an I/O operation), and can only continue executing after the wait completes, it enters the waiting (blocked) state.

Blocked -> Ready : when the event a blocked process is waiting for occurs, the process moves from the blocked state to the ready state.

Ready -> Exit : this transition is not shown in the figure above. In some systems, a parent process may terminate a child process at any time; if a parent process terminates, all of its child processes are terminated as well.

Blocked -> Exit : for reasons similar to the above.
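The allowed transitions listed above can be sketched as a small validity check. This is a minimal illustration in C; the enum and function names are ours, and the Null pseudo-state before creation is omitted:

```c
#include <stdbool.h>

/* The five basic states of the model described above. */
typedef enum {
    STATE_NEW,
    STATE_READY,
    STATE_RUNNING,
    STATE_BLOCKED,
    STATE_EXIT
} proc_state;

/* Returns true if (from -> to) is one of the transitions listed above,
   including Ready -> Exit and Blocked -> Exit (parent terminates child). */
bool can_transition(proc_state from, proc_state to)
{
    switch (from) {
    case STATE_NEW:     return to == STATE_READY;
    case STATE_READY:   return to == STATE_RUNNING || to == STATE_EXIT;
    case STATE_RUNNING: return to == STATE_READY || to == STATE_BLOCKED
                            || to == STATE_EXIT;
    case STATE_BLOCKED: return to == STATE_READY || to == STATE_EXIT;
    default:            return false;   /* nothing leaves the Exit state */
    }
}
```

Note that a blocked process may not go directly back to running: it must first pass through the ready state and be scheduled again.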

2. The difference between processes and threads

A process is defined as a unit of resource allocation and a unit of protection. Associated with a process are:
(1) a virtual address space that holds the process image (the program, data, stack, and the attributes defined in the process control block)
(2) protected access to processors, other processes (for interprocess communication), files, and I/O resources (devices and channels)

Within one process there may be one or more threads, and each thread has:
(1) a thread execution state (running, ready, etc.)
(2) a saved thread context for when it is not running
(3) an independent execution stack
(4) static storage space for the thread's local variables
(5) access to the memory and resources of its process, shared with all the other threads in that process
(6) a separate thread control block containing register values, priority, and other thread-related state information

In most operating systems, communication between independent processes requires the kernel to step in, to provide protection and the mechanisms needed for communication. Threads within the same process, however, share memory and files, so they can communicate with each other without invoking the kernel.
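The point above can be shown in a few lines: two threads of the same process see the same global variable directly, with no kernel IPC involved. This is a minimal sketch using POSIX threads; the function names are ours:

```c
#include <pthread.h>
#include <stddef.h>

/* A process-wide global: every thread of the process sees the same object. */
static int shared_value = 0;

static void *writer(void *arg)
{
    (void)arg;
    shared_value = 42;        /* plain store into shared process memory */
    return NULL;
}

/* The main thread reads what the writer thread wrote; pthread_join
   guarantees the write is visible after the join returns. */
int threads_share_memory(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);
    return shared_value;
}
```

Two separate processes, by contrast, would each get their own copy of `shared_value`, and passing the 42 across would require a pipe, shared memory, or another kernel-provided mechanism from the next section.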

3. Several ways processes communicate

Interprocess communication mechanisms include pipes, System V IPC (message queues, semaphores, and shared memory), and sockets.

  • 1. Pipes

    Pipe (pipe) : a pipe is a half-duplex communication mechanism; data flows in only one direction, and it can only be used between related processes. Related processes usually means a parent process and its children.
    There are three kinds of pipes :
    1) PIPE, the ordinary (anonymous) pipe, has two limitations: first, it is half-duplex and can only transfer data in one direction; second, it can only be used between a parent process and its children.
    2) s_pipe, the stream pipe, removes the first limitation and allows two-way transfer.
    3) name_pipe, the named pipe (FIFO), removes the second limitation and allows communication between many unrelated processes.
    Named pipe (named pipe) :
    a named pipe is also a half-duplex communication mechanism, but it allows communication between unrelated processes.

  • 2. The IPC system

    Semaphore (semaphore) : a semaphore is a counter that can be used to control access by multiple processes to a shared resource. It is often used as a locking mechanism: while one process is accessing a shared resource, it prevents other processes from accessing it at the same time. It is therefore mainly a means of synchronization between processes, and between different threads within the same process.

    Message queue (message queue) : a message queue is a linked list of messages, stored in the kernel and identified by a message queue identifier. Message queues overcome the shortcomings of signals (which carry little information) and pipes (which carry only plain byte streams and have limited buffer size).

    Signal (signal) : a signal is a relatively sophisticated communication mechanism, used to notify a receiving process that some event has occurred.

    Shared memory (shared memory) : shared memory is a region of memory mapped so that it can be accessed by other processes; it is created by one process but can be accessed by many. Shared memory is the fastest IPC mechanism; it was designed specifically because the other interprocess communication mechanisms run inefficiently. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.

  • 3. Sockets

    Socket (socket) : a socket is an interprocess communication mechanism; what distinguishes it from the other mechanisms is that it can also be used for communication between processes on different machines.
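As a minimal illustration of the anonymous pipe described above, the following POSIX sketch sends one message from a child process to its parent through `pipe()` and `fork()`; the helper name `pipe_demo` is ours:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Half-duplex parent/child communication over an anonymous pipe.
   Returns the number of bytes read into buf, or -1 on error. */
int pipe_demo(char *buf, size_t buflen)
{
    int fd[2];
    if (pipe(fd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the writer end */
        close(fd[0]);                      /* close the unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                          /* parent: close unused write end */
    ssize_t n = read(fd[0], buf, buflen);  /* blocks until data or EOF */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return (int)n;
}
```

Because the pipe is half-duplex, two-way communication would require a second pipe; a named pipe (FIFO) would instead be created with `mkfifo()` and opened by unrelated processes via its path.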

4. Ways of thread synchronization

(Be sure to be able to write the producer-consumer problem; understand it and fully digest it.)
Four common ways to synchronize threads within a process:

  • (1) Critical section (CriticalSection)
    When multiple threads need exclusive access to a shared resource, a critical section object can be used. Only the thread that owns the critical section may access the protected resource or code segment; any other thread that wants access is suspended until the owning thread gives up the critical section. The usage pattern is:
    1) define the critical section object
    2) before accessing the shared resource (code or variable), acquire the critical section object
    3) after accessing the shared resource, release the critical section object

  • (2) Event (Event)
    The event mechanism allows one thread, after finishing a piece of work, to actively wake another thread to perform its task. For example, in some network applications one thread A listens on a communication port while another thread B is responsible for updating user data; with the event mechanism, thread A can notify thread B when the user data should be updated.

  • (3) Mutex (Mutex)
    A mutex is very similar to a critical section object, except that a mutex may be used across processes, while a critical section is restricted to threads of the same process; the critical section, however, uses fewer resources and is more efficient.

  • (4) Semaphore (Semaphore)
    When a counter is needed to limit how many threads may use a shared resource, a "semaphore" object can be used. A Semaphore object holds a count of how many more threads may access the specified resource; the current count is the number of threads that may still use the resource. When the count reaches zero, all further attempts to access the resource controlled by the Semaphore object are placed in a waiting queue until the count becomes nonzero again or a timeout occurs.
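The section above describes these objects in Windows API terms; as a portable sketch, the same mutual-exclusion idea looks like this with a POSIX `pthread_mutex_t` protecting a shared counter (the function names are ours):

```c
#include <pthread.h>

/* Shared state protected by the mutex. Without the lock, the two threads'
   read-modify-write increments could interleave and lose updates. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    return NULL;
}

/* Runs two incrementing threads; with the lock the result is exact. */
long run_counter_demo(void)
{
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, add_many, NULL);
    pthread_create(&t2, NULL, add_many, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;   /* 200000: no increment is lost */
}
```

Deleting the lock/unlock pair would make the result nondeterministic, which is exactly the race the critical section and mutex mechanisms above exist to prevent.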

The producer-consumer model:

To understand the producer-consumer problem, first establish the meaning of the P and V operations:
The P and V operations are primitives (a primitive cannot be interrupted while it executes) that operate on a semaphore S, defined as follows:
P(S) :
① decrement the value of semaphore S by 1, i.e. S = S - 1;
② if S ≥ 0, the process continues; otherwise the process is put into the waiting state and placed on the waiting queue.
V(S) :
① increment the value of semaphore S by 1, i.e. S = S + 1;
② if S > 0, the process continues; otherwise, release the first process waiting on this semaphore in the waiting queue.

A P operation corresponds to requesting a resource, and a V operation corresponds to releasing a resource.

The producer-consumer problem is a classic process-synchronization problem, also known as the bounded-buffer problem: two processes share a common buffer of fixed size. One of them, the producer, puts messages into the buffer; the other, the consumer, takes messages out of it. Trouble arises when the producer wants to put a new item into a buffer that is already full, or when the consumer wants to remove an item from a buffer that is empty. To ensure this does not happen, semaphores or message passing are typically used to solve the producer-consumer problem.

(1) Solving the producer-consumer problem with semaphores

A semaphore's value can be 0 (indicating that no wakeups are saved) or a positive value (indicating one or more saved wakeups).
Two operations are defined on it:
down and up (usually called the P/V operations in textbooks).
A down operation on a semaphore checks whether its value is greater than 0. If it is, the value is decremented by 1 (that is, one saved wakeup is consumed) and execution continues; if the value is 0, the process goes to sleep, and at that point the down operation is not yet complete. Checking the value, changing it, and possibly going to sleep are all performed as a single, indivisible atomic action.

Now consider using semaphores to solve the producer-consumer problem.

#define N 100                      /* number of slots in the buffer */
typedef int semaphore;             /* a semaphore is usually defined as a special integer type */
semaphore mutex = 1;               /* controls access to the critical section */
semaphore empty = N;               /* counts the empty slots in the buffer */
semaphore full = 0;                /* counts the full slots in the buffer */

/* producer process */
void producer(void)
{
        int item;
        while (1)
        {
               item = produce_item();      /* generate a data item */
               down(&empty);               /* decrement the count of empty slots */
               down(&mutex);               /* enter the critical section */
               insert_item(item);          /* put the new item into the buffer */
               up(&mutex);                 /* leave the critical section */
               up(&full);                  /* increment the count of full slots */
        }
}

/* consumer process */
void consumer(void)
{
        int item;
        while (1)
        {
               down(&full);                /* decrement the count of full slots */
               down(&mutex);               /* enter the critical section */
               item = remove_item();       /* take an item out of the buffer */
               up(&mutex);                 /* leave the critical section */
               up(&empty);                 /* increment the count of empty slots */
               consume_item(item);         /* process the item */
        }
}

This solution uses three semaphores :
one called full, which counts the number of full slots in the buffer;
one called empty, which counts the number of empty slots in the buffer;
and one called mutex, which ensures that the producer and the consumer do not access the buffer at the same time. mutex has an initial value of 1. A semaphore used by two or more processes to guarantee that only one of them can be inside its critical section at a time is called a binary semaphore. If every process performs a down(...) just before entering its critical section and an up(...) just after leaving it, mutual exclusion is achieved.

In addition, down and up are usually implemented as operating-system calls, and the OS only needs to briefly disable all interrupts while it tests the semaphore, updates it, and, if necessary, puts the process to sleep.

Three semaphores are used here, but for different purposes: full and empty are used for synchronization, while mutex is used to achieve mutual exclusion.
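The pseudocode above can be made runnable with POSIX semaphores (`sem_wait` playing the role of down, `sem_post` the role of up) and two threads standing in for the two processes. This is a sketch under those assumptions; the buffer size, item count, and function names are ours:

```c
#include <pthread.h>
#include <semaphore.h>

#define N 8                  /* number of slots in the buffer */
#define ITEMS 100            /* items transferred in this demo */

static int buffer[N];
static int in = 0, out = 0;             /* circular-buffer indices */
static sem_t empty_slots, full_slots;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static long sum_consumed = 0;           /* touched only by the consumer */

static void *producer_thread(void *arg)
{
    (void)arg;
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty_slots);         /* down(&empty) */
        pthread_mutex_lock(&mtx);       /* down(&mutex) */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mtx);     /* up(&mutex) */
        sem_post(&full_slots);          /* up(&full) */
    }
    return NULL;
}

static void *consumer_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);          /* down(&full) */
        pthread_mutex_lock(&mtx);       /* down(&mutex) */
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mtx);     /* up(&mutex) */
        sem_post(&empty_slots);         /* up(&empty) */
        sum_consumed += item;           /* "process" the item */
    }
    return NULL;
}

long run_producer_consumer(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);       /* empty = N */
    sem_init(&full_slots, 0, 0);        /* full = 0  */
    pthread_create(&p, NULL, producer_thread, NULL);
    pthread_create(&c, NULL, consumer_thread, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return sum_consumed;                /* 1 + 2 + ... + ITEMS */
}
```

Note the ordering: the producer must do down(&empty) before down(&mutex). Swapping them can deadlock, because the producer could hold the mutex while sleeping on a full buffer, blocking the consumer forever.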

(2) Solving the producer-consumer problem with message passing

This approach uses two IPC primitives, send and receive, which are also system calls.
For example:
send(dest, &msg)       // send the message msg to the destination process dest
receive(src, &msg)     // receive the message msg from src; if no message is available, the receiver may block

A message-passing system has to cope with communicating processes on different machines in a network, which makes things more complicated. For example, messages may be lost in the network, so an acknowledgement (ACK) message is typically used: if the sender does not receive an acknowledgement within a certain period, it retransmits the message.

If the message itself is received correctly but the returned ACK message is lost, the sender retransmits and the receiver gets the same message twice. This is usually solved by embedding consecutive sequence numbers in the header of each original message.

A message-passing system also needs to solve the process-naming problem: the process specified in a send or receive system call must be unambiguous. There are other issues too, such as performance and authentication, but that would take us too far afield; here is the message-passing solution to the producer-consumer problem:

#define N 100                      /* number of slots in the buffer */

/* producer process */
void producer(void)
{
        int item;
        message msg;               /* message buffer */
        while (1)
        {
               item = produce_item();      /* generate a data item */
               receive(consumer, &msg);    /* wait for the consumer to send an empty buffer */
               build_msg(&msg, item);      /* construct the message to send */
               send(consumer, &msg);       /* send the item to the consumer */
        }
}

/* consumer process */
void consumer(void)
{
        int item, i;
        message msg;
        for (i = 0; i < N; i++)
               send(producer, &msg);       /* send the producer N empty buffers */

        while (1)
        {
               receive(producer, &msg);    /* receive a message containing an item */
               item = extract_item(&msg);  /* parse the message and extract the item */
               send(producer, &msg);       /* send the empty buffer back to the producer */
               consume_item(item);         /* process the item */
        }
}

In this solution a total of N messages are used, somewhat analogous to the N slots of a shared-memory buffer. The consumer process first sends N empty messages to the producer in a for loop. Whenever the producer has an item to pass to the consumer, it receives an empty message and sends back a filled one. In this way, the total number of messages in the system (empty messages plus messages holding data items) stays constant at N.

If the producer process runs faster than the consumer process, all messages will eventually be full, and the producer will block (in its receive call) waiting for the consumer to return an empty message; the converse holds as well.
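The send/receive primitives above are abstract; as one concrete (and simplified) emulation, they can be built on a pipe, with a parent consumer and a child producer. This sketch is ours and drops the empty-message credits, relying instead on the pipe's own capacity for flow control; fixed-size messages keep the read/write boundaries simple:

```c
#include <unistd.h>
#include <sys/wait.h>

/* A hypothetical fixed-size message, emulating the msg in the pseudocode. */
typedef struct { int item; } message;

/* send/receive emulated over a pipe file descriptor; return 0 on success. */
static int send_msg(int fd, const message *msg)
{
    return write(fd, msg, sizeof *msg) == (ssize_t)sizeof *msg ? 0 : -1;
}

static int recv_msg(int fd, message *msg)
{
    /* blocks until a whole message arrives; fails at EOF */
    return read(fd, msg, sizeof *msg) == (ssize_t)sizeof *msg ? 0 : -1;
}

long message_passing_demo(void)
{
    int fd[2];
    if (pipe(fd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                     /* child: producer sends items 1..10 */
        close(fd[0]);
        for (int i = 1; i <= 10; i++) {
            message m = { i };
            send_msg(fd[1], &m);
        }
        close(fd[1]);                   /* EOF tells the consumer to stop */
        _exit(0);
    }
    close(fd[1]);                       /* parent: consumer */
    long sum = 0;
    message m;
    while (recv_msg(fd[0], &m) == 0)
        sum += m.item;                  /* "process" each received item */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return sum;                         /* 1 + 2 + ... + 10 */
}
```

A full implementation of the scheme in the pseudocode would need a second channel for the consumer's empty messages, which is exactly the bounded-buffer discipline the pipe's internal buffer provides for free here.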

Next, look at two variants of the message-passing scheme :

  • One is to assign a unique address to each process, so that messages are addressed to processes. That is, the first parameter of the send and
    receive calls names the address of a specific process.
  • The other is to introduce the mailbox (mailbox). A mailbox can be pictured as a mail box holding many letters, where each letter is a message to be delivered; naturally, a mailbox has limited capacity. With mailboxes, the address parameter of the send and
    receive system calls is a mailbox address rather than a process address. When a process tries to send to a mailbox that is full, it is suspended until a message is removed from that mailbox.

5. Thread implementation

(That is, the difference between user-level threads and kernel-level threads)

Thread implementations fall into two categories :
user-level threads (user-level thread, ULT) and kernel-level threads (kernel-level thread, KLT). The latter are also known as kernel-supported threads or lightweight processes.

(1) User-level threads

In a pure user-level-thread design, all thread-management work is done by the application in software, and the kernel is unaware of the existence of threads.
The advantages of user-level threads are :

  • Thread switching does not require kernel-mode privileges; thread management needs no switch to kernel mode
  • The scheduling algorithm can be tailored to the application without disturbing the underlying operating-system scheduler
  • User-level threads can run on any operating system; no changes to the underlying kernel are needed to support them

User-level threads have two significant drawbacks :

  • Many system calls block: when a user-level thread executes a blocking system call, not only that thread but every thread in the process is blocked
  • In a pure user-level-thread strategy, a multithreaded application cannot take advantage of multiprocessing

(2) Kernel-level threads

In a pure kernel-level-thread design, all thread-management work is done by the kernel. There is no thread-management code in the application, only an application programming interface (API) to the kernel's thread facility.

This approach overcomes the two fundamental flaws of the user-level-thread approach :

  • The kernel can schedule multiple threads of the same process on multiple processors simultaneously;
  • If one thread of a process blocks, the kernel can schedule another thread of the same process.

Its main drawback compared with user-level threads :

  • Transferring control from one thread to another within the same process requires a mode switch into the kernel.

Some operating systems use a combination of user-level and kernel-level threads, so that multiple threads of the same application can run in parallel on multiple processors, and a blocking system call need not block the whole process. If designed properly, this combines the advantages of both approaches while reducing their disadvantages.


Origin blog.csdn.net/u013075024/article/details/93298855