.NET Interview Preparation (1): Processes and Threads

1. Concept

1. Process

  • An independent unit of resource scheduling and allocation by the operating system.
    Analogy: if the CPU is a factory, the workshops inside it are processes; the factory's power is limited, so only one workshop can run at a time.
  • Each process has an independent address space, so in protected mode a crash in one process does not affect other processes.
    Analogy: a failure in workshop A does not affect workshop B. Different applications occupy different processes: when Excel crashes, it does not affect Google Chrome.
  • A process's address space contains a text area, a data area, and a stack. The text area stores the code executed by the processor; the data area stores variables and memory dynamically allocated during execution; the stack stores the return addresses and local variables of active procedure calls.
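To see that address spaces really are independent, here is a minimal Python sketch (the names `counter` and `work` are made up for illustration). The child process increments its own copy of the variable; the parent's copy never changes:

```python
# Each process gets its own address space, so the child's write to
# `counter` modifies the child's copy only and is invisible to the parent.
from multiprocessing import Process

counter = 0

def work():
    global counter
    counter += 100  # changes only the child process's copy

if __name__ == "__main__":
    p = Process(target=work)
    p.start()
    p.join()
    print(counter)  # prints 0: the parent's memory was never touched
```

The same isolation is why one crashed application cannot corrupt another's memory.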

2. Thread

  • A thread is an entity within a process, consisting of a thread ID, the current instruction pointer (PC), a register set, and a stack. It is the basic unit of CPU scheduling. A process can contain multiple threads.
    Analogy: a thread is a worker in the workshop; a workshop can have multiple workers.
  • Threads of the same process share its resources and memory, but each thread has its own stack and its own execution sequence.
    Analogy: workers share the workshop's space, but each worker does their own job.
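In contrast to processes, threads see each other's writes. A minimal Python sketch (names like `shared` and `worker` are invented for illustration; the same holds for `System.Threading.Thread` in .NET):

```python
# Threads of one process share the module's memory, so both workers
# append to the very same list object.
import threading

shared = []

def worker(name):
    shared.append(name)  # writes to the one shared list

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(shared))  # ['A', 'B']: both threads wrote to shared memory
```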

3. Context switch (thread)

  • The CPU switches from one process to another, or from one thread to another. The outgoing task's state (registers, program counter) must be saved and the incoming task's state restored, so switching has a cost.
    Analogy: when one worker hands the machine over to another, they must first note down exactly where they stopped so the work can be resumed later.

4. Mutex

  • Shared memory that can be used by only one thread at a time; other threads must wait for the holder to finish before they can use it.
    Analogy: the workshop toilet can hold only one person. Whoever enters locks the door; after they finish they unlock it, and the next person to arrive can lock it and go in.
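The toilet-lock analogy maps directly onto a mutex. A minimal Python sketch (in C# the equivalent would be the `lock` statement or the `Mutex` class); four threads increment a shared counter, and the lock prevents lost updates:

```python
import threading

lock = threading.Lock()   # the "toilet door lock"
counter = 0

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread may hold the lock at a time
            counter += 1  # read-modify-write is now atomic w.r.t. other threads

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments were lost
```

Without the lock, two threads could read the same old value and overwrite each other's increment.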

5. Semaphore

  • Shared memory that allows at most a fixed number of threads to use it concurrently. A mutex is the special case where that number is 1.
    Analogy: a toilet with several stalls; only as many people as there are stalls can be inside at once.
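A minimal Python sketch of a semaphore capping concurrency at 3 (the `peak`/`inside` bookkeeping is invented just to observe the limit; in .NET the analogue is `SemaphoreSlim`):

```python
import threading

sem = threading.Semaphore(3)   # at most 3 threads inside at once
peak = 0                       # highest concurrency observed
inside = 0
guard = threading.Lock()       # protects the two counters above

def worker():
    global inside, peak
    with sem:                  # blocks if 3 threads are already inside
        with guard:
            inside += 1
            peak = max(peak, inside)
        # ... use the limited shared resource here ...
        with guard:
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3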

6. Process scheduling

The basic state of the process


  • Ready: the process has all the resources it needs and is waiting to be assigned the CPU
  • Running: the process is currently executing on the CPU
  • Waiting/blocked/suspended: the process is waiting for an event (e.g. I/O) and cannot run

Scheduling type

  • High-level scheduling: also known as job scheduling; moves jobs from the backup queue in external storage into memory
  • Low-level scheduling: also known as process scheduling; the dispatcher assigns the processor to a process
  • Intermediate scheduling: swaps temporarily non-runnable processes out to external storage (their state becomes suspended); when memory frees up, swaps them back in and places them on the ready queue (their state becomes ready)

Non-preemptive scheduling and preemptive scheduling

  • Non-preemptive: once the scheduler allocates the processor to a process, the processor is not released until the process finishes or blocks
  • Preemptive: the scheduler can forcibly suspend the running process and allocate the CPU to another ready process

Priority of the process

  • Ordinary processes: lower priority and longer execution time; for example compilers, batch jobs, graphics rendering
  • Real-time processes: higher priority, must be executed as soon as possible and must not be blocked by ordinary processes; for example video playback and monitoring systems

Scheduling strategy

  • CPU utilization: the proportion of time the CPU spends doing useful work rather than sitting idle
  • System throughput: the number of jobs completed by the CPU per unit time
  • Turnaround time: the time from when a job is submitted to when it completes, including time spent waiting in the backup queue, queuing in the ready queue, running on the processor, and doing input/output
  • Response time: the time it takes from the time the user submits the request to the first response of the system
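To make turnaround time concrete, here is a small worked example with hypothetical jobs given as (arrival time, burst time) pairs, served FCFS; turnaround time is completion time minus arrival time:

```python
# Hypothetical workload: (arrival_time, burst_time) per job, served FCFS.
jobs = [(0, 5), (1, 3), (2, 8)]

clock = 0
turnarounds = []
for arrival, burst in jobs:               # FCFS: served in arrival order
    clock = max(clock, arrival) + burst   # wait for the job to arrive, then run it
    turnarounds.append(clock - arrival)   # turnaround = completion - arrival

print(turnarounds)                        # [5, 7, 14]
print(sum(turnarounds) / len(turnarounds))  # average turnaround ≈ 8.67
```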

Scheduling Algorithm

  • First come first served (FCFS):
    • According to the order in which tasks arrive in the ready queue
    • Non-preemptive; the algorithm is simple, but it is unfavorable to short jobs and to I/O-bound jobs
  • Shortest job (process) first scheduling (SJF):
    • Shortest job first: select the job (or jobs) with the shortest estimated running time from the backup queue and move it into memory to run
    • Shortest process first: select the process with the shortest estimated running time from the ready queue, assign it the processor, and let it run until it completes or blocks on an event, at which point the processor is released
  • Priority scheduling: select the job with the highest priority from the backup queue and transfer it to the memory
    • Non-preemptive priority scheduling: when a process is running on the processor, even if a more urgent process enters the ready queue, the current process keeps running until it releases the processor voluntarily (it completes or waits for an event); only then is the processor assigned to the more urgent process.
    • Preemptive priority scheduling: when a process is running and a more urgent process enters the ready queue, the current process is stopped immediately and the processor is assigned to the more urgent process.
  • Round-robin scheduling: each process in the ready queue runs for a fixed time slice; when the slice expires, the process moves to the back of the queue and the next process runs
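Round-robin is easy to simulate. A minimal sketch, assuming all processes arrive at time 0 and the `bursts`/`quantum` values are made up for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return each process's completion time under round-robin (all arrive at t=0)."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)              # ready queue, initially in arrival order
    clock = 0
    finish = {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])  # run one time slice (or less, if almost done)
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock           # process done: record its completion time
        else:
            queue.append(pid)             # slice expired: back of the queue
    return [finish[i] for i in range(len(bursts))]

print(round_robin([5, 3, 8], quantum=2))  # [12, 9, 16]
```

Note how the short job (burst 3) finishes well before the long one (burst 8), even though it arrived later in the queue order; this responsiveness is the point of time slicing.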


Origin blog.csdn.net/hhhhhhenrik/article/details/91398407