Operating System Learning (1) -- Understanding Operating System Design Requirements from the History of Development

This is Part 1 of the Operating System series.

Although the history of operating systems is not the focus of studying operating systems, many important concepts grew out of that history, so learning it first means that later on certain ideas won't seem to appear out of nowhere. Understanding the development of operating systems and the design requirements behind them also helps us think about problems from the computer's perspective.

ENIAC and Serial Processing

The development of computers can be traced back to 1946, when the world's first general-purpose electronic computer, ENIAC, was unveiled on February 14th (which happened to be Valentine's Day).

ENIAC

The picture shows ENIAC

> ENIAC was 30.48 meters long, 6 meters wide, and 2.4 meters high; it covered an area of about 170 square meters, had 30 operating panels, weighed 30 tons, consumed 150 kilowatts of electricity, and cost $480,000.

From then until the mid-1950s, operating systems did not exist; the concept hadn't even been conceived. If a programmer wanted to run a program, he had to punch the machine code onto paper tape with a hole punch. This was not only intellectual work but also painstaking work: punch one wrong hole and you had to start all over again. (Imagine writing an article with no backspace key, let alone the ability to insert text in the middle. It was a hard time to be a programmer.) The tape was then loaded into the computer through an input device such as a paper tape reader, the computer ran the program step by step, and the result was printed when it finished.

PS: Programming languages also developed considerably over this decade. By the end of this period there was already a high-level language, FORTRAN, and the concepts of compilation, linking, and function libraries had been implemented. So computers of this era were not as backward as you might think!

During this period, a user who needed the machine had to reserve a time slot in advance before going to operate it (loading the paper tape and controlling the machine were also done by the user himself; as a programmer, you had to deal with the hardware directly). Two problems arise in this mode:

  • If a user reserves one hour but his task finishes in only 35 minutes, the remaining 25 minutes are wasted.
  • If the hour passes and the user's program has not finished, the program is forcibly stopped, which wastes a full hour of computing resources. But extending the slot is impossible: other users are queued up behind, and what if your program is stuck in an infinite loop?

Simple batch system

At a time when computing resources were scarce, the waste caused by serial processing was hard for scientists to accept: the utilization of the computer had to be improved.

Thus, the batch system was born.

IBM 7090

The picture shows the IBM 7090, which ran IBSYS, the most famous batch processing system. The 7090 was also among the first fully transistorized computers.

The central idea of a batch system is a piece of software called a monitor. As just mentioned, serial processing required users to operate the machine themselves within fixed time slots. Now users simply submit jobs to a computer operator; the operator organizes the jobs into batches in order and places the whole batch on the input device for the monitor to use.

The monitor is already something of an operating system, and its workflow is easy to understand:

  • Most of the monitor stays resident in memory at all times; this part is called the resident monitor.

  • At the start, the monitor has control of the computer (naturally, since no user job has been loaded yet). It reads a job in from the input device, places it in the user program area, and hands it control. When the job completes, control returns to the monitor.

With the monitor, utilization of the computer improves: the next job starts immediately after one finishes, there is no idle time, and jobs are rarely terminated before completion. This basically solves the problems of serial processing.

The correct operation of the monitor depends on the hardware. During this period, for the reliability of the system, computer manufacturers added several important hardware features:

  • Memory protection: easily understood as the rule that the monitor's memory space must not be altered by user programs, whether intentionally or not. (Hackers as a group hadn't emerged yet; computers were far too scarce and expensive to find their way into ordinary homes.) Once the hardware detects a user program attempting such an access, it transfers control directly to the monitor and cancels the operation.
  • Timer: this prevents a single job from monopolizing the system. The timer starts automatically when a job takes control; if it expires before the job finishes, the program is killed.
  • Privileged instructions: some machine instructions (such as I/O instructions) are designated privileged and can only be executed by the monitor. User programs cannot use them directly, though they can ask the monitor to perform the operation on their behalf. Privileged instructions exist to limit the "power" of user programs; after all, the boss and the employees cannot have the same say.
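The division of labor between user jobs and the monitor can be sketched with a toy example (all the names and "instructions" here are invented for illustration): a job running in user mode that hits a privileged instruction traps into the monitor, which performs the operation on its behalf and then returns control.

```python
# Toy sketch of privileged instructions and operating modes.
# PRIVILEGED, Monitor, and the instruction names are all made up
# for illustration; real hardware enforces this with a mode bit.

PRIVILEGED = {"IO_READ", "IO_WRITE", "SET_TIMER"}

class Monitor:
    def __init__(self):
        self.mode = "kernel"              # the monitor starts with control

    def run_job(self, instructions):
        self.mode = "user"                # hand control to the user job
        log = []
        for ins in instructions:
            if ins in PRIVILEGED:
                # the hardware would trap here; the monitor takes over,
                # performs the operation, and returns control to the job
                self.mode = "kernel"
                log.append(f"trap: {ins} done by monitor")
                self.mode = "user"
            else:
                log.append(f"user: {ins}")
        self.mode = "kernel"              # job finished, control returns
        return log

monitor = Monitor()
print(monitor.run_job(["ADD", "IO_READ", "SUB"]))
```

The point of the sketch is only the hand-off pattern: control starts with the monitor, passes to the job, and snaps back whenever a privileged operation is attempted or the job ends.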

Monitor program memory layout

The memory layout with a resident monitor; the blue part is the protected memory area

Among these features, memory protection and privileged instructions introduce the concept of operating modes. Both are still preserved in modern operating systems, which shows how fundamental they are.

The simple batch system already has basic job sequencing capability, but it leaves a lot of room for improvement. Although it gives the machine an automatic sequence of jobs, the processor is often idle: I/O devices are much slower than the processor, and the processor must wait for I/O operations to complete before it can resume work.

For example:

Simple batch system CPU utilization

CPU utilization = 1/31 ≈ 3.2%

A CPU utilization this low is unacceptable. Is there any way to solve this problem?
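The figure's numbers can be reproduced with the classic textbook scenario (a hypothetical job that loops over records: read one record in 15 ms, compute on it for 1 ms, write the result in 15 ms; the timings are assumptions for illustration):

```python
# Hypothetical per-record timings for a batch job on a simple batch system.
# While the job waits for the (slow) reader and printer, the CPU sits idle.
read_ms, compute_ms, write_ms = 15, 1, 15

cpu_time = compute_ms                      # CPU is busy only while computing
total_time = read_ms + compute_ms + write_ms

utilization = cpu_time / total_time
print(f"CPU utilization = {cpu_time}/{total_time} = {utilization:.1%}")
# prints: CPU utilization = 1/31 = 3.2%
```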

Multiprogrammed batch system

IBM System/360

The IBM System/360, which ran OS/360, a multiprogrammed batch operating system widely recognized as epoch-making

As we just said, the main reason for low utilization is that the CPU has to wait for I/O operations. So can we keep the CPU busy instead?

Multiprogramming is the secret to keeping the CPU busy. The method sounds simple: put several user programs in memory, and whenever one job has to wait for I/O, immediately switch to another job that does not need to wait. This technique is called multiprogramming or multitasking.

Let's see how this approach improves CPU utilization:

  • Figure a: Only program A is running

Multiprogrammed batch: one program

  • Figure b: user programs A and B are both in memory. When A waits for an I/O operation, B runs. (For ease of understanding, assume A and B compete for different I/O resources.)

Multiprogrammed batch: two programs

  • Figure c: user programs A, B, and C are all in memory at the same time.

Multiprogrammed batch: three programs

We can see intuitively that within the same period of time, the CPU's running time increases greatly, which is exactly what we hoped for.
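A rough simulation illustrates the effect of figures a through c (the burst lengths are made up, and we assume each job's I/O runs on its own device, in parallel with the CPU and with the other devices):

```python
# Tick-by-tick sketch: each job alternates a 1 ms compute burst with a 4 ms
# I/O burst (hypothetical numbers). The single CPU runs whichever job is
# ready to compute; I/O devices proceed in parallel.

def cpu_utilization(n_jobs, bursts=5, cpu_ms=1, io_ms=4):
    # each job: [phase, time_left_in_current_phase, bursts_left]
    jobs = [["cpu", cpu_ms, bursts] for _ in range(n_jobs)]
    busy = total = 0
    while any(j[2] > 0 for j in jobs):
        total += 1
        # the CPU runs the first job that is ready to compute, if any
        running = next((j for j in jobs if j[2] > 0 and j[0] == "cpu"), None)
        if running is not None:
            busy += 1
            running[1] -= 1
            if running[1] == 0:                       # compute burst done
                running[0], running[1] = "io", io_ms  # start its I/O
        # I/O devices advance in parallel (one device per job assumed)
        for j in jobs:
            if j[2] > 0 and j[0] == "io" and j is not running:
                j[1] -= 1
                if j[1] == 0:                         # I/O finished
                    j[2] -= 1
                    j[0], j[1] = "cpu", cpu_ms
    return busy / total

for n in (1, 2, 3):
    print(f"{n} program(s) in memory: CPU busy {cpu_utilization(n):.0%} of the time")
```

With one program the CPU idles through every I/O burst; adding a second and third program fills those gaps, so the measured utilization climbs, just as the figures suggest.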

Like simple batch systems, multiprogrammed batch systems depend on certain hardware capabilities, the most notable being support for I/O interrupts (Interrupt) and direct memory access (Direct Memory Access, DMA). (DMA itself relies on interrupt support.)

The word "interrupt" may feel a little mysterious at first listening. If it is translated into "interrupt", it will be easier to understand (it just doesn't sound very pleasant). When a job starts an I/O operation, the CPU switches to another job, so how does the operating system know when the I/O operation ends?

The answer is the I/O interrupt. When the I/O operation completes, the I/O or DMA module (which one depends on the system implementation) sends a signal to the CPU, and the CPU must stop what it is doing to handle that signal. In a batch system, control transfers to the operating system's interrupt handler. The I/O device has interrupted the CPU's current work and diverted it to something else.

Interrupts are therefore the foundation on which the operating system builds all of its more complex behavior.

The multiprogrammed batch system is obviously much more complicated than its predecessors, and several interesting topics derive from it:

  • Job management: memory space is limited, which means the number of programs that can be loaded at once is also limited. How to select a suitable job from the candidates to load into memory is the problem of job management.
  • Memory management: once a job is selected, space must be allocated for it. Deciding which part of the free area to allocate is what memory management solves.
  • Process scheduling: a process is a program in execution. Generally, we call a job that has been loaded into memory a process, to distinguish it from jobs not yet loaded. Process scheduling means that when a process switch is needed, some algorithm picks a suitable process from the process queue to receive the CPU.

In modern systems, thanks to larger memory capacities, jobs rarely have to queue in the background, so job management will get only a little ink later. Memory management and process scheduling, however, will be a focus of our later study.

Time-sharing system

UNIX is the most famous time-sharing operating system

The multiprogrammed batch system can be called the prototype of the modern operating system, and its processor utilization when handling batch jobs is fairly satisfactory. But batch jobs are not the whole story: interactive jobs were beginning to appear, and a batch system cannot serve them.

The emergence of interactive jobs is easy to understand; after all, almost all of our applications are interactive now. Swipe the screen and the article scrolls up and down; tap share and a menu of options appears; and so on.

In interactive work, waiting for the user to act is unavoidable, but the processor cannot stop and wait for you alone: many people are using the same computer. Thus, the time-sharing system came into being.

As the name implies, a time-sharing system takes the jobs of n users and has the operating system execute each user's program in turn for a short interval. Because human reactions are much slower than the machine, with proper control each user feels as if the computer is entirely their own.

One more thing worth mentioning: the switching in a time-sharing system relies on the interrupts we emphasized for batch systems. The difference is that here they are clock interrupts: the moment the allotted time is up, an interrupt signal is sent to the CPU.

If you think of interactive programs run by multiple users as multiple interactive programs run by one user (just like the way we use personal computers now), the modern operating system becomes easy to understand:

  • Multiple processes share one processor. Each process is assigned a time slice, and from where you sit it looks as if the processes run in parallel.
  • When a process performs an I/O operation, the operating system blocks it; it waits in the blocked queue until the I/O operation finishes before it gets another chance to use the CPU.
  • Multiple processes reside in memory, and the operating system must prevent each process from writing into the memory space of the others; in particular, the operating system's own memory must be protected.
  • User programs run in user mode, where they have no right to use privileged instructions and must instead make requests to the operating system.
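The time-slice idea can be sketched as a minimal round-robin scheduler (the process names and run times are invented, and each loop iteration stands in for one slice ended by a clock interrupt):

```python
from collections import deque

# Minimal round-robin sketch: each "clock interrupt" ends a time slice,
# and the CPU moves on to the next ready process. A process that still
# has work left goes to the back of the ready queue.

def round_robin(work, quantum=2):
    ready = deque(work.items())        # (name, remaining time units)
    schedule = []                      # which process ran, and for how long
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)  # run until the slice ends or it finishes
        schedule.append((name, ran))
        remaining -= ran
        if remaining:                  # unfinished: back of the queue
            ready.append((name, remaining))
    return schedule

print(round_robin({"A": 5, "B": 3, "C": 1}))
# prints: [('A', 2), ('B', 2), ('C', 1), ('A', 2), ('B', 1), ('A', 1)]
```

Because the quantum is short relative to human reaction time, each user sees their program make steady progress, which is exactly the illusion the time-sharing system is after.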

By now we have covered the main line of operating system development. There are other kinds of operating systems too, such as real-time, network, and distributed operating systems, but they have less to do with our daily lives (though real-time operating systems remain very important for embedded work), so this article skips them; interested readers can consult the relevant literature.

I hope that after reading this article you have a rough impression of the design philosophy of operating systems, and if it has piqued your interest in the subject, all the better.

Thanks for reading. See you next time!

Disclaimer: Original article, unauthorized reproduction is prohibited
