Computational Abstraction: Computability Theory, Computational Models, and the Computer

In fact, the content this article explores was already covered back in 2019, in an earlier piece on computability, Turing machines, and CPU performance. This article simply connects the logic between those topics; the rest of the details can be found in that previous article.

Computers and operating-system kernels naturally exist to serve computation. So which problems can actually be computed? That is the question studied by computability theory.

But the theory itself is abstract, so we need a theoretical model of it, namely a model of computation: the Turing machine, an ultimately powerful logical machine that is equivalent to any finite logical or mathematical process. Besides the Turing machine, there are other models of computation, such as the register machine.
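To make the model less abstract, here is a minimal Turing machine simulator in Python. It is a toy sketch of my own, not from the original article: a state, a tape, a head, and a transition table; this particular table just flips every bit and halts at the first blank.

```python
# A minimal Turing machine sketch (illustrative only): state + tape + head + transitions.
# This toy machine flips every bit on the tape, then halts at the first blank.

def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Simulate a one-tape Turing machine. `transitions` maps
    (state, symbol) -> (new_state, symbol_to_write, move) where move is -1/0/+1."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        else:
            tape.append(write)  # simplified: head only ever runs off the right edge here
        head += move
    return "".join(tape)

# Transition table: flip 0 <-> 1 and move right; halt on blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("10110_", flip_bits))  # -> "01001_"
```

Everything the machine "knows" lives in the transition table, which is exactly what makes the model so easy to reason about formally.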

For the formal definitions, see Wikipedia. Starting from the Turing machine entry you can follow links to many related concepts; it is worth a browse.

So how does a model of computation become reality? The von Neumann architecture.

The ideas behind this architecture are closely related to concepts we have covered before, mainly:

  • Separation of storage and compute: remember the Pulsar message queue? Pulsar's separation of storage and compute is itself an idea that emerged from the historical evolution of message queues.
  • Separation of logic and control: many design patterns and programming ideas focus on separating control flow from business logic (a sketch follows this list).
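As a small illustration of that second point (my own example; the function names are made up), here is a strategy-pattern sketch in Python: the control flow that iterates and aggregates is kept separate from the pluggable business logic.

```python
# Sketch of "separation of logic and control" via the strategy pattern.
# The control flow (iterate, apply, sum) is fixed; the business logic
# (how each order is priced) is swapped in as a strategy.

def regular_pricing(amount):       # business logic, variant 1
    return amount

def vip_pricing(amount):           # business logic, variant 2: 10% off
    return amount * 0.9

def total(amounts, pricing):       # control flow: knows nothing about pricing rules
    return sum(pricing(a) for a in amounts)

orders = [100, 250, 40]
print(total(orders, regular_pricing))  # 390
print(total(orders, vip_pricing))      # 351.0
```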

The architecture has five major parts: the control unit, the arithmetic unit, memory, input, and output. Algorithm analysis cares about space complexity and time complexity. Why? Map them onto the five parts and the inference comes naturally:

  • Memory: corresponds to space
  • The execution unit, composed of the control unit and the arithmetic unit: corresponds to time

If a program occupies the execution unit for a long time, its time complexity is naturally high, and the more time it uses, the less is left for other programs. Likewise, the more memory a program occupies, the higher its space complexity. Although virtual memory makes memory (the storage unit) look almost unlimited, excessive memory usage causes frequent page faults, increases disk load, and in turn slows down the execution unit.
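One concrete way to see the memory-versus-execution-unit trade-off is memoization, which spends space to save time. A minimal sketch (my example, not the article's):

```python
from functools import lru_cache

# Naive Fibonacci: tiny memory footprint, exponential execution-unit time.
def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Memoized Fibonacci: spends memory (space) to cut the time to linear.
@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_slow(30))  # 832040 -- same answer, far more work in the execution unit
print(fib_fast(30))  # 832040 -- same answer, cached results occupy memory instead
```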

A hypothesis: suppose the execution unit were no longer built from transistors but were a quantum computer with effectively unlimited computing power. Would time complexity still be an important criterion then?


Back to computation itself: an important metric for computation is speed. How do we evaluate speed? Along two dimensions:

  1. Response time: the time to complete a single task
  2. Throughput: the number of tasks completed per unit time

These two dimensions complement each other nicely: response time focuses on the individual task, while throughput focuses on the system as a whole.
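A quick worked example with made-up numbers: a server that needs 10 ms per request (response time) but keeps 8 requests in flight at once still completes 800 requests per second (throughput).

```python
# Illustrative numbers only: response time vs throughput for a toy server.
response_time_s = 0.010   # each task takes 10 ms (the individual view)
concurrency = 8           # tasks in flight at once (the whole-system view)

throughput = concurrency / response_time_s  # tasks completed per second
print(f"response time: {response_time_s * 1000:.0f} ms per task")
print(f"throughput: {throughput:.0f} tasks per second")  # 800
```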

Now consider the execution time of a single task, here meaning CPU execution time: CPU execution time = number of CPU clock cycles * clock cycle time.

Picture the CPU as a machine that marches to a chanted cadence of "one-two-one, one-two-one": it works while the cadence is chanted (the faster the chant, the faster it runs) and stops when the chanting stops. The total time for the machine to finish a task is then naturally the number of chants * the time per chant.

To understand why the CPU needs a clock cycle at all, read "Code: The Hidden Language of Computer Hardware and Software". At the hardware level, the CPU is built from combinational logic circuits and sequential logic circuits, which need a unified cadence (a heartbeat) to coordinate their work.

The first chapter of "Computer Organization" is also recommended.

CPU execution time = number of CPU clock cycles * clock cycle time, and number of CPU clock cycles = instruction count * average number of clock cycles per instruction.

So: CPU execution time (response time) = instruction count * average clock cycles per instruction * clock cycle time.
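Plugging illustrative numbers into this formula (all values are assumptions, not measurements): a 2 GHz CPU, a program of 10 million instructions, and an average of 2 cycles per instruction give an execution time of 10 ms.

```python
# CPU execution time = instruction count * CPI * clock cycle time
# All numbers below are illustrative assumptions.
clock_rate_hz = 2_000_000_000          # 2 GHz
cycle_time_s = 1 / clock_rate_hz       # 0.5 ns per cycle
instruction_count = 10_000_000         # 10 million instructions
cpi = 2.0                              # average clock cycles per instruction

cpu_time_s = instruction_count * cpi * cycle_time_s
print(f"CPU execution time: {cpu_time_s * 1000:.1f} ms")  # 10.0 ms
```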

You might ask: why the average cycles per instruction? The answer is also in "Code": different instructions are executed by different hardware units, and different hardware units are built from different circuit components, so their execution times differ. By analogy: on your computer, which opens faster, a text document or IDEA?

Why bother with a formula for response time at all? Again, because of the key metric of computation: speed, that is, shortening response time and speeding up execution. The formula shows exactly which levers we can pull.

This is why CPU clock frequencies are much higher than they used to be, which corresponds to reducing the clock cycle time; but frequency scaling is subject to physical constraints. See the previous article for that discussion.

Reducing the instruction count comes partly from optimizing the program itself, and partly from introducing new hardware structures that support more complex instructions.

Finally, the average number of clock cycles per instruction (CPI) can be reduced along the following lines:

  1. Pipelining: a single CPU uses a multi-stage pipeline to overlap the execution of multiple instructions, which lowers the effective CPI and raises throughput (a sketch follows this list)
  2. Parallelism: multiple CPUs execute multiple instructions at the same time, which lowers the effective CPI and raises throughput
  3. Caching: multi-level caches between the CPU and memory reduce memory-stall cycles, which lowers the effective CPI
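A back-of-the-envelope sketch of why pipelining lowers the effective CPI, under the idealized assumption of no hazards or stalls: with S pipeline stages, the first instruction takes S cycles, then one instruction completes every cycle.

```python
# Idealized pipeline model (no hazards or stalls, an assumption for illustration):
# with S stages, the first instruction takes S cycles, then one completes per cycle.
def pipelined_cycles(instructions, stages):
    return stages + (instructions - 1)

def unpipelined_cycles(instructions, stages):
    return instructions * stages  # each instruction walks all stages serially

n, s = 1_000_000, 5
print("unpipelined CPI:", unpipelined_cycles(n, s) / n)  # 5.0
print("pipelined CPI:  ", pipelined_cycles(n, s) / n)    # ~1.0 once the pipe is full
```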

That is all for computational abstraction. The analysis in this article also leads naturally to the instruction set: the instruction set is an abstraction of the CPU hardware as a whole, and pipelining is an important optimization of instruction execution.

For the rest, see the earlier article on computability, Turing machines, and CPU performance, which complements this one.
