About MIMD Computers

I received an email today asking me a question: can multiple CPUs (cores) handle multiple processes at the same time?
We all know that each core of a multi-core CPU can execute instructions independently, so under a multi-core operating system processes run in parallel even at the microscopic level. This means the process-control techniques that assume a single core cannot be carried over unchanged.
That is to say, if we use conventional PV (semaphore) operations on a multi-core CPU exactly as they are implemented on a single core, for example by briefly disabling interrupts, mutual exclusion and synchronization can break: another core can still enter the critical section concurrently, so many problems may arise.
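To make this concrete, here is a minimal sketch, assuming Python's standard `threading` module, of how a lock (the software analogue of a P/V pair) restores mutual exclusion when several threads update shared state; the names `counter` and `increment` are illustrative, not from the original post.

```python
# Illustration: a lock protects a read-modify-write on shared state,
# playing the role of the P (acquire) and V (release) semaphore operations.
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # P operation: acquire before the critical section
            counter += 1  # critical section; V (release) happens on exit

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock held around each update, always 400000
```

Without the `with lock:` line, the four threads race on `counter += 1` and the final count is typically less than 400000, which is exactly the kind of problem the text warns about.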
Common coordination mechanisms are RPC, pipes, shared buffers, and sockets; sockets carry the most overhead, so remote procedure calls are generally chosen to solve mutual exclusion and synchronization problems across machines.

And according to the well-known Flynn taxonomy, we should have the following picture in mind:
[Figure: Flynn's taxonomy]

I am not sure why such a hard-to-pin-down question came my way; perhaps I never learned computer architecture well enough, or perhaps the answers on Baidu and Google are simply hard to follow.

In MIMD and SIMD computers there are multiple processors, which correspond to the multi-core CPUs in the figure.

In a modern multi-core hardware structure (Intel x86, for example), memory is shared among the CPU cores, and the cores are generally symmetric, so a multi-core chip is a shared-memory symmetric multiprocessor (SMP).

In a multi-core hardware structure, if you want to get full performance out of the hardware, you must use multi-threaded (or multi-process) execution, so that every CPU core has a thread running at the same time.

Unlike multi-threading on a single core, multiple threads on a multi-core machine execute in parallel physically: this is true parallelism, with several threads running at the same instant. Multi-threading on a single core, by contrast, is interleaved execution; at any given instant only one thread is actually running.

MIMD computers include: parallel vector processors (PVP), symmetric multiprocessors (SMP), massively parallel processors (MPP), clusters of workstations (COW), and distributed shared-memory (DSM) systems.
As for distributed systems in the practical sense, I personally feel they cannot be completely classified as MIMD computers. In today's cloud-computing environment, the cluster concept may be more realistically called a container cloud, though containers and virtual machines are different concepts.

UMA (uniform memory access)

All processors are connected, by hardware or software, to a "globally available" memory, and the operating system generally maintains memory consistency. From a programmer's perspective this model is easier to understand than the distributed-memory model, since coherence is managed by the operating system rather than by the programmer. Of course, the shared-memory model also has obvious shortcomings: once the number of processors grows past roughly thirty-two, the shared interconnect becomes very difficult to scale, and the model is not as flexible as the distributed-memory model.

multiprocessor

Shared-memory multiprocessors have two or more CPUs that all share access to a common RAM. Therefore, when a multiprocessor adopts the UMA mechanism, a crossbar network is typically implemented to arbitrate access to the shared resources.
The crossbar can be refined iteratively into a multistage interconnection network, which is where the concept of shuffling comes in.
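The "shuffle" the text mentions is the perfect-shuffle permutation that multistage networks (omega networks, for instance) are built from: for N = 2^k inputs, line i is routed to the line whose k-bit index is i rotated left by one bit. A minimal sketch:

```python
# Illustration: the perfect-shuffle permutation used as the wiring
# pattern between stages of a multistage interconnection network.
def perfect_shuffle(i: int, k: int) -> int:
    # Rotate the k-bit index left by one: the high bit wraps to the low bit.
    mask = (1 << k) - 1
    return ((i << 1) | (i >> (k - 1))) & mask

# For 8 inputs (k = 3) the shuffle interleaves the two halves of the deck:
print([perfect_shuffle(i, 3) for i in range(8)])  # [0, 2, 4, 6, 1, 3, 5, 7]
```

The name mirrors a riffle shuffle of cards: inputs 0-3 and 4-7 end up interleaved, which is exactly the routing each stage of an omega network needs.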

vector processor

A vector processor is oriented toward vector-parallel computing and is a parallel processing computer built on a pipeline structure. Parallel-processing techniques such as look-ahead control, overlapped operation, arithmetic pipelines, and interleaved (cross-access) parallel memory play an important role in raising operation speed, although in practice the full parallel potential is rarely exploited. Vector operations suit the structural characteristics of pipelined computers very well: combining vector-style parallel computing with a pipeline structure can largely overcome the drawbacks of general pipelined computers, such as heavy instruction-processing volume, uneven memory access, serious dependency stalls, and poorly utilized pipelines, letting the parallel structure realize its potential and significantly increasing computing speed.


Origin blog.csdn.net/qq_27180763/article/details/123773538