One article to truly understand supercomputers: you really cannot just pile up CPUs

In the latest TOP500 list, China has 219 supercomputers, the most of any country. Chinese vendors also dominate the manufacturer rankings: Lenovo built 173 of the systems on the list, Inspur 71, and Sugon 63.

On June 18, at the International Supercomputing Conference (ISC) in Frankfurt, Germany, the latest edition of the TOP500 supercomputer list was released.

The entry threshold is now very high: for the first time, every system on the list exceeds 1 PFlops (1,000,000,000,000,000 floating-point operations per second), a historic first in the 26 years since the TOP500 list began.


Ordinary people rarely get to touch a supercomputer, so most people's understanding of them is superficial. To help, Tencent Technology has collected professional answers from Zhihu contributors and lightly edited them into one article that lets you thoroughly understand supercomputers.

What are the core technologies of a supercomputer? - Zhihu answerer Yang Jing

The CPU is an important part of a supercomputer, but it is not the whole of supercomputer technology. Some people online equate CPU technology with supercomputer technology, arguing that without a home-grown CPU there is no mastery of the core technology.

That claim is not accurate. In fact, besides the famous CPU, system architecture design is a key technology that the public and the media have largely overlooked. For example, the Dawning Nebulae used a self-developed massively parallel processing architecture; the Sunway BlueLight used a massively parallel processing architecture; Tianhe-1 used a multi-array, configurable, cooperative parallel architecture; and Tianhe-2 used a self-developed heterogeneous multi-state architecture.

How important is architecture design capability? Figuratively: if a supercomputer is an army, then the architecture design is the army's strategic command, its heritage and traditions, its organization and management, its level of training, its weaponry, its logistics, and the tactical skill of its senior and junior officers, while the CPUs are merely the soldiers.

Moreover, a supercomputer is not simply a box stuffed with CPUs. Stacking CPUs is itself a technical job: if the architecture design is poor, the high-speed interconnect is not done well, the system software falls short, or the storage arrays do not keep up, then no matter how many CPUs are piled in, supercomputer-level performance will not materialize. Crudely lashing CPUs together cannot produce a supercomputer, let alone one that rivals Tianhe-2.

Furthermore, even after mastering the right way to combine CPUs, you cannot obtain a supercomputer that rivals Tianhe-2 solely by increasing the CPU count. Why? Because building a supercomputer is not like stacking building blocks: even if you pile up a massive number of compute cards, constraints elsewhere, such as the high-speed interconnect, will keep the machine from reaching its theoretical computing performance.

In particular, the difficulty of the high-speed interconnect lies in moving huge volumes of data between compute nodes under demanding latency requirements. When the interconnect is not efficient enough, data congestion results, dramatically lowering the overall efficiency of the system; and the more compute nodes a supercomputer has, the higher the demands on the interconnect. So even if one wanted to raise computing power simply by piling on CPUs, interconnect limits would prevent performance from growing without bound: bottlenecks in the interconnect and other subsystems drag down whole-machine efficiency, so actual performance does not improve just because more CPUs are added.

On top of that, piling up too many CPUs brings excessive power consumption and an oversized footprint, which hurts future operation, maintenance, and day-to-day use; such a machine would have essentially no competitiveness in the market.

On the software side, a system that coordinates a handful of compute nodes and one that coordinates an enormous number of them are poles apart. The system software must ensure that every compute node runs at its maximum performance, fully tapping the potential of the hardware; otherwise the supercomputer's overall efficiency suffers.

Therefore, without a good architecture, the CPUs cannot deliver their full performance. And the more CPUs are stacked, the more complex the system becomes, the higher the demands on the high-speed interconnect, storage arrays, control systems, cooling, and software, and the harder it is to keep overall efficiency high. If the architecture design capability is not up to the task, simply piling on more CPUs will lower overall efficiency rather than raise overall performance.

What is the difference between a supercomputer and an ordinary computer? - Zhihu answerer Fei Dao Xiao Hou

A supercomputer is not mysterious; it is a computing tool. You feed in the conditions of a calculation, and it outputs the results, exactly like the calculator you take grocery shopping. The difference is only one of scale.

A home computer usually has only one CPU (and likewise one GPU), and each CPU generally has only 2 to 8 physical cores. A supercomputer generally has thousands of CPUs, each typically with dozens of physical cores.

Of course, all those CPUs are not there for heating; they are there to complete heavy computing tasks through parallel computing. For example, in aircraft manufacturing one often needs to compute the airflow around the aircraft and the forces on the airframe. The most common method is to divide the air and the airframe into many small cells, compute the motion and forces in each cell separately, and then integrate the results to obtain the overall motion and forces.

In general, the finer the division and the smaller each cell, the more accurate the calculation. But you cannot have it both ways: the finer the division, the larger the amount of computation.

Suppose a cubic region is divided into a billion small blocks; then a billion per-block calculations are needed. With a single CPU core performing on the order of ten billion operations, one complete pass might take a day. With a 10-core CPU, the billion blocks can be split into 10 parts, each core computing 100 million of them, after which the results are integrated. That is roughly 10 times faster, so the job finishes in a couple of hours.
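This split-and-integrate pattern can be sketched in a few lines of Python. It is only an illustration: `cell_value` is a made-up stand-in for the real per-cell physics, and threads stand in for the compute nodes that a real supercomputer would drive with something like MPI.

```python
from concurrent.futures import ThreadPoolExecutor

def cell_value(i: int) -> float:
    # made-up stand-in for the per-cell flow/force computation
    return (i % 7) * 0.5

def compute_chunk(bounds) -> float:
    # compute every cell in one contiguous piece of the domain
    lo, hi = bounds
    return sum(cell_value(i) for i in range(lo, hi))

def parallel_total(n_cells: int, n_workers: int) -> float:
    # split the domain into one contiguous chunk per worker ...
    step = n_cells // n_workers
    chunks = [(k * step, n_cells if k == n_workers - 1 else (k + 1) * step)
              for k in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(compute_chunk, chunks)
    # ... then "integrate" the partial results into the overall answer
    return sum(partials)
```

The answer is identical however the domain is split; only the wall-clock time changes when the chunks truly run on separate processors.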

In science and engineering there are many computing tasks like this: quantum calculations of the basic properties of atoms and molecules, molecular dynamics simulations of drug reactions, relativistic simulations of black hole collisions, predictions of air movement and weather change, stress calculations in bridge design, and so on. Computed on a single CPU core, these complex problems might take months or even years to yield results. Such long run times are unacceptable, so many CPU cores must compute in parallel to improve efficiency, and the supercomputer, which integrates a huge number of CPUs in one machine, was born of exactly this demand.

What are supercomputers used for? - a Zhihu answerer

A few examples:

1. "Nuclear simulation" needs HPC

A nuclear reaction is a chain reaction: a fissioning atom affects the atoms around it, and those atoms in turn affect their own neighbors. The number of atoms whose behavior must be simulated therefore quickly grows exponentially.
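As a toy illustration of that explosion in scale (the numbers are invented for the example; this is not a physical model): if each fission triggers a couple of further fissions, the atoms involved grow geometrically with each generation.

```python
def atoms_involved(branching: int, generations: int) -> int:
    """Toy chain-reaction model: total atoms touched after the given
    number of generations, when each fission triggers `branching`
    further fissions."""
    return sum(branching ** g for g in range(generations + 1))

print(atoms_involved(2, 10))   # -> 2047
print(atoms_involved(2, 50))   # already ~2.3e15 atoms to track
```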

Simulating this requires extremely powerful supercomputers. What is more, there is no ceiling on the computing power such simulations can use: the stronger the computing power, the more precise the simulation, and the deeper the laws that can be discovered.


2. Climate prediction also needs HPC

Climate prediction, widely understood as forecasting global air flows, ocean currents, and the like, can be regarded as weather forecasting pushed to its limit. The basic principle of weather forecasting is to capture the trajectory of every cloud and air current on the map via weather satellites, and then deduce their future movement through massive amounts of calculation.

Bear in mind that even today it is hard for weather-forecast accuracy to exceed 80%. Still, you can surely feel that today's forecasts are far more accurate than those of your childhood. That is because high-performance computing capability has grown enormously.

For example, the grid size of meteorological calculations used to be one degree of latitude or longitude, about 111 kilometers; today the calculations have been refined to a 3-kilometer grid, and meteorologists have pushed the resolution to 1 kilometer. Each such improvement in resolution, however, demands a steep increase in computing power.
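A back-of-envelope sketch (my own illustration, not an actual meteorological cost model) shows why finer grids are so expensive: shrinking the grid spacing multiplies the number of cells, and a finer grid usually also forces smaller time steps (the CFL condition), adding roughly one more factor.

```python
def cost_factor(old_km: float, new_km: float, dims: int = 2,
                cfl_time: bool = True) -> float:
    """Rough growth in compute cost when the horizontal grid spacing
    shrinks from old_km to new_km: cell count grows as ratio**dims,
    and a CFL-limited time step adds roughly one more factor of ratio."""
    ratio = old_km / new_km
    factor = ratio ** dims
    if cfl_time:
        factor *= ratio  # more, smaller time steps
    return factor

# refining from ~111 km to 3 km multiplies the cost roughly by:
print(round(cost_factor(111, 3)))  # -> 50653
```

Under this crude model, going from a 111 km grid to a 3 km grid costs tens of thousands of times more computation, which is why each jump in forecast resolution has had to wait for a new generation of machines.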

3. Video rendering is a "big customer" of high-performance computing

Remember Avatar? In that film, released in 2009, special-effects shots accounted for 70% of the footage. Since Avatar, heavy special effects have become standard for movies; a high-quality space science-fiction film can even be completed with two actors in front of a green screen. What supports these effects is, without question, enormous high-performance computing power.

4. Beyond that, high-performance computing also serves astrophysics, earthquake prediction, materials science, genome sequencing, traffic analysis, research on human tissues and organ systems, and much more.


Source: blog.csdn.net/ctrigger/article/details/94183989