How does high-performance computing run efficiently in parallel?

In today's data-driven world, high-performance computing (HPC) has become a platform of choice for enterprises. With the move to cloud computing, on-demand HPC offers a cost-effective and highly flexible option.

HPC generally refers to aggregating computing power in a way that delivers far higher performance than an ordinary machine could provide. It runs efficiently in parallel: during a computation, the individual nodes work together.

Classified by how their parallel tasks are divided, high-performance computing workloads fall into two categories: high-throughput computing and distributed (cloud) computing.

High-throughput computing

In high-throughput computing, a task can be divided into several parallel subtasks with no dependencies among them. A common feature of this class of application is searching for a certain pattern across massive amounts of data. So-called Internet computing falls into this category.

High-throughput computing belongs to the SIMD (Single Instruction / Multiple Data, i.e. one instruction stream applied to many data streams) category.
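
A common way to run such independent subtasks in parallel is a worker pool. Below is a minimal Python sketch, assuming a toy data set and a hypothetical pattern; each chunk is searched on its own, so no subtask needs to talk to another.

    # A toy high-throughput workload: count a pattern across independent chunks.
    from concurrent.futures import ProcessPoolExecutor

    def count_matches(args):
        # Each subtask searches its own chunk; no coordination is needed.
        chunk, pattern = args
        return chunk.count(pattern)

    if __name__ == "__main__":
        # Stand-in for a massive data set already split into chunks.
        chunks = ["error: disk full", "all ok", "error: timeout", "ok ok"]
        pattern = "error"
        with ProcessPoolExecutor() as pool:
            counts = pool.map(count_matches, [(c, pattern) for c in chunks])
        print(sum(counts))  # total matches across all chunks -> 2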

Distributed cloud computing

Distributed cloud computing handles distributing tasks and merging their results, helping to meet the demand for lightweight local business interactions.

The problem is divided into a number of individual parts, each handled by a different computer. As long as the computers are networked, they can communicate with one another to exchange the data needed to solve the problem. Done correctly, the computers run as a single entity.
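
Here is a minimal sketch of that divide-and-conquer pattern, using local processes as a stand-in for networked computers; in a real cluster the parts would be shipped over the network, for example via MPI or an RPC layer.

    # Divide a problem into parts, solve each part on a separate worker,
    # then merge the partial results into one answer.
    from multiprocessing import Pool

    def partial_sum(numbers):
        # One "computer" solves its individual section of the problem.
        return sum(n * n for n in numbers)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        parts = [data[i::4] for i in range(4)]       # divide into 4 sections
        with Pool(processes=4) as pool:
            partials = pool.map(partial_sum, parts)  # solve in parallel
        print(sum(partials))                         # merge the results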

The ultimate goal of distributed cloud computing is to maximize performance economically by connecting users and IT resources in an efficient, transparent, and reliable way. It also provides fault tolerance, keeping resources accessible when a component fails.

Advantages of distributed cloud computing

1) Scalability and modular growth

Distributed systems are inherently scalable because they work across many machines and can be extended horizontally. Users can add another computer to handle a growing workload rather than repeatedly upgrading a single system.

In practice there is no upper limit to how far users can scale. Under heavy demand the system can run every computer at full capacity, and at lower workloads computers can be taken offline.
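
As an illustration, a hypothetical autoscaling rule might derive the worker count from the pending workload. The capacity figure below is an assumption; real systems scale on metrics such as queue depth or CPU load.

    import math

    TASKS_PER_WORKER = 100  # hypothetical capacity of a single machine

    def workers_needed(pending_tasks):
        # Scale out under heavy demand, scale back in when load drops.
        return max(1, math.ceil(pending_tasks / TASKS_PER_WORKER))

    for load in (50, 250, 10_000):
        print(load, "tasks ->", workers_needed(load), "workers")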

2) Fault tolerance and redundancy

By nature, distributed systems have higher fault tolerance than stand-alone machines.

A company running a cluster of computers across two data centers, for example, can keep its applications running even if one data center goes offline.

This translates into higher reliability: on a single machine, every fault brings the whole system down, whereas a distributed system keeps working even if one or more nodes or sites fail (though the performance demands on the remaining nodes will rise).
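
A minimal sketch of that failover behaviour follows, with hypothetical node addresses and a stubbed fetch function: if one node is down, the request simply moves on to the next.

    def fetch_from_cluster(nodes, fetch):
        # Try each replica in turn; succeed if any single node is healthy.
        last_error = None
        for node in nodes:
            try:
                return fetch(node)
            except ConnectionError as err:
                last_error = err  # node/site down: fall through to the next
        raise RuntimeError("all nodes failed") from last_error

    # Stub: the first "data center" is offline, the second still answers.
    def fake_fetch(node):
        if node != "dc2.example.com":
            raise ConnectionError(node)
        return "pong"

    print(fetch_from_cluster(["dc1.example.com", "dc2.example.com"], fake_fetch))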

3) Low latency

Since users may be near any of a number of node locations, a distributed system can route each user's traffic to the closest node, reducing latency and improving performance.
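
One simple way to pick the closest node is to measure the round trip to each candidate and route to the fastest. The host names below are hypothetical, and production systems more often rely on DNS, anycast, or a load balancer for this.

    import socket
    import time

    def connect_latency(host, port=443, timeout=2.0):
        # Time a TCP connect to the node; unreachable nodes rank last.
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.perf_counter() - start
        except OSError:
            return float("inf")

    nodes = ["us.example.com", "eu.example.com", "ap.example.com"]
    closest = min(nodes, key=connect_latency)
    print("routing traffic to", closest)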

4) Cost-effectiveness

Compared with large centralized systems, distributed systems are more cost-effective. Their initial cost is higher than that of a stand-alone system, but past a certain point they achieve better economies of scale: a distributed system composed of many small computers can be more cost-effective than a mainframe.

5) Efficiency

Distributed systems split the data of a complex problem into smaller portions and process them on multiple computers in parallel, which reduces the time needed to compute a solution. Ideally, a job that takes 100 seconds on one machine takes about 25 seconds when split across four, plus the overhead of dividing the work and merging the results.

The growth of networks gave birth to distributed computing, and this new model of parallel computing laid a solid cornerstone for cloud computing technology.

Source: blog.51cto.com/1086869/2462546