Why choose FPGAs to accelerate AI computing?

reference: http://bbs.elecfans.com/jishu_1634668_1_1.html

Why did Microsoft choose FPGAs?

Microsoft and Google have different corporate genes. Google likes to try new technology, so it was natural for it to build the TPU and chase the highest possible performance. Microsoft, by contrast, has a strongly commercial culture: when it picks a solution, it weighs cost against business value. Building an ASIC may be impressive, but is it really worth it?

The main drawbacks of building a custom chip are the large investment, the long development cycle, and the fact that once the chip is finished its internal logic cannot be changed. AI algorithms are still iterating rapidly, while producing a chip takes at least a year or two, which means the chip only supports older architectures and algorithms by the time it ships. To support new algorithms, the chip has to be made general-purpose and expose an instruction set for users to program, but that generality reduces performance and increases power consumption, because part of the circuitry goes to waste. This is why many chip companies, even though they design ASICs, still build many products on FPGAs; Bitmain's mining machines are a case in point, since newly released mining algorithms are not supported by its existing chips.

CPU, GPU, ASIC and FPGA: a side-by-side comparison

The common hardware computing platforms today are the CPU, GPU, ASIC and FPGA. The CPU is the most general-purpose: it has mature instruction sets such as x86, ARM, MIPS and Power, and as long as users can write software against the instruction set, they can use the CPU to complete all kinds of tasks. That same generality, however, is why the CPU has the worst computing performance of the four. Many modern workloads need highly parallel, deeply pipelined architectures, and although the CPU's pipeline is very long, it has at most a few dozen compute cores, so its degree of parallelism falls short. Watching a high-definition video, for example, requires rendering a huge number of pixels in parallel, and the CPU simply cannot keep up.
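To make the parallelism gap concrete, here is a minimal sketch (my own illustration, not code from the original article) of per-pixel work done the CPU way: a plain serial loop over a hypothetical video frame, applying a made-up brightness-scaling operation. Even with multithreading, there are at most a few dozen CPU cores to split this loop across.

```cuda
#include <cstddef>
#include <vector>

// CPU-style per-pixel processing: a single thread walks the whole frame
// one pixel at a time. Even with multithreading, a CPU has at most a few
// dozen cores to divide this loop over.
// (Illustrative only: the frame layout and the brightness "gain" operation
// are made-up examples, not taken from the original article.)
void brighten_cpu(std::vector<unsigned char>& frame, float gain) {
    for (std::size_t i = 0; i < frame.size(); ++i) {
        float v = frame[i] * gain;   // scale one pixel's brightness
        frame[i] = v > 255.0f ? 255 : static_cast<unsigned char>(v);
    }
}
```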

The GPU overcomes the CPU's lack of parallelism by packing hundreds or thousands of compute cores onto a single chip; with a GPU programming language such as CUDA or OpenCL, users can develop applications that the GPU accelerates (a minimal kernel sketch follows the figure below). The GPU has a serious shortcoming of its own, though: its smallest unit is a compute core, which is still too coarse. A very important concept in computer architecture is granularity: the finer the granularity, the more room the user has to shape the design. It is like constructing a building: with small bricks you can create many beautiful forms, but construction takes a long time; with prefabricated concrete sections the building goes up quickly, but its style is limited. The figure below shows prefabricated construction: assembled like building blocks, fast, but every building looks the same, and nothing else can be built from the same pieces.

[Figure: prefabricated building construction]
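For contrast with the serial loop above, here is a minimal CUDA sketch (again my own illustration, not code from the article) of the same hypothetical brightness operation written as a kernel: one lightweight thread per pixel, so the frame is spread across thousands of GPU cores instead of being walked sequentially.

```cuda
#include <cuda_runtime.h>

// GPU-style per-pixel processing: each thread handles exactly one pixel,
// so the work is spread across thousands of hardware cores rather than a
// few dozen CPU cores.
__global__ void brighten_gpu(unsigned char* frame, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = frame[i] * gain;   // scale one pixel's brightness
        frame[i] = v > 255.0f ? 255 : static_cast<unsigned char>(v);
    }
}

// Host side: copy the frame to the GPU, launch one thread per pixel,
// then copy the result back.
void brighten_on_gpu(unsigned char* host_frame, int n, float gain) {
    unsigned char* dev_frame = nullptr;
    cudaMalloc(&dev_frame, n);
    cudaMemcpy(dev_frame, host_frame, n, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    brighten_gpu<<<blocks, threads>>>(dev_frame, n, gain);

    cudaMemcpy(host_frame, dev_frame, n, cudaMemcpyDeviceToHost);
    cudaFree(dev_frame);
}
```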
The ASIC overcomes the GPU's coarse granularity by letting the user customize the chip's logic starting from the transistor level, with a foundry producing the final dedicated chip. In both performance and power consumption it is far better than the GPU; after all, it is designed from the ground up for one job, with no wasted circuitry, in pursuit of the highest performance. But the ASIC also has significant drawbacks: large investment, a long development cycle, and logic that cannot be modified after fabrication. A large-scale chip today takes at least tens of millions to hundreds of millions in investment and roughly one to two years to build; AI chips take even longer, because there is little general-purpose IP available and much of the design must be developed in-house. Once the chip is done, if a major problem appears or a feature upgrade is needed (some small problems can be fixed through reserved logic and metal-layer rewiring), it cannot be patched directly; the layout has to be re-edited and the chip taped out at the foundry again.

So we finally come back to the FPGA, which combines the fine compute granularity of the ASIC with the programmability of the GPU. The FPGA's compute granularity is very fine, going all the way down to the level of NAND gates, yet its logic can still be modified: it is programmable.

