Huawei open-sources its self-developed AI research framework MindSpore! Automatic differentiation, automatic parallelism, train once and deploy across all scenarios...

Qian Ming, reporting from Aofeisi
Qubit Report | WeChat public account QbitAI

Huawei's open-source AI framework is here!

Just now, Huawei announced the official open-sourcing of its self-developed deep learning framework MindSpore, and the code is already online.

MindSpore is an all-scenario deep learning framework that supports training and inference across device, edge, and cloud. It is aimed mainly at AI fields such as computer vision and natural language processing, serves data scientists, algorithm engineers, and similar users, and is designed to provide a friendly and efficient development experience.

As part of Huawei's overall AI solution, MindSpore provides native support and hardware-software co-optimization for Ascend AI processors, and also supports general-purpose CPUs and GPUs.

In August 2019, Huawei's rotating chairman Xu Zhijun introduced MindSpore as achieving a unified architecture: train once, deploy everywhere. Moreover, by realizing "AI algorithm as code," MindSpore can significantly shorten model development time.

How does MindSpore achieve these capabilities? With the open-source release, more of its characteristics are being revealed.

MindSpore's four core functions

According to Chen Lei, chief scientist of Huawei MindSpore and IEEE Fellow, the currently open-sourced MindSpore mainly features automatic differentiation based on source code transformation, automatic distributed parallel training, data processing, and a graph execution engine.

The overall architecture is shown below:

First, he discussed automatic differentiation, which refers to computing the derivative of a function automatically, by computer program, using general methods. In deep learning, it is usually applied to automatically differentiate the network model, so that gradients can guide the optimization of network weights.

Current mainstream deep learning frameworks mainly use three approaches to automatic differentiation:

First, conversion based on static data flow graphs, represented by TensorFlow: static compiler techniques can be used to optimize network performance, but the model must be expressed as a data flow graph, so control flow cannot be expressed flexibly.

Second, conversion based on dynamic graphs, represented by PyTorch: users can use native control flow flexibly, but the drawbacks are higher runtime overhead and the inability to apply static compiler techniques to optimize the computation graph.

Third, automatic differentiation based on general source code transformation, which is the technique MindSpore adopts.

In this approach, with functional programming as the framework, automatic differentiation is performed on the intermediate representation in a just-in-time (JIT) compilation manner, supporting complex control flow, higher-order functions, and closures.

Because it supports automatic differentiation of control flow, this approach combines the advantages of the two techniques above: it supports flexible, native expression of control flow, and the network can also be statically compiled and optimized before execution to generate an efficient computation graph, thereby improving execution performance.
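To make the idea concrete, here is a toy illustration (plain Python, not MindSpore's actual implementation) of differentiating an intermediate representation: expressions are small trees, and the derivative is produced as a new expression tree before any execution, which is the essence of transformation-based automatic differentiation.

```python
# Toy source-transformation-style autodiff over a tiny expression IR.
# An expression is a number, a variable name, or ('add'|'mul', left, right).

def diff(expr, var):
    """Return the derivative expression of `expr` with respect to `var`."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == 'add':                       # (a + b)' = a' + b'
        return ('add', diff(a, var), diff(b, var))
    if op == 'mul':                       # product rule: (ab)' = a'b + ab'
        return ('add', ('mul', diff(a, var), b), ('mul', a, diff(b, var)))
    raise ValueError(op)

def evaluate(expr, env):
    """Run an expression tree with variable bindings from `env`."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == 'add' else x * y

# f(x) = x*x + 3*x, so f'(x) = 2x + 3
f = ('add', ('mul', 'x', 'x'), ('mul', 3, 'x'))
df = diff(f, 'x')           # the derivative exists as a graph before running
print(evaluate(df, {'x': 2}))  # 7
```

Because `df` is itself an expression tree produced ahead of execution, a compiler is free to optimize it statically, which is the advantage the article attributes to this approach.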

On automatic parallelism, Chen Lei said that MindSpore can take serial algorithm code and automatically carry out distributed parallel training, targeting a pain point of the current model development process.

In general, during model development, besides designing the model logic itself, developers also need to design a configuration for distributed parallel training.

This is hard work.

Developers must not only analyze factors such as data volume, parameter count, and cluster network topology to determine a model partitioning strategy; they must also consider how to bind the sliced sub-models to devices, and so on, in order to achieve good distributed training performance.

Yet these parallel-training optimization details are unrelated to the business goal the model serves, while still forcing developers to think of everything, often "until all their hair falls out."

Especially when the logic is complex and the number of model parameters is huge, manually finding the optimal parallel strategy is all but impossible.

Huawei wants to solve this problem. MindSpore proposes a new distributed parallel training mode that integrates data parallelism, model parallelism, and hybrid parallelism.

Specifically, MindSpore builds a cost model based on data volume, model parameter count, network bandwidth, and cluster topology information, automatically selects the model partitioning scheme with the minimum cost, binds the model slices to devices, and carries out distributed parallel training automatically.

Throughout this process, developers are barely involved; they only need to focus on developing the model logic.
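The selection step can be sketched as follows. This is a minimal illustration of the idea of cost-model-driven strategy selection, not MindSpore's actual algorithm; the candidate strategies, device count, and cost formulas are assumptions chosen purely for demonstration.

```python
# Toy cost model: estimate each candidate partitioning's cost
# (compute time per device plus communication time) and pick the cheapest.

DEVICES = 8  # assumed cluster size for this sketch

def estimate_cost(strategy, data_size, param_count, bandwidth):
    compute = data_size * param_count / DEVICES   # same toy compute for all
    if strategy == 'data_parallel':
        comm = param_count / bandwidth            # gradients are all-reduced
    elif strategy == 'model_parallel':
        comm = data_size / bandwidth              # activations are exchanged
    else:  # 'hybrid': split along both axes
        comm = (param_count / 2 + data_size / 2) / bandwidth
    return compute + comm

def pick_strategy(data_size, param_count, bandwidth):
    """Return the candidate strategy with the minimum estimated cost."""
    candidates = ['data_parallel', 'model_parallel', 'hybrid']
    return min(candidates,
               key=lambda s: estimate_cost(s, data_size, param_count, bandwidth))

# A huge parameter count relative to the data volume makes gradient
# synchronization expensive, so model parallelism wins in this toy model.
print(pick_strategy(data_size=1e3, param_count=1e9, bandwidth=1e8))
```

The real system reportedly feeds the cost model with cluster topology information as well; the point of the sketch is only that partitioning becomes an automated search over quantified costs rather than a manual design task.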

MindSpore's data processing function is called MindData. It completes pipeline processing of training data, covering data loading, data augmentation, and feeding data into training. It provides an easy-to-use programming interface and rich data processing capabilities that cover CV, NLP, and other all-scenario workloads.

In data processing, MindSpore is further optimized in combination with the Ascend chip, providing processing speed matched to the rate at which the chip consumes data for computation.

Chen Lei said this is key to ensuring that the Ascend chip delivers its full performance.
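The load/augment/feed pipeline described above can be sketched with plain Python generators. This is only a structural illustration of staged pipeline processing, not the MindData API; the "augmentation" is a placeholder transform.

```python
# A generator-based load -> augment -> batch pipeline: each stage pulls
# from the previous one lazily, so samples stream through without the
# whole dataset being materialized.

def load(records):
    for r in records:          # stage 1: read raw samples
        yield r

def augment(stream):
    for r in stream:           # stage 2: per-sample transform (placeholder)
        yield r * 2

def batch(stream, size):
    buf = []
    for r in stream:           # stage 3: group samples for the training loop
        buf.append(r)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:                    # emit the final, possibly smaller batch
        yield buf

batches = list(batch(augment(load(range(5))), size=2))
print(batches)  # [[0, 2], [4, 6], [8]]
```

Because each stage is lazy, a faster consumer (such as an accelerator) simply pulls data more quickly, which is the property a throughput-matched pipeline relies on.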

In addition, MindSpore also supports distributed processing of data.

In distributed data-parallel mode, after each batch, model information is computed and distributed to each worker, and two methods, slicing and resampling, are provided to iteratively adjust the data partitioning.

Finally, the module that handles interaction between MindSpore's front end and the underlying hardware is the graph engine.

It is MindSpore's internal graph processing module, responsible for managing the series of graph representations passed down from the front end, eventually converting them into graphs that run directly on the underlying hardware, and handling operator dispatch and management for all graphs.

During graph processing, the graph engine defines a unified interface that each pluggable module must provide. Concrete plugins supplied by different functional modules declare their own capabilities, and the engine chooses the optimal execution option based on the capabilities each plugin offers, thereby ensuring performance.
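The capability-based selection idea can be sketched like this. The plugin names, capability sets, and scoring are invented for illustration; this is not MindSpore's internal interface.

```python
# Toy plugin registry: each backend declares the operators it supports and
# a preference score; the engine picks the best-scoring plugin that can
# execute every operator in the graph.

class Plugin:
    def __init__(self, name, supported_ops, score):
        self.name = name
        self.supported_ops = set(supported_ops)
        self.score = score                # higher = preferred (e.g. faster)

    def can_run(self, graph_ops):
        return set(graph_ops) <= self.supported_ops

REGISTRY = [
    Plugin('cpu',    {'matmul', 'add', 'conv2d', 'custom_op'}, score=1),
    Plugin('gpu',    {'matmul', 'add', 'conv2d'},              score=5),
    Plugin('ascend', {'matmul', 'add', 'conv2d'},              score=9),
]

def select_backend(graph_ops):
    """Pick the highest-scoring plugin able to execute every op in the graph."""
    candidates = [p for p in REGISTRY if p.can_run(graph_ops)]
    if not candidates:
        raise RuntimeError('no backend supports this graph')
    return max(candidates, key=lambda p: p.score).name

print(select_backend(['matmul', 'add']))        # ascend
print(select_backend(['matmul', 'custom_op']))  # cpu (only backend with custom_op)
```

The design point is that the engine never hard-codes a backend: adding hardware support means registering one more plugin behind the same interface.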

Beyond these functions, MindSpore also offers a Model Zoo of deeply optimized models, visualization tools, and a model evaluation tool.

The Model Zoo will bring 30+ models online in the fourth quarter of this year, with support for Ascend-MindSpore collaborative deep optimization and model customization.

The visualization tools provide visualization of a single training run as well as comparison and traceability across multiple runs, making model tuning more convenient for developers.

The model evaluation tool supports a variety of adversarial sample generation algorithms, including 13 white-box and 7 black-box attack algorithms, helping developers evaluate a model's adversarial attack and defense capabilities.

With these functions in place, MindSpore improves model development capability while remaining quite easy to use.

With automatic differentiation, training neural networks is easy

Ease of use shows up directly in everyday operation.

Chen Lei said that MindSpore provides users with a Python programming paradigm, allowing neural networks to be described in a modular form.

With SCT-based automatic differentiation, users can also use native Python control syntax and other advanced constructs, such as tuples (Tuple), lists (List), and lambda expressions.

He said that to avoid confusing users, MindSpore minimizes the introduction of new interfaces and concepts. To train a simple neural network on a single machine, a user only needs to know about tensors, operators, cells, and models.

Specific process is as follows:

Start from input tensors, which can be constant tensors or parameter tensors. MindSpore then provides different operators for constructing cells. Finally, the cells are wrapped in a model to train the neural network.

Alternatively, the user can feed data directly into a cell to perform inference tasks.
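The tensor-operator-cell-model flow just described can be mirrored in plain Python. All class and method names below are assumptions for illustration, not the MindSpore API, and the gradients are derived by hand where a framework would obtain them by automatic differentiation.

```python
# A plain-Python mirror of the tensor -> cell -> model workflow.

class Tensor:
    def __init__(self, data):
        self.data = data

class DenseCell:
    """A 'cell': one operator (weight * x + bias) with trainable parameters."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def construct(self, x):                 # forward pass / inference
        return Tensor(self.weight * x.data + self.bias)

class Model:
    """Wraps a cell for training; a real model would also own loss/optimizer."""
    def __init__(self, cell):
        self.cell = cell

    def train_step(self, x, target, lr=0.1):
        # One gradient-descent step on squared error. The gradient formulas
        # are hand-derived here; a framework would compute them automatically.
        pred = self.cell.construct(x).data
        err = pred - target
        self.cell.weight -= lr * err * x.data
        self.cell.bias -= lr * err
        return err ** 2

net = DenseCell(weight=0.0, bias=0.0)
model = Model(net)
for _ in range(100):
    model.train_step(Tensor(1.0), target=3.0)
print(round(net.construct(Tensor(1.0)).data, 2))  # converges to 3.0
```

Note the two usage modes the article mentions: wrapping the cell in a model for training, or calling the cell's forward pass directly (`net.construct(...)`) for inference.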

This is why Xu Zhijun, when announcing last year that MindSpore would be open-sourced, said the framework is aimed not only at deep learning developers but also at scientists, mathematicians, and other groups playing an increasingly important role in AI algorithms.

At the same time, ease of use is reflected not only in model development; model deployment is also very convenient. After one training run, you can deploy in many places, which is why MindSpore is an all-scenario framework.

For training and inference, MindSpore supports not only CPUs and GPUs but is also specially optimized for Huawei's Ascend chip. Meanwhile, MindSpore is currently the only framework the Ascend chip supports.

This means that for deploying AI applications on devices with Huawei's Ascend series chips, MindSpore is a better choice than other frameworks.

Open-sourcing the framework, Huawei fosters its AI ecosystem

In 2018, at the HUAWEI CONNECT conference, Huawei disclosed its complete AI solution for the first time, of which MindSpore is an important part.

In August 2019, Huawei released the Ascend 910 and launched the all-scenario AI computing framework MindSpore, pledging to open-source it in the first quarter of 2020.

Xu Zhijun said at that event that this marked Huawei's completion of a full-stack, all-scenario AI solution, and that Huawei's AI strategy had entered a new stage.

Today's open-source release is an important step by Huawei to make good on that promise and build a developer community, and also to cultivate Huawei's AI ecosystem and advance its AI strategy.

According to Chen Lei, the current release is the first open-source version, 0.1.0-alpha, under the Apache 2.0 license, and it will be gradually improved going forward.

At the same time, Huawei also disclosed the governance structure of the MindSpore open-source community, which consists of a technical governance committee, special interest groups, working groups, and other bodies.

The technical governance committee is composed of representatives from 14 different companies, universities, and institutions; it will practice open governance in accordance with the community charter to promote the community's healthy and orderly development.

Special interest groups are formed spontaneously by developers around particular feature modules and are responsible for the development work of the corresponding MindSpore modules.

Working groups are formed spontaneously by developers for work that requires cooperation across multiple special interest groups, and they are responsible for the related development.

Now, MindSpore's code is online and the open-source community is open.

If you are interested, bookmark the portals below, try it out, and then leave a review:

MindSpore open-source community:
https://www.mindspore.cn
MindSpore code hosting:
https://gitee.com/mindspore

· Signed content of the NetEase News · NetEase Hao "Each Has an Attitude" program

- Ends -

How to follow, learn about, and make good use of artificial intelligence?

Every weekday, Qubit's AI Insider selects the latest global AI developments and research, summarizes new technologies, products, and applications, reviews the day's hottest industry trends and policies, and digs up valuable papers, tutorials, research, and more.

Meanwhile, the AI Insider group provides a platform for exchange and sharing, to better meet everyone's needs for obtaining AI information and learning AI technology. Scan the code to subscribe:

Understand the state of AI development and seize industry opportunities

AI Community | Communicate with excellent people

Qubit QbitAI · Signed author on Toutiao

վ'ᴗ'ի Tracking new developments in AI technologies and products

If you liked this, tap "Looking"!



Origin blog.csdn.net/QbitAI/article/details/105171861