Why do MCUs need AI too?

Artificial intelligence (AI) is a branch of computer science that seeks to understand the essence of intelligence and to build machines that can react in ways similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of AI, its theory and technology have steadily matured and its fields of application have continued to expand. One can imagine that future AI-powered products will serve as "containers" of human wisdom. AI can simulate the information processes of human consciousness and thinking; it is not human intelligence, but it can reason like humans and may one day exceed human intelligence.

Traditionally, MCUs were considered too resource-constrained for AI workloads. However, as AI moves from the cloud to the edge, this view is rapidly changing: AI computing engines let MCUs push past the previous limits of embedded applications, and embedded designs can now respond to network attacks in real time and improve device security.

Cloud computing that supports AI drives the demand for MCUs with AI functions: running inference on the MCU reduces the bandwidth required for data transmission and saves processing power on the cloud server, as shown in the figure below.
[Figure: AI on the MCU reduces transmission bandwidth and cloud-server load]

MCUs equipped with AI algorithms enable applications such as object recognition, voice services, and natural language processing. They also help improve accuracy and data privacy in battery-powered devices for the Internet of Things (IoT), wearables, and medical applications.

So, how does the MCU implement AI functions in edge and node design? The following briefly introduces three basic methods that enable MCUs to perform AI acceleration at the edge of the IoT network.

Three ways to combine MCU + AI

The first method (probably the most common) converts models from neural network (NN) frameworks such as Caffe 2, TensorFlow Lite, and Arm NN so that cloud-trained models and an inference engine can be deployed on the MCU. Software tools can take a pre-trained neural network from the cloud and optimize it for the MCU by converting it into C code.

The optimized code running on the MCU can perform AI functions in voice, vision, and anomaly-detection applications. Engineers can download these toolsets, configure the MCU, and run optimized neural-network inference. The toolsets also provide code examples for NN-based AI applications.
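As an illustrative sketch of what such conversion tools emit, the C below shows one quantized fully connected layer with a ReLU activation. The weight, bias, and scale values are made up for the example; real tools (e.g. STM32Cube.AI or the TensorFlow Lite Micro converter) generate similar tables plus a small runtime.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical output of a model-conversion tool: one fully connected
 * layer (4 inputs -> 2 outputs) with 8-bit quantized weights.
 * All values here are invented for illustration. */
static const int8_t  fc_weights[2][4] = {
    { 12, -34, 56, -7 },
    { -3,  45, -8, 22 },
};
static const int32_t fc_bias[2] = { 100, -50 };
static const float   fc_scale   = 0.01f;  /* de-quantization scale (assumed) */

/* Run the layer on quantized input, apply ReLU, write float outputs. */
void fc_relu_infer(const int8_t in[4], float out[2])
{
    for (size_t o = 0; o < 2; ++o) {
        int32_t acc = fc_bias[o];
        for (size_t i = 0; i < 4; ++i)
            acc += (int32_t)fc_weights[o][i] * in[i];
        float v = (float)acc * fc_scale;
        out[o] = v > 0.0f ? v : 0.0f;       /* ReLU */
    }
}
```

Accumulating in 32-bit integers and de-quantizing only at the end is the usual pattern on MCUs without a floating-point unit, since it keeps the inner loop in cheap integer arithmetic.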

Such model conversion tools allow optimized neural-network inference to run on low-cost, low-power MCUs, as shown in the figure below.
[Figure: model conversion tools running optimized NN inference on low-cost, low-power MCUs]
The second method bypasses the need for a pre-trained neural network model from the cloud: designers can integrate an AI library into the microcontroller and build local AI training and analysis functions into their code.

Developers can then create data models based on signals acquired from sensors, microphones and other embedded devices at the edge, and run applications such as predictive maintenance and pattern recognition.

The third method relies on AI-specific coprocessors, whose availability lets MCU vendors accelerate the deployment of machine-learning functions. Cores such as the Arm Cortex-M33 pair with popular APIs such as CMSIS-DSP to simplify code portability; the tight coupling between the MCU and the coprocessor speeds up AI functions such as correlation and matrix operations.
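To make the API concrete, here is a portable C stand-in for CMSIS-DSP's `arm_dot_prod_f32()`, one of the kernels such libraries accelerate. The real function has this same signature; on Cortex-M parts it is unrolled and can use DSP or coprocessor instructions, while application code stays unchanged.

```c
#include <stdint.h>

typedef float float32_t;  /* CMSIS-DSP's float type alias */

/* Reference implementation of a dot product with the same interface
 * as CMSIS-DSP's arm_dot_prod_f32(). Vendor builds replace this loop
 * with hardware-accelerated code; callers are unaffected. */
void dot_prod_f32(const float32_t *pSrcA, const float32_t *pSrcB,
                  uint32_t blockSize, float32_t *result)
{
    float32_t sum = 0.0f;
    for (uint32_t i = 0; i < blockSize; ++i)
        sum += pSrcA[i] * pSrcB[i];
    *result = sum;
}
```

This is why a stable API matters for portability: the same source compiles on a plain MCU or on one with a tightly coupled coprocessor, and only the library implementation changes.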

The software and hardware platforms above show how AI functions can be implemented in low-cost MCUs through inference engines built to embedded design requirements. This matters because AI-capable MCUs are likely to change the design of embedded devices in IoT, industrial, smart-building, and medical applications.


Origin blog.csdn.net/NETSOL/article/details/111959611