Robotics luminary Daniela Rus leads! MIT's new algorithm gives soft robots "proprioception"

This article is reproduced from Leifeng.com. To reprint, please apply for authorization on Leifeng.com's official website.

Soft robots are probably not unfamiliar to many people.

The development of soft robots is inseparable from progress in multiple disciplines, including materials science, robotics, biomechanics, sensing, and control. In recent years these disciplines have advanced rapidly, and soft robots of many kinds have begun to emerge.

At ICRA 2017, the IEEE International Conference on Robotics and Automation, Samuel Au, an associate professor in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong who took part in developing the da Vinci surgical robot, said:

Soft robots have very broad applications in the medical field and may even change the paradigm of medical robotics. Soft robots are the ultimate goal of surgical robots.

Of course, beyond the medical field, soft robots also have a broad market: toys.

In this market, Disney pays close attention to soft robots. Just last year, Disney Research gave soft robots the ability of "proprioception" based on algorithms and a special stretch sensor.

Recently, under the leadership of Professor Daniela Rus, one of the world's leading robotics experts, MIT CSAIL has produced a similar result: an algorithm they developed optimizes the sensors in a soft robot's body, so that the robot can better sense itself and interact with its environment.

The related paper, entitled Co-Learning of Task and Sensor Placement for Soft Robotics, will be presented at the IEEE International Conference on Soft Robotics in April 2021.

Letting a soft robot answer "Where am I?"

Many people picture robots as hard-shelled and metallic; those are traditional rigid robots. The limited array of joints and limbs of a rigid robot usually makes it easy to control with control-mapping and motion-planning algorithms.

Unlike rigid robots, soft robots are nonlinear in both structure and material and have many degrees of freedom. Their motion tasks are therefore more complex, which places very high demands on the algorithms.

As the paper introduces:

Soft robots must make inferences in an infinite-dimensional state space, and mapping this continuous state space is not simple, especially when working from a limited set of discrete sensors; after all, sensor placement deeply influences the richness of the robot's learned task model.

In plain terms, the paragraph above says that for a soft robot to reliably complete the tasks it is programmed for, it needs to know the positions of all of its body parts; and since a soft robot can deform in almost infinitely many ways, this task is correspondingly difficult.

To let the soft robot answer the question "Where am I?", scientists' previous strategy was to use an external camera to map the robot's position and feed that information back into the robot's control program.

But MIT CSAIL's idea is to create a soft robot that needs no such outside help.

In the view of the research team:

It is impossible to install countless sensors on a robot. The real question is: how many sensors are needed, and where should they be placed, to get the best cost-performance ratio?

Because of this, MIT CSAIL turned to deep learning.

They developed an algorithm that helps engineers design soft robots that collect more useful information about their surroundings.

Specifically, this new method co-learns sensor placement and a representation for complex tasks: it processes on-board sensor information to learn a salient, sparse selection of positions, optimizing where the sensors sit in the robot's body and ensuring the robot achieves optimal task performance.

Alexander Amini, one of the co-authors of the paper, said:

This system can not only learn a given task, but also learn how best to design the robot to solve that task. Sensor placement is a very difficult problem, so this solution is very exciting.

The paper notes that since many soft robots are naturally node-based, the new architecture uses point-cloud-based learning and probabilistic sparsification. Their approach treats sensor design as a dual learning process, combining physical and computational design in a single end-to-end training procedure.
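The point-cloud framing means the network must treat the robot's candidate sensor points as an unordered set. A minimal sketch of that idea follows: a PointNet-style shared per-point MLP followed by a symmetric max-pool, so the output does not depend on point order. All weights and sizes here are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_features(points, W1, W2):
    """Shared per-point MLP, then a symmetric max-pool over the points.
    Because max() ignores ordering, the global feature is identical for
    any permutation of the input point cloud."""
    h = np.maximum(points @ W1, 0.0)   # per-point hidden layer (ReLU)
    h = np.maximum(h @ W2, 0.0)        # per-point feature
    return h.max(axis=0)               # symmetric pooling -> global feature

# Toy "robot body" as a cloud of 50 3-D points, with illustrative weights.
cloud = rng.normal(size=(50, 3))
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 8))

f1 = point_features(cloud, W1, W2)
f2 = point_features(cloud[::-1], W1, W2)   # same points, reversed order
print(np.allclose(f1, f2))                 # prints True
```

The symmetric pooling is what lets such a network handle soft-robot bodies with arbitrary numbers and orderings of candidate sensor sites.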

In the paper, the researchers call this architecture the PSFE network (point sparsification and feature extraction network).

The PSFE network simultaneously learns a representation of the sensor readings and the locations of the sensors. As shown in the paper's overview figure, the PSFE network is at the core of all the demonstrations and applications the team built, including object-grasp prediction (B), learning proprioception (C), and control (D).
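As a rough illustration of "co-learning" sensor placement with a task (this is not the PSFE architecture itself, just the underlying idea), one can jointly optimize a task readout and a per-site gate vector under an L1 sparsity penalty; the sites whose gates survive are the "selected" sensor locations. All numbers and names below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 candidate sensor sites, only 3 of which
# (sites 2, 7, 13) actually carry signal about the task target.
n_sensors, n_samples = 20, 500
X = rng.normal(size=(n_samples, n_sensors))
true_w = np.zeros(n_sensors)
true_w[[2, 7, 13]] = [1.5, -2.0, 1.0]
y = X @ true_w + 0.05 * rng.normal(size=n_samples)

# Co-learn a task readout w and per-site gates g (L1-sparsified).
w = rng.normal(scale=0.1, size=n_sensors)
g = np.full(n_sensors, 0.5)          # one gate per candidate sensor site
lr, lam = 0.05, 0.02                 # learning rate, sparsity weight

for _ in range(2000):
    err = X @ (g * w) - y            # prediction residual
    grad_w = (X.T @ err) / n_samples * g
    grad_g = (X.T @ err) / n_samples * w + lam * np.sign(g)
    w -= lr * grad_w
    g = np.clip(g - lr * grad_g, 0.0, 1.0)   # keep gates in [0, 1]

chosen = np.where(g > 0.1)[0]
print("selected sensor sites:", chosen)   # with this seed: sites 2, 7, 13
```

The sparsity penalty drives the gates of uninformative sites to zero while the task loss keeps the informative ones open, so placement and task performance are optimized together rather than in separate stages.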

It turns out that when it comes to sensor placement, the algorithm's performance greatly exceeds human intuition!

In summary, the main contributions of this achievement are:

  1. A neural architecture that reasons about a soft robot's state from measurements of strain and strain rate;

  2. A probabilistic sparsification method yielding a minimal set of sensor representations suited to downstream tasks, with an algorithm that outperforms automated and manual baselines;

  3. A demonstration of co-designing task learning and sensor placement on two tasks (tactile sensing and proprioception) across 7 soft robot morphologies.

Andrew Spielberg, one of the co-authors of the paper, said:

Our work contributes to the automation of robot design. Besides developing algorithms to control a robot's movement, we also need to consider how these robots will sense and interact with their other components. If it is applied in industry in the future, the impact could be immediate.

About the author

The paper's authors are three MIT CSAIL PhD students, including Andrew Spielberg, and two MIT professors, Daniela Rus and Wojciech Matusik.

Among the five authors, the most famous is Professor Daniela Rus.


Daniela Rus is the Director of MIT CSAIL, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, an IEEE Fellow, an AAAI Fellow, and a member of the National Academy of Engineering. She received a PhD in Computer Science from Cornell University, and her main research areas cover robotics, mobile computing, and data science.

Not long ago, Rob Toews, a Forbes AI columnist and venture investor at Highland Capital Partners, wrote an article listing 8 representative female leaders in the AI field. Among the 8 are Fei-Fei Li, NVIDIA's Vice President of Engineering, the founder of Coursera, and Daniela Rus.

In 2016, an editor at Leifeng.com had an in-depth conversation with Daniela Rus.

When asked whether machine learning or deep learning can ultimately help us create artificial general intelligence (AGI), the AI expert said that it is still impossible to judge whether deep learning can ultimately achieve AGI.

In her view, deep learning has great potential, but it also has problems:

  • Deep learning requires a great deal of data to train, whereas a truly general-purpose learning method should be more "universal".

  • Deep learning still makes mistakes.

  • In fact, we still don't know how deep learning works, or why it performs so well.

In other words, only when we understand deep learning, and even ourselves, more deeply will we be able to answer this question.

At the time, Daniela Rus also said that the research field she was most interested in was robotics:

We are studying how to build better automated systems that can profoundly change the world, changing the way people complete tasks and allowing us to understand each other better.

If we can build a machine that behaves like a living thing, then its inner workings may be more similar to those of biology, and this kind of research may deepen our understanding of ourselves.

It now appears that, under Daniela Rus's leadership, MIT CSAIL has taken another step toward that automation.


Origin blog.csdn.net/weixin_42137700/article/details/115206212