A Summary of RGB Image-Based Robotic Grasping Algorithms

Author: Tom Hardy
Date: 2020-02-23
Source: round-up | RGB image-based robotic grasping

Foreword

I recently read some of the latest papers on RGB image-based robotic grasping; here is a brief share of their ideas.

1、Optimizing Correlated Graspability Score and Grasp Regression for Better Grasp Prediction

This paper presents a new deep convolutional network architecture that improves regression-based grasp prediction by introducing a new loss term coupling a graspability score with grasp regression. The authors also release Jacquard+, an extension of the Jacquard dataset in which multiple objects are placed in simulated scenes, allowing grasp detection models to be evaluated under varying conditions. Because Jacquard+ is generated by physics simulation, experiments on it are fully reproducible. Results show that the proposed method substantially outperforms conventional grasp detection methods on both the Jacquard and Jacquard+ datasets.

Network structure:
[Figure: network architecture]
The results:
[Figures: experimental results]
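The paper's exact loss is not reproduced here, but coupling a grasp-rectangle regression term with a graspability-score term might look like the following minimal numpy sketch (the L2/BCE choice and the `lam` weighting are assumptions, not the paper's actual formulation):

```python
import numpy as np

def grasp_loss(pred_box, true_box, pred_score, true_score, lam=1.0):
    """Combined loss sketch: grasp-rectangle regression (L2) plus a
    graspability-score term (binary cross-entropy)."""
    reg = np.mean((pred_box - true_box) ** 2)        # regression term
    eps = 1e-7
    p = np.clip(pred_score, eps, 1 - eps)            # avoid log(0)
    bce = -(true_score * np.log(p) + (1 - true_score) * np.log(1 - p))
    return reg + lam * bce

# toy example: predicted vs. ground-truth grasp rectangle (x, y, w, h, theta)
pred = np.array([100.0, 80.0, 40.0, 20.0, 0.30])
true = np.array([102.0, 78.0, 42.0, 20.0, 0.25])
loss = grasp_loss(pred, true, pred_score=0.9, true_score=1.0)
```

A perfect prediction with a confident score drives both terms toward zero, so the network is pushed to regress accurate rectangles and to score them correctly.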

2、Real-time Grasp Pose Estimation for Novel Objects in Densely Cluttered Environment

Prior grasping approaches mostly grasp at the object centroid and along the object's major axis, but such schemes often fail on objects with complex shapes. This paper presents a new real-time grasp pose estimation strategy for robotic pick-and-place of novel objects. The method estimates the object contour in the point cloud and predicts the grasp pose and object skeleton on the image plane. Test objects include spherical containers, tennis balls, and objects of complex (non-convex) shape such as a blower. The results show that the strategy grasps complex-shaped objects well and, compared with the baseline strategy, predicts valid grasp configurations. Effectiveness is demonstrated in two settings, i.e., objects placed in isolation and objects placed in dense clutter, with grasp success rates of 88.16% and 77.03%, respectively. All experiments were performed on a real UR10 robot with a WSG-50 two-finger gripper.
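For context, the centroid-plus-major-axis baseline that the paper improves on can be sketched with PCA over contour points (a generic illustration, not the paper's method):

```python
import numpy as np

def centroid_axis_grasp(contour):
    """Baseline grasp: grasp at the object centroid, oriented along the
    object's major axis found by PCA of the contour points. This is the
    scheme that fails on complex, non-convex shapes."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)                 # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]           # major-axis direction
    if major[0] < 0:                                 # fix sign for determinism
        major = -major
    angle = np.arctan2(major[1], major[0])           # grasp-axis orientation
    return centroid, angle

# toy example: an elongated horizontal strip of contour points
xs = np.linspace(0, 10, 50)
contour = np.stack([xs, np.full_like(xs, 2.0)], axis=1)
c, a = centroid_axis_grasp(contour)
```

For a non-convex object like a blower, the centroid can fall outside the object body, which is exactly why the contour/skeleton-based strategy above is needed.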

3、GRIP: Generative Robust Inference and Perception for Semantic Robot Manipulation in Adversarial Environments

This paper proposes a two-stage Generative Robust Inference and Perception (GRIP) method for object recognition and pose estimation in adversarial environments. GRIP is a two-stage object detection and pose estimation system that aims to combine the discriminative strengths of CNNs with generative inference methods to achieve robust estimation. In GRIP, the first stage of inference produces a CNN-based recognition distribution. This distribution then drives the second stage, a generative multi-hypothesis optimization implemented as a particle filter with a static process model. The paper shows that GRIP achieves state-of-the-art results relative to the pose estimation systems PoseCNN and DOPE in adversarial scenes with varying lighting and dense occlusion. Compatibility with grasping and goal-directed sequential manipulation is demonstrated in pick-and-place tasks on the Michigan Progress Fetch robot.

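The second-stage idea, weighting pose hypotheses by a first-stage recognition distribution and iteratively resampling, can be sketched as a toy particle filter (the likelihood function and update rule here are stand-ins, not GRIP's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_pose(cnn_likelihood, init_poses, iters=50, noise=0.05):
    """Generative multi-hypothesis refinement sketch: keep a set of pose
    hypotheses (particles), weight them by a first-stage likelihood,
    resample in proportion to weight, and perturb. `cnn_likelihood`
    stands in for the CNN recognition distribution."""
    particles = np.asarray(init_poses, dtype=float)
    for _ in range(iters):
        w = np.array([cnn_likelihood(p) for p in particles])
        w = w / w.sum()                              # normalize weights
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx] + rng.normal(0, noise, particles.shape)
    w = np.array([cnn_likelihood(p) for p in particles])
    return particles[np.argmax(w)]                   # best hypothesis

# toy 1-D "pose" with a likelihood peaked at pose = 2.0
lik = lambda p: np.exp(-((p[0] - 2.0) ** 2))
best = refine_pose(lik, rng.normal(0, 1, (100, 1)))
```

The benefit of this two-stage design is robustness: even when the CNN distribution is noisy under occlusion, the generative stage concentrates hypotheses on consistent poses rather than trusting a single network output.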

4、Domain Independent Unsupervised Learning to grasp the Novel Objects

This paper proposes an unsupervised-learning-based algorithm for selecting feasible grasp regions; unsupervised learning infers patterns in a dataset without any external labels. The paper applies k-means clustering on the image plane to identify grasp regions, followed by an axis-assignment method. In addition, a new Grasp Decide Index (GDI) is defined to select the best grasp pose on the image plane, and multiple experiments are conducted in cluttered and isolated environments on standard objects from the Amazon Robotics Challenge 2017 and the Amazon Picking Challenge 2016. The paper also compares its results with prior learning-based methods to validate the robustness and adaptability of the proposed algorithm to various novel objects across different domains.

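A minimal k-means over image-plane points illustrates the grasp-region clustering step (a plain numpy sketch; the paper's feature space, axis assignment, and GDI scoring are omitted):

```python
import numpy as np

def kmeans(points, k=3, iters=20, seed=0):
    """Minimal k-means over image-plane points (e.g. foreground pixels).
    The resulting cluster centers act as candidate grasp regions."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), k, replace=False)]  # random init
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                    # nearest-center labels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers, labels

# toy example: two pixel blobs yield two candidate grasp regions
blob_a = np.random.default_rng(1).normal([10, 10], 1, (50, 2))
blob_b = np.random.default_rng(2).normal([40, 30], 1, (50, 2))
centers, labels = kmeans(np.vstack([blob_a, blob_b]), k=2)
```

Because clustering needs no labels, the pipeline transfers to novel objects and new domains without retraining, which is the paper's central claim.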

5、Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter (code open-sourced)

Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where occlusion is common. Whereas conventional methods use a static camera or fixed data-collection routines, the Multi-View Picking (MVP) controller uses an active perception approach, selecting informative viewpoints directly from the distribution of real-time grasp pose estimates, thereby reducing grasp pose uncertainty caused by clutter and occlusion. In grasping trials on 20 objects in clutter, the MVP controller achieves an 80% grasp success rate, 12% better than a single-viewpoint grasp detector. The paper also shows the proposed method is more accurate and more efficient than fixed methods that consider multiple viewpoints.

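The viewpoint-selection idea can be sketched as picking the view whose predicted grasp distribution is least uncertain (a toy entropy criterion; the real MVP controller reasons over grasp pose estimate distributions and reaching motion, which this sketch ignores):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete grasp-quality distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def next_best_view(view_predictions):
    """Pick the candidate viewpoint whose predicted grasp distribution
    has the lowest entropy, i.e. the view expected to resolve the most
    uncertainty about the grasp."""
    ents = [entropy(p) for p in view_predictions]
    return int(np.argmin(ents))

# toy example: view 1 yields a much more peaked grasp distribution
views = [np.array([0.25, 0.25, 0.25, 0.25]),   # occluded, uncertain
         np.array([0.90, 0.05, 0.03, 0.02])]   # clear view, confident
best_view = next_best_view(views)
```

Moving the camera toward low-entropy views is what lets the controller peer around occluders instead of committing to a grasp estimated from a single cluttered viewpoint.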

6、ROI-based Robotic Grasp Detection for Object Overlapping Scene

This paper presents ROI-GD, a robotic grasp detection algorithm based on regions of interest (ROIs). ROI-GD detects grasps using ROI features instead of the whole scene. It is divided into two stages: the first provides ROIs in the input image, and the second is a grasp detector based on ROI features. By labeling the Visual Manipulation Relationship Dataset, the paper also builds a multi-object grasp dataset much larger than the Cornell grasp dataset. Experimental results show that ROI-GD performs better in scenes with overlapping objects while remaining comparable to state-of-the-art grasp detection algorithms on the Cornell and Jacquard datasets. Robot experiments show that ROI-GD helps the robot grasp targets in both single-object and multi-object scenes, with overall success rates of 92.5% and 83.8%, respectively.

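The two-stage structure can be sketched as cropping each ROI and running a grasp detector on the crop, then mapping the result back to image coordinates (the detector below is a placeholder, not the paper's network):

```python
import numpy as np

def detect_grasps(image, rois, grasp_detector):
    """Two-stage sketch of the ROI-based idea: stage one supplies ROIs
    (here taken as given boxes), stage two runs a grasp detector on
    each ROI crop instead of the whole scene."""
    grasps = []
    for (x0, y0, x1, y1) in rois:
        crop = image[y0:y1, x0:x1]                # per-object region
        gx, gy, angle = grasp_detector(crop)      # grasp in crop coords
        grasps.append((gx + x0, gy + y0, angle))  # back to image coords
    return grasps

# placeholder detector: always grasps the crop center at angle 0
center_detector = lambda crop: (crop.shape[1] / 2, crop.shape[0] / 2, 0.0)
img = np.zeros((100, 100))
out = detect_grasps(img, [(10, 20, 30, 60)], center_detector)
```

Restricting the second stage to per-object crops is what keeps grasps attached to the right object when objects overlap, rather than averaging features across neighbors.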



Origin blog.csdn.net/qq_29462849/article/details/104505732