A Summary of High-Accuracy Visual Servoing Algorithms

Author: Tom Hardy
Date: 2020-2-14
Source: a summary of high-accuracy visual servoing algorithms

Foreword

Visual servoing plays an important role in industrial automation, with many applications in automatic assembly and high-precision registration. This post briefly summarizes common algorithms from the past two years.

1、Predicting Target Feature Configuration of Non-stationary Objects for Grasping with Image-Based Visual Servoing

This paper addresses closed-loop control during the final stage of grasping, when an RGB-D camera can no longer provide valid depth information; closing the loop at this stage is necessary for a robot to grasp non-stationary objects, for which open-loop controllers fail. The method predicts the image coordinates at which the target features will be observed in the final grasp pose, then uses image-based visual servoing (IBVS) to guide the robot to that pose. IBVS is a mature control technique that moves the camera in 3D space by driving image-plane features toward a target configuration. Previous work assumed the target feature configuration was known in advance, but for some applications, such as moving scenes observed for the first time, this may not be feasible. The proposed method is robust both to scene motion during the final grasping stage and to robot motion-control errors.
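The classical IBVS control law commands a camera velocity v = -λ L⁺ (s - s*), where L is the interaction matrix of the image features, s the current feature vector, and s* the target configuration. A minimal NumPy sketch of this standard textbook formulation (not code from the paper) for point features:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized point feature
    at image coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity twist [vx, vy, vz, wx, wy, wz] that drives the
    current features toward the desired configuration."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error
```

With this law the feature error decays as ė ≈ -λ e, which is what makes the scheme robust to moderate calibration and motion errors.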

2、Camera-to-Robot Pose Estimation from a Single Image (Carnegie Mellon University, open-source code)

This paper presents a method for estimating the camera-to-robot pose from a single image. A deep neural network processes the camera's RGB image to detect 2D keypoints on the robot, and the network is trained entirely on simulated data using domain randomization. Assuming the robot manipulator's joint configuration is known, a PnP solver recovers the camera extrinsics. Unlike conventional hand-eye calibration, the method requires no offline calibration procedure: the extrinsics can be computed from a single frame, opening up the possibility of online calibration. Results on three different camera sensors show that, from a single frame, the method achieves better accuracy than traditional offline multi-frame hand-eye calibration, and accuracy improves further with additional frames.
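Given the detected 2D keypoints and their known 3D positions on the robot (from the joint configuration and forward kinematics), the extrinsics can be recovered with a PnP solver. A minimal sketch of this step, using a plain DLT solution in NumPy rather than the paper's actual solver (the function name and interface are illustrative):

```python
import numpy as np

def pose_from_keypoints(K, pts3d, pts2d):
    """Recover camera extrinsics (R, t) from >= 6 noise-free 2D-3D keypoint
    correspondences via the Direct Linear Transform."""
    pts3d = np.asarray(pts3d, float)
    pts2d_h = np.hstack([pts2d, np.ones((len(pts2d), 1))])
    pn = (np.linalg.inv(K) @ pts2d_h.T).T          # normalized image coords
    A = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, pn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)                       # null vector, up to scale
    P /= np.linalg.norm(P[2, :3])                  # fix scale: ||r3|| = 1
    if (P[:, :3] @ pts3d[0] + P[:, 3])[2] < 0:     # cheirality: point in front
        P = -P
    U, _, Vt2 = np.linalg.svd(P[:, :3])            # project onto rotations
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    return R, P[:, 3]
```

In practice a robust solver (e.g. RANSAC around PnP) would be used, since learned keypoint detections contain outliers.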

3、Learning Driven Coarse-to-Fine Articulated Robot Tracking (ICRA 2019)

This paper presents an articulated robot tracking method that relies only on visual cues from color and depth images to estimate the robot's state, even under occlusion caused by interaction with the environment. The authors argue that accurate tracking requires establishing sub-pixel-level correspondences between the estimated state and the observations, which their joint modeling approach achieves. Previous work depended only on color edges or depth correspondences to the tracked target, and required initialization from the joint encoders. This paper instead presents a coarse-to-fine joint state estimator that depends only on depth edges and learned keypoint visual cues, with the state distribution initialized from the depth image by a learned predictor. The method is evaluated on four RGB-D sequences showing a KUKA-LWR arm and a Schunk-SDH2 hand interacting with the environment, demonstrating that the combination of edges and keypoints can track the target without any joint-encoder readings, with an average palm position error of 2.5 cm.
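As a toy stand-in for the fine refinement stage described above, the sketch below refines the joint angles of a planar 2-link arm so that its fingertip keypoint matches an observed 2D keypoint, using Gauss-Newton on the forward-kinematics residual. The arm model, link lengths, and function names are invented for illustration and are not from the paper:

```python
import numpy as np

def fingertip(q, l1=0.4, l2=0.3):
    """Forward kinematics: 2D fingertip position of a planar 2-link arm."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=0.4, l2=0.3):
    """Analytic Jacobian of fingertip position w.r.t. the joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def refine(q0, keypoint, iters=20):
    """Gauss-Newton refinement of a coarse joint-state estimate toward an
    observed fingertip keypoint."""
    q = np.asarray(q0, float).copy()
    for _ in range(iters):
        r = fingertip(q) - keypoint
        J = jacobian(q)
        q -= np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), J.T @ r)
    return q
```

The coarse stage (here, the initial guess `q0`) only needs to land in the basin of attraction; the fine stage then pulls the state onto the observation.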

4、CRAVES: Controlling Robotic Arm with a Vision-based Economic System (CVPR 2019)

Training robotic arms to complete real-world tasks has attracted growing attention from both academia and industry. This paper discusses the role of computer vision algorithms in this field and focuses on low-cost arms without sensors, so that all decisions are based on visual recognition, e.g., real-time 3D pose estimation. However, this normally requires annotating large amounts of training data, which is both time-consuming and laborious. For this reason, the paper proposes a new solution: use a 3D model to generate abundant synthetic data, train a vision model in this virtual domain, and apply it to real images after domain adaptation. To this end, the authors design a semi-supervised approach that fully exploits the geometric constraints among keypoints and optimizes with an iterative algorithm. The algorithm requires no annotation of real images, generalizes well, and achieves good 3D pose estimation results on two real datasets. The paper also builds a vision-based control system for task completion, training a reinforcement learning agent in a virtual environment and then applying it in the real world.
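One concrete geometric constraint among arm keypoints is that the link length between adjacent joints is fixed. The sketch below is a loose illustration of exploiting such constraints, not the paper's actual optimization: it iteratively projects noisy keypoint predictions back onto known link lengths, splitting each correction between the two endpoints:

```python
import numpy as np

def enforce_lengths(kps, bones, iters=100):
    """Iteratively project predicted keypoints onto fixed-link-length
    constraints.  kps: (N, 3) keypoint array; bones: list of (i, j, length)."""
    kps = np.asarray(kps, float).copy()
    for _ in range(iters):
        for i, j, L in bones:
            d = kps[j] - kps[i]
            dist = np.linalg.norm(d)
            corr = 0.5 * (dist - L) * d / dist   # split correction between ends
            kps[i] += corr
            kps[j] -= corr
    return kps
```

Each pass satisfies one constraint exactly while slightly disturbing its neighbors, and the sweep converges geometrically for a kinematic chain.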

5、Robot Arm Pose Estimation by Pixel-wise Regression of Joint Angles (ICRA)

Precise vision-based control with a robotic arm requires good hand-eye coordination. However, knowing the arm's current configuration can be difficult due to noisy joint-encoder readings or inaccurate hand-eye calibration. This paper proposes a robot arm pose estimation method that takes a depth image of the arm as input and directly estimates the joint angles. It is a frame-by-frame method that depends neither on a good initialization from the previous frame's solution nor on knowledge from the joint encoders. For the estimation, the paper uses a random regression forest trained on synthetically generated data. The authors compare different training objectives for the forest and analyze how a prior segmentation of the arm affects accuracy. Experiments show that the method improves on previous work in both computational complexity and accuracy. Although trained only on synthetic data, the estimator also works on real depth images.
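As a rough, self-contained illustration of the random-regression-forest idea (not the paper's pixel-wise depth-feature formulation), the sketch below trains a tiny bagged ensemble of randomized regression trees to map invented "depth features" of a single joint angle back to the angle; all names and the feature choice are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def build_tree(X, y, depth=5, min_leaf=5):
    """Greedy regression tree; split thresholds drawn from random candidates."""
    if depth == 0 or len(y) <= min_leaf or np.ptp(y) < 1e-9:
        return float(y.mean())                    # leaf: mean target value
    best = None
    for f in range(X.shape[1]):
        for thr in rng.choice(X[:, f], size=min(8, len(y)), replace=False):
            left = X[:, f] < thr
            nl = left.sum()
            if nl == 0 or nl == len(y):
                continue
            sse = ((y[left] - y[left].mean()) ** 2).sum() \
                + ((y[~left] - y[~left].mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, f, thr)
    if best is None:
        return float(y.mean())
    _, f, thr = best
    m = X[:, f] < thr
    return (f, thr,
            build_tree(X[m], y[m], depth - 1, min_leaf),
            build_tree(X[~m], y[~m], depth - 1, min_leaf))

def predict_tree(node, x):
    while isinstance(node, tuple):
        f, thr, lo, hi = node
        node = lo if x[f] < thr else hi
    return node

def fit_forest(X, y, n_trees=20):
    """Bagged ensemble of randomized regression trees."""
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))     # bootstrap resample
        trees.append(build_tree(X[idx], y[idx]))
    return trees

def predict_forest(trees, x):
    return float(np.mean([predict_tree(t, x) for t in trees]))
```

Training on synthetic data only, as in the paper, is what makes such a regressor cheap to supervise; the forest averages out the variance of the individual randomized trees.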

Origin blog.csdn.net/qq_29462849/article/details/104335625