Investigation and extended introduction of close-range distance sensors

The goals I want to achieve (there are two: the first is distance detection; the second is detecting the edge position of the already-palletized boxes):

1. Install sensors on three sides of the palletizing robot to sense the distance to the front, left, and right. (The environment: the robot must drive into a large shipping container at the terminal and stack goods inside it. The container's inner walls have a sawtooth-like corrugated structure, so single-point sensing cannot be used for the left and right distances; at least a line laser is needed. At the front, what must be measured is the distance to the boxes that have already been palletized.)

2. Find the edge position of the partially palletized boxes to guide position correction in the next palletizing step.

--------------------- 1. Research on the first question:

I first communicated with Weijing Company; they recommend using the TOF principle to sense the left and right distances:

(1) TOF: TOF is one of the 3D depth camera schemes, a sibling of structured light. TOF is short for "time of flight", so the approach is also called the time-of-flight ranging method.

The principle is to illuminate the target and measure the travel time of the light between the lens and the object. From that time we can judge how far each object in the scene is, which yields a depth map; from the depth map a stereoscopic image can be rendered, achieving 3D depth sensing.
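In code, the core of time-of-flight ranging is a one-line formula: with round-trip time t and the speed of light c, the distance is d = c·t/2. A minimal sketch (the function name is my own):

```python
# Time-of-flight ranging: distance from the round-trip travel time of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance in metres given the measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(round(tof_distance(10e-9), 3))
```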

(2) Classification of TOF light sources:

Different types of TOF cameras use different light sources, including LEDs and lasers.

(3) Introduction:

The core component of a TOF 3D module is the TOF chip, which integrates many functions: driving the projector, receiving the reflected light, and generating the raw image, which software then processes into depth information.
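The raw-to-depth step varies by TOF type. For continuous-wave (indirect) TOF chips (an assumption about the chip type; the source does not specify), depth is typically recovered from the phase shift of the modulated light, d = c·Δφ/(4π·f_mod). A sketch of that relationship:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift of amplitude-modulated light (indirect ToF)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum depth before the phase wraps past 2*pi."""
    return C / (2.0 * mod_freq_hz)

# At 20 MHz modulation the unambiguous range is about 7.5 m;
# a phase shift of pi corresponds to half of that.
print(round(unambiguous_range(20e6), 2))
print(round(cw_tof_depth(math.pi, 20e6), 2))
```

Direct-TOF chips instead time each pulse, as in the basic formula above; the phase method trades timing precision for a limited unambiguous range.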

(4) TOF and structured light comparison

  • The working distance of TOF is much longer than that of structured light, so TOF is better suited to a phone's rear camera for somatosensory games.

       The working distance of structured light is very short, so it is generally used in a phone's front camera for face recognition, as in the iPhone X.

  • TOF emits a surface (flood) light source rather than speckles or coded patterns, so the light signal suffers no large attenuation within a certain distance. Combined with the back-illuminated, large-pixel design of TOF chips, the light-collection rate and ranging speed make long-distance applications possible. This is one reason TOF can serve as a phone's rear camera while structured light cannot.

       3D structured light projects speckles or coded patterns, and the receiving module must capture the pattern clearly to compute depth. As distance increases, the projected pattern blurs or its brightness attenuates, yielding incomplete depth maps, holes, or outright failure. 3D structured light is therefore unsuitable for long-distance depth capture.

❓ A ? in the tables below marks an entry whose reasoning I do not yet fully understand.

|  | TOF | Structured light |
| --- | --- | --- |
| Basic principle | Time difference of reflected infrared light | Single camera plus projected fringe/speckle coding |
| Response time | Fast | Slow |
| Low-light performance | Good (infrared laser) | Good, depending on the light source |
| Bright-light performance | Medium | Weak |
| Depth accuracy | Low | Medium |
| Resolution | Low | Medium |
| Recognition distance | Medium (1-10 m), limited by light-source intensity | Short, affected by the speckle pattern |
| Software complexity | Medium | Medium |
| Material cost | Medium | High |
| Power consumption | Low | Medium |
| Disadvantage | Good overall performance, but low planar resolution ? | Easily affected by ambient light ? |

The following is a comparison of 3D vision solutions:

3D vision scheme comparison

|  | 3D structured light | TOF | Binocular vision |
| --- | --- | --- | --- |
| Basic principle | Speckle structured light | Time of flight | Parallax (disparity) algorithm |
| Light source | ~15,000 speckles | Uniform surface light source | None (passive) |
| Working distance | 0.2 m to 1.2 m | 0.4 m to 5 m | ≤ 2 m |
| Depth accuracy ? | High (error 0.1%-0.5%) | Medium (error 0.5%-1%) | Poor (error 5%-10%) |
| XY resolution ? | High | Low | Medium |
| Low-light performance | Good | Good | Poor |
| Outdoor (daylight) performance | Poor | Medium | Good |
| Power consumption | Medium | Medium | High |
| Typical applications | Face recognition, face payment | 3D modeling, AR applications, somatosensory games | Background blur (bokeh) |

What "active" and "passive" mean:

First, an empirical conclusion offered without verification (correctness in every scenario is not guaranteed): every positioning or obstacle-avoidance solution we currently use inevitably relies on waves. Extending this, if we want to observe a physical object without touching it (and even touch ultimately reduces to signals such as EEG and ECG), only waves seem able to meet the need. Active means the device itself emits waves, receives the returning waves, and analyzes them (time, pattern, phase, carried information, ...) to obtain detection or relative-position information. Passive means emitting nothing and only receiving the waves already present in the environment (as cameras do) to obtain the relevant information.

-------Extension-----

What we usually use should be:

  • Ultrasonic Ranging
  • Millimeter wave radar
  • Lidar
  • Solid state radar
  • RGBD camera
  • Binocular camera (covered above)
  • Monocular camera
  • TOF time of flight (covered above)
  • Triangulation ranging
  • Structured light (covered above)

Although these terms often appear together, in practice it is often unclear what the technical details of a given solution actually are. For example: which radar does a robot vacuum use, and what technology does that radar rely on?

Whether for obstacle avoidance or localization, we cannot do without the relative position between the measured object and the measuring device. From the relative positions of the object's points we can recover the object's overall location information and even reconstruct its three-dimensional structure.

(1) TOF time-of-flight ranging method

From junior-high physics: since the speed of the detectable wave is known, the distance to the object follows directly from the measured time. Many devices use this scheme, and depending on the type or wavelength of the wave used, there are different implementations: ultrasonic, millimeter-wave radar, lidar, and so on.

  • Ultrasound

The ultrasonic transmitter emits a pulse in a given direction and a timer starts at the moment of transmission. The wave propagates through the air and is reflected back when it meets an obstacle; the receiver stops the timer the instant it picks up the echo. Taking the propagation speed of ultrasound in air as 340 m/s, the distance s between the launch point and the obstacle follows from the recorded time t: s = 340·t/2.
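The formula above can be sketched directly (names are mine):

```python
# Ultrasonic ranging: s = 340 * t / 2.
SPEED_OF_SOUND = 340.0  # m/s in air, the value used in the text

def ultrasonic_distance(t_s: float) -> float:
    """Distance to the obstacle given the echo round-trip time t in seconds."""
    return SPEED_OF_SOUND * t_s / 2.0

# An echo returning after about 59 ms puts the obstacle at roughly 10 m.
print(round(ultrasonic_distance(0.0588), 2))
```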

| Technical key point | Typical value (range may vary with technical details) |
| --- | --- |
| Detection distance | Mostly within 10 m; some reach 100 m |
| Accuracy | Centimeter level; angular resolution very low, tens of degrees |
| Cost | Very low, under 100 yuan |
| Defect | Needs a reasonably flat reflecting surface (fewer applicable scenes) ? |
| Defect | Slow: sound travels at only 340 m/s |
| Defect | Poor adaptability to environments such as dust |
  • Millimeter wave radar

Millimeter-wave radar is radar that operates in the millimeter-wave band, generally meaning electromagnetic waves from 30 to 300 GHz (wavelengths of 1 to 10 mm). Millimeter-wave radars divide into long-range radar (LRR) and short-range radar (SRR). Because millimeter waves attenuate only weakly in the atmosphere, they can detect objects at longer distances; long-range radar can sense beyond 200 m. These advantages have earned millimeter-wave radar a large share of automotive collision-avoidance sensing. Mainstream vehicle-mounted millimeter-wave radars on the market fall into two frequency classes: 24 GHz and 77 GHz. The 24 GHz radars cover short to medium ranges and implement BSD (Blind Spot Detection), while the 77 GHz long-range radars implement ACC (Adaptive Cruise Control).

PS: The main limitations of millimeter-wave radar are: attenuation in high-humidity conditions such as rain, fog, and wet snow; high-power devices and insertion loss, which reduce detection range; poor penetration of trees (low penetration into dense foliage compared with microwave); high component cost; and relatively demanding processing accuracy.
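The text does not say which modulation these radars use, but automotive millimeter-wave radar is commonly FMCW (frequency-modulated continuous wave), where range is derived from the beat frequency between the transmitted and received chirps: R = c·f_b·T/(2B). A minimal sketch under that assumption (function names are mine):

```python
C = 299_792_458.0  # speed of light in m/s

def fmcw_range(beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    """Target range from the FMCW beat frequency: R = c * f_b * T / (2 * B)."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

def fmcw_range_resolution(bandwidth_hz: float) -> float:
    """Range resolution of an FMCW radar: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# Sweeping 1 GHz of bandwidth resolves about 0.15 m; a 1 MHz beat on a
# 40 us chirp over that bandwidth corresponds to roughly 6 m.
print(round(fmcw_range_resolution(1e9), 2))
print(round(fmcw_range(1e6, 40e-6, 1e9), 2))
```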

| Technical key point | Typical value (range may vary with technical details) |
| --- | --- |
| Detection distance | Up to 200 m; typically around 10 m |
| Frequency band | 24 GHz (civil); 60 GHz; 77 GHz (automotive) |
| Accuracy | Centimeter level |
| Cost | Low, 1,000 yuan or less |
| Advantage | Unaffected by light and dust |
| Defect | Large transmission loss; easily absorbed by the human body, etc. |
| Defect | Low angular resolution, above 3 degrees |
  • Lidar (the waves above were sound and radio; here the wave is light)

Lidar is currently the most important sensor in autonomous driving. The principle: the laser fires a pulse and a timer records the emission time; the returning light is received and the timer records the return time. Subtracting the two gives the light's "time of flight", and since the speed of light is constant, the distance is easily computed from speed and time.

PS: In a TOF scheme, distance measurement rests on time measurement. But light is extremely fast, so obtaining a precise distance places very high demands on the timing system. One data point: for a lidar to resolve 1 cm of distance, the corresponding time span is about 65 ps, which is part of why lidars are expensive.
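That figure is easy to check: the round-trip time corresponding to a range resolution Δr is t = 2·Δr/c. A quick sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def round_trip_time(range_resolution_m: float) -> float:
    """Round-trip time span corresponding to a given range resolution, in s."""
    return 2.0 * range_resolution_m / C

# 1 cm of range corresponds to roughly 67 ps of round-trip time,
# in line with the ~65 ps figure quoted in the note above.
print(round(round_trip_time(0.01) * 1e12, 1))
```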

     Another important lidar specification is the number of scan lines. Classified by line count, common models include 1-line, 4-line, 16-line, 32-line, and 64-line units.

     In a single-line lidar, one laser emitter rotates at constant speed inside the unit, firing once per small angular step; after sweeping through the full angle it produces one complete frame of data. The data of a single-line lidar can therefore be viewed as a row of points at the same height.

     Single-line lidar data lacks one dimension: it can only describe a line, not a surface, so it cannot recover an object's height perpendicular to the lidar's scanning plane.
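The "row of points at the same height" can be made concrete: converting one revolution of (angle, range) pairs into 2-D Cartesian points is just a polar-to-Cartesian conversion. A minimal sketch (function name and scan parameters are mine):

```python
import math

def scan_to_points(ranges_m, start_deg=0.0, step_deg=1.0):
    """Turn one revolution of single-line lidar ranges into 2-D points.

    ranges_m[i] is the distance measured at angle start_deg + i * step_deg;
    the sensor sits at the origin and the 0-degree beam points along +x.
    """
    points = []
    for i, r in enumerate(ranges_m):
        a = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Four beams at 0, 90, 180 and 270 degrees, each hitting a wall 2 m away.
pts = scan_to_points([2.0, 2.0, 2.0, 2.0], step_deg=90.0)
print([(round(x, 2), round(y, 2)) for x, y in pts])
```

All points share the same (implicit) height, which is exactly why a single-line scan describes only a horizontal slice of the scene.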

     Multi-line lidar is currently the main lidar used in autonomous driving, but it is extremely expensive. See this introduction: https://blog.csdn.net/m0_37957160/article/details/108793973

    Also, today's low-cost single-line lidars (a few hundred to a few thousand yuan on Taobao) are not based on the TOF scheme; they use triangulation ranging instead.
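Triangulation ranging works by similar triangles rather than timing: the laser spot's image shifts across the sensor as the target distance changes, giving d = f·b/x for focal length f, emitter-camera baseline b, and spot offset x. A minimal single-point sketch (symbols are generic, not taken from any particular lidar):

```python
def triangulation_distance(focal_m: float, baseline_m: float, offset_m: float) -> float:
    """Distance by similar triangles: d = f * b / x.

    focal_m    -- lens focal length
    baseline_m -- separation between the laser emitter and the camera
    offset_m   -- displacement of the laser spot on the image sensor
    """
    return focal_m * baseline_m / offset_m

# With f = 4 mm, baseline = 50 mm and a spot offset of 0.1 mm,
# the target is about 2 m away.
print(round(triangulation_distance(0.004, 0.05, 0.0001), 3))
```

Note how the offset x shrinks as distance grows, which is why triangulation lidars lose accuracy quickly at long range while remaining cheap and precise up close.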

| Technical key point | Typical value (range may vary with technical details) |
| --- | --- |
| Detection distance | 200 m |
| Frequency band | 3.846×10^14 Hz to 7.895×10^14 Hz |
| Accuracy | Millimeter level (close range) to centimeter level |
| Cost | High, varying with line count: 16-line ≈ ¥28,000 (domestic), ≈ ¥40,000 (imported) |
| Advantage | Accurate, high resolution, fast |
| Defect | Cannot work in rain, heavy fog, and similar weather |

------- Summary of 3D imaging methods: what follows is true 3D imaging, which produces a genuinely three-dimensional model of an object, not the illusory "3D" perceived through the disparity between our two eyes.

By principle, they fall into the following categories:

  • Binocular stereo vision (Stereo Vision)
  • Laser triangulation (Laser triangulation)
  • Structured-light 3D imaging (Structured light 3D imaging)
  • ToF (Time of flight)
  • Light-field imaging (Light field imaging)
  • Holography (Front-projected holographic display)

Lidar, by contrast, is not a category of 3D imaging principle but a concrete method.

Lidar 3D imaging principles include triangulation ranging, time of flight (ToF), and others.

Classified by implementation, lidars include: mechanical, hybrid solid-state, optical-phased-array solid-state, MEMS-based hybrid solid-state, FLASH solid-state, and so on.

(Structured light and lasers: a laser is a type of light source, defined by its emission mechanism. In a broad sense, any light source (laser, LED, mercury lamp, fluorescent lamp, even sunlight) that has been modulated to carry a certain structure can be called structured light.)

Origin blog.csdn.net/m0_37957160/article/details/109144817