A practical application of color identification and tracking in unmanned driving

In the previous article we covered the camera's color recognition in detail, along with some common OpenCV knowledge points. Here we will apply color recognition to a concrete unmanned-driving task.

If you are interested, you can first watch a video I shot: Unmanned vehicles recognize colors and track them  

From the video we can see that the unmanned vehicle follows a preset color, including making turns. So how does the vehicle recognize the color and track it? How is the turning done? The code explains it best, so let's look at it first:

1. Unmanned driving code

from jetbotmini import Camera
from jetbotmini import bgr8_to_jpeg
import cv2
import numpy as np
import torch
import torchvision
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbotmini import Robot

# Instantiate the camera
camera = Camera.instance(width=300, height=300)

# Lower and upper HSV bounds for blue
color_lower = np.array([100, 43, 46])
color_upper = np.array([124, 255, 255])

# Initialize the vehicle's drive-motor instance
robot = Robot()

# Widgets: image display, speed slider, turn-gain slider
image_widget = widgets.Image(format='jpeg', width=300, height=300)
speed_widget = widgets.FloatSlider(value=0.4, min=0.0, max=1.0, description='speed')
turn_gain_widget = widgets.FloatSlider(value=0.5, min=0.0, max=2.0, description='turn gain')
display(widgets.VBox([widgets.HBox([image_widget]), speed_widget, turn_gain_widget]))

width = int(image_widget.width)
height = int(image_widget.height)

def execute(change):
    # ---- Image processing and color detection (covered in detail in the previous article) ----
    frame = camera.value
    frame = cv2.resize(frame, (300, 300))
    frame_ = cv2.GaussianBlur(frame, (5, 5), 0)       # denoised copy for thresholding
    hsv = cv2.cvtColor(frame_, cv2.COLOR_BGR2HSV)     # threshold the blurred copy, display the original
    mask = cv2.inRange(hsv, color_lower, color_upper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    mask = cv2.GaussianBlur(mask, (3, 3), 0)
    # [-2] picks the contour list in both OpenCV 3 and OpenCV 4 return formats
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    # ------------------------------------------------------------------------------------------

    # Target detected
    if len(cnts) > 0:
        cnt = max(cnts, key=cv2.contourArea)
        (color_x, color_y), color_radius = cv2.minEnclosingCircle(cnt)
        if color_radius > 10:
            # Mark the detected color with a circle
            cv2.circle(frame, (int(color_x), int(color_y)), int(color_radius), (255, 0, 255), 2)
            # Offset from the image center decides the turn, normalized to [-1, 1]
            center_x = (150 - color_x) / 150
            robot.set_motors(
                float(speed_widget.value + turn_gain_widget.value * center_x),
                float(speed_widget.value - turn_gain_widget.value * center_x)
            )
    # No target detected: stop
    else:
        robot.stop()
    # Push the annotated frame to the Image widget
    image_widget.value = bgr8_to_jpeg(frame)

We initialize the camera and define a method that detects the color. Besides recognizing the color, we also compute the target's horizontal offset from the center of the frame and use it to set different speed values for the left and right motors: this is why the unmanned vehicle can turn. The deviation of the target's center from the image center, scaled by the turn gain, is added to one motor's speed and subtracted from the other's, producing a speed differential.
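The differential-speed idea can be isolated into a small helper. `diff_drive` and `clamp` below are hypothetical names of my own; the formula mirrors the one in the code above, and the clamping is an addition to keep values in the speed slider's [0, 1] range (the original code passes the raw values to `set_motors`):

```python
def diff_drive(speed, turn_gain, color_x, frame_width=300):
    """Map a detected target's x position to left/right motor speeds.

    The normalized offset of the target from the image center is added
    to one motor's speed and subtracted from the other's.
    """
    # Normalized offset in [-1, 1] (0 when the target is dead center)
    offset = (frame_width / 2 - color_x) / (frame_width / 2)
    left = speed + turn_gain * offset
    right = speed - turn_gain * offset
    # Clamp to the slider's [0, 1] range (an addition over the original code)
    clamp = lambda v: max(0.0, min(1.0, v))
    return clamp(left), clamp(right)

# Target dead center: both motors run at base speed, so the vehicle goes straight
print(diff_drive(0.4, 0.5, 150))  # (0.4, 0.4)
```

With the target at an edge of the frame, one motor saturates while the other slows, giving the sharpest turn the gain allows.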
Next we call this method once: execute({'new': camera.value}) .
Of course, a single call only processes one frame. What we need is to process the camera image in real time, so the vehicle can track a changing scene.
We use the observe method for real-time processing:

camera.unobserve_all()
camera.observe(execute, names='value')

In this way, the blue target is detected in real time: if a target is detected, the vehicle follows it; if not, the vehicle stops.
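The heart of the mask step is a per-pixel range test. To make it concrete without a camera, here is a simplified NumPy stand-in for `cv2.inRange` (my own sketch, not OpenCV's actual implementation), using the blue bounds from the code above:

```python
import numpy as np

color_lower = np.array([100, 43, 46])
color_upper = np.array([124, 255, 255])

def in_range(hsv_img, lower, upper):
    """Like cv2.inRange: 255 where every channel is within bounds, else 0."""
    within = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=-1)
    return (within * 255).astype(np.uint8)

# A 1x2 "image": one blue pixel (H=110) and one red-ish pixel (H=0)
hsv = np.array([[[110, 200, 200], [0, 200, 200]]])
print(in_range(hsv, color_lower, color_upper))  # blue pixel -> 255, red pixel -> 0
```

Erosion and dilation then clean small speckles out of this binary mask before contours are extracted.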

2. Stopping the unmanned vehicle

Although the above code automatically stops the unmanned vehicle when no target is detected, sometimes we also want to force it to stop, as follows:

import time
camera.unobserve_all()
time.sleep(1.0)
robot.stop()

Overall, this code is similar to the color-recognition chapter. The difference is the unmanned vehicle itself, or more precisely its two wheels. They are differential wheels: each has its own motor drive, so their speeds can differ (equal speeds mean going straight), which makes turning, drifting, and similar maneuvers possible. The left and right motor speeds are updated according to the center position of the recognized color, which produces the tracking effect.

For a more detailed introduction to driving the vehicle, see: Jetson Nano drives the robot's left and right motors

3. camera.observe

observe(handler, names=traitlets.All, type='change')

This registers a handler that is called when the trait changes. The handler is a callback function, here the execute function defined above, invoked as handler(change), where change is a dictionary. That is why execute is called with a dictionary argument such as {'new': camera.value}. When type is 'change', the dictionary has the following keys:

owner: the HasTraits instance that owns the trait
old: the old value of the modified trait attribute
new: the new value of the modified trait attribute
name: the name of the modified trait attribute

So the effect is similar to an infinite loop: every new camera frame triggers the handler and updates the display.
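To make the shape of that change dictionary concrete without a camera, here is a minimal pure-Python stand-in for the traitlets notification mechanism (the `Observable` class is my own sketch; the real `Camera` object uses traitlets internally):

```python
class Observable:
    """Tiny sketch of traitlets-style change notification."""
    def __init__(self, value):
        self._value = value
        self._handlers = []

    def observe(self, handler):
        self._handlers.append(handler)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old = self._value
        self._value = new
        # Each assignment fires every handler with a change dict,
        # mirroring the keys traitlets passes: owner, old, new, name
        change = {'owner': self, 'old': old, 'new': new, 'name': 'value'}
        for h in self._handlers:
            h(change)

cam = Observable(value=0)
seen = []
cam.observe(lambda change: seen.append((change['old'], change['new'])))
cam.value = 1   # simulates a new camera frame arriving
cam.value = 2
print(seen)  # [(0, 1), (1, 2)]
```

In the real system, the camera thread assigns each new frame to `camera.value`, so `execute` runs once per frame for as long as the handler stays registered.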

4. camera.unobserve_all

camera.unobserve_all(name=traitlets.All)

This removes trait-change handlers of any type with the specified name; if no name is given, it removes all trait notifiers. In plain terms, it closes the video stream and releases its resources.
That is why we call it before camera.observe (to clear any old handlers) and before robot.stop() (to release resources and halt the unmanned vehicle).

5. CSI camera

Here is an additional note about the camera. With the development of artificial intelligence, applications such as autonomous driving and smart homes are inseparable from cameras, and a low-power, low-cost, high-definition camera is especially important. The cameras you usually see have a USB interface, but this one uses the CSI interface protocol, as shown in the figure:

As you can see, it connects with a 15-pin ribbon cable rather than the common USB interface. CSI (Camera Serial Interface) is a high-speed serial interface between the host processor and the camera module. Its key advantage is low power consumption, which is also what made high-definition cameras practical in mobile phones.
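On the Jetson Nano, a CSI camera is typically opened through a GStreamer pipeline rather than a plain device index. The sketch below builds the commonly used `nvarguscamerasrc` pipeline string (the `gst_pipeline` helper and its parameter values are illustrative; I assume the jetbotmini `Camera` class wraps something similar internally):

```python
def gst_pipeline(width=300, height=300, fps=30, flip=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera.

    nvarguscamerasrc captures from the CSI port, nvvidconv converts the
    NVMM buffer, and videoconvert hands OpenCV a plain BGR frame.
    """
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

# On a Jetson, this string would be passed to OpenCV:
# cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
print(gst_pipeline())
```

This is also why a CSI camera does not show up the way a USB webcam does: the capture path goes through the Jetson's hardware ISP instead of a generic video device.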

Origin blog.csdn.net/weixin_41896770/article/details/131778946