Python robot programming: differential-drive AGV, vision-based tracking with predictive control, automatic driving (Part 2)

1. Introduction

Based on recent tests, a rough control algorithm has been obtained. It handles single-line tracks and sharp turns well, and with minor improvements it should also be suitable for monocular-vision tracking of more complex trajectories such as intersections. This article completes the unfinished part of "python robot programming: control of a differential robot car, control model, trajectory tracking, trajectory planning, automatic parking (middle)", and also presents a relatively reliable new idea, which we tentatively name "predictive automatic differential-car tracking control based on the deviation between the trajectory centroid and the road centroid".
[Figure: simulation screenshot of the car taking a sharp turn]
As shown in the figure above, the algorithm can automatically negotiate a sharp turn of more than 90 degrees at a speed of about 360 m/h (0.1 m/s). Although the driven line is not ideal, we believe this method is quite feasible. The method is described in detail below.

2. Predictive automatic differential-car tracking control strategy based on the deviation between the trajectory centroid and the road centroid

In this method, the differential car is still controlled with the predictive control approach mentioned earlier. At each step of size dt, the program takes the latest camera image and predicts the car's future trajectory while sweeping the candidate left-wheel speed up and down within a boundary range; the wheel-speed setting whose predicted trajectory centroid is closest to the centroid of the track extracted from the image is taken as the optimal speed setting for the current step. The pseudocode of the algorithm is as follows:

    while True:
        con_Hz = con_Hz + 1   # accumulate the prediction-interval counter
        read the camera image and process it (see Section 3) to obtain the track centroid image coordinates cu, cv
        if the accumulated interval reaches the predictive-control step (con_Hz corresponds to 300 ms):
            con_Hz = 0
            enter the predictive-control module best_move(processed track image, current speed state clv, track centroid):
                define the best speed setting bestv = clv
                define the prediction speed increment dv = 0.001
                define the prediction boundary range vmax = 10
                for i in vmax:  # predict vmax increments of dv in the increasing-speed direction
                    left-wheel speed sl_n = current speed + i*dv
                    compute the right-wheel speed from the constant center speed
                    compute the trajectory 2.5 s (adjustable) into the future at this speed and store it in the pu, pv lists
                    convert the predicted trajectory to image coordinates using the camera intrinsics
                    compute the centroid image coordinates of the predicted trajectory (cpu, cpv)
                    compute the centroid distance dei = sqrt((cpu-cu)**2 + (cpv-cv)**2)
                    if dei < de:
                        update the error de = dei
                        update bestv = sl_n
                for i in vmax:  # predict vmax decrements of dv in the decreasing-speed direction
                    left-wheel speed sl_n = current speed - i*dv
                    compute the right-wheel speed from the constant center speed
                    compute the trajectory 2.5 s (adjustable) into the future at this speed and store it in the pu, pv lists
                    convert the predicted trajectory to image coordinates using the camera intrinsics
                    compute the centroid image coordinates of the predicted trajectory (cpu, cpv)
                    compute the centroid distance dei = sqrt((cpu-cu)**2 + (cpv-cv)**2)
                    if dei < de:
                        update the error de = dei
                        update bestv = sl_n
                return bestv  # return the optimal control speed setting
        send the current optimal speed setting to the car
        proceed to the next loop iteration

The above is the main algorithm description.
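To make the flow concrete, below is a minimal Python sketch of the outer loop. The helpers get_camera_image, process_track_image and send_wheel_speeds are hypothetical hooks into the simulation environment (they are not part of the original code); best_move is the function listed in Section 4, and the right-wheel relation assumes the center speed is the average of the two wheel speeds.

import time

DT_CONTROL = 0.3   # predictive-control step from the pseudocode (300 ms)

def control_loop(get_camera_image, process_track_image, send_wheel_speeds,
                 sl=0.1, vc=0.1):
    """Outer loop of the predictive tracking controller (sketch).

    The three callbacks are assumed hooks into the simulation; best_move
    is the function from Section 4 of this article.
    """
    last_predict = time.time()
    while True:
        img = get_camera_image()                        # grab the current frame
        path_img, (cu, cv) = process_track_image(img)   # Section 3: mask, binarize, centroid
        if time.time() - last_predict >= DT_CONTROL:    # prediction step reached
            last_predict = time.time()
            # choose the left-wheel speed whose predicted trajectory centroid
            # is closest to the track centroid (cu, cv)
            sl = best_move(sl, path_img, vc=vc, dt=0.5, barycenter=(cu, cv))
            # right wheel keeps the center speed constant, assuming
            # vc = (left + right) / 2 for the differential drive
            send_wheel_speeds(sl, 2 * vc - sl)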
[Figure: simulation view with the processed path (yellow), track centroid (blue) and predicted trajectory (red)]
Running this algorithm in the simulation environment for a long time, the car shows no major deviation from the track. As shown in the figure above, in the black image in the middle, the yellow line is the processed path, the blue point is the centroid of the path, and the red line is the predicted future trajectory at the best speed setting found within the current boundary. That trajectory's centroid stays closest to the blue path centroid, and the car always runs under this strategy.
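The overlay in the middle image can be reproduced with a few OpenCV drawing calls. The sketch below assumes the processed binary path image, the path centroid (cu, cv) and the predicted trajectory points (pu, pv) from the algorithm above; the colors simply mirror the description of the figure.

import cv2
import numpy as np

def draw_debug_overlay(path_binary, cu, cv, pu, pv):
    """Draw the path (yellow), path centroid (blue) and predicted trajectory (red)."""
    canvas = np.zeros((*path_binary.shape[:2], 3), dtype=np.uint8)   # black background
    canvas[path_binary > 0] = (0, 255, 255)                          # yellow path pixels (BGR)
    cv2.circle(canvas, (int(cu), int(cv)), 4, (255, 0, 0), -1)       # blue centroid
    pts = np.array(list(zip(pu, pv)), dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(canvas, [pts], isClosed=False, color=(0, 0, 255), thickness=2)  # red prediction
    return canvas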

3. Main points of trajectory image processing

For the acquired trajectory image, the main processing steps are as follows (a sketch of steps 2 to 4 is given after the centroid example below):

  1. Get the current camera image img.
  2. Apply a color-selection step so that only the yellow track information remains, giving the image bimg.
  3. Binarize bimg so that track pixels become 1, obtaining the image binary.
  4. Remove the pixels belonging to the car itself (the small lower part of the image) from binary to get the new binary.
  5. Compute the centroid cu, cv of the current track, which is fed to the predictive-control module.
    A code example of the image centroid calculation is as follows:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the binary image
img = cv2.imread('binary_image.png', 0)

# Calculate the moments of the image
M = cv2.moments(img)

# Calculate the centroid
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])

# Print the centroid coordinates
print(f'Centroid: ({cx}, {cy})')

# Draw a circle at the centroid
img_with_centroid = cv2.circle(img.copy(), (cx, cy), 5, (255, 0, 0), -1)

# Display the image with the centroid
plt.imshow(img_with_centroid, cmap='gray')
plt.show()
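For completeness, here is a minimal sketch of steps 2 to 4 (color selection, binarization and masking out the car region). The HSV thresholds for yellow and the size of the masked region are assumptions chosen for illustration and would need tuning for the actual simulation camera.

import cv2
import numpy as np

def extract_track_binary(img, car_mask_rows=60):
    """Return a binary image of the yellow track with the car region removed (sketch)."""
    # Step 2: keep only the yellow track by thresholding in HSV space
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_yellow = np.array([20, 80, 80])     # assumed lower HSV bound for yellow
    upper_yellow = np.array([35, 255, 255])   # assumed upper HSV bound for yellow
    bimg = cv2.inRange(hsv, lower_yellow, upper_yellow)

    # Step 3: binarize so that track pixels are 1 and everything else is 0
    binary = (bimg > 0).astype(np.uint8)

    # Step 4: remove the pixels of the car itself in the lower part of the image
    binary[-car_mask_rows:, :] = 0
    return binary

Passing binary (or binary * 255) to cv2.moments gives the same centroid as in the example above, since the centroid is unaffected by the intensity scale.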

4. Part of the core control-strategy Python code in this article

import math
import numpy as np

# predicate() (trajectory prediction) and realxy2uv() (world-to-image conversion)
# are assumed to be available from the previous articles in this series.

def best_move(sl, Pathimg, vc=0.1, dt=0.5, barycenter=(256, 256)):
    # control increment for the left-wheel speed
    dvl = 0.001
    # candidate control value to be evaluated
    sl_n = sl
    # best control value found so far
    best_sl = sl
    mod = "G"
    # smallest centroid distance found so far (initialised large)
    de = 500
    dei = de + 100
    Gbest_sl = sl

    for i in range(10):  # decrease the control value step by step
        sl_n = sl_n - i * dvl
        # call the prediction function to get the predicted pose sequence
        xx, yy, tt = predicate(0, 0, math.pi / 2, sl_n, step=5, dt=dt, vc=vc, dspeed=True)
        # convert the predicted car coordinates to image coordinates
        pv = []
        pu = []
        for k in range(len(xx)):
            puv1 = realxy2uv(xx[k], yy[k], Pathimg, xymax=0.5, zero="self_define", dzero=(256, 286))
            pu.append(puv1[0])
            pv.append(puv1[1])

        # centroid of the predicted trajectory in image coordinates
        cpu = np.average(pu)
        cpv = np.average(pv)
        # distance between the predicted-trajectory centroid and the track centroid
        dei = np.sqrt((barycenter[0] - cpu) ** 2 + (barycenter[1] - cpv) ** 2)
        if dei < de:
            de = dei
            Gbest_sl = sl_n

    sl_n = sl
    for i in range(10):  # increase the control value step by step
        sl_n = sl_n + i * dvl
        xx, yy, tt = predicate(0, 0, math.pi / 2, sl_n, step=5, dt=dt, vc=vc, dspeed=True)

        # convert the predicted car coordinates to image coordinates
        pv = []
        pu = []
        for k in range(len(xx)):
            puv1 = realxy2uv(xx[k], yy[k], Pathimg, xymax=0.5, zero="self_define", dzero=(256, 286))
            pu.append(puv1[0])
            pv.append(puv1[1])

        cpu = np.average(pu)
        cpv = np.average(pv)
        dei = np.sqrt((barycenter[0] - cpu) ** 2 + (barycenter[1] - cpv) ** 2)

        if dei < de:
            de = dei
            Gbest_sl = sl_n
            print("Gbest_sl:", Gbest_sl)

    # return the optimal control value
    if mod == "G":
        return Gbest_sl
    else:
        return best_sl
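A hypothetical call, using the path binary and the centroid (cu, cv) produced in Section 3; the right-wheel relation again assumes the center speed is the average of the two wheel speeds.

# sl: current left-wheel speed, vc: constant center speed (both 0.1 m/s here)
sl = best_move(0.1, path_binary, vc=0.1, dt=0.5, barycenter=(cu, cv))
sr = 2 * 0.1 - sl   # right-wheel speed that keeps the center speed at vc (assumed model)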

5. Conclusion

In follow-up work, the handling of complex trajectories still needs to be improved. If you need the simulation environment and source code for this article, you can follow the official account or send a private message.
[Figure: automatic driving on a different test path]
Switching to another path, we can see that the robustness of this method is still good. Even when the road marking is partially damaged, as long as the damage is roughly uniform, the automatic driving still performs well:
[Figure: automatic driving on a path with damaged road markings]

Origin blog.csdn.net/kanbide/article/details/131329963