OpenCV Experiment Box: Development and User Manual for the Robotic Arm Free Grasping Routine

Table of contents

1. Project introduction

        Background of the project:

        Camera model:

2. Robotic arm free grasping demonstration 

        Precautions for use:

         Steps:

3. Core code explanation

① Robotic arm motion driver code

② BGR image and pseudo-color depth image conversion code

③ Target following and grabbing action code


Developer: Juzheng

If you are interested in this project, feel free to comment or message me privately and join the group for discussion. You can also follow the 3D vision developer community.

1. Project introduction

        Background of the project:

        Based on the Orbbec (Obi Zhongguang) OpenCV experiment box, the project combines the box with a six-degree-of-freedom robotic arm to achieve 360-degree free grasping of objects. Two postures are defined, one for desktop grasping and one for free grasping, so that the mechanical claw can grasp the target accurately.

OpenCV experiment box set

        Camera model:

        The OpenCV experiment box is equipped with Orbbec's latest depth camera, the Gemini 2. It is very compact and easy to embed into any device, and it captures image depth information very accurately. In testing, the Gemini 2 supported object recognition and tracking at distances of more than 2 meters.

2. Robotic arm free grasping demonstration

Robotic arm free grabbing demonstration video

        Precautions for use:

        Because the length of the robotic arm is limited, when grabbing a medicine bottle freely, keep the bottle as level with the camera as possible; if it is too high or too far away, the arm cannot grab it accurately. Due to angle restrictions, the claw will slip when a large medicine bottle is grabbed from the desktop, so use small medicine bottles for desktop grabbing. Large medicine bottles can only be grabbed freely; there are no restrictions on small medicine bottles.

Robotic arm grabbing range

Scope \ Target type    Big medicine bottle                          Small medicine bottle
Height                 30–40 cm                                     30–35 cm
Distance               within a 31 cm radius centered on the arm    within a 30 cm radius centered on the arm

         Steps:

        First, locate the folder Code202308 where the routine lives. Open a terminal (Ctrl+Alt+T), enter cd Desktop/, then cd Robotic_arm_apps, and run the routine with python3 free_pickup.py.

        If the routine starts and its interface is displayed, the run is successful and the experiment can be carried out.

3. Core code explanation

        I have divided this into three parts: ① robotic arm motion driver code, ② BGR image and pseudo-color depth image conversion code, and ③ target following and grabbing action code.

① Robotic arm motion driver code

        The following code computes the servo positions for the grabbing action. x is the grabbing distance from the central copper pillar to the target, and y is the grabbing height.

import math

def step(x=None, y=None):
    pi = 3.14

    # Link lengths in mm (calibration values measured in advance)
    L1 = 105
    L2 = 110
    L3 = 110

    if x is None:
        x = int(input("x:"))
    if y is None:
        y = int(input("y:"))
    theta = math.radians(0)      # end-joint attitude (0 = end link held horizontal)

    # Position of the second joint, obtained by stepping back from the
    # target along the end link L3
    Bx = x - L3 * math.cos(theta)
    By = y - L3 * math.sin(theta)

    # Two-link inverse kinematics for the first two joints
    lp = Bx**2 + By**2                      # squared distance to the second joint
    alpha = math.atan2(By, Bx)
    tmp = (L1*L1 + lp - L2*L2) / (2*L1*math.sqrt(lp))
    if tmp < -1:                            # clamp to acos's domain
        tmp = -1
    elif tmp > 1:
        tmp = 1
    beta = math.acos(tmp)
    q1 = -(pi/2.0 - alpha - beta)
    tmp = (L1*L1 + L2*L2 - lp) / (2*L1*L2)
    if tmp < -1:
        tmp = -1
    elif tmp > 1:
        tmp = 1
    q2 = math.acos(tmp) - pi
    q3 = - q1 - q2 - pi/2                   # third joint keeps the attitude theta

    # Convert joint angles to servo steps (about 11.375 steps per degree,
    # 2047 is the servo mid position)
    step_5 = int(2047 + int(math.degrees(q1) * 11.375))
    step_4 = int(2047 + int(math.degrees(q2) * 11.375))
    step_3 = int(2047 - int(math.degrees(q3) * 11.375))

    return step_3, step_4, step_5

        L1, L2, and L3 are the link lengths (calibration values measured in advance). The end-joint attitude theta and the target coordinates (x, y) are defined next, and from them the intermediate position (Bx, By), i.e. the position of the second joint, is calculated. The solution relies on trigonometric functions together with the position and attitude of the end joint; essentially all of the subsequent steps are inverse trigonometric solutions.

        The angles q1 and q2 of the first and second joints are calculated with the inverse kinematics formulas of a two-link manipulator. The math.acos and math.atan2 functions solve the arc cosine and the (two-argument) arc tangent, and math.sqrt computes the square root.

        The angle q3 of the third joint is obtained by subtracting the first two angles from the attitude of the end joint (the predefined theta). The math.degrees function converts radians to degrees.

        The resulting step values are then sent to the servos through the underlying servo control library, and the servo attitude is returned.
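
        As a quick illustration (the example values and the check below are my own additions, not part of the routine), the sketch calls step() for a hypothetical target 150 mm in front of the copper pillar and 120 mm high, then reverses the angle-to-step conversion and applies forward kinematics with the same link lengths to confirm the returned steps land the claw back near (x, y):

# Hypothetical target: x = 150 mm, y = 120 mm
s3, s4, s5 = step(x=150, y=120)
print("servo steps:", s3, s4, s5)

# Sanity check: undo the step conversion (2047 mid position, 11.375 steps/degree)
# and run forward kinematics with L1 = 105, L2 = L3 = 110
q1 = math.radians((s5 - 2047) / 11.375)
q2 = math.radians((s4 - 2047) / 11.375)
q3 = math.radians(-(s3 - 2047) / 11.375)
x_fk = 105 * math.cos(q1 + math.pi/2) + 110 * math.cos(q1 + q2 + math.pi/2) \
       + 110 * math.cos(q1 + q2 + q3 + math.pi/2)
y_fk = 105 * math.sin(q1 + math.pi/2) + 110 * math.sin(q1 + q2 + math.pi/2) \
       + 110 * math.sin(q1 + q2 + q3 + math.pi/2)
print("reconstructed target:", round(x_fk, 1), round(y_fk, 1))   # close to (150, 120)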

② BGR image and pseudo-color depth image conversion code

        The following code retrieves the BGR image and the depth map collected by the camera and converts the depth map into a pseudo-color depth image.

ret_bgr, bgr_image = orbbec_cap.retrieve(None, cv.CAP_OBSENSOR_BGR_IMAGE)
ret_depth, depth_map = orbbec_cap.retrieve(None, cv.CAP_OBSENSOR_DEPTH_MAP)

if ret_depth:
    # Map the depth values (millimeters, up to 5000 mm) into the 0-255 range
    depth_map_8U = depth_map * 255.0 / 5000
    depth_map_8U = np.clip(depth_map_8U, 0, 255)
    depth_map_8U = np.uint8(depth_map_8U)

    # Apply the JET color map and display the pseudo-color depth image
    color_depth_map = cv.applyColorMap(depth_map_8U, cv.COLORMAP_JET)
    cv.imshow("Depth: ColorMap", color_depth_map)

        First the depth values are scaled to the 0-255 range, clipped, and cast to 8-bit; a color map is then applied and the image is displayed.
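
        For context, this excerpt runs inside a capture loop. Below is a minimal sketch of how that loop could look, assuming an OpenCV build with the Orbbec (OBSENSOR) backend; the retrieve calls and the display code are the same as in the excerpt above:

import cv2 as cv
import numpy as np

# Open the Gemini 2 through OpenCV's Orbbec backend
orbbec_cap = cv.VideoCapture(0, cv.CAP_OBSENSOR)
if not orbbec_cap.isOpened():
    raise RuntimeError("Failed to open the Orbbec camera")

while True:
    if not orbbec_cap.grab():       # grab one frame pair (color + depth)
        continue
    ret_bgr, bgr_image = orbbec_cap.retrieve(None, cv.CAP_OBSENSOR_BGR_IMAGE)
    ret_depth, depth_map = orbbec_cap.retrieve(None, cv.CAP_OBSENSOR_DEPTH_MAP)
    if ret_bgr:
        cv.imshow("BGR", bgr_image)
    if ret_depth:
        depth_map_8U = np.uint8(np.clip(depth_map * 255.0 / 5000, 0, 255))
        cv.imshow("Depth: ColorMap", cv.applyColorMap(depth_map_8U, cv.COLORMAP_JET))
    if cv.waitKey(1) == 27:         # press Esc to quit
        break

orbbec_cap.release()
cv.destroyAllWindows()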

 ③ Target following and grabbing action code

         The code here implements the core motion and grabbing actions, including the target following code and the synchronized-rotation grabbing code.

def track_arm(packetHandler, bbox, frame_size, pid_errorx, pid_errory):
    # Reference point the target is driven toward: roughly the horizontal image
    # center (plus a small offset) and about three quarters of the image height
    c = frame_size[0] + 25
    s = frame_size[1] + (frame_size[1]/2)

    centerx = bbox[0]
    centery = bbox[1]
    # Pixel error between the target center and the reference point
    stepx = int((centerx - c / 2))
    stepy = int((centery - s / 2))

    # P and I gains for the horizontal (x) and vertical (y) axes
    P_x = 0.2
    P_y = 0.3
    I_x = 0.05
    I_y = 0.06

    # Change in the error since the previous call
    errorx = stepx - pid_errorx
    errory = stepy - pid_errory

    # Current positions of the vertical-motion servo (ID 3) and the
    # horizontal-motion servo (ID 6)
    scs_present_position_3, scs_comm_result, scs_error = packetHandler.ReadPos(SCS_ID_3)
    scs_present_position_6, scs_comm_result, scs_error = packetHandler.ReadPos(SCS_ID_6)

    # Acceleration command scales with the error, clamped to the 0-255 range
    ACC_X = int(abs(P_x * stepx) + I_x * abs(errorx))
    if ACC_X > 255:
        ACC_X = 255
    elif ACC_X < 0:
        ACC_X = 0
    ACC_Y = int(abs(P_y * stepy) + I_y * abs(errory))
    if ACC_Y > 255:
        ACC_Y = 255
    elif ACC_Y < 0:
        ACC_Y = 0

    # Upper limit of servo 3's range: move servo 5 to a preset position
    if scs_present_position_3 + stepy >= 3070:
        scs_comm_result, scs_error = packetHandler.WritePosEx(SCS_ID_5, 2480, SCS_MOVING_SPEED, SCS_MOVING_ACC)
        #packetHandler.WritePosEx(SCS_ID_2, 2048, SCS_MOVING_SPEED, SCS_MOVING_ACC)
    # Lower limit: move servo 5 to another preset position
    if scs_present_position_3 + stepy <= 2550:
        scs_comm_result, scs_error = packetHandler.WritePosEx(SCS_ID_5, 2900, SCS_MOVING_SPEED, SCS_MOVING_ACC)
    else:
        # Within range: drive servos 3 and 6 toward the target with the
        # computed accelerations
        scs_comm_result, scs_error = packetHandler.WritePosEx(SCS_ID_3, scs_present_position_3 + stepy, SCS_MOVING_SPEED, ACC_Y)
        scs_comm_result, scs_error = packetHandler.WritePosEx(SCS_ID_6, scs_present_position_6 + stepx, SCS_MOVING_SPEED, ACC_X)

    # Remember the current error for the next call
    pid_errorx = stepx
    pid_errory = stepy

    return pid_errorx, pid_errory

        The above is the target following code. The principle is to keep the target fixed at the center of the image, using a PID-style controller to drive the servos and suppress servo jitter.
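
        For reference, a hedged sketch of how track_arm might be called each frame is shown below. Here detect_target() is a placeholder standing in for the routine's own detection step, the frame size is an assumed color-stream resolution, and packetHandler and the SCS_ID_* constants are set up as in the routine:

pid_errorx, pid_errory = 0, 0
frame_size = (640, 480)                 # assumed color stream resolution

while True:
    bbox = detect_target()              # placeholder: returns the target center (x, y) or None
    if bbox is None:
        continue
    pid_errorx, pid_errory = track_arm(packetHandler, bbox, frame_size,
                                       pid_errorx, pid_errory)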

        The following is the servo follow-up rotation code. The principle is to apply inverse trigonometric functions to the target's contour coordinates (taken from the mask) to find the target's rotation angle, and then send that angle to the servo for synchronous rotation.

def track_set_2(contours, x, y, w, h):
    arg = 0.0
    # Edges of the target's bounding box
    X_1 = x
    Y_1 = y
    X_2 = x + w - 1
    Y_2 = y + h - 1
    l = len(contours)
    # Find contour points lying on the left, right, top and bottom edges
    # of the bounding box
    for i in range(l - 1):
        if contours[i][0][0] == X_1:
            L1 = contours[i][0][1]     # y of a contour point on the left edge
            break
    for i in range(l - 1):
        if contours[i][0][0] == X_2:
            L2 = contours[i][0][1]     # y of a contour point on the right edge
            break
    for i in range(l - 1):
        if contours[i][0][1] == Y_1:
            R1 = contours[i][0][0]     # x of a contour point on the top edge
            break
    for i in range(l - 1):
        if contours[i][0][1] == Y_2:
            R2 = contours[i][0][0]     # x of a contour point on the bottom edge
            break
    # The angle falls either in 0--90 or 90--180 degrees
    # Tilted one way: average the angles measured at the two diagonal corners
    if R1 > R2 and h / w <= 0.65 * 56 / 22:
        arg1 = math.acos((R1 - X_1)/math.sqrt((R1 - X_1) * (R1 - X_1) + (Y_1 - L1) * (Y_1 - L1))) / math.pi * 180
        arg2 = math.acos((X_2 - R2)/math.sqrt((X_2 - R2) * (X_2 - R2) + (L2 - Y_2) * (L2 - Y_2))) / math.pi * 180
        arg = (arg1 + arg2) / 2
    # Tilted the other way
    if R1 < R2 and h / w <= 0.65 * 56 / 22:
        arg1 = math.acos((X_1 - R2) / math.sqrt((X_1 - R2) * (X_1 - R2) + (L1 - Y_2) * (L1 - Y_2))) / math.pi * 180
        arg2 = math.acos((R1 - X_2) / math.sqrt((R1 - X_2) * (R1 - X_2) + (Y_1 - L2) * (Y_1 - L2))) / math.pi * 180
        arg = (arg1 + arg2) / 2
    # Nearly upright target: treat the angle as 90 degrees
    if h / w > 0.65 * 56 / 22:
        arg = 90
    # Convert the angle (0-180 degrees) to a servo step value (1024-3071)
    arg = 1024 + int(arg / 180 * 2047)
    return arg
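
        A hedged usage sketch for track_set_2 is below: the contour comes from a binary mask of the target (mask and cnt are illustrative names, not from the routine), and cv.CHAIN_APPROX_NONE is used so the contour keeps every boundary point, which the edge-matching loops above rely on:

contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
if contours:
    cnt = max(contours, key=cv.contourArea)        # largest blob as the target
    x, y, w, h = cv.boundingRect(cnt)
    wrist_step = track_set_2(cnt, x, y, w, h)
    # wrist_step is the 0-180 degree angle mapped onto servo steps 1024..3071,
    # which would then be sent to the wrist-rotation servo with WritePosEx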

        If you are interested in this project, feel free to comment or message me privately and join the group for discussion. You can also follow the 3D vision developer community.

Origin: blog.csdn.net/weixin_69250798/article/details/132187949