Open source solution for robot production | Desktop-level omnidirectional chassis--machine vision

      Machine vision is a rapidly developing branch of artificial intelligence. Simply put, machine vision uses machines in place of human eyes to make measurements and judgments. A machine vision system uses an image-capture device (CMOS- or CCD-based) to convert the observed target into an image signal and transmits it to a dedicated image-processing system. Based on the pixel distribution and on information such as brightness and color, the image is converted into a digital signal; the image system then performs various operations on this signal to extract the target's features and, based on the recognition result, controls the actions of on-site equipment.

The basics of machine vision mainly include shape recognition, color detection, color tracking, QR code recognition, etc.

      The most basic role of a machine vision system is to improve the flexibility and automation of production. In dangerous working environments that are unsuitable for manual work, or where human vision cannot meet the requirements, machine vision is often used in place of human vision. In high-volume, repetitive industrial production, machine vision inspection can also greatly improve efficiency and the degree of automation. Next, we will combine these machine vision basics and develop shape recognition, color detection, and color tracking applications on the open-source robot platform.

1. Shape recognition: recognizing circles

Implementation ideas

      Use the camera to collect image data, identify circles in the image, and display the circle-center coordinates on the interface.

Equipment preparation

      Camera, red and green circles (as shown below).

Steps

① Download the reference program visual_experiment_ws\src\shape_detection\scripts\shape_detection_victory.py from the resources at the end of the article:

#!/usr/bin/env python

#!-*-coding=utf-8 -*-

import rospy

from sensor_msgs.msg import Image

import cv2

import numpy as np

from cv_bridge import CvBridge, CvBridgeError

import sys

import time

lower_blue=np.array([100,43,46])

upper_blue=np.array([124,255,255])

#lower_blue=np.array([14,143,80])

#upper_blue=np.array([23,255,200])

#lower_red=np.array([0,200,55])

#upper_red=np.array([10,255,130])

#lower_red=np.array([0,150,55])

#upper_red=np.array([10,255,130])

lower_red=np.array([0,43,46])

upper_red=np.array([10,255,255])

lower_green=np.array([40,43,46])

upper_green=np.array([77,255,255])

font = cv2.FONT_HERSHEY_SIMPLEX   # font style

kernel = np.ones((5, 5), np.uint8)   # convolution kernel (structuring element)

img_pub = rospy.Publisher('webcam/image_raw', Image, queue_size=2)

#cap = cv2.VideoCapture(0)

def areaCal(contour):

    area = 0

    for i in range(len(contour)):

        area += cv2.contourArea(contour[i])

    return area

def image_callback(msg):

           bridge = CvBridge()

           frame = bridge.imgmsg_to_cv2(msg, "bgr8")

           #cv2.imshow("source", frame)

           #ret, frame = Video.read()

           gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert to grayscale

           hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # convert to HSV space

           mask = cv2.inRange(hsv, lower_red, upper_red)   # threshold to the red HSV range [noise removal]

           #mask = cv2.inRange(hsv, lower_green, upper_green)   # threshold to the green HSV range [noise removal]

           opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # morphological opening [noise removal]

           bila = cv2.bilateralFilter(mask, 10, 200, 200)   # bilateral filtering [noise removal]

           edges = cv2.Canny(opening, 50, 100)   # edge detection

           mask_green = cv2.inRange(hsv,lower_green,upper_green)

           opening_green = cv2.morphologyEx(mask_green, cv2.MORPH_OPEN, kernel)

           bila_green = cv2.bilateralFilter(mask_green, 10, 200, 200)

           edges_green = cv2.Canny(opening_green, 50, 100)

           images,contourss,hierarchvs = cv2.findContours(edges_green,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

           image,contours,hierarchv = cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

           scaling_factor = 0.5

           print "area_red=",areaCal(contours),"area_green=",areaCal(contourss)

           if(areaCal(contours)>50):

             #circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100, param1=100, param2=20, minRadius=20, maxRadius=500)

             circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100, param1=100, param2=20, minRadius=0, maxRadius=0)

             if circles is not None:   # if any circles were detected

               #print "I found the red circle"

               for circle in circles[0]:

                 x_red=int(circle[0])

                 y_red=int(circle[1])

                 r_red=int(circle[2])

                 cv2.circle(frame, (x_red, y_red), r_red, (0, 0, 255), 3)   # draw the detected circle

                 cv2.circle(frame, (x_red, y_red), 3, (255, 255, 0), -1)   # mark the circle center

                 text = 'circle' + 'x:   '+str(x_red)+' y:   '+str(y_red)

                 cv2.putText(frame, text, (10, 30), font, 1, (0, 255, 0), 2, cv2.LINE_AA, 0)   # show the center position

           if(areaCal(contourss)>1000):

             #circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100, param1=100, param2=20, minRadius=20, maxRadius=500)

             circles = cv2.HoughCircles(edges_green, cv2.HOUGH_GRADIENT, 1, 100, param1=100, param2=20, minRadius=0, maxRadius=0)

             if circles is not None:   # if any circles were detected

               #print "I found the green circle"

               for circle in circles[0]:

                 x_red=int(circle[0])

                 y_red=int(circle[1])

                 r_red=int(circle[2])

                 cv2.circle(frame, (x_red, y_red), r_red, (0, 0, 255), 3)   # draw the detected circle

                 cv2.circle(frame, (x_red, y_red), 3, (255, 255, 0), -1)   # mark the circle center

                 text = 'circle' + 'x:   '+str(x_red)+' y:   '+str(y_red)

                 cv2.putText(frame, text, (10, 60), font, 1, (0, 255, 0), 2, cv2.LINE_AA, 0)   # show the center position

           frame = cv2.resize(frame,None,fx=scaling_factor,fy=scaling_factor,interpolation=cv2.INTER_AREA)

           msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")

           img_pub.publish(msg)

           cv2.waitKey(3)

           '''

           rate = rospy.Rate(20) # 5hz

           scaling_factor = 0.5

           bridge = CvBridge()

           count = 0

           while not rospy.is_shutdown():

             if (True):

               count = count + 1

             else:

               rospy.loginfo("Capturing image failed.")

             if count == 2:

               count = 0

               frame = cv2.resize(frame,None,fx=scaling_factor,fy=scaling_factor,interpolation=cv2.INTER_AREA)

               msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")

               img_pub.publish(msg)

           rate.sleep()         

           '''

def webcamImagePub():

    rospy.init_node('webcam_puber', anonymous=True)

#    img_pub = rospy.Publisher('webcam/image_raw', Image, queue_size=2)

    rospy.Subscriber("/image_raw", Image, image_callback)

    rospy.spin()

if __name__ == '__main__':

    try:

        webcamImagePub()

    except rospy.ROSInterruptException:

        pass



② Open the first terminal (Ctrl+Alt+T), enter the command roslaunch astra_camera astra.launch (see the picture below), and wait for the program to run.

Open a second terminal (Ctrl+Shift+T), enter the command roslaunch shape_detection shape_detection_experiment.launch, and wait for the interface to start.

③ Place the circular object to be recognized in the scene (make sure it is within the camera's field of view), and the recognition results appear on the interface. As shown in the figure below, the outlines of the red and green circles are detected, and the x and y coordinates of the detected circle centers are displayed.
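If you want to check the HSV thresholds and HoughCircles parameters without launching ROS, the following minimal offline sketch of ours (assuming a test image file such as test.jpg that contains a red circle) reproduces the same processing chain on a single image:

# Minimal offline sketch (no ROS): detect a red circle in one image using the
# same HSV range and HoughCircles parameters as the reference program above.
# "test.jpg" is only an example file name.
import cv2
import numpy as np

lower_red = np.array([0, 43, 46])
upper_red = np.array([10, 255, 255])
kernel = np.ones((5, 5), np.uint8)

frame = cv2.imread("test.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)                # convert to HSV space
mask = cv2.inRange(hsv, lower_red, upper_red)               # keep only red pixels
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small noise
edges = cv2.Canny(opening, 50, 100)                         # edge map for the Hough transform

circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100,
                           param1=100, param2=20, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in circles[0]:
        print("circle center: (%d, %d), radius: %d" % (x, y, r))
else:
    print("no circle found")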


2. Color recognition (red, green and blue)

Implementation ideas

      When an object is placed in front of the camera, the camera captures its color, an open-source vision library identifies the color, and the recognition result is displayed on the interface (red, green, and blue items were prepared for this experiment).

The core principle of color recognition algorithm

RGB and HSV color models:

      Digital image processing usually uses two color models: RGB (red, green, blue) and HSV (hue, saturation, value). Although the RGB representation is intuitive, the R, G, and B values have no direct relationship with the perceptual attributes of a color, so the individual channels do not reflect an object's specific color very well. The HSV model is closer to the way we describe and interpret color, so describing colors with HSV is more intuitive.

The difference between RGB and HSV:

①. RGB model

Three-dimensional coordinates:

RGB: three primary colors (Red, Green, Blue)

      The central axis from the origin (black) to the white vertex is the grayscale line: along it the three components r, g, and b are equal, and a color can be represented by its three-component vector.

      The RGB cube can be used to understand changes in hue, saturation, and brightness.

      Hue change: moving along the edges that connect the R, G, B vertices with the yellow, magenta, and cyan (YMC) vertices.

      Saturation change: the distance from the R, G, B and C, M, Y vertices toward the central axis between the origin and the white vertex.

      Brightness change: the position along the central axis; colors become darker toward the origin and lighter toward the white vertex.

②. HSV model

Inverted cone model:

The model describes a color by hue, saturation, and brightness.

H (hue) is the color itself.

S (saturation) describes how vivid the color is; when S = 0, only grayscale remains.

V (value) indicates the brightness of the color, but it is not directly related to light intensity.

③. The connection between RGB and HSV

      Following the intuition above, if the central (gray) axis of the RGB cube is stood upright and the cube is flattened around it, the HSV cone model is obtained.

      However, V has no direct relationship with light intensity, because it only takes the largest of the R, G, B components, whereas RGB itself can reflect changes in light intensity (grayscale).

      v = max(r, g, b)

      Conversion from RGB to HSV:
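In brief, V = max(r, g, b); S = (V - min(r, g, b)) / V when V is non-zero (otherwise S = 0); and H is derived from whichever of the three channels is the largest. The relation can be checked with a small sketch of ours using OpenCV (for 8-bit images OpenCV scales H to [0, 179] and S, V to [0, 255], and expects BGR channel order):

# Sketch: how OpenCV maps a few pure BGR colors into HSV.
import cv2
import numpy as np

pixels = np.uint8([[[0, 0, 255],     # pure red   (BGR)
                    [0, 255, 0],     # pure green (BGR)
                    [255, 0, 0]]])   # pure blue  (BGR)
hsv = cv2.cvtColor(pixels, cv2.COLOR_BGR2HSV)
print(hsv)   # H,S,V -> red: (0, 255, 255), green: (60, 255, 255), blue: (120, 255, 255)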

④.HSV color range
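In OpenCV's convention, H lies in [0, 179] while S and V lie in [0, 255]. For reference, the HSV ranges actually used by the reference programs in this article are summarized below (only the low-hue red band is used here; deep reds near H = 180 would need a second range):

# HSV ranges used by the reference programs in this article (OpenCV convention).
COLOR_RANGES = {
    "red":   ((0,   43, 46), (10,  255, 255)),
    "green": ((40,  43, 46), (77,  255, 255)),
    "blue":  ((100, 43, 46), (124, 255, 255)),
}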

Steps

① Connect the hardware: connect the camera to the host.

② Download the reference program visual_experiment_ws\src\color_detection\scripts\color_test_one.py from the resources at the end of the article:

#!/usr/bin/env python

#!-*-coding=utf-8 -*-

import rospy

from sensor_msgs.msg import Image

from cv_bridge import CvBridge, CvBridgeError

import cv2

import numpy as np

import serial

import time

import sys

from std_msgs.msg import UInt16

from std_msgs.msg import String

img_pub = rospy.Publisher('webcam/image_raw', Image, queue_size=2)

lower_blue=np.array([100,43,46])

upper_blue=np.array([124,255,255])

lower_red=np.array([0,43,46])

upper_red=np.array([10,255,255])

lower_green=np.array([40,43,46])

upper_green=np.array([77,255,255])

font = cv2.FONT_HERSHEY_SIMPLEX

kernel = np.ones((5, 5), np.uint8)

def areaCal(contour):

    area = 0

    for i in range(len(contour)):

        area += cv2.contourArea(contour[i])

    return area

   

def image_callback(msg):

  bridge = CvBridge()

  img = bridge.imgmsg_to_cv2(msg, "bgr8")

  #cv2.imshow("source", img)

  hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)

  mask_red = cv2.inRange(hsv,lower_red,upper_red)

  res = cv2.bitwise_and(img,img,mask=mask_red)

  scaling_factor = 0.5

#   cv2.imshow("res",res)

  image,contours,hierarchv = cv2.findContours(mask_red,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

#   print   "red=",areaCal(contours)

  if (areaCal(contours)>5000):

#    print "red color"

    text_red = 'the color is red'

    cv2.putText(img, text_red, (10, 30), font, 1, (0, 0, 255), 2, cv2.LINE_AA, 0)

  mask_blue= cv2.inRange(hsv, lower_blue, upper_blue)

  res_yellow = cv2.bitwise_and(img,img,mask=mask_blue)

  image,contours,hierarchv = cv2.findContours(mask_blue,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

  if(areaCal(contours)>8000):

    text_blue = 'the color is blue'

#    print "blue color"

    cv2.putText(img, text_blue, (10, 60), font, 1, (255, 0, 0), 2, cv2.LINE_AA, 0)

  mask_green=cv2.inRange(hsv, lower_green,upper_green)

  res_green = cv2.bitwise_and(img,img,mask=mask_green)

  image,contours,hierarchv = cv2.findContours(mask_green,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)

  if (areaCal(contours)>5000):

    text_green = 'the color is green'

    cv2.putText(img, text_green, (10, 90), font, 1, (0, 255, 0), 2, cv2.LINE_AA, 0)

#    print ("green color")

#   else:

#     print( "NONE COLOR" )

  frame = cv2.resize(img,None,fx=scaling_factor,fy=scaling_factor,interpolation=cv2.INTER_AREA)

  msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")

  img_pub.publish(msg)

  cv2.waitKey(3)

#   cv2.waitKey(1)

def main():

    rospy.init_node('image_listener',anonymous=True)

    #image_topic = "/camera/color/image_raw"

    #rospy.Subscriber("/camera/color/image_raw", Image, image_callback)

    rospy.Subscriber("/image_raw", Image, image_callback)

    rospy.spin()

if __name__ == '__main__':

    main()

 ③ Open the terminal (Ctrl+Alt+T), enter roslaunch astra_camera astra.launch (see the picture below), and wait for the program to run.

Open a second terminal (Ctrl+Shift+T), enter the command roslaunch color_detection color_detectioning.launch, and wait for the program to run.

④ After the interface starts, place the object in front of the camera (make sure it is within the camera's field of view); the camera then starts recognizing and displays the result on the interface.

The following takes a blue item as an example. When the camera recognizes the blue item, the result ("the color is blue") is displayed on the interface.


      You can also try placing red and green items; their recognition results are shown in the figures below. When the camera recognizes a red object, "the color is red" is displayed on the interface; when it recognizes a green object, "the color is green" is displayed.

Red item detected
Green item detected
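The thresholds can also be sanity-checked offline with a small snippet of ours that builds a synthetic solid-blue image (no camera or ROS required) and verifies that the blue mask covers it:

# Offline sanity check: a solid blue test image should be fully covered by the blue mask.
import cv2
import numpy as np

lower_blue = np.array([100, 43, 46])
upper_blue = np.array([124, 255, 255])

img = np.zeros((240, 320, 3), np.uint8)
img[:] = (255, 0, 0)                                 # solid blue in BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_blue, upper_blue)
print("blue pixels: %d of %d" % (cv2.countNonZero(mask), img.shape[0] * img.shape[1]))
# Expected: 76800 of 76800 -> the node would report "the color is blue".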

3. Color tracking

Implementation ideas

      After the camera detects the red object, the node publishes a message that reaches the chassis controller over the rosserial serial link; the Blackhand chassis subscribes to this message and performs the corresponding movement.

Equipment preparation

      The equipment needed in the experiment is shown in the picture below (a red fire extinguisher is used to represent the object to be tracked).

Steps

① Download the reference program visual_experiment_ws\src\color_tracking\scripts\ros_arduino_translation_test.py for ROS communication from the resources at the end of the article:

#!/usr/bin/env python

#!coding=utf-8

import rospy

from sensor_msgs.msg import Image

from cv_bridge import CvBridge, CvBridgeError

import cv2

import numpy as np

import serial

import time

import sys

from std_msgs.msg import UInt16

from std_msgs.msg import String

#lower_blue=np.array([50,143,146])

#upper_blue=np.array([124,255,255])

#lower_blue=np.array([14,143,80])

#upper_blue=np.array([23,255,200])

lower_blue=np.array([100,43,46])

upper_blue=np.array([124,255,255])

#lower_red=np.array([0,200,55])

#upper_red=np.array([10,255,130])

#lower_red=np.array([0,150,55])

#upper_red=np.array([10,255,130])

lower_red=np.array([0,43,46])

upper_red=np.array([10,255,255])

lower_green=np.array([40,43,46])

upper_green=np.array([77,255,255])

#ser = serial.Serial('/dev/ttyACM0', 57600, timeout=2.0) #2020.8.28 source value=0.5

img_pub = rospy.Publisher('webcam/image_raw', Image, queue_size=2)

data_pub= rospy.Publisher('Color_center_point',String,queue_size=20)

red_flag=0

blue_flag=0

image_flags = 1

cx_string=""

cy_string=""

cx_cy_string=""

send_data = ""

count=0

scaling_factor = 1.0   #2020.8.28 source value=0.5

def areaCal(contour):

    area = 0

    for i in range(len(contour)):

        area += cv2.contourArea(contour[i])

    return area

   

def image_callback(msg):

  global count,send_data

  bridge = CvBridge()

  frame = bridge.imgmsg_to_cv2(msg, "bgr8")

  #cv2.imshow("source", frame)

  hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)

  mask_red = cv2.inRange(hsv,lower_red,upper_red)

  res = cv2.bitwise_and(frame,frame,mask=mask_red)

  #cv2.imshow("res",res)

  image,contours,hierarchv = cv2.findContours(mask_red,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)  

  print "mianji=",areaCal(contours)

  if (areaCal(contours)>5000):

    if(areaCal(contours)>80000):

      send_data="back"

      #ser.write("back\n")

      #print "back"

    elif( areaCal(contours)<50000 and areaCal(contours)>5000 ):

      send_data="forward"

      #ser.write("forward\n")

      #print "forward"

    else:

      send_data="state"

    if len(contours) > 0:

      c = max(contours, key=cv2.contourArea)

      #print "c=",c

      cnt=contours[0]

      cv2.drawContours(frame, c, -1, (0, 0, 255), 3)   # draw the contour

      #cv2.imshow("drawcontours",frame)

      M = cv2.moments(c)

      if M["m00"] !=0:

        cx = int(M['m10']/M['m00'])

        cy = int(M['m01']/M['m00'])

        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        cv2.circle(frame, (cx,cy), 8, (0, 255, 0), -1)

        print "center=",center,"cx=",cx,"cy=",cy

        cx_string=str(cx)

        cy_string=str(cy)

        #cx_cy_string=(cx_string + "-" + cy_string + "\n").encode('utf-8')

        #cx_cy_string=(cx_string + "-" + cy_string + "-" + send_data + "\n").encode('utf-8')

        cx_cy_string=(cx_string + "-" + cy_string + "+" + send_data ).encode('utf-8')

        print (cx_cy_string)

        data_pub.publish(cx_cy_string)

      else:

        cx=0

        cy=0

    else:

      print "no red color something"

      #cv2.imshow("chanle", img)

#      if cv2.waitKey(1) & 0xFF == ord('q'):

#        #break

#        #continue

  if(image_flags==1):

    count = count+1

  else:

    rospy.loginfo("Capturing image failed.")

  if count == 2:

    count = 0

    frame = cv2.resize(frame,None,fx=scaling_factor,fy=scaling_factor,interpolation=cv2.INTER_AREA)

    msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")

    img_pub.publish(msg)

  cv2.waitKey(1)

def main():

    rospy.init_node('image_listener',anonymous=True)

#    img_pub = rospy.Publisher('webcam/image_raw', Image, queue_size=2)

    rospy.Subscriber("/image_raw", Image, image_callback)

    rospy.spin()

if __name__ == '__main__':

    main()
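The node above packs the target's center and the motion command into a single std_msgs/String of the form "<cx>-<cy>+<command>", for example "320-240+forward" (the sample values are only illustrative); the Arduino sketch that follows splits the string at the '-' and '+' separators. The same parsing in Python, for illustration:

# Illustration of the Color_center_point message format: "<cx>-<cy>+<command>".
sample = "320-240+forward"
cx, rest = sample.split("-", 1)       # "320", "240+forward"
cy, command = rest.split("+", 1)      # "240", "forward"
print("cx=%s cy=%s command=%s" % (cx, cy, command))   # cx=320 cy=240 command=forward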


Download the reference program visual_experiment_ws\src\color_tracking\arduino_program\Color_Tracking_Arduino_Program\Color_Tracking_Arduino_Program.ino from the resources at the end of the article; it controls the movement of the Blackhand chassis:

/*

* rosserial Subscriber Example

* Receives the color-tracking messages and drives the chassis accordingly.

*/

#include <ros.h>

#include <std_msgs/String.h>

#include <Arduino.h>

#include <stdio.h>

#include <string.h>

#define CTL_BAUDRATE 115200 // bus servo baud rate

#define mySerial Serial2

#define SerialBaudrate 57600

#define RGB_LED_NUMBERS 3

#define Bus_servo_Angle_inits 1500

#define ActionDelayTimes 1500

//

#define wheel_speed_forward 0.08    //car forward speed

#define wheel_speed_back -0.08      //car back speed

#define wheel_speed_stop 0.0        //car stop speed

#define wheel_speed_left 0.1       //car turnleft speed

#define wheel_speed_right -0.1     //car turnright speed

#define wheel_speed_left_translation 0.08   //speed of car left translation

#define wheel_speed_right_translation -0.08 //speed of car right translation

char cmd_return[200];

String receive_string="hello";

ros::NodeHandle   nh;

void messageCb( const std_msgs::String &toggle_msg){

   receive_string=toggle_msg.data;

}

ros::Subscriber<std_msgs::String> sub("Color_center_point", &messageCb );

enum{FORWARD=1,BACK,LEFT,RIGHT,LEFT_TRANSLATION,RIGHT_TRANSLATION,STOP}; //the movement state of the car
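// Note: Car_Move() and the other chassis-control helpers are not shown in this
// excerpt; they are assumed to be defined elsewhere in the full
// Color_Tracking_Arduino_Program.ino from the download.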

void setup()

{

  delay(1100);

  Serial.begin(SerialBaudrate);

  mySerial.begin(CTL_BAUDRATE);  

  nh.initNode();

  nh.subscribe(sub);

}

void loop()

{

  // Ignore messages whose length is implausible (shorter than 5 or longer than 15 characters)
  if( (receive_string.length())<5 || (receive_string.length())>15 )

  {

     // nothing to do this cycle

  }

    else{

       int SpacePosition[2] = {0,0};

       int Data_one = 0;

       int Data_two = 0;

       int numbers_left=0 ,numbers_right=0;

       char num_left[32] = {};

       char num_right[32] = {};

       String x_data="";

       String y_data="";

       String z_data="";

       String new_string = "";                                                                               

       SpacePosition[0] = receive_string.indexOf('-');

       x_data = receive_string.substring(0,SpacePosition[0]);

       //if(x_data.length()>=4){break;}

       new_string = receive_string.substring(SpacePosition[0]+1);

       SpacePosition[1] = new_string.indexOf('+');

       y_data = new_string.substring(0,SpacePosition[1]);

       z_data = new_string.substring(SpacePosition[1]+1);

      Data_one = x_data.toInt();

      Data_two = y_data.toInt();

      //if( (Data_one<=120) && (z_data =="state") ){Car_Move(LEFT_TRANSLATION);}

      if( (Data_one<=280) && (Data_one>=20)){Car_Move(LEFT_TRANSLATION);}          // target in the left part of the image: translate left

      else if ( (Data_one>=360) && (Data_one<=600) ){Car_Move(RIGHT_TRANSLATION);} // target in the right part of the image: translate right

      else if( z_data == "forward" ){Car_Move(FORWARD);}   // target area small (far away): move forward

      else if( z_data == "back" ){Car_Move(BACK );}        // target area large (too close): back up

      else {Car_Move(STOP);}

     

      receive_string = "";

      x_data="";

      y_data="";

      z_data="";

      new_string="";

    }  

  nh.spinOnce();

  delay(100);

}

② Open the terminal (Ctrl+Alt+T) and enter the roslaunch astra_camera astra.launch command, as shown in the figure below.

Open the second terminal (Ctrl+Shift+T) and enter the roslaunch color_tracking camera_calibration.launch command, as shown in the figure below.

③ Move the fire extinguisher and observe how the Blackhand chassis follows its movement.

     [Note: keep the fire extinguisher within the area the camera can capture; because of hardware limitations, it is recommended to move the fire extinguisher fairly slowly. You can first move it to a position and then observe how the chassis tracks it.]

     In the rviz interface we can see the center coordinates and the area of the red target that the camera is tracking (as shown in the figure below).


At the same time, the chassis can be observed tracking the red fire extinguisher until it has moved close to it.

For the program source code, see Desktop Omnidirectional Chassis - Machine Vision.


Source: blog.csdn.net/Robotway/article/details/132791619