AI Kit horizontal evaluation: comparing three robotic arms

Products covered in this article

1 mechArm 270

2 myCobot 280

3 myPalletizer 260

AI Kit

Main content

Today's article introduces the three robotic arms that work with the AI Kit, and the differences between them.

Foreword

If you had a robotic arm, what would you use it for? Simply controlling it to move, making it repeat a certain trajectory, or letting it work in place of humans in industry? As technology advances, robots appear around us more and more often, taking over dangerous work and serving people. Today, let's take a look at how a robotic arm works in an industrial scene.

Introduction

What is the AI Kit?

The AI Kit is an entry-level artificial intelligence kit that integrates vision, positioning, grasping, and automatic sorting modules.

Based on a Linux system, a 1:1 simulation model is built in ROS. The robotic arm can be controlled through software development, allowing the basics of artificial intelligence to be learned quickly.

At present, our artificial intelligence kit supports color recognition and grasping, as well as image recognition and grasping.

This kit is very helpful for users who are just getting started with robotic arms and machine vision. It quickly shows how an artificial intelligence project is built, and furthers your understanding of how machine vision is linked with a robotic arm.

Next, let's take a brief look at the three robotic arms that can be paired with the AI Kit.

Robotic arms

myPalletizer 260

myPalletizer 260 is a fully wrapped, lightweight four-axis palletizing robotic arm. Its fin-free overall design makes it small, compact, and easy to carry. myPalletizer weighs 960 g, has a payload of 250 g, and a working radius of 260 mm. It is designed specifically for makers and education, with rich expansion interfaces. The AI Kit simulates industrial scenes and can be used for machine vision learning.

mechArm 270

mechArm 270 is a small six-axis robotic arm with a centrally symmetrical structure (an imitation of an industrial structure). It weighs 1 kg, has a payload of 250 g, and a working radius of 270 mm. Compact and portable in design, small but powerful, and easy to operate, it can work safely alongside people.

myCobot 280

myCobot 280 is the world's smallest and lightest six-axis collaborative robotic arm (UR structure). It can be redeveloped according to user needs for personalized customization. myCobot weighs 850 g, has a payload of 250 g, and an effective working radius of 280 mm. Small in size but powerful in function, it can be paired with a variety of end effectors to suit many application scenarios, and it supports secondary development on multiple software platforms to meet the needs of scientific research, education, smart homes, commercial exploration, and other scenarios.

Let's first take a look at how the AI Kit works with these three robotic arms.

Content

Video address: https://youtu.be/9J2reiPYNxg

The video shows the color recognition and intelligent sorting function, as well as the image recognition and intelligent sorting function.

Let's briefly introduce how the AI Kit implements this, taking the color recognition and intelligent sorting function as an example.

The artificial intelligence project mainly uses two modules:

●Vision processing module

●Calculation module (handling the eye-to-hand conversion)
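Conceptually, the two modules form a simple pipeline: the vision module turns a camera frame into a pixel coordinate, and the calculation module converts that into an arm coordinate and issues a move command. Here is a minimal sketch of that flow; all function names, the calibration numbers, and the stubbed detection are hypothetical placeholders, not the kit's actual API:

```python
# Hypothetical sketch of the AI Kit sorting pipeline; names are placeholders.

def detect_color(frame):
    """Vision module: return the (x, y) pixel center of a detected cube, or None."""
    # The real code uses OpenCV HSV thresholding; stubbed here with a fixed hit.
    return (320, 240) if frame is not None else None

def pixel_to_arm(x, y, ratio=0.5, offset=(-100.0, -50.0)):
    """Calculation module: map pixel coordinates to arm-base coordinates (eye to hand)."""
    return x * ratio + offset[0], y * ratio + offset[1]

def sort_one(frame, move):
    """One pipeline pass: detect, convert, then command the arm."""
    hit = detect_color(frame)
    if hit is None:
        return None
    real = pixel_to_arm(*hit)
    move(real)  # in the real project this would be a robotic-arm motion command
    return real

moves = []
result = sort_one(frame=object(), move=moves.append)
```

The two real modules described below fill in `detect_color` (OpenCV) and `pixel_to_arm` (NumPy-based calibration).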

Vision processing module:

OpenCV (Open Source Computer Vision) is an open-source computer vision library for developing computer vision applications. It includes a large number of functions and algorithms for image processing, video analysis, deep-learning-based object detection and recognition, and more.

We use OpenCV to process the image. The video stream from the camera is processed to extract information such as color, image features, and the plane coordinates (x, y) of objects in the frame. The extracted information is then passed to the processor for further handling.

Below is part of the code for processing images (color recognition)

# detect cube color
def color_detect(self, img):
	x = y = 0
	# Gaussian blur to reduce noise before thresholding
	gs_img = cv2.GaussianBlur(img, (3, 3), 0)
	# convert the image from BGR to HSV color space
	hsv = cv2.cvtColor(gs_img, cv2.COLOR_BGR2HSV)

	for mycolor, item in self.HSV.items():
		lower = np.array(item[0])
		upper = np.array(item[1])
		# mask out every color except those in the [lower, upper] range
		mask = cv2.inRange(hsv, lower, upper)
		# erosion removes edge roughness from the mask
		erosion = cv2.erode(mask, np.ones((1, 1), np.uint8), iterations=2)
		# dilation restores the eroded regions of the remaining blobs
		dilation = cv2.dilate(erosion, np.ones(
			(1, 1), np.uint8), iterations=2)


		# keep only the pixels of the original image selected by the mask
		target = cv2.bitwise_and(img, img, mask=dilation)
		# threshold the mask into a clean binary image
		ret, binary = cv2.threshold(dilation, 127, 255, cv2.THRESH_BINARY)
		# find the external contours of the mask; contours holds their coordinates
		contours, hierarchy = cv2.findContours(
			dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

		if len(contours) > 0:
			# filter out boxes that are too small or too large (misidentification)
			boxes = [
				box
				for box in [cv2.boundingRect(c) for c in contours]
				if min(img.shape[0], img.shape[1]) / 10
				< min(box[2], box[3])
				< min(img.shape[0], img.shape[1])
			]
			if boxes:
				# find the largest contour that fits the requirements
				c = max(contours, key=cv2.contourArea)
				# get the bounding box of the located object
				x, y, w, h = cv2.boundingRect(c)
				# mark the target by drawing a rectangle
				cv2.rectangle(img, (x, y), (x+w, y+h), (153, 153, 0), 2)
				# use the rectangle center as the detected pixel coordinate
				x, y = (x*2+w)/2, (y*2+h)/2
				# record which color was detected
				if mycolor == "red":
					self.color = 0

				elif mycolor == "green":
					self.color = 1

				elif mycolor == "cyan" or mycolor == "blue":
					self.color = 2

				else:
					self.color = 3


		if abs(x) + abs(y) > 0:
			return x, y
		else:
			return None
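To see the thresholding idea without a camera or OpenCV, here is a NumPy-only sketch of the same principle: build a tiny synthetic image, keep only the pixels inside a value range (the role `cv2.inRange` plays above), and take the centroid of the surviving blob (the role of the contour's rectangle center). The helper name and the image values are made up for illustration:

```python
import numpy as np

def mask_centroid(img, lo, hi):
    """Return the (x, y) centroid of pixels whose value lies in [lo, hi], or None."""
    mask = (img >= lo) & (img <= hi)     # analogous to cv2.inRange
    ys, xs = np.nonzero(mask)            # coordinates of surviving pixels
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 10x10 "image": a 3x3 bright block whose center is at (x=5, y=4)
img = np.zeros((10, 10))
img[3:6, 4:7] = 200

center = mask_centroid(img, 150, 255)    # → (5.0, 4.0)
```

The real `color_detect` additionally denoises with blur/erode/dilate and filters blobs by size, but the detection output is the same kind of pixel coordinate.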

Obtaining the image information alone is not enough; we have to process it and pass the result to the robotic arm to execute commands. This is where the calculation module comes in.

Calculation module

NumPy (Numerical Python) is an open-source Python library mainly used for mathematical computation. It provides a large number of functions and algorithms for scientific computing, including matrix operations, linear algebra, random number generation, Fourier transforms, and more.

We have to process the coordinates on the image and convert them into real-world coordinates; the professional term for this is eye-to-hand calibration. We use Python with the NumPy library to calculate the coordinates and finally send them to the robotic arm for sorting.
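In essence, the eye-to-hand conversion here reduces to a scale and an offset: a millimetres-per-pixel ratio obtained from markers a known distance apart, and the pixel position of the workspace center. A simplified version of the mapping, with made-up calibration numbers (the repository's actual conversion also applies sign conventions and camera-to-arm offsets):

```python
def get_position(px, py, ratio, cx, cy):
    """Map a pixel coordinate to a coordinate in the arm's base frame.

    ratio:    millimetres per pixel, calibrated from two reference markers
    (cx, cy): pixel coordinates of the workspace center
    """
    # Shift the origin to the workspace center, then scale pixels into mm.
    real_x = (px - cx) * ratio
    real_y = (py - cy) * ratio
    return real_x, real_y

# Example: 0.5 mm per pixel, workspace centered at pixel (320, 240).
# A cube detected 80 pixels right of center is 40 mm from the arm's reference point.
real = get_position(400, 240, ratio=0.5, cx=320, cy=240)   # → (40.0, 0.0)
```

The averaging loops below exist to estimate exactly these constants from many frames before any sorting begins.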

The following is part of the code for the calculation

    while cv2.waitKey(1) < 0:
        # read a frame from the camera
        _, frame = cap.read()
        # deal img
        frame = detect.transform_frame(frame)
        if _init_ > 0:
            _init_ -= 1
            continue
        # calculate the parameters of camera clipping
        if init_num < 20:
            if detect.get_calculate_params(frame) is None:
                cv2.imshow("figure", frame)
                continue
            else:
                x1, x2, y1, y2 = detect.get_calculate_params(frame)
                detect.draw_marker(frame, x1, y1)
                detect.draw_marker(frame, x2, y2)
                detect.sum_x1 += x1
                detect.sum_x2 += x2
                detect.sum_y1 += y1
                detect.sum_y2 += y2
                init_num += 1
                continue
        elif init_num == 20:
            detect.set_cut_params(
                (detect.sum_x1)/20.0,
                (detect.sum_y1)/20.0,
                (detect.sum_x2)/20.0,
                (detect.sum_y2)/20.0,
            )
            detect.sum_x1 = detect.sum_x2 = detect.sum_y1 = detect.sum_y2 = 0
            init_num += 1
            continue

        # calculate params of the coords between cube and mycobot
        if nparams < 10:
            if detect.get_calculate_params(frame) is None:
                cv2.imshow("figure", frame)
                continue
            else:
                x1, x2, y1, y2 = detect.get_calculate_params(frame)
                detect.draw_marker(frame, x1, y1)
                detect.draw_marker(frame, x2, y2)
                detect.sum_x1 += x1
                detect.sum_x2 += x2
                detect.sum_y1 += y1
                detect.sum_y2 += y2
                nparams += 1
                continue
        elif nparams == 10:
            nparams += 1
            # calculate and set params of calculating real coord between cube and mycobot
            detect.set_params(
                (detect.sum_x1+detect.sum_x2)/20.0,
                (detect.sum_y1+detect.sum_y2)/20.0,
                abs(detect.sum_x1-detect.sum_x2)/10.0 +
                abs(detect.sum_y1-detect.sum_y2)/10.0
            )
            print("ok")
            continue

        # get detect result
        detect_result = detect.color_detect(frame)
        if detect_result is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x, y = detect_result
            # calculate real coord between cube and mycobot
            real_x, real_y = detect.get_position(x, y)
            if num == 20:
                detect.pub_marker(real_sx/20.0/1000.0, real_sy/20.0/1000.0)
                detect.decide_move(real_sx/20.0, real_sy/20.0, detect.color)
                num = real_sx = real_sy = 0

            else:
                num += 1
                real_sy += real_y
                real_sx += real_x

Our project is open source and can be found on GitHub:

https://github.com/elephantrobotics/mycobot_ros/blob/noetic/mycobot_ai/ai_mycobot_280/scripts/advance_detect_obj_color.py

The differences

Comparing the video, the content, and the program code, the frameworks for the three robotic arms are the same; only a few data values need to be modified for each arm to run successfully.

Comparing these three robotic arms, there are roughly two points of difference.

The first is essentially the difference between four-axis and six-axis robotic arms in actual use (myPalletizer versus mechArm/myCobot).

Let's take a rough comparison between a four-axis and a six-axis robotic arm.

As the video shows, whether four-axis or six-axis, the working range of either arm is sufficient for the AI Kit. The biggest difference appears during program startup: myPalletizer moves quickly and easily because only four joints are in motion, so it performs tasks efficiently and stably. myCobot has to mobilize six joints, two more than myPalletizer, so the program involves more computation and takes somewhat longer in a small scene like this one.

To summarize briefly: when the scene is fixed, the working range of the robotic arm should be the first consideration when choosing one. Among arms whose working envelope fits, efficiency and stability become the deciding factors. For an industrial scene similar to our AI Kit, a four-axis robotic arm is the preferred choice. Of course, a six-axis robotic arm can operate in a larger space and achieve more complex movements: it can perform rotary motions in space that a four-axis arm cannot. Six-axis robotic arms are therefore generally better suited to industrial applications that require precise manipulation and complex motion.

The second point concerns the two six-axis models, whose main difference is structure. mechArm has a centrally symmetrical structure, while myCobot is a collaborative robotic arm with a UR structure. We can compare the two structures in actual application scenarios.

Here are the parameters and specifications of the two robotic arms:

The difference in structure leads to different ranges of motion. Taking mechArm as an example, a centrally symmetrical robotic arm is composed of three pairs of opposing joints, with each pair moving in opposite directions. An arm with this structure has better balance: the moments between the joints cancel out, keeping the arm stable.

As shown in the video, mechArm runs relatively stably.

Seeing this, you may wonder: is myCobot then useless? Of course not. A UR-structure arm is more flexible, achieves a wider range of motion, and suits larger-scale applications. More importantly, myCobot is a collaborative robotic arm, with better human-machine interaction capabilities, able to work in collaboration with humans. Six-axis collaborative robotic arms are commonly used for logistics and assembly on production lines, as well as in fields such as medical care, research, and education.

Summary

As mentioned at the beginning, the difference between the three robotic arms equipped with the AI Kit essentially comes down to choosing a suitable arm for the job. For a specific scenario, you need to determine the required working radius, the operating environment, and the payload according to the needs of that scenario.

If you want to learn about robotic arms, you can start with a mainstream type on the market. myPalletizer is modeled on palletizing robot arms, mainly used for stacking goods and loading and unloading pallets. mechArm is modeled on mainstream industrial robot arms; its special structure keeps the arm stable. myCobot is modeled on collaborative robotic arms, a type that has become popular in recent years, able to work alongside humans while providing strength and precision.

That is the whole content of this article. Robotic arm technology will continue to develop, bringing more convenience to production, work, and daily life. If you liked this article, please leave us a comment!

Origin: blog.csdn.net/m0_71627844/article/details/131582891