Raspberry Pi automatic follow-me robot

YetiBorg v2 - Examples - Follow Me

Written by ModMyPi LTD in Build YetiBorg - posted on 25th October 2017.

Play chase with your YetiBorg v2

Robots can make very fun pets, and the YetiBorg v2 is no exception. The best thing about pets is being able to play games and teach them tricks, so we thought we would teach the YetiBorg v2 to follow us around like a dog :)

In this example you can see how we get your robot to follow you using nothing but the camera.

Parts

We will need these for this script to work:

How does it work?

The script works by taking images from the camera and looking to see if there is any movement. It does this by taking a pair of images and looking at how they differ. For example:

First the left image was taken, then the middle image. The image on the right is what we get when we take the difference between the two images. Where nothing has changed we see black.
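The differencing step can be sketched in a few lines of Python. This is a minimal sketch using NumPy on made-up 4×4 grayscale frames, not real camera data; the script below does the same job with cv2.absdiff on full camera images:

```python
import numpy as np

# Two hypothetical 4x4 grayscale frames (pixel values 0-255)
frame1 = np.zeros((4, 4), dtype=np.uint8)
frame2 = frame1.copy()
frame2[1, 2] = 200  # one pixel "moved" between the two frames

# Absolute difference, as cv2.absdiff does in the script:
# pixels that did not change come out as 0 (black)
diff = np.abs(frame2.astype(np.int16) - frame1.astype(np.int16)).astype(np.uint8)

print(diff.max())        # -> 200, the strongest change
print((diff > 0).sum())  # -> 1, only the moved pixel is non-zero
```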

When we see enough change we make the YetiBorg v2 drive forward. To decide which way to steer, we take the same difference image and split it into sections.

On the left we have the difference image. In the middle we have cut the image into our sections. On the right we have picked the section with the largest difference and decided to aim towards it. The further it is from the centre, the faster we turn left or right.
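That zoning step looks roughly like this in Python. The numbers here are invented for illustration: a 12-pixel-wide difference image and 4 zones, rather than the script's 320 pixels and 80 zones:

```python
import numpy as np

# Hypothetical difference image: 2 rows high, 12 pixels wide
diff = np.zeros((2, 12), dtype=np.uint8)
diff[:, 9:11] = 100                      # movement near the right edge

zoneCount = 4
zoneWidth = diff.shape[1] // zoneCount   # 3 pixels per zone

# Average change per zone, as in the script's ProcessImage loop
zoneChanges = [diff[:, z * zoneWidth:(z + 1) * zoneWidth].mean()
               for z in range(zoneCount)]
largestZone = int(np.argmax(zoneChanges))

# Map the winning zone to -1.0 (full left) .. +1.0 (full right)
steering = (2.0 * largestZone) / (zoneCount - 1) - 1.0
print(largestZone, steering)  # -> 3 1.0, movement is at the far right
```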

That sounds good, but what happens once we are already moving?

As you can see we get a lot of movement everywhere, and the YetiBorg v2 would get confused :(

We have a way to fix this. If you look at the first images, movement is only seen in some sections. When the YetiBorg v2 is driving, we see movement in all sections. What we do is take the average amount of change across all sections and use it as our baseline for not moving. Now he will not drive off from images like the ones above.
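The baseline trick is easy to see with some hypothetical zone averages. The values below are invented for illustration; the threshold matches the script's autoMinimumMovement default of 20:

```python
import numpy as np

autoMinimumMovement = 20  # detection threshold, as in the script

# Robot stationary: only the zone with the person shows a big change
still   = np.array([2.0, 3.0, 40.0, 2.0, 3.0])
# Robot driving: every zone sees similar background motion
driving = np.array([30.0, 32.0, 31.0, 29.0, 30.0])

for zoneChanges in (still, driving):
    # Largest zone minus the average of all zones = motion above baseline
    detection = zoneChanges.max() - zoneChanges.mean()
    print(round(detection, 1), detection > autoMinimumMovement)
# -> 30.0 True   (person moving while stationary: chase)
# -> 1.6 False   (uniform background motion: ignore)
```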

But does this still work when moving and following someone?

It does: the robot still sees a larger difference where someone is moving than it sees from the movement of the background.

Tuning the behaviour

Like any good dog, the YetiBorg v2 will sometimes get distracted by something else and forget who it should be following. We can improve the behaviour by adjusting some of the settings used to detect movement in the "Auto drive settings" section of the script.

  • autoZoneCount
    This is the number of slices the image is split into for detection. Too high or too low will cause detection problems. We found values between 40 and 120 work best.
  • autoMinimumMovement
    Changing this value adjusts how much movement is needed before the YetiBorg v2 starts chasing. If the robot is not following you at all, try turning it down. If the robot keeps chasing inanimate objects (chairs, tables, and so on), you probably want to turn it up.
  • steeringGain
    This controls how quickly the YetiBorg v2 steers towards movement. Smaller values may not react fast enough, larger values tend to make the robot wobble from side to side rather than follow :) If you cannot get the robot to keep up with your movement, turn it up.
  • flippedImage
    If the robot seems to be driving away from movement, it probably has its camera mounted upside down. Swap this setting between True and False to change which way up the image is and fix the problem.

There are also a few things that cannot be fixed by tweaking some numbers:

  • The camera has a narrow field of view; it is easy to hide from the YetiBorg v2 by standing too far away or off to the side.
  • Like a T-Rex, the YetiBorg v2 can only see movement. If you stay perfectly still, you will be invisible :)
  • Walking pace works best; very fast movements may be missed.
  • It probably will not play well with other pets. You have been warned :D
  • It works best in well-lit areas. In a dark room with someone wearing black, the camera cannot see much change :(
  • Tall objects may still attract its attention, lamp posts for example...

Getting the example

This example is part of the standard set of YetiBorg v2 examples installed by the getting-started instructions: bash <(curl https://www.piborg.org/installer/install-yetiborg-v2.txt)

Running once

Go to the YetiBorg v2 code directory: cd ~/yetiborgv2 then run the script with: ./yeti2FollowMe.py

Running at startup

Open /etc/rc.local for editing with: sudo nano /etc/rc.local then add this line just above the exit 0 line: /home/pi/yetiborgv2/yeti2FollowMe.py & Finally press CTRL+O, ENTER to save the file, then CTRL+X to exit nano. The next time you start the Raspberry Pi it should run the script for you :)
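For reference, the tail end of /etc/rc.local should look something like this after the edit (assuming the default install path used above; the comment line is ours):

```shell
# ... existing /etc/rc.local contents above ...
# Start the follow-me script in the background at the end of boot
/home/pi/yetiborgv2/yeti2FollowMe.py &
exit 0
```

The trailing & matters: without it rc.local would block on the script and the boot sequence would never finish.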

Full code listing - yeti2FollowMe.py

#!/usr/bin/env python
# coding: Latin-1

# Load library functions we want
import time
import os
import sys
import ZeroBorg
import io
import threading
import picamera
import picamera.array
import cv2
import numpy

# Re-direct our output to standard error, we need to ignore standard out to hide some nasty print statements from pygame
sys.stdout = sys.stderr
print 'Libraries loaded'

# Global values
global running
global ZB
global camera
global processor
global motionDetected
running = True
motionDetected = False

# Setup the ZeroBorg
ZB = ZeroBorg.ZeroBorg()
#ZB.i2cAddress = 0x44                  # Uncomment and change the value if you have changed the board address
ZB.Init()
if not ZB.foundChip:
    boards = ZeroBorg.ScanForZeroBorg()
    if len(boards) == 0:
        print 'No ZeroBorg found, check you are attached :)'
    else:
        print 'No ZeroBorg at address %02X, but we did find boards:' % (ZB.i2cAddress)
        for board in boards:
            print '    %02X (%d)' % (board, board)
        print 'If you need to change the I²C address change the setup line so it is correct, e.g.'
        print 'ZB.i2cAddress = 0x%02X' % (boards[0])
    sys.exit()
#ZB.SetEpoIgnore(True)                 # Uncomment to disable EPO latch, needed if you do not have a switch / jumper
# Ensure the communications failsafe has been enabled!
failsafe = False
for i in range(5):
    ZB.SetCommsFailsafe(True)
    failsafe = ZB.GetCommsFailsafe()
    if failsafe:
        break
if not failsafe:
    print 'Board %02X failed to report in failsafe mode!' % (ZB.i2cAddress)
    sys.exit()
ZB.ResetEpo()

# Power settings
voltageIn = 8.4                         # Total battery voltage to the ZeroBorg (change to 9V if using a non-rechargeable battery)
voltageOut = 6.0                        # Maximum motor voltage

# Camera settings
imageWidth  = 320                       # Camera image width
imageHeight = 240                       # Camera image height
frameRate = 10                          # Camera image capture frame rate

# Auto drive settings
autoZoneCount = 80                      # Number of detection zones, higher is more accurate
autoMinimumMovement = 20                # Minimum movement detection before driving
steeringGain = 4.0                      # Use to increase or decrease the amount of steering used
flippedImage = True                     # True if the camera needs to be rotated
showDebug = True                        # True to display detection values

# Setup the power limits
if voltageOut > voltageIn:
    maxPower = 1.0
else:
    maxPower = voltageOut / float(voltageIn)

# Calculate the nearest zoning which fits
zones = range(0, imageWidth, imageWidth / autoZoneCount)
zoneWidth = zones[1]
zoneCount = len(zones)

# Image stream processing thread
class StreamProcessor(threading.Thread):
    def __init__(self):
        super(StreamProcessor, self).__init__()
        self.stream = picamera.array.PiRGBArray(camera)
        self.event = threading.Event()
        self.lastImage = None
        self.terminated = False
        self.reportTick = 0
        self.start()
        self.begin = 0

    def run(self):
        # This method runs in a separate thread
        while not self.terminated:
            # Wait for an image to be written to the stream
            if self.event.wait(1):
                try:
                    # Read the image and do some processing on it
                    self.stream.seek(0)
                    self.ProcessImage(self.stream.array)
                finally:
                    # Reset the stream and event
                    self.stream.seek(0)
                    self.stream.truncate()
                    self.event.clear()

    # Image processing function
    def ProcessImage(self, image):
        # Flip the image if needed
        if flippedImage:
            image = cv2.flip(image, -1)
        # If this is the first image store and move on
        if self.lastImage is None:
            self.lastImage = image.copy()
            return
        # Work out the difference from the last image
        imageDiff = cv2.absdiff(self.lastImage, image)
        # Build up the zone change levels
        zoneDetections = []
        for zone in zones:
            # Grab the zone from the differences
            zoneDiff = imageDiff[:, zone : zone + zoneWidth, :]
            # Get an average for the zone
            zoneChange = zoneDiff.mean()
            zoneDetections.append(zoneChange)
        # Set drives or report motion status
        self.SetSpeedFromDetection(zoneDetections)
        # Save the previous image
        self.lastImage = image.copy()

    # Set the motor speed from the motion detection
    def SetSpeedFromDetection(self, zoneDetections):
        global ZB
        global motionDetected
        # Find the largest and average detections
        largestZone = 0
        largestDetection = 0
        averageDetection = 0
        for i in range(zoneCount):
            if zoneDetections[i] > largestDetection:
                largestZone = i
                largestDetection = zoneDetections[i]
            averageDetection += zoneDetections[i]
        averageDetection /= float(zoneCount)
        # Remove the baseline motion from the largest zone
        detection = largestDetection - averageDetection
        # Determine if the motion is strong enough to count as a detection
        if detection > autoMinimumMovement:
            # Motion detected
            motionDetected = True
            if showDebug:
                if self.reportTick < 2:
                    print 'MOVEMENT   %05.2f [%05.2f %05.2f]' % (detection, largestDetection, averageDetection)
                    print '           Zone %d of %d' % (largestZone + 1, zoneCount)
                    self.reportTick = frameRate
                else:
                    self.reportTick -= 1
            # Calculate speeds based on zone
            steering = ((2.0 * largestZone) / float(zoneCount - 1)) - 1.0
            steering *= steeringGain
            if steering < 0.0:
                # Steer to the left
                driveLeft = 1.0 + steering
                driveRight = 1.0
                if driveLeft <= 0.05:
                    driveLeft = 0.05
            else:
                # Steer to the right
                driveLeft = 1.0
                driveRight = 1.0 - steering
                if driveRight <= 0.05:
                    driveRight = 0.05
        else:
            # No motion detected
            motionDetected = False
            if showDebug:
                if self.reportTick < 2:
                    print '--------   %05.2f [%05.2f %05.2f]' % (detection, largestDetection, averageDetection)
                    self.reportTick = frameRate
                else:
                    self.reportTick -= 1
            # Stop moving
            driveLeft  = 0.0
            driveRight = 0.0
        # Set the motors
        ZB.SetMotor1(-driveRight * maxPower) # Rear right
        ZB.SetMotor2(-driveRight * maxPower) # Front right
        ZB.SetMotor3(-driveLeft  * maxPower) # Front left
        ZB.SetMotor4(-driveLeft  * maxPower) # Rear left

# Image capture thread
class ImageCapture(threading.Thread):
    def __init__(self):
        super(ImageCapture, self).__init__()
        self.start()

    def run(self):
        global camera
        global processor
        print 'Start the stream using the video port'
        camera.capture_sequence(self.TriggerStream(), format='bgr', use_video_port=True)
        print 'Terminating camera processing...'
        processor.terminated = True
        processor.join()
        print 'Processing terminated.'

    # Stream delegation loop
    def TriggerStream(self):
        global running
        while running:
            if processor.event.is_set():
                time.sleep(0.01)
            else:
                yield processor.stream
                processor.event.set()

# Startup sequence
print 'Setup camera'
camera = picamera.PiCamera()
camera.resolution = (imageWidth, imageHeight)
camera.framerate = frameRate
imageCentreX = imageWidth / 2.0
imageCentreY = imageHeight / 2.0

print 'Setup the stream processing thread'
processor = StreamProcessor()

print 'Wait ...'
time.sleep(2)
captureThread = ImageCapture()

try:
    print 'Press CTRL+C to quit'
    ZB.MotorsOff()
    # Loop indefinitely
    while running:
        # Change the LED to show if we have detected motion
        # We do this regularly to keep the communications failsafe test happy
        ZB.SetLed(motionDetected)
        # Wait for the interval period
        time.sleep(0.1)
    # Disable all drives
    ZB.MotorsOff()
except KeyboardInterrupt:
    # CTRL+C exit, disable all drives
    print '\nUser shutdown'
    ZB.MotorsOff()
except:
    # Unexpected error, shut down!
    e = sys.exc_info()[0]
    print
    print e
    print '\nUnexpected error, shutting down!'
    ZB.MotorsOff()
# Tell each thread to stop, and wait for them to end
running = False
captureThread.join()
processor.terminated = True
processor.join()
del camera
ZB.SetLed(False)
print 'Program terminated.'


Reprinted from blog.csdn.net/qq_42444944/article/details/86650006