Robot Vision Project: Visual Detection and Recognition + Robot Following (20)

1. Try connecting a Kinect v1 depth camera to the TX2 to capture RGB-D image data, feed it into our pedestrian-detection code framework, and evaluate the depth-camera-based pedestrian detection and tracking algorithm.
First, install the Kinect drivers for Ubuntu on the TX2's ARM processor:
libfreenect v2.0
OpenNI v2.2.0.33
NiTE v2.0.0
Install the build dependencies:
sudo apt-get install git g++ cmake libxi-dev libxmu-dev libusb-1.0-0-dev pkg-config freeglut3-dev build-essential
Build libfreenect:
git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build; cd build
cmake .. -DBUILD_OPENNI2_DRIVER=ON
make
Install OpenNI2 (note: the package used here is the x86 build; on the TX2's ARM CPU the Arm build of OpenNI2 would normally be needed, which may be related to the failure below):
cd OpenNI-Linux-x86-2.2/
sudo ./install.sh
source OpenNIDevEnvironment
sudo cp ~/libfreenect/platform/linux/udev/51-kinect.rules /etc/udev/rules.d/


Copy the freenect OpenNI2 driver into OpenNI's driver directories:
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so OpenNI-Linux-x86-2.2/Redist/OpenNI2/Drivers/
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so OpenNI-Linux-x86-2.2/Tools/OpenNI2/Drivers/

Plug in the Kinect and check that it enumerates on the USB bus:

lsusb

Bus 002 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 002 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 002 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
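As a sanity check, a small script (my own sketch, not part of the original setup) can confirm that all three Kinect v1 interfaces shown above are present in the lsusb output:

```python
import subprocess

# The three USB product IDs a Kinect v1 exposes (camera, motor, audio),
# matching the lsusb output above.
KINECT_IDS = {"045e:02ae", "045e:02b0", "045e:02ad"}

def missing_kinect_devices(lsusb_output):
    """Return the set of Kinect v1 USB IDs absent from `lsusb` output."""
    return {dev_id for dev_id in KINECT_IDS if dev_id not in lsusb_output}

# Usage on the TX2 (commented out so the snippet runs without hardware):
# out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
# print("Kinect OK" if not missing_kinect_devices(out) else missing_kinect_devices(out))
```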
cd OpenNI-Linux-x86-2.2/Tools/
*** All the steps above ran normally on the TX2; the test step is next.
./NiViewer    — this did not show the expected RGB and depth images.
NiTE2 still needs to be installed after this; because of the problem above I did not continue, but I am including the steps here as well:
cd NiTE-2.0.0/
sudo ./install.sh
source NiTEDevEnvironment
cp ~/libfreenect/build/lib/OpenNI2-FreenectDriver/libFreenectDriver.so NiTE-2.0.0/Samples/Bin/OpenNI2/Drivers/
Copy the OpenNI library into the samples directory, since NiTE depends on OpenNI:
cp OpenNI-Linux-x86-2.2/Redist/libOpenNI2.so NiTE-2.0.0/Samples/Bin
cd NiTE-2.0.0/Samples/Bin/
Verify the tracking function:
./UserViewer
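For reference once the driver does work: Kinect v1 returns 11-bit raw disparity values, not metric depth. A widely cited approximation (commonly attributed to Stéphane Magnenat; this conversion is my addition, not part of the original post) maps raw values to metres:

```python
import numpy as np

def raw_depth_to_meters(raw):
    """Convert Kinect v1 11-bit raw disparity values to metres.

    Uses the common approximation z = 1 / (raw * -0.0030711016 + 3.3309495161).
    Raw values >= 2047 mean "no reading" and are mapped to 0.
    """
    raw = np.asarray(raw, dtype=np.float64)
    depth = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    return np.where(raw >= 2047, 0.0, depth)
```

Applied to a whole 640x480 raw frame, this yields a per-pixel depth map in metres that the pedestrian-tracking code can consume directly.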
2. Since the Kinect driver could not be installed successfully on the TX2, I tried the camera that ships with the Xiaoqiang robot instead. An ordinary camera can be verified directly with a simple OpenCV capture test program; here is the code:

# Variant that also records the stream to a file:
import cv2

cap = cv2.VideoCapture(0)
# On OpenCV 2.x this was cv2.cv.CV_FOURCC(*'XVID'); OpenCV 3+ uses VideoWriter_fourcc
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))  # save the video

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)               # write the frame to the video file
    cv2.imshow('frame', frame)     # one window shows the original video
    cv2.imshow('gray', gray)       # another window shows the processed video
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()

import cv2

cap = cv2.VideoCapture(1)  # index 1: the external camera; 0 is usually the built-in one
while True:
    # get a frame
    ret, frame = cap.read()
    if not ret:
        break
    # show a frame
    cv2.imshow("capture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()


The code works, but opening the Xiaoqiang robot's built-in camera fails and cannot be forced, so an ordinary external camera was used for testing instead.

With the external camera, pedestrian recognition does run; the results look fairly good, though in a few cases the target is lost.


Reposted from blog.csdn.net/Synioe/article/details/82837373