Robot Vision Project: Visual Detection and Recognition + Robot Following (1)

Posting an update on the robot vision detection-and-following project I worked on over the summer. Some of my notes are collected in this blog; feel free to reach out if you'd like to discuss.

The goal of the project is to build a visual detection system on a robot: Kinect + ROS + deep-learning object detection + pedestrian recognition (OpenCV's SVM-based pedestrian detector and the Darknet YOLOv3-tiny network) + target tracking algorithms (mainly filtering algorithms).

Here come the updates...

Goal: run human detection and tracking algorithms on the TX2.

1. Flash the board (install JetPack)

2. Testing

Test CUDA:

/home/ubuntu/NVIDIA_CUDA-<version>_Samples/bin/aarch64/linux/release/oceanFFT

(Note: the TX2 is 64-bit, so the prebuilt samples live under aarch64, not armv7l.)
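If running the graphical oceanFFT sample over SSH is inconvenient, a quicker alternative is to query the CUDA runtime from Python via ctypes. A minimal sketch, assuming libcudart.so is resolvable (e.g. LD_LIBRARY_PATH includes the JetPack CUDA lib directory):

```python
# Minimal CUDA sanity check via ctypes (assumes libcudart.so is on the loader path).
import ctypes

cudart = ctypes.CDLL("libcudart.so")
count = ctypes.c_int()
status = cudart.cudaGetDeviceCount(ctypes.byref(count))
print("cudaGetDeviceCount status:", status)  # 0 == cudaSuccess
print("CUDA devices found:", count.value)    # expect 1 on the TX2
```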

Test the Multimedia API:

nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/backend$ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel --trt-forcefp32 0 --trt-proc-interval 1 -fps 10

It keeps reporting that the input arguments are wrong. Bug!!!

I'll come back and test this later if needed.

3. Build and install OpenCV 3.1

https://github.com/duinodu/scripts/shell/install/install_opencv.sh

This is fairly time-consuming; a quicker alternative: sudo apt-get install libopencv-dev

4. Testing

https://github.com/duinodu/testopencvinstall
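Since the project ultimately relies on OpenCV's SVM pedestrian detector (see the project overview above), a useful end-to-end check is to run it on a test image. A minimal sketch; "test.jpg" is a placeholder path, not part of the repo above:

```python
# Minimal OpenCV sanity check: report the version, then run HOG + the default
# SVM people detector on a placeholder image.
import cv2

print(cv2.__version__)  # should print 3.1.x after the build above

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("test.jpg")  # placeholder: any image containing people
assert img is not None, "put a test image next to this script"
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("result.jpg", img)
print("detected %d pedestrian(s)" % len(rects))
```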

5. Install MXNet

https://github.com/duinodu/scripts/shell/install/jetpacktx2/installmxnet_tx2.sh

pip install mxnet-jetson-tx2 ???

6. Testing

TODO
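Until a proper test is written here, a minimal GPU sanity check could look like the following. A sketch, assuming the wheel above actually installed a CUDA-enabled MXNet build:

```python
# Minimal MXNet sanity check: allocate on the TX2's GPU and force a computation.
import mxnet as mx

x = mx.nd.ones((2, 3), ctx=mx.gpu(0))
y = (x * 2 + 1).asnumpy()  # asnumpy() blocks until the GPU computation finishes
print(y)                   # expect a 2x3 array of 3.0
```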

7. Install TensorFlow

8. Testing

TODO
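Likewise a placeholder here; a minimal check in the TF 1.x style that the JetPack-era wheels shipped (a sketch; TF 2.x renames these APIs):

```python
# Minimal TensorFlow 1.x sanity check: run a trivial graph and log device
# placement so you can see whether ops land on the GPU.
import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(a + b))  # expect [4. 6.]
```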

9. Install TensorRT

  • Download the *.tar.gz from the NVIDIA website

  • tar -zxvf *.tar.gz

  • Set PATH, LD_LIBRARY_PATH, TENSORRT_INC_DIR, and TENSORRT_LIB_DIR (a quick check of these variables is sketched after this list)

  • pip install the tensorrt-**.whl in the python directory

  • pip install the uff-*.whl in the uff directory

  • pip install numpy -U
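For the third step above, a quick way to confirm the variables are visible to the processes you launch (a minimal sketch; adjust the names if your shell profile uses different ones):

```python
# Print the TensorRT-related environment variables set in the step above.
import os

for var in ("PATH", "LD_LIBRARY_PATH", "TENSORRT_INC_DIR", "TENSORRT_LIB_DIR"):
    print("%s=%s" % (var, os.environ.get(var, "<unset>")))
```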

Read through the pitfalls below (these come from NVIDIA's TensorRT release notes):

  • If you are installing TensorRT from a tar package (instead of using the .deb packages and apt-get), you will need to update the custom_plugins example to point to the location that the tar package was installed into. For example, in the <PYTHON_INSTALL_PATH>/tensorrt/examples/custom_layers/tensorrtplugins/setup.py file, change TENSORRT_INC_DIR to point to the <TAR_INSTALL_ROOT>/include directory and TENSORRT_LIB_DIR to point to the <TAR_INSTALL_ROOT>/lib64 directory.

The PyTorch based sample will not work with the CUDA 9 Toolkit. It will only work with the CUDA 8 Toolkit.

When using the TensorRT APIs from Python, import the tensorflow and uff modules before importing the tensorrt module. This is required to avoid a potential namespace conflict with the protobuf library as well as the cuDNN version. In a future update, the modules will be fixed to allow the loading of these Python modules to be in an arbitrary order. (A minimal import-order check is sketched after these notes.)

The TensorRT Python APIs are only supported on x86 based systems. Some installation packages for ARM based systems may contain Python .whl files. Do not install these on the ARM systems, as they will not function.

The TensorRT product version is incremented from 2.1 to 3.0.1 because we added major new functionality to the product. The libnvinfer package version number was incremented from 3.0.2 to 4.0 because we made non-backward compatible changes to the application programming interface.

The TensorRT debian package name was simplified in this release to tensorrt. In previous releases, the product version was used as a suffix, for example tensorrt-2.1.2.

If you have trouble installing the TensorRT Python modules on Ubuntu 14.04, refer to the steps on installing swig to resolve the issue. For installation instructions, see Unix Installation.

The Flatten layer can only be placed in front of the Fully Connected layer. This means that the Flatten layer can only be used if its output is directly fed to a Fully Connected layer.

The Squeeze layer only implements the binary squeeze (removing specific size 1 dimensions). The batch dimension cannot be removed.

If you see the numpy.core.multiarray failed to import error message, upgrade your NumPy to version 1.13.0 or greater.

For Ubuntu 14.04, use pip version >= 9.0.1 to get all the dependencies installed.
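Following the import-order note above, a minimal check on the x86 machine (per the notes, the Python APIs are not supported on ARM) might look like this. A sketch; it only verifies that the three modules load together without a conflict:

```python
# Import order matters (see the note above): tensorflow and uff first, then
# tensorrt, to avoid the protobuf/cuDNN namespace conflict. x86 only.
import tensorflow  # noqa: F401
import uff         # noqa: F401
import tensorrt    # noqa: F401

print("tensorflow/uff/tensorrt imported without conflict")
```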

10. Testing

Run the bundled test samples.

I installed TensorRT on a GTX 1080 machine. Testing sampleFasterRCNN requires downloading model files, which needs a VPN from here !-_-!

ONNX test

TODO

NVCaffe test

TODO

The crux of this problem is the trade-off between an embedded device's compute speed and the algorithm's accuracy. The first step is to get a qualitative sense of how many network layers the TX2 can actually run.
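One way to start building that intuition is a crude micro-benchmark: time a forward pass through convolutional stacks of increasing depth. A rough sketch, assuming the CUDA-enabled MXNet build from step 5; the layer counts, channel width, and input size are arbitrary illustration choices, not measured recommendations:

```python
# Crude depth-vs-latency probe: time inference through conv stacks of
# increasing depth to get a qualitative feel for what the TX2 can sustain.
import time
import mxnet as mx
from mxnet.gluon import nn

for depth in (4, 8, 16, 32):
    net = nn.HybridSequential()
    for _ in range(depth):
        net.add(nn.Conv2D(channels=32, kernel_size=3, padding=1, activation='relu'))
    net.initialize(ctx=mx.gpu(0))
    net.hybridize()
    x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=mx.gpu(0))
    net(x).wait_to_read()          # warm-up: triggers graph compilation
    start = time.time()
    for _ in range(10):
        net(x).wait_to_read()      # block until the forward pass finishes
    print('%2d conv layers: %.1f ms/frame' % (depth, (time.time() - start) * 100))
```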


Reposted from blog.csdn.net/Synioe/article/details/82793053