[3D Reconstruction] [Deep Learning] [Dataset] Create a personal Gen6D test data set based on COLMAP

Tip: I have recently started research on 3D reconstruction. I am recording the relevant knowledge points here and sharing solutions to the problems I ran into while learning.



Preface

Gen6D is a pose estimation algorithm that generalizes to unseen objects without requiring CAD models or renderable models. This article builds a personal test dataset with COLMAP from video captured with a mobile phone. official reference


Download and install COLMAP

Download the COLMAP software [ Download address ]. This article uses the CUDA build for Windows:

After unzipping, double-click COLMAP.bat. If the following interface appears, the software is installed successfully:

Place the COLMAP folder in a suitable location, rename it if you like, and then add it to the PATH environment variable:


Capture video and split it into images

Store the captured video under the Gen6D/data/custom/video path:

The target object must remain still while shooting. If the object itself has little texture, the background texture must be rich enough.

Split the captured video into images:

# Activate the virtual environment
conda activate gen6d
# Change into the Gen6D project directory
cd XXX
# Split the video into images
# --transpose fixes upside-down images
python prepare.py --action video2image --input data/custom/video/XXX.mp4 --output data/custom/XXX/images --frame_inter 10 --image_size 960 --transpose
# e.g.: python prepare.py --action video2image --input data/custom/video/people.mp4 --output data/custom/people/images --frame_inter 10 --image_size 960

Check the extracted images under the Gen6D/data/custom/people/images path:

The images have been scaled down. If the image size is too large, a series of problems such as running out of memory can occur.
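The --image_size 960 option appears to rescale each frame so that its longer side becomes 960 pixels while keeping the aspect ratio (an assumption inferred from the output sizes, not from prepare.py's source). A minimal sketch of that computation:

```python
def scaled_size(width, height, max_side=960):
    """Scale (width, height) so the longer side equals max_side, keeping aspect ratio."""
    scale = max_side / max(width, height)
    return round(width * scale), round(height * scale)

# A 1080x1920 portrait phone frame would come out as 540x960.
print(scaled_size(1080, 1920))
```

Downscaling at extraction time keeps COLMAP's feature matching and the later Gen6D inference within reasonable memory limits.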


Recover camera poses

# If COLMAP has not been added to the environment variables, the full path to colmap.exe is required
python prepare.py --action sfm --database_name custom/XXX --colmap colmap.exe
# e.g.: python prepare.py --action sfm --database_name custom/people --colmap colmap.exe
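If you are unsure whether COLMAP is on the PATH, the executable can be resolved before running the sfm step. This is a hypothetical helper for illustration, not part of prepare.py:

```python
import shutil

def resolve_colmap(candidates=("colmap", "colmap.exe"), fallback=None):
    """Return the first COLMAP executable found on PATH, else the given fallback path."""
    for name in candidates:
        path = shutil.which(name)  # None if the name is not on PATH
        if path:
            return path
    return fallback
```

The returned string is what you would pass to prepare.py via --colmap; the fallback would be the full path to colmap.exe from the unzipped folder.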


Manually specify the object area

Install CloudCompare

Install CloudCompare: click Next -> customize the installation path -> keep the default selections for the rest.

Save target object point cloud

Use CloudCompare to open Gen6D/data/custom/people/colmap/pointcloud.ply:

Manually specify the target area: crop out the target object's point cloud.

Export the cropped target object point cloud: deselect the remaining part, keep only the segmented object, and save it in the Gen6D/data/custom/people/ directory in binary format as object_point_cloud.ply.
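To sanity-check the exported file, the PLY header (which is plain text even in binary files) can be inspected with a few lines of stdlib Python. This is a sketch for verification only, not part of Gen6D:

```python
def ply_vertex_count(header_bytes):
    """Parse a PLY header and return its vertex count, or None if not found."""
    for line in header_bytes.decode("ascii", errors="ignore").splitlines():
        parts = line.split()
        if parts[:2] == ["element", "vertex"]:
            return int(parts[2])
        if line.strip() == "end_header":
            break
    return None

# Usage: read the first ~1 KB of the saved file and check the point count.
# with open("data/custom/people/object_point_cloud.ply", "rb") as f:
#     print(ply_vertex_count(f.read(1024)))
```

A vertex count of zero (or None) would mean the crop or export went wrong before moving on to the next step.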

Specify the positive X and Z directions of the target object

Use CloudCompare to open the just-saved Gen6D/data/custom/people/object_point_cloud.ply.

  1. Positive Z direction: fit a plane and use its normal as the positive Z vector.
    First crop an area again: select only the segmented part -> Tools -> Fit -> Plane to fit the horizontal plane the target object sits on, and use its normal as the positive Z vector.
    The plane is fitted to all selected points, so the point selection is critical. The selection in the picture below is very bad: it is not the plane the object sits on at all.
  2. Positive X direction: pick two points and use the vector between them as the positive X direction.

    Create meta_info.txt in the Gen6D/data/custom/people/ directory and record the positive X and positive Z vectors (it seems you cannot copy-paste; they have to be typed by hand):
    0.090675 -1.660917 -0.227757
    0.560382 -0.0676446 0.825467
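The fitted normal and the two-point vector are only approximately perpendicular. As an illustration (not necessarily how Gen6D parses meta_info.txt internally), such a pair can be turned into an orthonormal object coordinate frame by Gram-Schmidt orthogonalization plus a cross product:

```python
import math

def object_frame(z_dir, x_dir):
    """Build an orthonormal right-handed frame from an approximate Z (plane normal)
    and an approximate X (vector between two picked points)."""
    def norm(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    z = norm(z_dir)
    # Remove the Z component from X so the two axes are exactly orthogonal.
    x = norm(tuple(xc - dot(x_dir, z) * zc for xc, zc in zip(x_dir, z)))
    y = cross(z, x)  # completes the right-handed frame
    return x, y, z
```

This is also a quick way to check the two hand-typed vectors: if their dot product is far from zero, the plane fit or the point picking probably went wrong.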
    

Summary

This article explained, as simply and thoroughly as possible, the process of producing a personal Gen6D test dataset with COLMAP from videos taken with a mobile phone.

Origin blog.csdn.net/yangyu0515/article/details/132695616