Depth-estimation consistency under occlusion

July 17, 2019 11:37:05

Paper: Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras

The paper has a few highlights:

1. Moving objects are handled without instance segmentation or tracking; no per-instance splitting is required. The paper does still need a network to predict the regions that might be moving, but compared with instance segmentation that is a much easier problem.

2. Occlusion-aware consistency: the depth predictions are kept consistent even in the presence of occlusion.

3. The camera intrinsics can be learned by the network.

This paper has some real substance; it comes from Google, after all.
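On the third highlight, a common way to make intrinsics learnable is to have the network regress the focal lengths and principal point as fractions of the image size, then scale them into a pinhole matrix K. The parameterization below is a minimal sketch of that idea, not the paper's exact formulation:

```python
import numpy as np

def intrinsics_from_prediction(fx_rel, fy_rel, cx_rel, cy_rel, width, height):
    """Build a pinhole intrinsics matrix K from network outputs.

    fx_rel, fy_rel, cx_rel, cy_rel are hypothetical network predictions,
    expressed as fractions of the image size so the regression targets
    stay in a well-behaved range.
    """
    K = np.array([
        [fx_rel * width, 0.0,             cx_rel * width],
        [0.0,            fy_rel * height, cy_rel * height],
        [0.0,            0.0,             1.0],
    ])
    return K
```

For example, with predictions (1.0, 1.0, 0.5, 0.5) on a 640x480 image, this yields fx = 640 and a principal point at the image center (320, 240).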

 

What I mainly want to revisit here is the second point, which is the most instructive: the problem of predicting depth under occlusion.

 

The idea is easy to understand and fairly easy to implement:

Roughly: two cameras observe the same scene, but because of occlusion, the depths that the left and right cameras predict for the same 3D points are inconsistent in certain regions. What characterizes these occluded regions?

Which regions these are becomes clear once you study the illustration above carefully.

So here we not only warp the reference image into the target view as in SfMLearner, but also warp the depth maps of the two camera positions into each other,

and then exclude the regions described above from part of the loss computation.

My previous blog post gives an example of warping color and depth maps to a virtual viewpoint.
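Warping a depth map between the two camera positions can be sketched as below. This is a minimal numpy version with nearest-pixel splatting and no z-buffering or bilinear interpolation, assuming both views share one intrinsics matrix and the relative pose is known; a real training pipeline would use a differentiable sampler instead:

```python
import numpy as np

def warp_depth_to_other_view(depth, K, R, t):
    """Forward-warp a source-view depth map into a target camera's view.

    depth: (H, W) depth of the source view
    K:     (3, 3) intrinsics, assumed shared by both views
    R, t:  rotation (3, 3) and translation (3,) from source to target
    Returns the target-view depth map, with 0 where nothing projects.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates, shape 3 x (H*W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D points in the source camera frame
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)
    # Transform into the target camera frame and reproject
    pts_t = R @ pts + t[:, None]
    z = pts_t[2]
    proj = K @ pts_t
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)
    out = np.zeros_like(depth)
    ok = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    out[v2[ok], u2[ok]] = z[ok]
    return out
```

With an identity rotation and zero translation the output reproduces the input depth map, which is a handy sanity check for the projection math.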

 

Depth inconsistency arises, first, because of occlusion, and second, because some 3D points are moving.

 

So this approach also helps with predicting the depth of moving objects.

Overall, it makes the loss computation cleaner: the regions where the loss should not be computed are removed.
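The masking step can be sketched as follows. This is my hedged reading of the mechanism, not the paper's exact loss: a pixel is kept only when the depth warped in from the other view does not land behind the surface predicted in this view, since a point that lands behind the surface is occluded there and its error is meaningless:

```python
import numpy as np

def occlusion_aware_mask(warped_depth, pred_depth):
    """Keep pixels where the warped-in depth is not behind this view's
    predicted surface; elsewhere the point is treated as occluded."""
    return warped_depth <= pred_depth

def masked_depth_consistency_loss(warped_depth, pred_depth):
    """L1 depth-consistency loss computed only over visible pixels
    (a hypothetical helper for illustration)."""
    mask = occlusion_aware_mask(warped_depth, pred_depth)
    if not mask.any():
        return 0.0
    return float(np.abs(warped_depth[mask] - pred_depth[mask]).mean())
```

For example, a pixel whose warped depth of 1.5 exceeds the predicted 1.0 is dropped as occluded, while a pixel with warped depth 1.0 against a predicted 2.0 stays in the loss.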

I plan to spend some time later on the code implementation and training...

 


Origin www.cnblogs.com/shepherd2015/p/11200214.html