Depth Estimation with Artificial Intelligence

Let's get going with AI!!!

Here are the basic steps:

Find the Depth Estimation models on the Hugging Face website, as shown below:

(Link: Hugging Face – The AI community building the future.)

(Accessing Hugging Face requires getting over the firewall! Whether you do that is up to you...)

The following content is excerpted from the Hugging Face documentation:

Monocular depth estimation

Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint.

Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture.

The task illustrated in this tutorial is supported by the following model architectures:

DPT, GLPN
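If you would rather try the DPT architecture instead of the GLPN checkpoint used below, a minimal sketch could look like the following (the "Intel/dpt-large" checkpoint name is my own assumption, not part of the original tutorial; check the Hub for current DPT depth-estimation checkpoints):

from transformers import pipeline

# Hypothetical alternative: load a DPT depth-estimation checkpoint instead of GLPN.
# "Intel/dpt-large" is assumed to be available on the Hugging Face Hub; verify before use.
dpt_estimator = pipeline("depth-estimation", model="Intel/dpt-large")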

In this guide you’ll learn how to:

  • create a depth estimation pipeline
  • run depth estimation inference by hand

Before you begin, make sure you have all the necessary libraries installed:

pip install -q transformers
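The later steps also use Pillow, requests, and PyTorch. If they are not already in your environment, you could install them alongside transformers (this combined command is my own suggestion, not from the original tutorial):

pip install -q transformers torch pillow requests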

Depth estimation pipeline

The simplest way to try out inference with a model supporting depth estimation is to use the corresponding pipeline(). Instantiate a pipeline from a checkpoint on the Hugging Face Hub:

from transformers import pipeline

checkpoint = "vinvino02/glpn-nyu"
depth_estimator = pipeline("depth-estimation", model=checkpoint)
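If a GPU is available, the pipeline can also be placed on it through the device argument; this is a sketch that assumes a single CUDA device is present, and you can omit device to stay on the CPU:

# Optional: run the pipeline on the first CUDA GPU instead of the CPU.
# Assumes a CUDA-capable device is available; omit `device` to keep the default CPU execution.
depth_estimator = pipeline("depth-estimation", model=checkpoint, device=0)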

 Next, choose an image to analyze:

from PIL import Image
import requests

url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
image = Image.open(requests.get(url, stream=True).raw)
image

(Image: photo of a busy street)

 Pass the image to the pipeline.

predictions = depth_estimator(image)

The pipeline returns a dictionary with two entries. The first one, called predicted_depth, is a tensor with the values being the depth expressed in meters for each pixel. The second one, depth, is a PIL image that visualizes the depth estimation result.
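To confirm what the pipeline returned, a quick check of both entries could look like this (a sketch; the exact tensor shape depends on the checkpoint and the input image):

# `predicted_depth` is a PyTorch tensor, `depth` is a PIL image.
print(predictions["predicted_depth"].shape)  # e.g. torch.Size([1, H, W]); exact size depends on the model
print(predictions["depth"].size)             # (width, height) of the visualization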

Let’s take a look at the visualized result:

predictions["depth"]

(Image: depth estimation visualization)
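If you want to keep this visualization, the PIL image returned by the pipeline can be written straight to disk (the filename here is just an example):

# Save the depth visualization returned by the pipeline as a PNG file.
predictions["depth"].save("depth_visualization.png")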

Depth estimation inference by hand

Now that you’ve seen how to use the depth estimation pipeline, let’s see how we can replicate the same result by hand.

Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub. Here we’ll use the same checkpoint as before:

from transformers import AutoImageProcessor, AutoModelForDepthEstimation

checkpoint = "vinvino02/glpn-nyu"

image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)

Prepare the image input for the model using the image_processor, which takes care of the necessary image transformations such as resizing and normalization:

pixel_values = image_processor(image, return_tensors="pt").pixel_values
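As a quick sanity check on the processed input (a sketch; the exact spatial size depends on the processor's resizing rules), you can inspect the tensor:

# pixel_values is a 4-D tensor: (batch, channels, height, width)
print(pixel_values.shape)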

Pass the prepared inputs through the model:

import torch

with torch.no_grad():
    outputs = model(pixel_values)
    predicted_depth = outputs.predicted_depth

Visualize the results:

import numpy as np

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
output = prediction.numpy()

formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth
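Beyond the raw grayscale image above, the depth map can also be saved to disk or rendered with a colormap for easier inspection. This is a minimal sketch assuming matplotlib is installed (the filenames and colormap choice are examples, not part of the original tutorial):

import matplotlib.pyplot as plt

# Save the grayscale depth image produced above.
depth.save("depth_glpn.png")

# Optionally render the raw depth values with a colormap.
plt.imsave("depth_colormap.png", output, cmap="plasma")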

Disclaimer: A Chinese version of this article can be read in the identically titled post by CSDN blogger "XBL0430".

Regarding copyright, that article was published with my consent.

Article link: 人工智能深度估计技术(中文翻译版)_XBL0430的博客-CSDN博客

(By the way, they put a lot of effort into translating my article, so please give them your support!)


The content of this article was compiled by the author; there may be errors or omissions, and suggestions are welcome!

Remember to like and follow~


Reposted from blog.csdn.net/zyl_coder/article/details/132429177