TEXTure environment configuration and running the inference demo

Environment configuration

# Create an environment named texture
conda create -n texture python=3.9 -y

# Activate the environment
conda activate texture

# Install the packages listed in requirements.txt, as required by https://github.com/TEXTurePaper/TEXTurePaper
pip install -r requirements.txt

Install the kaolin package. You may encounter various problems at this step.

pip install kaolin==0.11.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/{TORCH_VER}_{CUDA_VER}.html

It is recommended to go directly to the PyTorch official website and install the latest CUDA version together with the corresponding PyTorch build, which works for all current graphics cards (this avoids Error 1 below when running the Text Conditioned Texture Generation command):

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

Check your installed torch version and CUDA version. In this example they are torch 2.0.1 and CUDA 11.7, i.e. torch-2.0.1_cu117.
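You can also query these values directly from Python instead of relying on screenshots; a minimal check:

import torch

# The PyTorch version and the CUDA version this build was compiled against,
# e.g. "2.0.1+cu117" and "11.7"; use them to pick the matching kaolin wheel index.
print(torch.__version__)
print(torch.version.cuda)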

You can check this web page to see the HTML index that corresponds to your torch and CUDA versions.
So the command to install the kaolin package should be:

pip install kaolin==0.14.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.0.1_cu117.html
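Assuming the install succeeded, a quick import check confirms kaolin loads against the installed PyTorch build:

import kaolin
import torch

# If the wheel matches the installed torch/CUDA build, this imports cleanly.
print(kaolin.__version__)
print(torch.__version__)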

Configure the Hugging Face access token

Before using Hugging Face features or resources, you should log in to your Hugging Face account by running the huggingface-cli login command, which stores your access token in the default location. This access token authenticates you so that you can access private models, datasets, and so on.

Here are the specific steps:

Step 1: Set the token.
Log in to your Hugging Face account first, then create a new token under Settings → Access Tokens: name it whatever you like and click Generate a token. In the example here, a token named texture has been set up.

You can also refer to this page for how to set up a token:
https://huggingface.co/docs/hub/security-tokens

Step 2: Log in from the command line.
Open a terminal window and make sure you can enter commands. Then run the following command, which starts the login process:

huggingface-cli login

When prompted for the token, paste the token obtained in Step 1. The input is not echoed, just like a password prompt on Ubuntu.
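If you prefer to stay inside Python, the huggingface_hub package offers an equivalent login() helper; a minimal sketch (the token string below is a placeholder for the one generated in Step 1):

from huggingface_hub import login

# Stores the token in the same default location as `huggingface-cli login`.
login(token="hf_xxxxxxxxxxxxxxxx")  # placeholder token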

Run the Text Conditioned Texture Generation command

python -m scripts.run_texture --config_path=configs/text_guided/napoleon.yaml

The following errors may appear:

Error 1

/home/aaa/anaconda3/envs/texture/lib/python3.9/site-packages/torch/cuda/__init__.py:146: UserWarning: 
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

Reason for the error: CUDA capability sm_86 means compute capability 8.6. The warning looks like a PyTorch problem, but it is really about the CUDA build that this PyTorch installation was compiled against. In plain terms: the RTX 3090 has compute capability 8.6, while the installed PyTorch build only supports compute capabilities 3.7, 5.0, 6.0 and 7.0.

A card with compute capability 7.0 can run on a build that supports up to capability 7.5, but a card with compute capability 7.5 cannot run on a build that only supports up to 7.0. Likewise, a card with compute capability 8.x cannot run on a build that only supports up to 7.x.

Solution:
Go directly to the PyTorch official website and install the latest CUDA version with the corresponding PyTorch build, which works for all current graphics cards.
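To verify that the new build actually supports the card, you can query PyTorch directly; a minimal check:

import torch

# The card's compute capability and the capabilities this PyTorch build was compiled for.
# After reinstalling from the official index, sm_86 should appear in the list for an RTX 3090.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (8, 6)
print(torch.cuda.get_arch_list())           # e.g. [..., 'sm_86', ...]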

Reference blog

Error 2

After solving the above problem, re-run the command.

An error occurs:

ValueError: Could not find a backend to open `experiments/napoleon/results/step_00010_rgb.mp4` with iomode `wI`.
Based on the extension, the following plugins might add capable backends:
  FFMPEG:  pip install imageio[ffmpeg]
  pyav:  pip install imageio[pyav]
100% painting step 10/10 [02:03<00:00, 12.32s/it]

Solution:
Follow the prompts and install the missing backends:

pip install imageio[ffmpeg]

pip install imageio[pyav]
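Before re-running the full texturing job, you can confirm that an mp4 writer backend is now available with a small throwaway test (the file name test.mp4 is arbitrary):

import numpy as np
import imageio

# Writing a short dummy video exercises the same code path that previously failed
# with "Could not find a backend ... with iomode `wI`".
frame = np.zeros((64, 64, 3), dtype=np.uint8)
writer = imageio.get_writer("test.mp4", fps=10)
for _ in range(5):
    writer.append_data(frame)
writer.close()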

Successful run

After solving the above problems, re-execute the command:
python -m scripts.run_texture --config_path=configs/text_guided/napoleon.yaml
The run completes successfully!

View Results

The results of the run are saved in the experiments folder; experiments/napoleon/mesh/mesh.obj is the three-dimensional mesh model after texture mapping.
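The exact contents depend on the config, but you can list what the run produced with a couple of lines of Python:

from pathlib import Path

# Print every file the run wrote under the experiment directory.
for path in sorted(Path("experiments/napoleon").rglob("*")):
    if path.is_file():
        print(path)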

View the mapped 3D mesh model

You can open the mesh.obj file from its location on disk.

Download the MeshLab software, and then you can directly open mesh.obj to view the textured 3D mesh model.

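If you would rather not install MeshLab, a quick alternative is to load the mesh from Python with the trimesh package (an extra dependency, not part of the TEXTure repo):

import trimesh  # installed separately, e.g. pip install trimesh pyglet

# Load the textured mesh and open an interactive viewer window.
# trimesh picks up the .mtl/texture files sitting next to mesh.obj if they exist.
mesh = trimesh.load("experiments/napoleon/mesh/mesh.obj")
mesh.show()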

Origin: blog.csdn.net/weixin_43845922/article/details/132270873