Projective Urban Texturing: paper reading and build notes

Projective Urban Texturing (Yiangos Georgiou)

Overview

We propose a neural architecture to generate photorealistic textures for urban environments. Our Projective Urban Texturing (PUT) system iteratively generates texture in the target style and detail; the output is a texture atlas that is applied to an input 3D city model. PUT is conditioned on adjacent, previously generated textures to ensure consistency between successively generated textures.

Algorithm implementation

The initial input is only the mesh; the texture atlas starts empty. The pipeline then loops over three steps:

1. Rendering: render the mesh with the current texture atlas (on the first pass the atlas is empty, so the render shows untextured geometry).
2. Texture generation: pass the rendered image to the network (described below), which predicts textures for the rendered view of the mesh.
3. Texture propagation: extract the textures generated in step 2 and write them into the texture atlas.

The loop then returns to step 1: with the updated atlas and the mesh, a partially textured view is rendered and fed back to the network.
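The loop above can be sketched as follows. Note that `render`, `generator`, and `propagate` are trivial stand-ins for illustration only, not the repository's actual functions.

```python
# Minimal runnable sketch of the PUT texturing loop; the real system uses
# Blender renders and a CNN, stubbed out here with dictionaries.

def render(atlas, step):
    # Pretend-render: the "frame" exposes whatever the atlas holds so far
    # (empty on the first iteration, i.e. untextured geometry).
    return {"step": step, "visible": dict(atlas)}

def generator(frame):
    # Stand-in for the texture-generation network: emit a dummy texture
    # for the region first seen at this step.
    return {frame["step"]: f"texture_{frame['step']}"}

def propagate(patch, atlas):
    # Texture propagation: merge newly generated patches into the atlas.
    merged = dict(atlas)
    merged.update(patch)
    return merged

def texture_city(num_iterations):
    atlas = {}                            # texture atlas starts empty
    for step in range(num_iterations):
        frame = render(atlas, step)       # 1. render the current state
        patch = generator(frame)          # 2. network predicts new texture
        atlas = propagate(patch, atlas)   # 3. write it back to the atlas
    return atlas

print(texture_city(3))  # {0: 'texture_0', 1: 'texture_1', 2: 'texture_2'}
```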

Network Brief

The neural network uses the architecture of Johnson et al. [Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016]. The network consists of three convolutions, nine residual blocks, two fractionally-strided convolutions with stride 1/2, and a final convolutional layer that maps features to an RGB image (3×512×256).
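A rough PyTorch sketch of that architecture is below. The channel counts, normalization, and activations are typical defaults for Johnson-style generators, not necessarily the exact choices in the PUT repository.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 3x3 convs with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )
    def forward(self, x):
        return x + self.body(x)

class JohnsonGenerator(nn.Module):
    """Sketch of the architecture described above: three convolutions
    (one stride-1, two stride-2 for downsampling), nine residual blocks,
    two fractionally-strided (transposed) convolutions that upsample by 2,
    and a final conv mapping features back to RGB."""
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            *[ResBlock(256) for _ in range(9)],
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),  # map features to RGB
        )
    def forward(self, x):
        return self.model(x)

# A 3x512x256 input maps to a 3x512x256 RGB output.
x = torch.randn(1, 3, 512, 256)
print(JohnsonGenerator()(x).shape)  # torch.Size([1, 3, 512, 256])
```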
Three losses are used in total:

- Multi-layer patch-wise contrastive loss
- Adversarial loss
- Inter-frame consistency loss

The total objective is a weighted combination of the above losses.
Note: the two training sets (3D geometry renders and real-world panoramas) are unpaired, i.e., there is no geometric or semantic correspondence between them.
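The weighted combination of the three losses can be written as a one-liner; the weight values below are placeholders, not the paper's.

```python
def total_loss(l_contrastive, l_adv, l_consistency,
               w_contrastive=1.0, w_adv=1.0, w_consistency=10.0):
    """Total objective: weighted sum of the contrastive, adversarial,
    and inter-frame consistency losses. Weights are illustrative only."""
    return (w_contrastive * l_contrastive
            + w_adv * l_adv
            + w_consistency * l_consistency)

print(total_loss(0.5, 0.2, 0.1))  # 0.5 + 0.2 + 10.0 * 0.1 = 1.7
```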

Setup

GitHub address: https://github.com/ygeorg01/PUT
Environment: Windows 10/11 + Blender 3.3

environment.yml

name: PUT
channels:
  - pytorch
  - anaconda
  - nvidia
  - conda-forge
  - defaults
dependencies:
  - blas=1.0=mkl
  - brotli=1.0.9=h8ffe710_7
  - brotli-bin=1.0.9=h8ffe710_7
  - bzip2=1.0.8=he774522_0
  - ca-certificates=2023.5.7=h56e8100_0
  - certifi=2023.5.7=pyhd8ed1ab_0
  - charset-normalizer=3.1.0=pyhd8ed1ab_0
  - cuda-cccl=12.1.109=0
  - cuda-cudart=11.8.89=0
  - cuda-cudart-dev=11.8.89=0
  - cuda-cupti=11.8.87=0
  - cuda-libraries=11.8.0=0
  - cuda-libraries-dev=11.8.0=0
  - cuda-nvrtc=11.8.89=0
  - cuda-nvrtc-dev=11.8.89=0
  - cuda-nvtx=11.8.86=0
  - cuda-profiler-api=12.1.105=0
  - cuda-runtime=11.8.0=0
  - dominate=2.8.0=pyhd8ed1ab_0
  - eigen=3.3.7=h59b6b97_1
  - ffmpeg=4.2.2=he774522_0
  - filelock=3.9.0=py310haa95532_0
  - freetype=2.12.1=ha860e81_0
  - giflib=5.2.1=h8cc25b3_3
  - glib=2.69.1=h5dc1a3c_2
  - gst-plugins-base=1.18.5=h9e645db_0
  - gstreamer=1.18.5=hd78058f_0
  - hdf5=1.10.6=h1756f20_1
  - icc_rt=2022.1.0=h6049295_2
  - icu=58.2=ha925a31_3
  - idna=3.4=pyhd8ed1ab_0
  - intel-openmp=2023.1.0=h59b6b97_46319
  - jinja2=3.1.2=py310haa95532_0
  - jpeg=9e=h2bbff1b_1
  - krb5=1.19.4=h5b6d351_0
  - lerc=3.0=hd77b12b_0
  - libblas=3.9.0=8_mkl
  - libbrotlicommon=1.0.9=h8ffe710_7
  - libbrotlidec=1.0.9=h8ffe710_7
  - libbrotlienc=1.0.9=h8ffe710_7
  - libcblas=3.9.0=8_mkl
  - libclang=14.0.6=default_hb5a9fac_1
  - libclang13=14.0.6=default_h8e68704_1
  - libcublas=11.11.3.6=0
  - libcublas-dev=11.11.3.6=0
  - libcufft=10.9.0.58=0
  - libcufft-dev=10.9.0.58=0
  - libcurand=10.3.2.106=0
  - libcurand-dev=10.3.2.106=0
  - libcusolver=11.4.1.48=0
  - libcusolver-dev=11.4.1.48=0
  - libcusparse=11.7.5.86=0
  - libcusparse-dev=11.7.5.86=0
  - libdeflate=1.17=h2bbff1b_0
  - libffi=3.4.4=hd77b12b_0
  - libiconv=1.16=h2bbff1b_2
  - liblapack=3.9.0=8_mkl
  - libnpp=11.8.0.86=0
  - libnpp-dev=11.8.0.86=0
  - libnvjpeg=11.9.0.86=0
  - libnvjpeg-dev=11.9.0.86=0
  - libogg=1.3.5=h2bbff1b_1
  - libpng=1.6.39=h8cc25b3_0
  - libprotobuf=3.20.3=h23ce68f_0
  - libtiff=4.5.0=h6c2663c_2
  - libuv=1.44.2=h2bbff1b_0
  - libvorbis=1.3.7=he774522_0
  - libwebp=1.2.4=hbc33d0d_1
  - libwebp-base=1.2.4=h2bbff1b_1
  - libxml2=2.10.3=h0ad7f3c_0
  - libxslt=1.1.37=h2bbff1b_0
  - lz4-c=1.9.4=h2bbff1b_0
  - markupsafe=2.1.1=py310h2bbff1b_0
  - mkl=2020.4=hb70f87d_311
  - mpmath=1.2.1=py310haa95532_0
  - networkx=2.8.4=py310haa95532_1
  - numpy=1.23.1=py310h8a5b91a_0
  - opencv=4.6.0=py310h4ed8f06_3
  - openssl=1.1.1t=h2bbff1b_0
  - packaging=23.1=pyhd8ed1ab_0
  - pcre=8.45=hd77b12b_0
  - pillow=9.3.0=py310hd77b12b_2
  - pip=23.0.1=py310haa95532_0
  - pysocks=1.7.1=pyh0701188_6
  - python=3.10.6=hbb2ffb3_1
  - python_abi=3.10=2_cp310
  - pytorch=2.0.1=py3.10_cuda11.8_cudnn8_0
  - pytorch-cuda=11.8=h24eeafa_5
  - pytorch-mutex=1.0=cuda
  - qt-main=5.15.2=he8e5bd7_8
  - qt-webengine=5.15.9=hb9a9bb5_5
  - qtwebkit=5.212=h2bbfb41_5
  - requests=2.31.0=pyhd8ed1ab_0
  - setuptools=67.8.0=py310haa95532_0
  - sqlite=3.41.2=h2bbff1b_0
  - sympy=1.11.1=py310haa95532_0
  - tk=8.6.12=h2bbff1b_0
  - torchvision=0.13.1=cpu_py310h378ed51_0
  - typing_extensions=4.5.0=py310haa95532_0
  - tzdata=2023c=h04d1e81_0
  - urllib3=2.0.3=pyhd8ed1ab_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.38.4=py310haa95532_0
  - win_inet_pton=1.1.0=pyhd8ed1ab_6
  - xz=5.4.2=h8cc25b3_0
  - zlib=1.2.13=h8cc25b3_0
  - zstd=1.5.5=hd43e919_0
prefix: E:\anaconda\envs\PUT
conda env create -f environment.yml

requirements.txt

certifi==2023.5.7
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1678108872112/work
dominate @ file:///home/conda/feedstock_root/build_artifacts/dominate_1684707922510/work
filelock @ file:///C:/b/abs_c7yrhs9uz2/croot/filelock_1672387617533/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
Jinja2 @ file:///C:/b/abs_7cdis66kl9/croot/jinja2_1666908141852/work
MarkupSafe @ file:///C:/ci/markupsafe_1654508036328/work
mpmath==1.2.1
networkx @ file:///C:/b/abs_b935xy_9g6/croot/networkx_1678964342510/work
numpy @ file:///D:/bld/numpy_1657483944523/work
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1681337016113/work
Pillow==9.3.0
PySocks @ file:///D:/bld/pysocks_1661604991356/work
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1684774241324/work
sympy @ file:///C:/b/abs_95fbf1z7n6/croot/sympy_1668202411612/work
torch==2.0.1
torchvision @ file:///C:/b/abs_88nq4vr9ec/croot/torchvision_1670313558208/work
typing_extensions @ file:///C:/b/abs_a1bb332wcs/croot/typing_extensions_1681939523095/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1686156552494/work
win-inet-pton @ file:///D:/bld/win_inet_pton_1667051142467/work
pip install -r requirements.txt

Note:
--scene_path ../scenes/005 --model_name consistency --blend custom
Remember to set these command-line arguments before running (e.g., in the PyCharm run configuration).

On Windows, change the path separator from / to \; Python's str.replace method works for this.
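For example (the path here is just the sample scene path from above):

```python
# Convert POSIX-style separators to Windows ones with str.replace.
# (os.path.join or pathlib is usually safer, but a plain replace
# is enough for these scripts.)
scene_path = "../scenes/005"
windows_path = scene_path.replace("/", "\\")
print(windows_path)  # ..\scenes\005
```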

Blender is launched from the command line via os.system. If this step fails, the rendered frame may come out completely black.

# Open the blend file headless, run change_UV.py, and render one frame;
# arguments after "--" are passed through to the script.
os.system("E:\\blender.exe %s -b -P change_UV.py --render-output %s/##### --render-frame %d  -- pathToImage %s" % (
    blender_file, consistency_out_dir, frame_number, UV_path))
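A subprocess-based alternative surfaces Blender's exit code instead of silently producing a black frame; this is a sketch, and the paths/arguments mirror the os.system call above.

```python
import subprocess

def build_blender_cmd(blender_exe, blend_file, out_dir, frame_number, uv_path):
    # Assemble the same command line as the os.system call above,
    # as an argument list (no shell-quoting issues with spaces in paths).
    return [
        blender_exe, blend_file,
        "-b",                                   # run headless (no GUI)
        "-P", "change_UV.py",                   # script executed inside Blender
        "--render-output", f"{out_dir}/#####",
        "--render-frame", str(frame_number),
        "--", "pathToImage", uv_path,           # args after "--" go to the script
    ]

def render_frame(*args):
    # check=True raises CalledProcessError if Blender exits with an error.
    subprocess.run(build_blender_cmd(*args), check=True)

print(build_blender_cmd("E:\\blender.exe", "scene.blend", "out", 3, "uv.png")[2])  # -b
```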

The script path passed to Blender is hard-coded and may need to be modified.

In the Blender scene hierarchy, the camera is placed under the Curve object.

The paper mentions using global illumination, but the Blender setup here does not provide it; either install an add-on for it, or set the entire world background to white light.
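The white-background workaround can be applied from Blender's Scripting workspace; this snippet requires Blender's bundled `bpy` module and assumes the default world node names.

```python
# Run inside Blender (Scripting tab): set the world background to uniform
# white light as a crude stand-in for global illumination.
import bpy

world = bpy.context.scene.world
world.use_nodes = True
bg = world.node_tree.nodes["Background"]        # default world Background node
bg.inputs["Color"].default_value = (1.0, 1.0, 1.0, 1.0)  # pure white
bg.inputs["Strength"].default_value = 1.0
```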

Partial results:

![partial result](https://img-blog.csdnimg.cn/403e087bf69d417396ef48980127afb2.png)


Intermediate and final results for UV textures:
If this was helpful, likes, favorites, and follows are welcome.

Origin blog.csdn.net/qq_44324007/article/details/127963311