A new AI algorithm to play with: take on the two-dimensional "cute girl" challenge~

Hi, my name is Jack.

It's been a while since I posted a fun AI tutorial, so here is one today.

You only need a single picture or video to generate a corresponding two-dimensional "waifu".

Take a look:

Here's a video, which gives an even better feel for it:

[Video: GANsNRoses demo]

The two-dimensional "waifu" moves along with the motion in the video.

Earlier I wrote a tutorial on the First Order Motion Model algorithm:

Making pictures move: Trump and the Mona Lisa sing soulfully

The functionality looks similar, but the algorithm implementation is different.

For driving anime images with real-person footage, that algorithm gives better results:

The correct way to open the door to the two-dimensional world

Today's algorithm for generating and controlling anime faces is based on a GAN: with just a single input image, it handles both generation and control.

GANsNRoses

The algorithm is called GANsNRoses, and it is a style transfer algorithm.

Simply put, it takes the content code of a face image as input and, combined with a variety of randomly sampled style codes, outputs diverse anime images.

The idea behind the algorithm is not complicated:

The generator is responsible for producing anime faces, and the discriminator is responsible for judging whether an image is a real anime face.

The generator is split into a content encoder c and a style encoder s.

The style encoder s is responsible for the overall style, such as hair style, face position, hair color, and so on.

The content encoder c is responsible for the details, such as the head tilt angle.
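To make this split more concrete, here is a minimal PyTorch sketch of the idea. The module names, layer sizes, and dimensions below are made up for illustration and are not the actual GANsNRoses code; they only show how one content code combined with several randomly sampled style codes yields several different anime faces that share the same pose.

# Illustrative sketch of the content/style split described above.
# These classes are NOT the real GANsNRoses modules; they only show the
# data flow: one content code + many random style codes -> many anime faces.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps a face image to a content code (pose, head tilt, ...)."""
    def __init__(self, content_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, content_dim),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an anime image to a style code (hair style, hair color, ...).
    Only needed when extracting the style of an existing anime image."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, style_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Combines a content code and a style code into an anime face image."""
    def __init__(self, content_dim=512, style_dim=8, out_size=64):
        super().__init__()
        self.out_size = out_size
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 3 * out_size * out_size),
            nn.Tanh(),
        )
    def forward(self, content, style):
        x = self.net(torch.cat([content, style], dim=1))
        return x.view(-1, 3, self.out_size, self.out_size)

# At test time the style code can simply be sampled at random, so a single
# face photo produces many different "waifus" with the same pose.
# (During training, a discriminator judges whether the decoder's output
# looks like a real anime face.)
content_enc, decoder = ContentEncoder(), Decoder()
face = torch.randn(1, 3, 256, 256)                    # stand-in for a face photo
content = content_enc(face)
anime_faces = [decoder(content, torch.randn(1, 8)) for _ in range(4)]
print([tuple(f.shape) for f in anime_faces])          # four styles, one pose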

For more details, you can read the paper directly:

https://arxiv.org/pdf/2106.06561.pdf

Testing

There are currently three ways to try it:

  • Web Demo

  • Colab

  • Build locally

Web Demo

The Web Demo is the easiest to use: just upload a picture.

https://gradio.app/g/AK391/GANsNRoses

However, it seems to support only image generation, not video.

I tested it on the Dragon Queen (Daenerys); if she saw the result, she would probably cry her eyes out.

Colab

The Colab notebook is also very easy to run, as long as you can access Google services (which may require a VPN in some regions).

https://colab.research.google.com/github/mchong6/GANsNRoses/blob/main/inference_colab.ipynb

It saves you the trouble of setting up an environment: just run the cells in order, and you can test both pictures and videos.

Build locally

The main work is setting up the environment: create a virtual environment with Conda, then install the third-party libraries inside it:

conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=<CUDA_VERSION>
pip install tqdm gdown kornia scipy opencv-python dlib moviepy lpips aubio ninja

For how to use Conda, you can refer to this article:

Stop messing with the development environment, build it once and for all
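Once the installation above finishes, a quick sanity check in Python confirms that PyTorch is installed and can see the GPU:

# Verify the PyTorch install and CUDA visibility.
import torch
print(torch.__version__)           # expect 1.7.1
print(torch.cuda.is_available())   # True if cudatoolkit matches your driver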

Then download the model weight file. The weights are fairly large (about 1.6 GB), and downloading from Google Drive can be slow, so it is recommended to test directly on Colab instead.
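If you do run locally, the workflow mirrors the Colab notebook. As a rough illustration of the video case mentioned earlier, it is natural to keep one style code fixed and re-encode only the content for every frame, which is exactly why the anime face follows the video's motion. The toy loop below reuses the hypothetical ContentEncoder / Decoder classes from the sketch above; the real repository does this through its own scripts and notebook.

# Toy frame-by-frame loop: the style code stays fixed, the content code is
# re-computed for every frame, so the anime face follows the video's motion.
# Reuses the illustrative ContentEncoder / Decoder sketched earlier; this is
# not the actual GANsNRoses inference script.
import cv2
import torch

content_enc, decoder = ContentEncoder().eval(), Decoder().eval()
style = torch.randn(1, 8)               # one fixed style = one consistent "waifu"

cap = cv2.VideoCapture("input.mp4")     # example input path
frames_out = []
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # BGR uint8 frame -> normalized float tensor of shape (1, 3, 256, 256)
        frame = cv2.resize(frame, (256, 256))
        x = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1)
        x = (x.float() / 127.5 - 1.0).unsqueeze(0)
        anime = decoder(content_enc(x), style)   # content changes, style does not
        frames_out.append(anime)
cap.release()
print(f"translated {len(frames_out)} frames")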

Summary

There are actually quite a few algorithms of this kind, and judging by the results alone, there is still plenty of room for improvement.

Finally, I saw that an uploader on Bilibili used it to make a guichu (meme remix) video, a new "Treasure Island" edition:

[Video: GANsNRoses demo 2]

For now, algorithms like this are actually quite good for making guichu videos.


Finally, here is a set of data structure notes that helped me get offers from BAT and other top-tier companies. It was written by a Google engineer and is very useful for students whose algorithm skills are weak or who want to improve:

LeetCode notes from Google and Alibaba engineers

There is also the BAT algorithm engineer learning path I have compiled, with books and videos, a complete learning route, and instructions; it will definitely help anyone who wants to become an algorithm engineer:

How I became an algorithm engineer: a super detailed learning path


I'm Jack, see you next time.
