Use Python + AI to Bring Children's Hand-Painted Drawings to Life (Complete Source Code Attached)

Hello everyone. Today I will introduce a very interesting project that uses AI recognition to make children's hand-drawn figures dance.

In just a few minutes, it can automatically generate animations of children's hand-drawn humanoid characters (that is, figures with two arms, two legs, and so on), and the resulting animations are remarkably lifelike.

The characters can not only dance, but also perform taekwondo kicks and other human movements.


Project Introduction

This project has so far only been tested on macOS and Ubuntu; it will run into problems on Windows.

I am using Ubuntu 20.04, and it works essentially without issues.

Roughly speaking, it works through the following steps.

  • Detect the human figure with object detection

  • Lift the figure out of the scene using a character mask

  • Prepare it for animation through "rigging"

  • Retarget 3D motion-capture data to animate the 2D character
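The steps above can be sketched as a toy pipeline. Everything here is an illustrative placeholder of my own (function names, data shapes, the joint list), not the project's actual API:

```python
# A toy sketch of the four-step pipeline. All names and data shapes are
# placeholders invented for illustration, not the AnimatedDrawings API.

def detect_figure(image):
    """Step 1: object detection finds a bounding box around the drawn figure."""
    h, w = image["height"], image["width"]
    return {"x": w // 4, "y": h // 4, "w": w // 2, "h": h // 2}  # pretend box

def mask_figure(image, box):
    """Step 2: segmentation lifts the figure out of the scene as a mask."""
    return {"box": box, "pixels": "binary mask of the figure"}

def rig_character(mask):
    """Step 3: rigging attaches a joint skeleton to the masked figure."""
    joints = ["root", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]
    return {"mask": mask, "joints": joints}

def retarget_motion(rig, motion_clip, n_frames=3):
    """Step 4: motion-capture data is retargeted onto the 2D skeleton, frame by frame."""
    return [{"frame": i, "pose": {j: motion_clip for j in rig["joints"]}}
            for i in range(n_frames)]

image = {"height": 400, "width": 300}
frames = retarget_motion(rig_character(mask_figure(image, detect_figure(image))), "dance")
print(len(frames))  # 3
```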


Next, Xiao F will show you how to deploy it.

Project Deployment - Python

Technology is best learned through sharing and communication; working behind closed doors is not recommended. One person can go fast, but a group can go farther.

The complete code for this article can be obtained as follows:

Method ①: add WeChat ID dkl88194, with the note "from CSDN + AI painting"
Method ②: search WeChat for the official account "Python learning and data mining" and reply "AI painting" in the background

First, install Anaconda (version 4.11.0) to make it easy to create a Python environment.

You can look up how to install Anaconda yourself; it is fairly straightforward.

After installation, create a virtual environment, download the project, and install the required dependencies.

# Create a virtual environment
conda create --name animated_drawings python=3.8.13

# Activate the environment
conda activate animated_drawings

# Clone the project
git clone https://github.com/facebookresearch/AnimatedDrawings.git

# Enter the project directory
cd AnimatedDrawings

# Install the dependencies
pip install -e .

If git is not installed and you cannot clone the project, you can use the files provided by Xiao F directly.

Project Deployment - Run

1. Quick start

Now that everything is set up, let's animate the drawing!

Use the following code in the terminal.

(animated_drawings) AnimatedDrawings % python

# Run the following code in the Python interpreter
from animated_drawings import render
render.start('./examples/config/mvc/interactive_window_example.yaml')

Let's take a look at the content of this yaml configuration file.


The configuration references the following files:

character_cfg - the character file

motion_cfg - the motion file

retarget_cfg - the retargeting file

Among them, the character file can be any of the provided types; to see the exact joint layout of a character, check its joint_overlay.png.
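For orientation, a minimal mvc scene config looks roughly like this (the paths and key names below are my best recollection of the repo's examples and may differ slightly from the actual interactive_window_example.yaml):

```yaml
scene:
  ANIMATED_CHARACTERS:
    - character_cfg: examples/characters/char1/char_cfg.yaml   # the drawing and its joints
      motion_cfg: examples/config/motion/dab.yaml              # which motion clip to play
      retarget_cfg: examples/config/retarget/fair1_ppf.yaml    # how motion maps onto the joints
```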


The motion files cover several categories, such as jazz dance and jumping jacks.


The retargeting files likewise come in several categories, covering different kinds of characters, not just human-like ones.

There are also four-legged pigs and six-armed beetles~

There is a retargeting file for four-legged characters, and another for six-armed characters.

Finally, run the command above, and the animation will play in a pop-up window on the desktop.

Use the space bar to pause/unpause the scene, the arrow keys to step backward and forward, and the q key to close the window.


The original picture is like this.


Isn't it fun to turn a child's masterpiece into a lively animated character?

If you want to change the character, motion, or scene, follow the notes above and replace the corresponding entries in the interactive_window_example.yaml file.

2. Export an MP4 video

If you want to save the animation as a video file instead of viewing it in a window, you can run the following code in the Python interpreter.

from animated_drawings import render

# Export an MP4 video
render.start('./examples/config/mvc/export_mp4_example.yaml')

This configuration file has a few more parameters than the previous one.
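From memory, the extra parameters concern the render mode and output path, roughly like this (the key names are an assumption and may differ slightly from the actual export_mp4_example.yaml):

```yaml
controller:
  MODE: video_render              # render to a file instead of an interactive window
  OUTPUT_VIDEO_PATH: ./video.mp4  # where the MP4 is written
```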


The result is as follows.

3. Export a transparent GIF

Perhaps you want a transparent GIF instead of an MP4 file.

You can copy and run the following code in the Python interpreter.

from animated_drawings import render

# Export a GIF file
render.start('./examples/config/mvc/export_gif_example.yaml')

The configuration file is similar, differing little from the MP4 export settings.


4. Headless rendering

If you want to generate video headlessly (for example on a remote server accessed via ssh), you can add the following code to the configuration file.

view:
  USE_MESA: True

5. Animate your own drawing

After all these examples, you must be eager to know how to animate your own drawings.

Xiao F will now explain how to do it.

We need to generate a character file for each new drawing, and the authors provide a convenient way to do this.

The authors trained a detector and pose estimator for drawn human figures, and provide scripts that automatically generate the annotation files from the model predictions.


As for the motion files and retargeting files, you can start with the ones provided by the authors.

First, you need to run the TorchServe Docker container, which lets you quickly feed an image to the machine learning models and get back their predictions.

Install Docker yourself, then build the environment with the following commands.

(animated_drawings) AnimatedDrawings % cd torchserve

# Build the image
(animated_drawings) torchserve % docker build -t docker_torchserve .

# Run the container
(animated_drawings) torchserve % docker run -d --name docker_torchserve -p 8080:8080 -p 8081:8081 docker_torchserve

After waiting about 10 seconds, make sure Docker and TorchServe are working properly by pinging the server.

(animated_drawings) torchserve % curl http://localhost:8080/ping

# should return:
# {
#   "status": "Healthy"
# }
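If you prefer checking from Python rather than curl, a small helper like the one below (my own, not part of the project) can ping the endpoint and parse the reply:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def torchserve_healthy(base_url="http://localhost:8080", timeout=3):
    """Return True if TorchServe's /ping endpoint reports a Healthy status."""
    try:
        with urlopen(f"{base_url}/ping", timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
    except (URLError, ValueError):
        return False  # server unreachable or reply not valid JSON
    return body.get("status") == "Healthy"

# The parsing logic alone, without a live server:
print(json.loads('{"status": "Healthy"}').get("status") == "Healthy")  # True
```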

Once the service is up, the following commands turn an image into an animation.

(animated_drawings) torchserve % cd ../examples
(animated_drawings) examples % python image_to_animation.py drawings/garlic.png garlic_out
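To animate several drawings in one go, you could build one such command per image with a small wrapper (a sketch of my own; it only reuses the two positional arguments shown above):

```python
# Sketch: batch-run image_to_animation.py over every PNG in a folder.
import subprocess
from pathlib import Path

def build_commands(drawings_dir, out_root):
    """Build one image_to_animation.py command per PNG in drawings_dir."""
    cmds = []
    for png in sorted(Path(drawings_dir).glob("*.png")):
        out_dir = Path(out_root) / png.stem  # one output folder per drawing
        cmds.append(["python", "image_to_animation.py", str(png), str(out_dir)])
    return cmds

for cmd in build_commands("drawings", "outputs"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run each one
```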

The original drawing looks like this. I asked a friend to draw it, and we ran into quite a few problems in the process.


Let's take a look at Xiao F's result: a magic dance.

By changing the motion type in the configuration file, you can also get jumping jacks, haha.


It turns out you need to draw on blank paper; otherwise character segmentation runs into problems.

The drawing must also have a fully closed outline (coloring the figure in works almost as well); otherwise errors and blank patches easily appear.

For example, the hands and feet in the picture above are colored inconsistently because their contours are not closed.
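Why closed contours matter can be seen with a toy flood fill (my own illustration, not the project's segmentation code): the background color "leaks" into any shape whose outline has a gap.

```python
from collections import deque

def background_leaks_inside(grid, inside):
    """Flood-fill the background '.' cells from (0, 0); return True if the fill
    reaches `inside`, i.e. the '#' outline around it has a gap."""
    h, w = len(grid), len(grid[0])
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        r, c = queue.popleft()
        if (r, c) == inside:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == "." and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

closed = [
    ".....",
    ".###.",
    ".#.#.",
    ".###.",
    ".....",
]
open_ = [
    ".....",
    ".#.#.",  # gap in the top edge of the outline
    ".#.#.",
    ".###.",
    ".....",
]
print(background_leaks_inside(closed, (2, 2)))  # False: outline is closed
print(background_leaks_inside(open_, (2, 2)))   # True: fill leaks through the gap
```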

6. Other operations

There is more you can do: fix wrong predictions, add multiple characters to a scene, add background images, use BVH files with different skeletons, create custom BVH (motion) files, add extra character skeletons, and so on.


Interested readers can explore these on their own.

Summary

With the steps above, AI can bring children's drawings to life and make them dance.

If you are interested, give it a try yourself and bring the kids some joy.


Source: blog.csdn.net/2301_78285120/article/details/130918177