No Avatarify, No Masks: Generate a Multi-Person "蚂蚁呀嘿" Video in One Click

The "蚂蚁呀嘿" videos went viral on Douyin, but many people don't know how to make one. This article does away with the tedious manual workflow and uses the PaddleHub and PaddleGAN frameworks to generate a multi-person "蚂蚁呀嘿" video in one click.

First, we need to install PaddleHub and use its face detection module to locate the faces in a photo.

The installation command is:

pip install paddlehub --upgrade -i https://pypi.douban.com/simple

After installing PaddleHub, we also need to install the face detection model with the following command:

hub install ultra_light_fast_generic_face_detector_1mb_640 
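To confirm the model is installed correctly, you can run a quick sanity check (a minimal sketch; the output format noted in the comment is the same one the main script below relies on):

import paddlehub as hub

module = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_640")
# Each result carries a 'data' list with one dict per detected face,
# holding 'left', 'right', 'top' and 'bottom' pixel coordinates.
results = module.face_detection(paths=["/home/aistudio/1.jpeg"])
print(results[0]['data'])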

Generating the "蚂蚁呀嘿" video relies on the motion-transfer feature of the PaddleGAN toolkit, so the next step is to install PaddleGAN. Because I modified part of the PaddleGAN code, the modified version is already saved in the AI Studio environment and can be installed directly. Install PaddleGAN with the following commands.

AI Studio project (recommended; it can be run directly):

https://aistudio.baidu.com/aistudio/projectdetail/1285661

Alternatively, clone it from Gitee and install it manually:

git clone https://gitee.com/txyugood/PaddleGAN.git
cd PaddleGAN/
pip install -v -e .

Then install the PaddlePaddle framework that PaddleGAN depends on.

python -m pip install https://paddle-wheel.bj.bcebos.com/2.0.0-rc0-gpu-cuda10.1-cudnn7-mkl_gcc8.2%2Fpaddlepaddle_gpu-2.0.0rc0.post101-cp37-cp37m-linux_x86_64.whl
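Optionally, you can verify the installation (a quick check, assuming your PaddlePaddle version provides the built-in paddle.utils.run_check() self-test):

import paddle
paddle.utils.run_check()   # reports whether PaddlePaddle runs and which device it uses
print(paddle.__version__)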

Create a new file named first-order-mayi.py under PaddleGAN/applications/tools; this is the main program that generates the "蚂蚁呀嘿" video.


The full code is as follows:

#   Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse

import os
import paddle
from ppgan.apps.first_order_predictor import FirstOrderPredictor
from skimage import img_as_ubyte
import paddlehub as hub
import math
import cv2
import imageio

parser = argparse.ArgumentParser()
parser.add_argument("--config", default=None, help="path to config")
parser.add_argument("--weight_path",
                    default=None,
                    help="path to checkpoint to restore")
parser.add_argument("--source_image", type=str, help="path to source image")
parser.add_argument("--driving_video", type=str, help="path to driving video")
parser.add_argument("--output", default='output', help="path to output")
parser.add_argument("--relative",
                    dest="relative",
                    action="store_true",
                    help="use relative or absolute keypoint coordinates")
parser.add_argument(
    "--adapt_scale",
    dest="adapt_scale",
    action="store_true",
    help="adapt movement scale based on convex hull of keypoints")

parser.add_argument(
    "--find_best_frame",
    dest="find_best_frame",
    action="store_true",
    help="Generate from the frame that is the most aligned with source. "
    "(Only for faces, requires the face_alignment lib)")

parser.add_argument("--best_frame",
                    dest="best_frame",
                    type=int,
                    default=None,
                    help="Set frame to start from.")
parser.add_argument("--cpu", dest="cpu", action="store_true", help="cpu mode.")

parser.set_defaults(relative=False)
parser.set_defaults(adapt_scale=False)

if __name__ == "__main__":
    # Parse command-line arguments.
    args = parser.parse_args()
    if args.cpu:
        paddle.set_device('cpu')
    cache_path = os.path.join(args.output, "cache")
    if not os.path.exists(cache_path):
        os.makedirs(cache_path)
    image_path = args.source_image

    origin_img = cv2.imread(image_path)
    image_width = origin_img.shape[1]
    image_height = origin_img.shape[0]
    # Load the face detection model.
    module = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_640")
    face_detections = module.face_detection(paths=[image_path], visualization=True, output_dir='face_detection_output')
    face_detections = face_detections[0]['data']

    face_list = []
    # Iterate over the detected faces and save, for each one, the cropped face
    # image, its position in the original image, and its size. The detected box
    # is enlarged here so the crop covers the whole head (a worked numeric
    # example of this arithmetic follows the listing).
    for i, face_det in enumerate(face_detections):
        left = math.ceil(face_det['left'])
        right = math.ceil(face_det['right'])
        top = math.ceil(face_det['top'])
        bottom = math.ceil(face_det['bottom'])
        width = right - left
        height = bottom - top
        center_w = left + width // 2
        center_h = top + height // 2

        # Enlarge the box to a square of side 2 * height centered on the face,
        # clamped to the image borders.
        new_left = max(center_w - height, 0)
        new_right = min(center_w + height, image_width)

        new_top = max(center_h - height, 0)
        new_bottom = min(center_h + height, image_height)

        face_img = origin_img[new_top:new_bottom, new_left:new_right, :]
        face_height = face_img.shape[0]
        face_width = face_img.shape[1]

        cv2.imwrite(os.path.join(cache_path, 'face_{}.jpeg'.format(i)), face_img)

        face_list.append({
            "path": os.path.join(cache_path, 'face_{}.jpeg'.format(i)),
            "width": face_width, "height": face_height,
            "top": new_top, "bottom": new_bottom,
            "left": new_left, "right": new_right})
    # Run motion transfer on every cropped face using the driving video and
    # keep the generated frame sequence for each face.
    frames = 0
    for face_dict in face_list:
        predictor = FirstOrderPredictor(output=args.output,
                                        weight_path=args.weight_path,
                                        config=args.config,
                                        relative=args.relative,
                                        adapt_scale=args.adapt_scale,
                                        find_best_frame=args.find_best_frame,
                                        best_frame=args.best_frame)
        predictions, fps = predictor.run(face_dict["path"], args.driving_video)
        face_dict['pre'] = predictions
        frames = len(predictions)
    images = []
    # For each frame index, paste every generated face back into its original
    # position in the source image.
    for i in range(frames):
        new_frame = origin_img.copy()
        new_frame = new_frame[:, :, [2, 1, 0]]  # BGR -> RGB for imageio
        for face_dict in face_list:
            pre = face_dict["pre"][i]
            face_width = face_dict["width"]
            face_height = face_dict["height"]
            top = face_dict["top"]
            bottom = face_dict["bottom"]
            left = face_dict["left"]
            right = face_dict["right"]
            img = cv2.resize(pre, (face_width, face_height))
            new_frame[top:bottom, left:right, :] = img_as_ubyte(img)
        images.append(new_frame)
    # Write the video without audio first; ffmpeg then merges in the soundtrack.
    imageio.mimsave(os.path.join(args.output, 'result.mp4'),
                    [img_as_ubyte(frame) for frame in images],
                    fps=fps)
    # Merge the audio track into the video with ffmpeg. The output file must
    # differ from the input, so the final video is saved as mayiyahei.mp4.
    os.system("ffmpeg -y -i " + os.path.join(args.output, 'result.mp4') +
              " -i /home/aistudio/MYYH.mp3 -c:v copy -c:a aac -strict experimental " +
              os.path.join(args.output, 'mayiyahei.mp4'))

Run the script to generate the video.

The driving video is borrowed from GT's project at
https://aistudio.baidu.com/aistudio/projectdetail/1584416.

/home/aistudio/1.jpeg is the test photo. You can upload your own photo with the upload feature on the right, replace the path after --source_image, and run the script.

The generated "蚂蚁呀嘿" video is saved to /home/aistudio/output/mayiyahei.mp4.

cd /home/aistudio/PaddleGAN/applications/
python -u tools/first-order-mayi.py  \
     --driving_video /home/aistudio/MaYiYaHei.mp4 \
     --source_image /home/aistudio/1.jpeg \
     --relative --adapt_scale \
     --output /home/aistudio/output
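If no GPU is available, the script also accepts the --cpu flag defined in its argument parser above (expect the motion transfer to run much slower):

cd /home/aistudio/PaddleGAN/applications/
python -u tools/first-order-mayi.py \
     --driving_video /home/aistudio/MaYiYaHei.mp4 \
     --source_image /home/aistudio/1.jpeg \
     --relative --adapt_scale --cpu \
     --output /home/aistudio/output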

Finally, a sample of the result:

[Result image]

There are still many places where this program can be improved, and I will keep optimizing it.

I recommend running this program on AI Studio: it offers free V100 compute, and the script can be run with one click to generate the video.

You are welcome to follow my WeChat official account, 人工智能研习社, for more hands-on AI content.


Reposted from blog.csdn.net/txyugood/article/details/114156690