Want to know what your future baby will look like?

Abstract: In this case, a child's photo can be generated from frontal photos of the father and mother, and the parameters can be adjusted to see what children of different genders and ages would look like.

This article is shared from the HUAWEI CLOUD community "BabyGAN: Generating Child Photos Based on Parents' Photos" by Shanhaizhiguang.

In this case, a child's photo is generated from frontal photos of the father and mother, and the parameters can be adjusted to see the appearance of children of different genders and ages.

To ensure a good generation result, upload parent photos that clearly show the facial features, ideally against a light-colored background.

This case is for learning and exchange only; please do not use it for any other purpose.

In addition, because the technique is not perfect, the generated child photos may be distorted. You can replace the parent photos and regenerate until a satisfactory result is achieved.

Let's run through this case step by step.

1. Install the required modules

This step takes about 4 minutes

!pip install imutils moviepy dlib

2. Download the code and model files

import os
import moxing as mox  # ModelArts SDK for copying files from OBS

root_dir = '/home/ma-user/work/ma_share/'
code_dir = os.path.join(root_dir, 'BabyGAN')
# Download and unpack the code and pretrained model files on first run
if not os.path.exists(os.path.join(root_dir, 'BabyGAN.zip')):
    mox.file.copy('obs://arthur-1/BabyGAN/BabyGAN.zip', os.path.join(root_dir, 'BabyGAN.zip'))
    os.system('cd %s; unzip BabyGAN.zip' % root_dir)

os.chdir(code_dir)

3. Load related modules and models

import cv2
import math
import pickle
import imageio
import warnings
import PIL.Image
import numpy as np
from glob import glob
from PIL import Image
import tensorflow as tf
from random import randrange
import moviepy.editor as mpy
import matplotlib.pyplot as plt
from IPython.display import clear_output
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter

import config
import dnnlib
import dnnlib.tflib as tflib
from encoder.generator_model import Generator

%matplotlib inline
warnings.filterwarnings("ignore")

Load the model file. This code block can only be executed once; if an error occurs, restart the kernel and re-run all the code.

tflib.init_tf()
URL_FFHQ = "./karras2019stylegan-ffhq-1024x1024.pkl"
with dnnlib.util.open_url(URL_FFHQ, cache_dir=config.cache_dir) as f:
    generator_network, discriminator_network, Gs_network = pickle.load(f)
generator = Generator(Gs_network, batch_size=1, randomize_noise=False)
model_scale = int(2 * (math.log(1024, 2) - 1))

age_direction = np.load('./ffhq_dataset/latent_directions/age.npy')
horizontal_direction = np.load('./ffhq_dataset/latent_directions/angle_horizontal.npy')
vertical_direction = np.load('./ffhq_dataset/latent_directions/angle_vertical.npy')
eyes_open_direction = np.load('./ffhq_dataset/latent_directions/eyes_open.npy')
gender_direction = np.load('./ffhq_dataset/latent_directions/gender.npy')
smile_direction = np.load('./ffhq_dataset/latent_directions/smile.npy')

def get_watermarked(pil_image: Image) -> Image:
    try:
        image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
        (h, w) = image.shape[:2]
        image = np.dstack([image, np.ones((h, w), dtype="uint8") * 255])
        pct = 0.08
        full_watermark = cv2.imread('./media/logo.png', cv2.IMREAD_UNCHANGED)
        (fwH, fwW) = full_watermark.shape[:2]
        wH = int(pct * h * 2)
        wW = int((wH * fwW) / fwH * 0.1)
        watermark = cv2.resize(full_watermark, (wH, wW), interpolation=cv2.INTER_AREA)
        overlay = np.zeros((h, w, 4), dtype="uint8")
        (wH, wW) = watermark.shape[:2]
        overlay[h - wH - 10: h - 10, 10: 10 + wW] = watermark
        output = image.copy()
        cv2.addWeighted(overlay, 0.5, output, 1.0, 0, output)
        rgb_image = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
        return Image.fromarray(rgb_image)
    except:
        return pil_image
def generate_final_images(latent_vector, direction, coeffs, i):
    new_latent_vector = latent_vector.copy()
    new_latent_vector[:8] = (latent_vector + coeffs * direction)[:8]
    new_latent_vector = new_latent_vector.reshape((1, 18, 512))
    generator.set_dlatents(new_latent_vector)
    img_array = generator.generate_images()[0]
    img = PIL.Image.fromarray(img_array, 'RGB')
    if size[0] >= 512: img = get_watermarked(img)
    img_path = "./for_animation/" + str(i) + ".png"
    img.thumbnail(animation_size, PIL.Image.ANTIALIAS)
    img.save(img_path)
    face_img.append(imageio.imread(img_path))
    clear_output()
    return img
def generate_final_image(latent_vector, direction, coeffs):
    new_latent_vector = latent_vector.copy()
    new_latent_vector[:8] = (latent_vector + coeffs * direction)[:8]
    new_latent_vector = new_latent_vector.reshape((1, 18, 512))
    generator.set_dlatents(new_latent_vector)
    img_array = generator.generate_images()[0]
    img = PIL.Image.fromarray(img_array, 'RGB')
    if size[0] >= 512: img = get_watermarked(img)
    img.thumbnail(size, PIL.Image.ANTIALIAS)
    img.save("face.png")
    if download_image == True: files.download("face.png")
    return img
def plot_three_images(imgB, fs=10):
    f, axarr = plt.subplots(1, 3, figsize=(fs, fs))
    axarr[0].imshow(Image.open('./aligned_images/father_01.png'))
    axarr[0].title.set_text("Father's photo")
    axarr[1].imshow(imgB)
    axarr[1].title.set_text("Child's photo")
    axarr[2].imshow(Image.open('./aligned_images/mother_01.png'))
    axarr[2].title.set_text("Mother's photo")
    plt.setp(plt.gcf().get_axes(), xticks=[], yticks=[])
    plt.show()

4. Prepare photos of the father and mother

In this case, default parent photos are provided. In the file manager in the left sidebar, go to the ma_share/BabyGAN directory, then open the father_image or mother_image directory to see the provided parent photos, as shown below:

If you want to use different parent photos, see section 11 of this article, "Change photos of father and mother".

if len(glob(os.path.join('./father_image', '*.jpg'))) != 1 or (not os.path.exists('./father_image/father.jpg')):
    raise Exception('Please put exactly one photo of the father, named father.jpg, in the ma_share/BabyGAN/father_image directory')

if len(glob(os.path.join('./mother_image', '*.jpg'))) != 1 or (not os.path.exists('./mother_image/mother.jpg')):
    raise Exception('Please put exactly one photo of the mother, named mother.jpg, in the ma_share/BabyGAN/mother_image directory')

5. Get the father's face area and align the face

!python align_images.py ./father_image ./aligned_images

View the father's face

if os.path.isfile('./aligned_images/father_01.png'):
    pil_father = Image.open('./aligned_images/father_01.png')
    (fat_width, fat_height) = pil_father.size
    resize_fat = max(fat_width, fat_height) / 256
    display(pil_father.resize((int(fat_width / resize_fat), int(fat_height / resize_fat))))
else:
    raise ValueError('No face was found or there is more than one in the photo.')

6. Get the mother's face area and align the face

!python align_images.py ./mother_image ./aligned_images

View the mother's face

if os.path.isfile('./aligned_images/mother_01.png'):
    pil_mother = Image.open('./aligned_images/mother_01.png')
    (mot_width, mot_height) = pil_mother.size
    resize_mot = max(mot_width, mot_height) / 256
    display(pil_mother.resize((int(mot_width / resize_mot), int(mot_height / resize_mot))))
else:
    raise ValueError('No face was found or there is more than one in the photo.')

7. Extract facial features

This step takes about 3 minutes

!python encode_images.py \
    --early_stopping False \
    --lr=0.25 \
    --batch_size=2 \
    --iterations=100 \
    --output_video=False \
    ./aligned_images \
    ./generated_images \
    ./latent_representations

if len(glob(os.path.join('./generated_images', '*.png'))) == 2:
    first_face = np.load('./latent_representations/father_01.npy')
    second_face = np.load('./latent_representations/mother_01.npy')
    print("Generation of latent representation is complete! Now comes the fun part.")
else:
    raise ValueError('Something wrong. It may be impossible to read the face in the photos. Upload other photos and try again.')
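
The two .npy files are the optimized latent codes of the aligned faces. For the 1024x1024 FFHQ generator each code has shape (18, 512): one 512-dimensional style vector per generator layer (model_scale = 18 above). As a minimal sanity check, you can run the following in a new cell:

print(first_face.shape, second_face.shape)  # expected: (18, 512) (18, 512)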

8. Generate a photo of a family of three

Please modify the genes_influence and person_age parameters in the code below:

genes_influence: appearance influence factor, with a value range of [0.01, 0.99]; the closer the value is to 0, the more the child resembles the father, and the closer it is to 1, the more the child resembles the mother;

person_age: age factor, with a value range of [10, 50]; after setting this value, a child of the corresponding age will be generated.

Each time you change a parameter value, re-run the following code block to generate a new child photo.
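
For reference, this is how the two parameters enter the cell below; it is only a restatement of that cell's formulas with a few sample values worked out, not additional functionality:

# hybrid_face = (1 - genes_influence) * first_face + genes_influence * second_face
for sample_age in (10, 30, 50):
    # person_age in [10, 50] maps linearly to a coefficient in [4, -4],
    # which generate_final_image applies along age_direction (first 8 layers)
    print(sample_age, -((sample_age / 5) - 6))
# prints: 10 4.0, 30 0.0, 50 -4.0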

genes_influence = 0.8  # appearance influence factor, range [0.01, 0.99]; the closer to 0, the more the father's appearance dominates, and the closer to 1, the more the mother's does
person_age = 10  # age factor, range [10, 50]; a child of the corresponding age will be generated

style = "Default"
if style == "Father's photo":
    lr = ((np.arange(1, model_scale + 1) / model_scale) ** genes_influence).reshape((model_scale, 1))
    rl = 1 - lr
    hybrid_face = (lr * first_face) + (rl * second_face)
elif style == "Mother's photo":
    lr = ((np.arange(1, model_scale + 1) / model_scale) ** (1 - genes_influence)).reshape((model_scale, 1))
    rl = 1 - lr
    hybrid_face = (rl * first_face) + (lr * second_face)
else:
    hybrid_face = ((1 - genes_influence) * first_face) + (genes_influence * second_face)

intensity = -((person_age / 5) - 6)
resolution = "512"
size = int(resolution), int(resolution)

download_image = False
face = generate_final_image(hybrid_face, age_direction, intensity)
plot_three_images(face, fs=15)

9. Check your child's appearance at all ages

Please modify the gender_influence parameter in the code below. This parameter is the gender influence factor, with a value range of [0.01, 0.99]; the closer the value is to 0, the greater the father's influence on the child's appearance, and the closer it is to 1, the greater the mother's.

Each time you change the parameter value, re-run the following code block.
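
For orientation, the loop in the cell below keeps the parent blend fixed and sweeps the age coefficient linearly across the frames; a minimal restatement of that mapping:

frames_number = 50
for i in (0, frames_number // 2, frames_number - 1):
    # frame index 0..49 maps linearly to a coefficient in [-4, 4] along age_direction
    print(i, (8 * (i / (frames_number - 1))) - 4)
# prints: 0 -4.0, 25 ~0.08, 49 4.0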

gender_influence = 0.8  # gender influence factor, range [0.01, 0.99]; the closer to 0, the greater the father's influence on the appearance, and the closer to 1, the greater the mother's

!rm -rf ./for_animation
!mkdir ./for_animation
face_img = []
hybrid_face = ((1 - gender_influence) * first_face) + (gender_influence * second_face)
animation_resolution = "512"
animation_size = int(animation_resolution), int(animation_resolution)
frames_number = 50
download_image = False
for i in range(0, frames_number, 1):
    intensity = (8 * (i / (frames_number - 1))) - 4
    generate_final_images(hybrid_face, age_direction, intensity, i)
    clear_output()
    print(str(i) + " of {} photo generated".format(str(frames_number)))

for j in reversed(face_img):
    face_img.append(j)

automatic_download = False

if gender_influence <= 0.3:
    animation_name = "boy.mp4"
elif gender_influence >= 0.7:
    animation_name = "girl.mp4"
else:
    animation_name = "animation.mp4"

imageio.mimsave('./for_animation/' + animation_name, face_img)
clear_output()
display(mpy.ipython_display('./for_animation/' + animation_name, height=400, autoplay=1, loop=1))

10. Check your child's appearance by gender

Please modify the person_age parameter in the code below. This parameter is the age factor, with a value range of [10, 50]; after setting this value, a child of the corresponding age will be generated.

Each time you change the parameter value, re-run the following code block.
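
For orientation, the loop in the cell below keeps the age fixed and sweeps the blend between the father's and mother's latent codes; a minimal restatement of that mapping:

frames_number = 50
for i in (1, frames_number // 2, frames_number - 1):
    gender_influence = i / frames_number  # ranges from 0.02 to 0.98 across the frames
    # each frame blends (1 - gender_influence) * first_face + gender_influence * second_face
    print(i, gender_influence)
# prints: 1 0.02, 25 0.5, 49 0.98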

person_age = 10  # the child's age, range [10, 50]; a child of the corresponding age will be generated

!rm -rf ./for_animation
!mkdir ./for_animation
face_img = []
intensity = -((person_age / 5) - 6)
animation_resolution = "512"
animation_size = int(animation_resolution), int(animation_resolution)
frames_number = 50  # number of frames in the appearance morph, range [10, 50]
download_image = False

for i in range(1, frames_number):
    gender_influence = i / frames_number
    hybrid_face = ((1 - gender_influence) * first_face) + (gender_influence * second_face)
    face = generate_final_images(hybrid_face, age_direction, intensity, i)
    clear_output()
    print(str(i) + " of {} photo generated".format(str(frames_number)))

for j in reversed(face_img):
    face_img.append(j)

animation_name = str(person_age) + "_years.mp4"
imageio.mimsave('./for_animation/' + animation_name, face_img)
clear_output()
display(mpy.ipython_display('./for_animation/' + animation_name, height=400, autoplay=1, loop=1))

11. Change photos of father and mother

Next, you can upload parent photos of your choice to the father_image and mother_image directories and re-run the code to generate new child photos.

You need to follow the rules and steps below:

1. Following the figure below, go to the ma_share/BabyGAN directory;

2. Prepare a photo of the father, upload it to the father_image directory, and name it father.jpg (if you are not sure how to upload files to JupyterLab, please see this document);

3. Prepare a photo of the mother, upload it to the mother_image directory, and name it mother.jpg;

4. Only one photo is allowed in each of the father_image and mother_image directories;

5. Re-run the code in steps 4 to 10 (a quick check for the uploaded files is sketched after this list).
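
After uploading, you can run a check like the following in a new cell before re-running steps 4 to 10. This is a minimal sketch, not part of the original notebook; it assumes the working directory is still ma_share/BabyGAN (set in step 2) and uses the file names required above:

from glob import glob

# Each directory must contain exactly one photo, with the required file name
assert glob('./father_image/*.jpg') == ['./father_image/father.jpg'], \
    'father_image must contain exactly one photo named father.jpg'
assert glob('./mother_image/*.jpg') == ['./mother_image/mother.jpg'], \
    'mother_image must contain exactly one photo named mother.jpg'
print('Parent photos are in place; re-run steps 4 to 10.')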



Origin juejin.im/post/6998338082705506318