3D face recognition: turning a point cloud into a trainable image

1. Scene introduction

     A 3D face point cloud is not easy to train on directly; it first needs to be converted into a two-dimensional image. Most papers do this as follows. First, compute the normal vector of the surface at each point. The angles between that normal and the horizontal and vertical planes form two channels, and the depth map forms a third channel. Then, normalize these three channels to [0, 255] so they form an image visible to the human eye. Finally, you can train face recognition on these images just like ordinary pictures.
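As a rough sketch of the channel-building step (assuming the unit normals are already available as an N x 3 array and the depth values per point are known; the function names here are illustrative, not from the original post), the per-channel normalization to [0, 255] could look like:

```python
import numpy as np

def normalize_to_uint8(channel):
    """Linearly rescale a float channel to the [0, 255] range."""
    channel = np.asarray(channel, dtype=np.float64)
    cmin, cmax = channel.min(), channel.max()
    if cmax == cmin:  # flat channel: avoid division by zero
        return np.zeros(channel.shape, dtype=np.uint8)
    return ((channel - cmin) / (cmax - cmin) * 255.0).astype(np.uint8)

def angles_to_channels(normals, depth):
    """Turn unit normals (N x 3) and depth values (N,) into three
    uint8 channels: two angle channels plus the depth channel."""
    normals = np.asarray(normals, dtype=np.float64)
    # Cosine of the angle between each normal and the z axis
    # (the horizontal plane's normal) and the y axis (the
    # vertical plane's normal); for unit normals this is just
    # the corresponding component.
    cos_h = np.abs(normals[:, 2])
    cos_v = np.abs(normals[:, 1])
    return (normalize_to_uint8(cos_h),
            normalize_to_uint8(cos_v),
            normalize_to_uint8(depth))
```

Rasterizing these per-point values onto a regular pixel grid is a separate step and depends on the dataset's projection.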

2. The original point cloud

     Open the point cloud in MeshLab, as shown in the figure below:

        If you don't have MeshLab installed, you can instead view the point cloud with the following code:

# coding: utf-8
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

def ply_reader(ply_file):
    """Read x, y, z coordinates from an ASCII PLY file.
    Note: the slice below is specific to this file's layout."""
    with open(ply_file, "r") as f:
        lines = f.readlines()
    lines = lines[12:-3]  # skip the 12 header lines and 3 trailing lines
    x, y, z = [], [], []
    for line in lines:
        parts = line.split()
        x.append(float(parts[0]))
        y.append(float(parts[1]))
        z.append(float(parts[2]))
    return x, y, z

def obj_reader(obj_file):
    """Read vertex positions ("v" lines) and vertex normals ("vn" lines)
    from an OBJ file."""
    alpha, beta, theta, x, y, z = [], [], [], [], [], []
    with open(obj_file, "r") as f:
        # Keep only position and normal records; skip texture
        # coordinates ("vt"), blank lines, and everything else.
        lines = [line for line in f if line.startswith(("v ", "vn "))]
    for line in lines:
        parts = line.split()
        if line.startswith("vn"):  # vertex normal: vn nx ny nz
            alpha.append(float(parts[1]))
            beta.append(float(parts[2]))
            theta.append(float(parts[3]))
        else:                      # vertex position: v x y z
            x.append(float(parts[1]))
            y.append(float(parts[2]))
            z.append(float(parts[3]))
    return x, y, z, alpha, beta, theta

def points_show(x, y, z):
    """Scatter-plot the point cloud in 3D."""
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.set_xlabel('X label', color='r')
    ax.set_ylabel('Y label', color='r')
    ax.set_zlabel('Z label')
    ax.scatter(x, y, z, c='b', marker='.', s=2, linewidth=0, alpha=1)
    plt.show()


if __name__ == "__main__":
    x, y, z, alpha, beta, theta = obj_reader("60.obj")
    points_show(x,y,z)

          The opened point cloud is as follows:

3. The three normal-vector angle channels and the depth channel, shown as images

     The four images, from left to right, show the cosines of the angles between each point's normal vector and the three coordinate planes of 3D space; the last image shows the depth values.

     The first, third, and fourth images are superimposed as the three channels of a color training image, as shown in the following figure:
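Assuming each of the selected channels has already been rendered as a 2-D uint8 map of the same size (the parameter names below are illustrative), superimposing them is a simple stack along a new last axis:

```python
import numpy as np

def stack_channels(chan1, chan2, chan3):
    """Stack three single-channel uint8 maps (H x W) into one
    H x W x 3 color image suitable for an image-based CNN."""
    return np.stack([chan1, chan2, chan3], axis=-1).astype(np.uint8)
```

The result can be saved or fed to a network exactly like an ordinary RGB photo.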

 

Note: 3D face recognition can then be performed by converting face point clouds into images like the one above.

       If you would like to discuss further, please leave a comment or add the author's WeChat (QR code below).

Origin blog.csdn.net/Guo_Python/article/details/115012022