3D Face Reconstruction (repost)

  1. Overview

To improve face recognition accuracy under real-world conditions such as varying illumination and viewing angles, a 3D face model can be reconstructed from a 2D face image; from it, face images at many additional angles can be generated for training, which raises recognition accuracy. Moreover, recognition based on 3D face data is more robust and more accurate than recognition based on 2D images, especially under difficult conditions such as large head pose, changing environmental light, make-up and expression variation, because 3D faces contain the spatial information that 2D images lack. High-resolution, high-accuracy 3D face data, however, is not easy to acquire, particularly under complex practical conditions such as long-distance capture. 2D face data, by contrast, is easy to obtain, so reconstructing a good 3D face model from 2D face images is an important research direction for face recognition.

The code for this article can be downloaded from Gitee (码云):

https://gitee.com/wjiang/Face_3D_Reconstruction
  2. The Principle of 2D-to-3D Face Reconstruction

Facial feature points, such as the eye corners and nose tip of different faces, are put into correspondence by their semantic positions, so PCA (principal component analysis) is well suited to describing a face shape compactly in terms of its principal components.

Let the vector S represent the vertex positions of a 3D face:

S = (x_1, y_1, z_1, ..., x_n, y_n, z_n)^T,

where (x_i, y_i, z_i) are the coordinates of the i-th of the n vertices. A new 3D face shape can be regarded as a linear combination of a mean shape and the principal components, so a new shape S' can be written as

S' = S̄ + P·α    (1)

where S̄ denotes the mean shape, P is the matrix whose columns are the first m principal-component eigenvectors, and α is the vector of shape coefficients.

 


We can use the correspondence between the 2D positions of the facial landmarks and the mean 3D model to reconstruct a new 3D face model.

In the alignment step, suppose t 2D facial landmarks are selected for the 3D reconstruction, and the t vertices corresponding to these landmarks are selected on the facial geometry. Let S_f denote the X and Y coordinates of these feature vertices; S_f is thus a subset of the shape vector S. According to Eq. (1), the X,Y coordinates of the feature vertices of a new face shape S'_f can be written as

S'_f = S̄_f + P_f·α    (2)

where S̄_f and P_f are the rows of S̄ and P corresponding to the X,Y coordinates of the feature vertices. To transform the face coordinates into image coordinates, let S'_t denote the transformed shape:

S'_t = c·S'_f + T    (3)

where T is the translation vector and c is the scale factor; note that because both the 2D face image and the 3D face model are frontal, no rotation matrix is needed. Since the eigenvector matrix is orthogonal, the coefficients can be derived from Eq. (2) as

α = (P_f^T P_f)^(-1) P_f^T (S'_f - S̄_f)    (4)

Because the coefficients are computed from only part of the vertices, the eigenvalues are applied as a constraint to avoid singular (extreme) values, so Eq. (4) becomes the regularised solution

α = (P_f^T P_f + λΛ)^(-1) P_f^T (S'_f - S̄_f),  Λ = diag(1/σ_1, ..., 1/σ_m)    (5)

where λ is a constant and σ_i is the eigenvalue associated with the i-th eigenvector.

    Equations (2) and (3) contain five unknowns in total (the scale c, the translation T and the coefficients α). To compute the facial geometry coefficients, an iterative procedure is required, as outlined below.

    Before the first iteration, the mean feature shape S̄_f is used as the initial value of S'_f.

    Step 1: estimate the scale c and the translation T from the average distances along the X and Y axes, taken over all t feature points, between the 2D image landmarks and the current S'_f (the quantities carried over from the previous iteration are set to 0 in the first iteration); the transformed shape S'_t can then be computed from Eq. (3).

    Step 2: assign the image landmark coordinates, transformed back into model coordinates, to S'_f; the facial geometry coefficients α can then be computed from Eq. (5), and a new S'_f is obtained by substituting α into Eq. (2).

    The geometry coefficients α usually converge after repeating Step 1 and Step 2 for at most 10 iterations. Substituting α into Eq. (1) then yields the complete 3D geometry S'.
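To make the iteration concrete, below is a minimal C++/Eigen sketch of the loop described above. It is only an illustration under assumptions, not the original implementation: the function name and the inputs x2d, mean_f, Pf and eigenvalues are hypothetical, and Step 1 is simplified to a centroid-and-spread match between the landmarks and the current feature shape.

#include <Eigen/Dense>

// Sketch of the iterative fit (assumed inputs, not the paper's code).
// x2d        : 2t-vector of image landmark coordinates (x1, y1, ..., xt, yt)
// mean_f     : 2t-vector, X/Y rows of the mean shape for the t feature vertices
// Pf         : 2t x m matrix, X/Y rows of the first m eigenvectors at the feature vertices
// eigenvalues: m-vector of the corresponding PCA eigenvalues
Eigen::VectorXf fit_shape_coefficients(const Eigen::VectorXf& x2d,
                                       const Eigen::VectorXf& mean_f,
                                       const Eigen::MatrixXf& Pf,
                                       const Eigen::VectorXf& eigenvalues,
                                       float lambda = 0.1f, int max_iters = 10)
{
    const int m = static_cast<int>(Pf.cols());
    const int t = static_cast<int>(x2d.size()) / 2;
    Eigen::VectorXf alpha = Eigen::VectorXf::Zero(m);
    Eigen::VectorXf Sf = mean_f; // initial feature-vertex shape, see Eq. (2)

    for (int iter = 0; iter < max_iters; ++iter)
    {
        // Step 1 (simplified): estimate scale c and translation T mapping Sf onto the
        // image landmarks by matching centroids and average spread (stand-in for Eq. (3)).
        Eigen::Vector2f c_img = Eigen::Vector2f::Zero(), c_mdl = Eigen::Vector2f::Zero();
        for (int i = 0; i < t; ++i) {
            c_img += x2d.segment<2>(2 * i);
            c_mdl += Sf.segment<2>(2 * i);
        }
        c_img /= t; c_mdl /= t;
        float spread_img = 0.f, spread_mdl = 0.f;
        for (int i = 0; i < t; ++i) {
            spread_img += (x2d.segment<2>(2 * i) - c_img).norm();
            spread_mdl += (Sf.segment<2>(2 * i) - c_mdl).norm();
        }
        const float c = spread_img / spread_mdl;
        const Eigen::Vector2f T = c_img - c * c_mdl;

        // Map the image landmarks back into model coordinates (inverse of Eq. (3)).
        Eigen::VectorXf Sf_target(2 * t);
        for (int i = 0; i < t; ++i)
            Sf_target.segment<2>(2 * i) = (x2d.segment<2>(2 * i) - T) / c;

        // Step 2: regularised least squares for alpha, i.e. Eq. (5).
        Eigen::MatrixXf A = Pf.transpose() * Pf;
        A.diagonal() += lambda * eigenvalues.cwiseInverse();
        alpha = A.ldlt().solve(Pf.transpose() * (Sf_target - mean_f));

        // New feature-vertex shape from Eq. (2).
        Sf = mean_f + Pf * alpha;
    }
    return alpha; // substitute into Eq. (1) for the full 3D shape
}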

The reconstructed face shape is shown in Fig. 1(b). The facial geometry looks good, but the X,Y coordinates of the facial feature vertices differ slightly from the landmarks in the 2D image, because the shape space is constrained by the 3D face database. To guarantee that the feature vertices are exactly correct, the X,Y coordinates of the facial feature vertices must be forced to align with the X,Y coordinates of the 2D image landmarks. Based on the displacements of the feature vertices, Kriging interpolation [11] is then used to compute the displacements of the non-feature vertices; for this interpolation a radial basis function (RBF) is a good choice. With the method described above, the final 3D facial geometry reconstructs the feature vertices exactly. The final 3D face shape is shown in Fig. 1(c).
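The displacement interpolation step can likewise be sketched with a simple RBF interpolant. This is only a hedged illustration: the original work uses Kriging, whereas here a Gaussian RBF with an assumed kernel width is used, and the function and parameter names are hypothetical.

#include <Eigen/Dense>
#include <cmath>

// Interpolate per-vertex displacements from the known feature-vertex displacements
// with a Gaussian RBF (assumed kernel choice; the paper uses Kriging interpolation).
// feature_pos  : t x 3 positions of the feature vertices
// feature_disp : t x 3 known displacements of the feature vertices
// all_pos      : n x 3 positions of all vertices
// returns an n x 3 matrix of interpolated displacements
Eigen::MatrixXf interpolate_displacements(const Eigen::MatrixXf& feature_pos,
                                          const Eigen::MatrixXf& feature_disp,
                                          const Eigen::MatrixXf& all_pos,
                                          float sigma = 30.0f)
{
    const int t = static_cast<int>(feature_pos.rows());
    auto rbf = [sigma](float r) { return std::exp(-(r * r) / (2.0f * sigma * sigma)); };

    // Solve K * W = D for the RBF weights W (t x 3).
    Eigen::MatrixXf K(t, t);
    for (int i = 0; i < t; ++i)
        for (int j = 0; j < t; ++j)
            K(i, j) = rbf((feature_pos.row(i) - feature_pos.row(j)).norm());
    Eigen::MatrixXf W = K.ldlt().solve(feature_disp);

    // Evaluate the interpolant at every vertex of the mesh.
    const int n = static_cast<int>(all_pos.rows());
    Eigen::MatrixXf disp = Eigen::MatrixXf::Zero(n, 3);
    for (int v = 0; v < n; ++v)
        for (int j = 0; j < t; ++j)
            disp.row(v) += rbf((all_pos.row(v) - feature_pos.row(j)).norm()) * W.row(j);
    return disp;
}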

  3. 3DMM

A 3D Morphable Model (3DMM) is a classic statistical 3D face model that learns a prior over 3D faces explicitly through statistical analysis. It represents a 3D face as a linear combination of basis 3D faces obtained by principal component analysis (PCA) over a set of densely aligned 3D face scans. 3D face reconstruction is then cast as a model-fitting problem: the model parameters (the linear combination coefficients and the camera parameters) are optimised so that the 2D projection of the 3D face best matches a set of annotated facial landmarks (e.g. eye centres, mouth corners and nose tip) in position (and texture) on the input 2D image. 3DMM-based methods usually require online optimisation and are therefore computationally intensive, so their real-time performance is poor. Note also that PCA is essentially a low-pass filter, so these methods still struggle to recover fine facial detail; a later section therefore introduces 3D reconstruction based on a regression framework.

 

Below, the 3DMM is introduced through a simple example.

The model used here is an open 3DMM, the Surrey 3DMM Face Model, a multi-resolution 3D morphable face model provided by the University of Surrey (UK). Only a low-resolution model, sfm_shape_3448.bin, is distributed publicly, since the low-resolution morphable model is more practical for most applications. The full version of the morphable model must be requested here:

http://cvssp.org/faceweb/3dmm/facemodels/register.html

Released together with the morphable model is eos, a lightweight 3D morphable model fitting library that provides basic pose and shape fitting:

https://github.com/patrikhuber/eos

Let's first look at the model with the viewer program below. The code is written with libigl and nanogui (VS2015) and consists of two parts, selected for compilation with #if 1. The first part reads a textured mesh .obj file generated by the eos fitting library. The second part is a viewer for the eos 3D morphable model: with it you can manually adjust the shape PCA coefficients, the colour PCA coefficients and the expression blendshape coefficients and watch how the 3D model changes.

 

#include <igl/readOFF.h>
#include <igl/readOBJ.h>
#include <igl/viewer/Viewer.h>
#include <nanogui/formhelper.h>
#include <nanogui/screen.h>
#include <iostream>
#include "tutorial_shared_path.h"

int main(int argc, char *argv[])
{
    Eigen::MatrixXd V;
    Eigen::MatrixXi F;
    bool boolVariable = true;
    float floatVariable = 0.1f;
    enum Orientation { Up = 0, Down, Left, Right } dir = Up;

    int a = 1;
    auto p = [&a](double x) -> double { a++; return x / 2; };
    std::cout << p(100) << std::endl;
    std::cout << "a1 = " << a << std::endl;
    std::cout << "a2 = " << a << std::endl;

    // Load a mesh in OFF format
    //igl::readOFF(TUTORIAL_SHARED_PATH "/bunny.off", V, F);
    igl::readOBJ(TUTORIAL_SHARED_PATH "/out1.obj", V, F);

    // Init the viewer
    igl::viewer::Viewer viewer;
    //viewer.data.clear();

    // Extend viewer menu
    viewer.callback_init = [&](igl::viewer::Viewer& viewer)
    {
        // Add new group
        viewer.ngui->addGroup("New Group");

        // Expose variable directly ...
        viewer.ngui->addVariable("float", floatVariable);

        // ... or using a custom callback
        viewer.ngui->addVariable<bool>("bool", [&](bool val) {
            boolVariable = val; // set
        }, [&]() {
            return boolVariable; // get
        });

        // Expose an enumeration type
        viewer.ngui->addVariable<Orientation>("Direction", dir)->setItems({"Up", "Down", "Left", "Right"});

        // Add a button
        viewer.ngui->addButton("Print Hello", []() { std::cout << "Hello\n"; });

        // Add an additional menu window
        viewer.ngui->addWindow(Eigen::Vector2i(220, 10), "New Window");

        // Expose the same variable directly ...
        viewer.ngui->addVariable("float", floatVariable);

        // Generate menu
        viewer.screen->performLayout();
        return false;
    };

    // t(viewer);
    // Plot the mesh
    // viewer.data.set_normals(V);
    viewer.data.set_mesh(V, F);
    viewer.launch();
}



  1. #include "eos/core/Mesh.hpp"
  2. #include "eos/morphablemodel/MorphableModel.hpp"
  3. #include "eos/morphablemodel/io/cvssp.hpp"
  4. #include "eos/morphablemodel/Blendshape.hpp"
  5. #include <igl/viewer/Viewer.h>
  6. #include "nanogui/slider.h"
  7. #include "nanogui/textbox.h"
  8. #include "nanogui/formhelper.h"
  9. #include "boost/program_options.hpp"
  10. #include "boost/filesystem.hpp"
  11. #include <iostream>
  12. #include <sstream>
  13. #include <iomanip>
  14. #include <random>
  15. //#include <algorithm>
  16. #include <map>
  17. using namespace eos;
  18. namespace po = boost::program_options;
  19. namespace fs = boost::filesystem;
  20. using std:: cout;
  21. using std:: endl;
  22. template < typename T>
  23. std:: string to_string(const T a_value, const int n = 6)
  24. {
  25. std:: ostringstream out;
  26. out << std::setprecision(n) << a_value;
  27. return out.str();
  28. }
  29. /**
  30. * Model viewer for 3D Morphable Models.
  31. *
  32. * It's working well but does have a few todo's left, and the code is not very polished.
  33. *
  34. * If no model and blendshapes are given via command-line, then a file-open dialog will be presented.
  35. * If the files are given on the command-line, then no dialog will be presented.
  36. */
  37. int main(int argc, char *argv[])
  38. {
  39. fs::path model_file, blendshapes_file;
  40. try {
  41. po:: options_description desc("Allowed options");
  42. desc.add_options()
  43. ( "help,h",
  44. "display the help message")
  45. ( "model,m", po::value<fs::path>(&model_file),
  46. "an eos 3D Morphable Model stored as cereal BinaryArchive (.bin)")
  47. ( "blendshapes,b", po::value<fs::path>(&blendshapes_file),
  48. "an eos file with blendshapes (.bin)")
  49. ;
  50. po::variables_map vm;
  51. po::store(po::command_line_parser(argc, argv).options(desc).run(), vm);
  52. if (vm.count( "help")) {
  53. cout << "Usage: eos-model-viewer [options]" << endl;
  54. cout << desc;
  55. return EXIT_SUCCESS;
  56. }
  57. po::notify(vm);
  58. }
  59. catch ( const po::error& e) {
  60. cout << "Error while parsing command-line arguments: " << e.what() << endl;
  61. cout << "Use --help to display a list of options." << endl;
  62. return EXIT_FAILURE;
  63. }
  64. // Should do it from the shape instance actually - never compute the Mesh actually!
  65. auto get_V = []( const core::Mesh& mesh)
  66. {
  67. Eigen::MatrixXd V(mesh.vertices.size(), 3);
  68. for ( int i = 0; i < mesh.vertices.size(); ++i)
  69. {
  70. V(i, 0) = mesh.vertices[i].x;
  71. V(i, 1) = mesh.vertices[i].y;
  72. V(i, 2) = mesh.vertices[i].z;
  73. }
  74. return V;
  75. };
  76. auto get_F = []( const core::Mesh& mesh)
  77. {
  78. Eigen::MatrixXi F(mesh.tvi.size(), 3);
  79. for ( int i = 0; i < mesh.tvi.size(); ++i)
  80. {
  81. F(i, 0) = mesh.tvi[i][ 0];
  82. F(i, 1) = mesh.tvi[i][ 1];
  83. F(i, 2) = mesh.tvi[i][ 2];
  84. }
  85. return F;
  86. };
  87. auto get_C = []( const core::Mesh& mesh)
  88. {
  89. Eigen::MatrixXd C(mesh.colors.size(), 3);
  90. for ( int i = 0; i < mesh.colors.size(); ++i)
  91. {
  92. C(i, 0) = mesh.colors[i].r;
  93. C(i, 1) = mesh.colors[i].g;
  94. C(i, 2) = mesh.colors[i].b;
  95. }
  96. return C;
  97. };
  98. morphablemodel::MorphableModel morphable_model;
  99. morphablemodel::Blendshapes blendshapes;
  100. // These are the coefficients of the currently active mesh instance:
  101. std:: vector< float> shape_coefficients;
  102. std:: vector< float> color_coefficients;
  103. std:: vector< float> blendshape_coefficients;
  104. igl::viewer::Viewer viewer;
  105. std::default_random_engine rng;
  106. std:: map<nanogui::Slider*, int> sliders; // If we want to set the sliders to zero separately, we need separate maps here.
  107. auto add_shape_coefficients_slider = [&sliders, &shape_coefficients, &blendshape_coefficients](igl::viewer::Viewer& viewer, const morphablemodel::MorphableModel& morphable_model, const morphablemodel::Blendshapes& blendshapes, std:: vector< float>& coefficients, int coefficient_id, std:: string coefficient_name) {
  108. nanogui::Widget *panel = new nanogui::Widget(viewer.ngui->window());
  109. panel->setLayout( new nanogui::BoxLayout(nanogui::Orientation::Horizontal, nanogui::Alignment::Middle, 0, 20));
  110. nanogui::Slider* slider = new nanogui::Slider(panel);
  111. sliders.emplace(slider, coefficient_id);
  112. slider->setFixedWidth( 80);
  113. slider->setValue( 0.0f);
  114. slider->setRange({ -3.5f, 3.5f });
  115. //slider->setHighlightedRange({ -1.0f, 1.0f });
  116. nanogui::TextBox *textBox = new nanogui::TextBox(panel);
  117. textBox->setFixedSize(Eigen::Vector2i( 40, 20));
  118. textBox->setValue( "0");
  119. textBox->setFontSize( 16);
  120. textBox->setAlignment(nanogui::TextBox::Alignment::Right);
  121. slider->setCallback([slider, textBox, &morphable_model, &blendshapes, &viewer, &coefficients, &sliders, &shape_coefficients, &blendshape_coefficients]( float value) {
  122. textBox->setValue(to_string(value, 2)); // while dragging the slider
  123. auto id = sliders[slider]; // Todo: if it doesn't exist, we should rather throw - this inserts a new item into the map!
  124. coefficients[id] = value;
  125. // Just update the shape (vertices):
  126. Eigen::VectorXf shape;
  127. if (blendshape_coefficients.size() > 0 && blendshapes.size() > 0)
  128. {
  129. shape = morphable_model.get_shape_model().draw_sample(shape_coefficients) + morphablemodel::to_matrix(blendshapes) * Eigen::Map< const Eigen::VectorXf>(blendshape_coefficients.data(), blendshape_coefficients.size());
  130. }
  131. else {
  132. shape = morphable_model.get_shape_model().draw_sample(shape_coefficients);
  133. }
  134. auto num_vertices = morphable_model.get_shape_model().get_data_dimension() / 3;
  135. Eigen::Map<Eigen::MatrixXf> shape_reshaped(shape.data(), 3, num_vertices); // Take 3 at a piece, then transpose below. Works. (But is this really faster than a loop?)
  136. viewer.data.set_vertices(shape_reshaped.transpose().cast< double>());
  137. });
  138. return panel;
  139. };
  140. auto add_blendshapes_coefficients_slider = [&sliders, &shape_coefficients, &blendshape_coefficients](igl::viewer::Viewer& viewer, const morphablemodel::MorphableModel& morphable_model, const morphablemodel::Blendshapes& blendshapes, std:: vector< float>& coefficients, int coefficient_id, std:: string coefficient_name) {
  141. nanogui::Widget *panel = new nanogui::Widget(viewer.ngui->window());
  142. panel->setLayout( new nanogui::BoxLayout(nanogui::Orientation::Horizontal, nanogui::Alignment::Middle, 0, 20));
  143. nanogui::Slider* slider = new nanogui::Slider(panel);
  144. sliders.emplace(slider, coefficient_id);
  145. slider->setFixedWidth( 80);
  146. slider->setValue( 0.0f);
  147. slider->setRange({ -0.5f, 2.0f });
  148. //slider->setHighlightedRange({ 0.0f, 1.0f });
  149. nanogui::TextBox *textBox = new nanogui::TextBox(panel);
  150. textBox->setFixedSize(Eigen::Vector2i( 40, 20));
  151. textBox->setValue( "0");
  152. textBox->setFontSize( 16);
  153. textBox->setAlignment(nanogui::TextBox::Alignment::Right);
  154. slider->setCallback([slider, textBox, &morphable_model, &blendshapes, &viewer, &coefficients, &sliders, &shape_coefficients, &blendshape_coefficients]( float value) {
  155. textBox->setValue(to_string(value, 2)); // while dragging the slider
  156. auto id = sliders[slider]; // if it doesn't exist, we should rather throw - this inserts a new item into the map!
  157. coefficients[id] = value;
  158. // Just update the shape (vertices):
  159. Eigen::VectorXf shape;
  160. if (blendshape_coefficients.size() > 0 && blendshapes.size() > 0)
  161. {
  162. shape = morphable_model.get_shape_model().draw_sample(shape_coefficients) + morphablemodel::to_matrix(blendshapes) * Eigen::Map< const Eigen::VectorXf>(blendshape_coefficients.data(), blendshape_coefficients.size());
  163. }
  164. else { // No blendshapes - doesn't really make sense, we require loading them. But it's fine.
  165. shape = morphable_model.get_shape_model().draw_sample(shape_coefficients);
  166. }
  167. auto num_vertices = morphable_model.get_shape_model().get_data_dimension() / 3;
  168. Eigen::Map<Eigen::MatrixXf> shape_reshaped(shape.data(), 3, num_vertices); // Take 3 at a piece, then transpose below. Works. (But is this really faster than a loop?)
  169. viewer.data.set_vertices(shape_reshaped.transpose().cast< double>());
  170. });
  171. return panel;
  172. };
  173. auto add_color_coefficients_slider = [&sliders, &shape_coefficients, &color_coefficients, &blendshape_coefficients](igl::viewer::Viewer& viewer, const morphablemodel::MorphableModel& morphable_model, const morphablemodel::Blendshapes& blendshapes, std:: vector< float>& coefficients, int coefficient_id, std:: string coefficient_name) {
  174. nanogui::Widget *panel = new nanogui::Widget(viewer.ngui->window());
  175. panel->setLayout( new nanogui::BoxLayout(nanogui::Orientation::Horizontal, nanogui::Alignment::Middle, 0, 20));
  176. nanogui::Slider* slider = new nanogui::Slider(panel);
  177. sliders.emplace(slider, coefficient_id);
  178. slider->setFixedWidth( 80);
  179. slider->setValue( 0.0f);
  180. slider->setRange({ -3.5f, 3.5f });
  181. //slider->setHighlightedRange({ -1.0f, 1.0f });
  182. nanogui::TextBox *textBox = new nanogui::TextBox(panel);
  183. textBox->setFixedSize(Eigen::Vector2i( 40, 20));
  184. textBox->setValue( "0");
  185. textBox->setFontSize( 16);
  186. textBox->setAlignment(nanogui::TextBox::Alignment::Right);
  187. slider->setCallback([slider, textBox, &morphable_model, &blendshapes, &viewer, &coefficients, &sliders, &shape_coefficients, &color_coefficients, &blendshape_coefficients]( float value) {
  188. textBox->setValue(to_string(value, 2)); // while dragging the slider
  189. auto id = sliders[slider]; // if it doesn't exist, we should rather throw - this inserts a new item into the map!
  190. coefficients[id] = value;
  191. // Set the new colour values:
  192. Eigen::VectorXf color = morphable_model.get_color_model().draw_sample(color_coefficients);
  193. auto num_vertices = morphable_model.get_color_model().get_data_dimension() / 3;
  194. Eigen::Map<Eigen::MatrixXf> color_reshaped(color.data(), 3, num_vertices); // Take 3 at a piece, then transpose below. Works. (But is this really faster than a loop?)
  195. viewer.data.set_colors(color_reshaped.transpose().cast< double>());
  196. });
  197. return panel;
  198. };
  199. // Extend viewer menu
  200. viewer.callback_init = [&](igl::viewer::Viewer& viewer)
  201. {
  202. // Todo: We could do the following: If a filename is given via cmdline, then don't open the dialogue!
  203. if (model_file.empty())
  204. model_file = nanogui::file_dialog({ { "bin", "eos Morphable Model file" },{ "scm", "scm Morphable Model file" } }, false);
  205. if (model_file.extension() == ".scm") {
  206. morphable_model = morphablemodel::load_scm_model(model_file. string()); // try?
  207. }
  208. else {
  209. morphable_model = morphablemodel::load_model(model_file. string()); // try?
  210. // morphablemodel::load_isomap(model_file.string()); // try?
  211. //load_isomap
  212. }
  213. if (blendshapes_file.empty())
  214. blendshapes_file = nanogui::file_dialog({ { "bin", "eos blendshapes file" } }, false);
  215. blendshapes = morphablemodel::load_blendshapes(blendshapes_file. string()); // try?
  216. // Error on load failure: How to make it pop up?
  217. //auto dlg = new nanogui::MessageDialog(viewer.ngui->window(), nanogui::MessageDialog::Type::Warning, "Title", "This is a warning message");
  218. // Initialise all coefficients (all zeros):
  219. shape_coefficients = std:: vector< float>(morphable_model.get_shape_model().get_num_principal_components());
  220. color_coefficients = std:: vector< float>(morphable_model.get_color_model().get_num_principal_components()); // Todo: It can have no colour model!
  221. blendshape_coefficients = std:: vector< float>(blendshapes.size()); // Todo: Should make it work without blendshapes!
  222. // Start off displaying the mean:
  223. const auto mesh = morphable_model.get_mean();
  224. viewer.data.set_mesh(get_V(mesh), get_F(mesh));
  225. viewer.data.set_colors(get_C(mesh));
  226. viewer.core.align_camera_center(viewer.data.V, viewer.data.F);
  227. // General:
  228. viewer.ngui->addWindow(Eigen::Vector2i( 10, 580), "Morphable Model");
  229. // load/save model & blendshapes
  230. // save obj
  231. // Draw random sample
  232. // Load fitting result... (uesful for maybe seeing where something has gone wrong!)
  233. // see: https://github.com/wjakob/nanogui/blob/master/src/example1.cpp#L283
  234. //viewer.ngui->addButton("Open Morphable Model", [&morphable_model]() {
  235. // std::string file = nanogui::file_dialog({ {"bin", "eos Morphable Model file"} }, false);
  236. // morphable_model = morphablemodel::load_model(file);
  237. //});
  238. viewer.ngui->addButton( "Random face sample", [&]() {
  239. const auto sample = morphable_model.draw_sample(rng, 1.0f, 1.0f); // This draws both shape and color model - we can improve the speed by not doing that.
  240. viewer.data.set_vertices(get_V(sample));
  241. viewer.data.set_colors(get_C(sample));
  242. // Set the coefficients and sliders to the drawn alpha value: (ok we don't have them - need to use our own random function)
  243. // Todo.
  244. });
  245. viewer.ngui->addButton( "Mean", [&]() {
  246. const auto mean = morphable_model.get_mean(); // This draws both shape and color model - we can improve the speed by not doing that.
  247. viewer.data.set_vertices(get_V(mean));
  248. viewer.data.set_colors(get_C(mean));
  249. // Set the coefficients and sliders to the mean:
  250. for ( auto&& e : shape_coefficients)
  251. e = 0.0f;
  252. for ( auto&& e : blendshape_coefficients)
  253. e = 0.0f;
  254. for ( auto&& e : color_coefficients)
  255. e = 0.0f;
  256. for ( auto&& s : sliders)
  257. s.first->setValue( 0.0f);
  258. });
  259. // The Shape PCA window:
  260. viewer.ngui->addWindow(Eigen::Vector2i( 230, 10), "Shape PCA");
  261. viewer.ngui->addGroup( "Coefficients");
  262. auto num_shape_coeffs_to_display = std::min(morphable_model.get_shape_model().get_num_principal_components(), 30);
  263. for ( int i = 0; i < num_shape_coeffs_to_display; ++i)
  264. {
  265. viewer.ngui->addWidget( std::to_string(i), add_shape_coefficients_slider(viewer, morphable_model, blendshapes, shape_coefficients, i, std::to_string(i)));
  266. }
  267. if (num_shape_coeffs_to_display < morphable_model.get_shape_model().get_num_principal_components())
  268. {
  269. nanogui::Label *label = new nanogui::Label(viewer.ngui->window(), "Displaying 30/" + std::to_string(morphable_model.get_shape_model().get_num_principal_components()) + " coefficients.");
  270. viewer.ngui->addWidget( "", label);
  271. }
  272. // The Expression Blendshapes window:
  273. viewer.ngui->addWindow(Eigen::Vector2i( 655, 10), "Expression blendshapes");
  274. viewer.ngui->addGroup( "Coefficients");
  275. for ( int i = 0; i < blendshapes.size(); ++i)
  276. {
  277. viewer.ngui->addWidget( std::to_string(i), add_blendshapes_coefficients_slider(viewer, morphable_model, blendshapes, blendshape_coefficients, i, std::to_string(i)));
  278. }
  279. // The Colour PCA window:
  280. viewer.ngui->addWindow(Eigen::Vector2i( 440, 10), "Colour PCA");
  281. viewer.ngui->addGroup( "Coefficients");
  282. auto num_color_coeffs_to_display = std::min(morphable_model.get_color_model().get_num_principal_components(), 30);
  283. for ( int i = 0; i < num_shape_coeffs_to_display; ++i)
  284. {
  285. viewer.ngui->addWidget( std::to_string(i), add_color_coefficients_slider(viewer, morphable_model, blendshapes, color_coefficients, i, std::to_string(i)));
  286. }
  287. if (num_color_coeffs_to_display < morphable_model.get_shape_model().get_num_principal_components())
  288. {
  289. nanogui::Label *label = new nanogui::Label(viewer.ngui->window(), "Displaying 30/" + std::to_string(morphable_model.get_color_model().get_num_principal_components()) + " coefficients.");
  290. viewer.ngui->addWidget( "", label);
  291. }
  292. // call to generate menu
  293. viewer.screen->performLayout();
  294. return false;
  295. };
  296. viewer.launch();
  297. return EXIT_SUCCESS;
  298. }


Adjusting the shape, colour and expression coefficients produces different 3D models:


The application below demonstrates camera estimation and model fitting for the 3D model, using images from the ibug (Intelligent Behaviour Understanding Group) LFPW dataset.

The LFPW database provides face images at various angles together with their 68-landmark annotations in .pts files. In the implementation, a LandmarkMapper converts the landmark identifiers into model vertex indices, an orthographic camera is then estimated, and with the resulting camera matrix the face shape is fitted to the landmarks. The fit produces a mesh built from the fitted coefficients; using the camera parameters and the mesh, the texture is extracted from the image. Finally, the texture is saved as an isomap image, and the mesh is written as a textured .obj file that contains the texture coordinates. An .mtl file is saved alongside the .obj; it references the separately stored isomap image (isomap.png).



// Note: the includes and using-declarations are omitted in this listing; they are essentially
// the same as in the other eos examples in this article (eos core/morphablemodel/fitting/render
// headers, OpenCV, Boost.ProgramOptions and Boost.Filesystem), plus eos's read_pts_landmarks helper.
int main(int argc, char *argv[])
{
    fs::path modelfile, isomapfile, imagefile, landmarksfile, mappingsfile, outputfile;
    try {
        po::options_description desc("Allowed options");
        desc.add_options()
            ("help,h",
                "display the help message")
            ("model,m", po::value<fs::path>(&modelfile)->required()->default_value("../share/sfm_shape_3448.bin"),
                "a Morphable Model stored as cereal BinaryArchive")
            ("image,i", po::value<fs::path>(&imagefile)->required()->default_value("data/image_0129.png"),
                "an input image")
            ("landmarks,l", po::value<fs::path>(&landmarksfile)->required()->default_value("data/image_0129.pts"),
                "2D landmarks for the image, in ibug .pts format")
            ("mapping,p", po::value<fs::path>(&mappingsfile)->required()->default_value("../share/ibug_to_sfm.txt"),
                "landmark identifier to model vertex number mapping")
            ("output,o", po::value<fs::path>(&outputfile)->required()->default_value("out"),
                "basename for the output rendering and obj files")
            ;
        po::variables_map vm;
        po::store(po::command_line_parser(argc, argv).options(desc).run(), vm);
        if (vm.count("help")) {
            cout << "Usage: fit-model-simple [options]" << endl;
            cout << desc;
            return EXIT_SUCCESS;
        }
        po::notify(vm);
    }
    catch (const po::error& e) {
        cout << "Error while parsing command-line arguments: " << e.what() << endl;
        cout << "Use --help to display a list of options." << endl;
        return EXIT_FAILURE;
    }

    // Load the image, landmarks, LandmarkMapper and the Morphable Model:
    Mat image = cv::imread(imagefile.string());
    LandmarkCollection<cv::Vec2f> landmarks;
    try {
        landmarks = read_pts_landmarks(landmarksfile.string());
    }
    catch (const std::runtime_error& e) {
        cout << "Error reading the landmarks: " << e.what() << endl;
        return EXIT_FAILURE;
    }
    morphablemodel::MorphableModel morphable_model;
    try {
        morphable_model = morphablemodel::load_model(modelfile.string());
    }
    catch (const std::runtime_error& e) {
        cout << "Error loading the Morphable Model: " << e.what() << endl;
        return EXIT_FAILURE;
    }
    core::LandmarkMapper landmark_mapper = mappingsfile.empty() ? core::LandmarkMapper() : core::LandmarkMapper(mappingsfile);

    // Draw the loaded landmarks:
    Mat outimg = image.clone();
    for (auto&& lm : landmarks) {
        cv::rectangle(outimg, cv::Point2f(lm.coordinates[0] - 2.0f, lm.coordinates[1] - 2.0f), cv::Point2f(lm.coordinates[0] + 2.0f, lm.coordinates[1] + 2.0f), { 255, 0, 0 });
    }

    // These will be the final 2D and 3D points used for the fitting:
    vector<Vec4f> model_points; // the points in the 3D shape model
    vector<int> vertex_indices; // their vertex indices
    vector<Vec2f> image_points; // the corresponding 2D landmark points

    // Sub-select all the landmarks which we have a mapping for (i.e. that are defined in the 3DMM):
    for (int i = 0; i < landmarks.size(); ++i) {
        auto converted_name = landmark_mapper.convert(landmarks[i].name);
        if (!converted_name) { // no mapping defined for the current landmark
            continue;
        }
        int vertex_idx = std::stoi(converted_name.get());
        auto vertex = morphable_model.get_shape_model().get_mean_at_point(vertex_idx);
        model_points.emplace_back(Vec4f(vertex.x(), vertex.y(), vertex.z(), 1.0f));
        vertex_indices.emplace_back(vertex_idx);
        image_points.emplace_back(landmarks[i].coordinates);
    }

    // Estimate the camera (pose) from the 2D - 3D point correspondences
    fitting::ScaledOrthoProjectionParameters pose = fitting::estimate_orthographic_projection_linear(image_points, model_points, true, image.rows);
    fitting::RenderingParameters rendering_params(pose, image.cols, image.rows);

    // The 3D head pose can be recovered as follows:
    float yaw_angle = glm::degrees(glm::yaw(rendering_params.get_rotation()));
    // and similarly for pitch and roll.

    // Estimate the shape coefficients by fitting the shape to the landmarks:
    Mat affine_from_ortho = fitting::get_3x4_affine_camera_matrix(rendering_params, image.cols, image.rows);
    vector<float> fitted_coeffs = fitting::fit_shape_to_landmarks_linear(morphable_model, affine_from_ortho, image_points, vertex_indices);

    // Obtain the full mesh with the estimated coefficients:
    core::Mesh mesh = morphable_model.draw_sample(fitted_coeffs, vector<float>());

    // Extract the texture from the image using given mesh and camera parameters:
    Mat isomap = render::extract_texture(mesh, affine_from_ortho, image);

    // Save the mesh as textured obj:
    outputfile += fs::path(".obj");
    core::write_textured_obj(mesh, outputfile.string());

    // And save the isomap:
    outputfile.replace_extension(".isomap.png");
    cv::imwrite(outputfile.string(), isomap);

    cout << "Finished fitting and wrote result mesh and isomap to files with basename " << outputfile.stem().stem() << "." << endl;

    return EXIT_SUCCESS;
}

The previous example used LFPW face images with pre-annotated landmarks. We can of course obtain the face image and its 68 landmarks by other means, for example by detecting the face with dlib and collecting the landmarks into eos's LandmarkCollection class. The following program is such an example:




// eos library includes
#include "eos/core/Landmark.hpp"
#include "eos/core/LandmarkMapper.hpp"
#include "eos/morphablemodel/MorphableModel.hpp" // needed for morphablemodel::MorphableModel / load_model
#include "eos/fitting/nonlinear_camera_estimation.hpp"
#include "eos/fitting/linear_shape_fitting.hpp"
#include "eos/render/utils.hpp"
#include "eos/render/texture_extraction.hpp"
// dlib includes (face detector, 68-point shape predictor and the image window)
#include <dlib/opencv.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/gui_widgets.h>
// OpenCV includes
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#if 0
#ifdef WIN32
#define BOOST_ALL_DYN_LINK // Link against the dynamic boost lib. Seems to be necessary because we use /MD, i.e. link to the dynamic CRT.
#define BOOST_ALL_NO_LIB   // Don't use the automatic library linking by boost with VS2010 (#pragma ...). Instead, we specify everything in cmake.
#endif
#endif
#include "boost/program_options.hpp"
#include <boost/filesystem.hpp>
#include <vector>
#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>

using namespace eos;
namespace po = boost::program_options;
namespace fs = boost::filesystem;
using eos::core::Landmark;
using eos::core::LandmarkCollection;
using cv::Mat;
using cv::Vec2f;
using cv::Vec3f;
using cv::Vec4f;
using std::cout;
using std::endl;
using std::vector;
using std::string;
using Eigen::Vector4f;

int main(int argc, char *argv[])
{
    /// read eos file
    fs::path modelfile, isomapfile, mappingsfile, outputfilename, outputfilepath;
    try {
        po::options_description desc("Allowed options");
        desc.add_options()
            ("help,h", "display the help message")
            ("model,m", po::value<fs::path>(&modelfile)->required()->default_value("../share1/sfm_shape_3448.bin"), "a Morphable Model stored as cereal BinaryArchive")
            ("mapping,p", po::value<fs::path>(&mappingsfile)->required()->default_value("../share1/ibug2did.txt"), "landmark identifier to model vertex number mapping")
            ("outputfilename,o", po::value<fs::path>(&outputfilename)->required()->default_value("out"), "basename for the output rendering and obj files")
            ("outputfilepath", po::value<fs::path>(&outputfilepath)->required()->default_value("output/"), "directory for the output files") // note: only one option can use the short name 'o'
            ;
        po::variables_map vm;
        po::store(po::command_line_parser(argc, argv).options(desc).run(), vm);
        if (vm.count("help")) {
            cout << "Usage: webcam_face_fit_model_keegan [options]" << endl;
            cout << desc;
            return EXIT_SUCCESS;
        }
        po::notify(vm);
    }
    catch (const po::error& e) {
        cout << "Error while parsing command-line arguments: " << e.what() << endl;
        cout << "Use --help to display a list of options." << endl;
        return EXIT_SUCCESS;
    }

    try
    {
        cv::VideoCapture cap(0);
        dlib::image_window win;

        // Load face detection and pose estimation models.
        dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
        dlib::shape_predictor pose_model;
        dlib::deserialize("../share1/shape_predictor_68_face_landmarks.dat") >> pose_model;

#define TEST_FRAME
        cv::Mat frame_capture;
#ifdef TEST_FRAME
        frame_capture = cv::imread("./data/image_0129.png");
        cv::imshow("input", frame_capture);
        cv::imwrite("frame_capture.png", frame_capture);
        cv::waitKey(1);
#endif

        // Grab and process frames until the main window is closed by the user.
        int frame_count = 0;
        while (!win.is_closed())
        {
CAPTURE_FRAME:
            Mat image;
#ifndef TEST_FRAME
            cap >> frame_capture;
#endif
            frame_capture.copyTo(image);

            // Turn OpenCV's Mat into something dlib can deal with. Note that this just
            // wraps the Mat object, it doesn't copy anything. So cimg is only valid as
            // long as frame_capture is valid. Also don't do anything to frame_capture that would cause it
            // to reallocate the memory which stores the image as that will make cimg
            // contain dangling pointers. This basically means you shouldn't modify frame_capture
            // while using cimg.
            dlib::cv_image<dlib::bgr_pixel> cimg(frame_capture);

            // Detect faces
            std::vector<dlib::rectangle> faces = detector(cimg);
            if (faces.size() == 0) goto CAPTURE_FRAME;
            for (size_t i = 0; i < faces.size(); ++i)
            {
                cout << faces[i] << endl;
            }

            // Find the pose of each face.
            std::vector<dlib::full_object_detection> shapes;
            for (unsigned long i = 0; i < faces.size(); ++i)
                shapes.push_back(pose_model(cimg, faces[i]));

            /// face 68 points
            for (size_t i = 0; i < shapes.size(); ++i)
            {
                morphablemodel::MorphableModel morphable_model;
                try
                {
                    morphable_model = morphablemodel::load_model(modelfile.string());
                }
                catch (const std::runtime_error& e)
                {
                    cout << "Error loading the Morphable Model: " << e.what() << endl;
                    return EXIT_FAILURE;
                }
                core::LandmarkMapper landmark_mapper = mappingsfile.empty() ? core::LandmarkMapper() : core::LandmarkMapper(mappingsfile);

                /// every face
                LandmarkCollection<Vec2f> landmarks;
                landmarks.reserve(68);
                cout << "point_num = " << shapes[i].num_parts() << endl;
                int num_face = shapes[i].num_parts();
                for (size_t j = 0; j < num_face; ++j)
                {
                    dlib::point pt_save = shapes[i].part(j);
                    Landmark<Vec2f> landmark;
                    /// input (ibug landmark names are 1-based)
                    landmark.name = std::to_string(j + 1);
                    landmark.coordinates[0] = pt_save.x();
                    landmark.coordinates[1] = pt_save.y();
                    //cout << shapes[i].part(j) << "\t";
                    landmark.coordinates[0] -= 1.0f;
                    landmark.coordinates[1] -= 1.0f;
                    landmarks.emplace_back(landmark);
                }

                // Draw the loaded landmarks:
                Mat outimg = image.clone();
                cv::imshow("image", image);
                cv::waitKey(10);
                int face_point_i = 1;
                for (auto&& lm : landmarks)
                {
                    cv::Point numPoint(lm.coordinates[0] - 2.0f, lm.coordinates[1] - 2.0f);
                    cv::rectangle(outimg, cv::Point2f(lm.coordinates[0] - 2.0f, lm.coordinates[1] - 2.0f), cv::Point2f(lm.coordinates[0] + 2.0f, lm.coordinates[1] + 2.0f), { 255, 0, 0 });
                    char str_i[11];
                    sprintf(str_i, "%d", face_point_i);
                    cv::putText(outimg, str_i, numPoint, CV_FONT_HERSHEY_COMPLEX, 0.3, cv::Scalar(0, 0, 255));
                    ++face_point_i; // advance the label counter (the original listing incremented the outer loop variable i here)
                }
                //cout << "face_point_i = " << face_point_i << endl;
                cv::imshow("rect_outimg", outimg);
                cv::waitKey(1);

                // These will be the final 2D and 3D points used for the fitting:
                std::vector<Vec4f> model_points; // the points in the 3D shape model
                std::vector<int> vertex_indices; // their vertex indices
                std::vector<Vec2f> image_points; // the corresponding 2D landmark points

                // Sub-select all the landmarks which we have a mapping for (i.e. that are defined in the 3DMM):
                for (int k = 0; k < landmarks.size(); ++k)
                {
                    auto converted_name = landmark_mapper.convert(landmarks[k].name);
                    if (!converted_name)
                    {
                        // no mapping defined for the current landmark
                        continue;
                    }
                    int vertex_idx = std::stoi(converted_name.get());
                    //Vec4f vertex = morphable_model.get_shape_model().get_mean_at_point(vertex_idx);
                    auto vertex = morphable_model.get_shape_model().get_mean_at_point(vertex_idx);
                    model_points.emplace_back(Vec4f(vertex.x(), vertex.y(), vertex.z(), 1.0f));
                    vertex_indices.emplace_back(vertex_idx);
                    image_points.emplace_back(landmarks[k].coordinates);
                }

                // Estimate the camera (pose) from the 2D - 3D point correspondences
                fitting::RenderingParameters rendering_params = fitting::estimate_orthographic_camera(image_points, model_points, image.cols, image.rows);
                Mat affine_from_ortho = fitting::get_3x4_affine_camera_matrix(rendering_params, image.cols, image.rows);
                // cv::imshow("affine_from_ortho", affine_from_ortho);
                // cv::waitKey();

                // The 3D head pose can be recovered as follows:
                float yaw_angle = glm::degrees(glm::yaw(rendering_params.get_rotation()));

                // Estimate the shape coefficients by fitting the shape to the landmarks:
                std::vector<float> fitted_coeffs = fitting::fit_shape_to_landmarks_linear(morphable_model, affine_from_ortho, image_points, vertex_indices);
#if 0
                cout << "size = " << fitted_coeffs.size() << endl;
                for (int k = 0; k < fitted_coeffs.size(); ++k)
                    cout << fitted_coeffs[k] << endl;
#endif
                // Obtain the full mesh with the estimated coefficients:
                core::Mesh mesh = morphable_model.draw_sample(fitted_coeffs, std::vector<float>());

                // Extract the texture from the image using given mesh and camera parameters:
                Mat isomap = render::extract_texture(mesh, affine_from_ortho, image);

                ///// save obj
                std::stringstream strOBJ;
                strOBJ << std::setw(10) << std::setfill('0') << frame_count << ".obj";
                // Save the mesh as textured obj:
                outputfilename = strOBJ.str();
                std::cout << outputfilename << std::endl;
                auto outputfile = outputfilepath.string() + outputfilename.string();
                core::write_textured_obj(mesh, outputfile);
                // And save the isomap:
                outputfilename.replace_extension(".isomap.png");
                cv::imwrite(outputfilepath.string() + outputfilename.string(), isomap);
                cv::imshow("isomap_png", isomap);
                cv::waitKey(1);
                outputfilename.clear();
            }
            frame_count++;

            // Display it all on the screen
            win.clear_overlay();
            win.set_image(cimg);
            win.add_overlay(render_face_detections(shapes));
        }
    }
    catch (dlib::serialization_error& e)
    {
        cout << "You need dlib's default face landmarking model file to run this example." << endl;
        cout << "You can get it from the following URL: " << endl;
        cout << "   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
        cout << endl << e.what() << endl;
    }
    catch (std::exception& e)
    {
        cout << e.what() << endl;
    }
    return EXIT_SUCCESS;
}

  4. Cascaded Regression

Cascaded-regression-based methods are now widely used for pose estimation and 2D face alignment and are regarded as one of the more promising approaches. A discriminative cascaded-regression method typically sidesteps the non-differentiability problem by applying learning-based methods in a local feature space, which allows the "gradient" of the objective function to be learned from data.

A brief outline of cascaded regression follows. Given an image I and a pre-trained model with parameter vector θ, a regression-based method iteratively updates the parameters as

θ_{k+1} = θ_k + δθ_k

so as to maximise the posterior probability. Regression-based methods solve this nonlinear optimisation problem by learning the descent directions from a training set in a supervised way. In other words, the goal is to find a regressor

δθ = R(f(I, θ)),

where f(I, θ) is a feature vector extracted from the input image given the current model parameters θ, and δθ is the predicted update of the model parameters. This mapping can be learned from the training set with any regression method, for example linear regression, random forests or artificial neural networks. In contrast to a single regressor, cascaded regression builds a strong regressor as a cascade of N weak regressors:

R = r_N ∘ r_{N-1} ∘ ... ∘ r_1,

where r_n is the n-th weak regressor in the cascade. Here a simple linear regressor is used as an example:

r_n(f) = A_n·f + b_n,

where A_n is a projection matrix and b_n is the offset (bias) of the n-th weak regressor.

More specifically, given the training samples, we first learn the first weak regressor with the ridge regression algorithm, minimising the regularised loss

argmin_{A_1, b_1}  Σ_i ‖δθ_i − A_1·f_i − b_1‖² + λ‖A_1‖²_F.

The training samples, i.e. the model parameters and their corresponding feature vectors, are then updated with the learned regressor, producing a new training set for learning the second weak regressor. This process is repeated until convergence or until a predefined maximum number of regressors is reached.

At test time, the pre-trained weak regressors are applied to the input image one after another, starting from an initial estimate of the model parameters, to update the model and output the final fitting result.
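To make the training loop concrete, here is a minimal sketch of learning such a cascade of linear (ridge) regressors with Eigen. It is an illustration under assumptions only: extract_features stands in for the feature function f(I, θ) and must be supplied by the caller, the bias is folded into the ridge penalty for simplicity, and the defaults for N and λ are arbitrary.

#include <Eigen/Dense>
#include <vector>
#include <functional>

struct WeakRegressor {
    Eigen::MatrixXf A; // projection matrix A_n
    Eigen::VectorXf b; // bias b_n
};

// Learn N weak linear regressors r_n(f) = A_n * f + b_n with ridge regression.
// thetas          : current parameter estimates, one column per training sample
// theta_star      : ground-truth parameters, one column per training sample
// extract_features: stands in for f(I, theta); returns a feature vector for sample index i
std::vector<WeakRegressor> train_cascade(
    Eigen::MatrixXf thetas, const Eigen::MatrixXf& theta_star,
    const std::function<Eigen::VectorXf(int, const Eigen::VectorXf&)>& extract_features,
    int N = 5, float lambda = 1.0f)
{
    const int num_samples = static_cast<int>(thetas.cols());
    std::vector<WeakRegressor> cascade;
    for (int n = 0; n < N; ++n)
    {
        // Build the feature matrix F (a constant 1 is appended so the bias can be learned
        // jointly) and the target parameter updates dTheta = theta* - theta.
        Eigen::VectorXf f0 = extract_features(0, thetas.col(0));
        Eigen::MatrixXf F(f0.size() + 1, num_samples);
        for (int i = 0; i < num_samples; ++i) {
            F.col(i).head(f0.size()) = extract_features(i, thetas.col(i));
            F(f0.size(), i) = 1.0f;
        }
        Eigen::MatrixXf dTheta = theta_star - thetas;

        // Ridge regression: [A b]^T = (F F^T + lambda I)^-1 F dTheta^T
        Eigen::MatrixXf G = F * F.transpose();
        G.diagonal().array() += lambda;
        Eigen::MatrixXf AbT = G.ldlt().solve(F * dTheta.transpose());

        WeakRegressor r;
        r.A = AbT.transpose().leftCols(f0.size());
        r.b = AbT.transpose().rightCols<1>();
        cascade.push_back(r);

        // Update the training samples with the learned regressor; this produces the
        // training set for the next weak regressor in the cascade.
        for (int i = 0; i < num_samples; ++i)
            thetas.col(i) += r.A * F.col(i).head(f0.size()) + r.b;
    }
    return cascade;
}

At test time the regressors would simply be applied in order, θ ← θ + A_n·f(I, θ) + b_n, starting from the initial parameter estimate.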


  5. Texture Extraction

After the 3D facial geometry has been reconstructed, the 2D image is orthographically projected onto the 3D geometry to generate the texture. Some vertices may have no corresponding colour information because they are occluded in the frontal face image, so the resulting texture map still contains blank regions; these blank regions can be filled by interpolating from the known colours.
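One simple way to fill such blank regions, assuming the extracted texture map is an 8-bit BGR OpenCV image in which occluded texels are black, is to inpaint them from the surrounding known colours. The sketch below uses cv::inpaint for this; it is only one possible choice and not necessarily the interpolation used in the original work, and the hole-detection rule is an assumption.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/photo/photo.hpp>

// Fill the blank (occluded) regions of an extracted texture map by interpolating
// from the known colours. 'isomap' is assumed to be an 8UC3 BGR texture image in
// which a texel with no colour information is pure black.
cv::Mat fill_texture_holes(const cv::Mat& isomap)
{
    // Build a mask of the unknown texels.
    cv::Mat gray, holes;
    cv::cvtColor(isomap, gray, cv::COLOR_BGR2GRAY);
    holes = (gray == 0); // 255 where the texel has no colour information

    // Interpolate the holes from the surrounding known colours.
    cv::Mat filled;
    cv::inpaint(isomap, holes, filled, 3 /*radius*/, cv::INPAINT_TELEA);
    return filled;
}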

 

Synthesizing different poses, illumination and expressions

In unconstrained environments, PIE (pose, illumination and expression) variation remains a key and challenging problem for face recognition algorithms. To improve recognition accuracy, it is necessary to capture sample face images under a wide variety of PIE conditions. However, generating a new face image under different PIE conditions with any 2D-to-2D method is difficult. Reconstructing a 3D face model from a given 2D face image solves this problem: the reconstructed 3D model can be rotated to generate images at different poses, and by applying different lighting, different illumination conditions can be created. Finally, an MPEG-4-based facial animation technique is used to generate expressions, which are also an important factor in face recognition but are ignored in most studies.

 

  6. Pose

 

Pose variation is the primary source of difficulty in face recognition. When the pose of the input image varies widely, the performance of a face recognition system degrades significantly, especially when the system is trained with only a few non-frontal images. A reasonable way to improve multi-view recognition is to train with multiple views. In the proposed approach, arbitrary views can be generated conveniently by rotating the 3D model to the desired pose.

For face recognition training, the positions of the feature points in multi-view face images are needed. In general, aligning faces at arbitrary viewing angles is very difficult, and no existing technique can solve this problem automatically with high accuracy. Most multi-view face recognition methods require manually labelling the feature points on a large number of training and test images in order to align them, which is both inaccurate and very time-consuming.

 

In the proposed method, because the multi-view images are produced by rotating the 3D model, alignment of the new face images is no longer a problem. When multi-view face images are rendered after rotating the 3D model, the positions of the facial feature points are obtained by projecting the corresponding feature vertices of the 3D model onto the 2D image. The feature point positions of multi-view face images are therefore obtained automatically and accurately.
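As an illustration of that projection step, the sketch below maps 3D feature vertices to 2D image points with a 3x4 affine camera matrix such as the affine_from_ortho matrix computed in the fitting examples above. The function name and inputs are hypothetical, and the camera matrix is assumed to be of type CV_32FC1.

#include <opencv2/core/core.hpp>
#include <vector>

// Project 3D feature vertices to 2D image points with a 3x4 affine camera matrix
// (e.g. the affine_from_ortho matrix used in the fitting examples above).
std::vector<cv::Vec2f> project_feature_vertices(const std::vector<cv::Vec4f>& model_points,
                                                const cv::Mat& affine_camera_3x4)
{
    std::vector<cv::Vec2f> image_points;
    for (const auto& p : model_points)
    {
        cv::Mat P(4, 1, CV_32FC1);
        P.at<float>(0) = p[0]; P.at<float>(1) = p[1];
        P.at<float>(2) = p[2]; P.at<float>(3) = p[3];
        cv::Mat projected = affine_camera_3x4 * P; // 3x1 result; no perspective division for an affine camera
        image_points.emplace_back(projected.at<float>(0), projected.at<float>(1));
    }
    return image_points;
}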

 

  7. Illumination

Illumination is another important issue in face recognition. The same face looks different as the lighting changes, and the variation caused by illumination is often larger than the differences between individuals.

 

Original article: https://blog.csdn.net/jcjx0315/article/details/78671670?locationNum=7&fps=1


Reposted from blog.csdn.net/zhang43211234/article/details/81004077