Deep Learning AI Beautification Tutorial: AI Hair Dyeing Effects

Changing the hair color of a person in a photo or video is a feature already shipped in mobile apps such as 天天P图 and 美图秀秀, where it has proved popular with users.

So how do we recolor the hair in a photo or video?

The recoloring pipeline is shown in the figure below:

1. AI hair segmentation module

Deep-learning-based segmentation is by now fairly mature; commonly used networks include FCN, SegNet, UNet, PSPNet, DenseNet, and so on.

Here we use a U-Net to segment the hair region.

The U-Net hair segmentation code is as follows:

 
```python
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, UpSampling2D, concatenate)


def get_unet_256(input_shape=(256, 256, 3), num_classes=1):
    inputs = Input(shape=input_shape)
    # 256

    down0 = Conv2D(32, (3, 3), padding='same')(inputs)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0 = Conv2D(32, (3, 3), padding='same')(down0)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
    # 128

    down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64

    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32

    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16

    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8

    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center

    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16

    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32

    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64

    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128

    up0 = UpSampling2D((2, 2))(up1)
    up0 = concatenate([down0, up0], axis=3)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    # 256

    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0)

    model = Model(inputs=inputs, outputs=classify)

    # model.compile(optimizer=RMSprop(lr=0.0001), loss=bce_dice_loss, metrics=[dice_coeff])

    return model
```

An example segmentation result is shown below:

You will need to prepare your own training and test datasets.
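Once the network has produced a per-pixel hair probability map, a common next step is to threshold it and feather the mask edge so the dye blends smoothly at the hairline; a minimal NumPy sketch (the function name and parameters are illustrative, not from the original post):

```python
import numpy as np

def probs_to_soft_mask(probs, thresh=0.5, feather=2):
    """Threshold a hair-probability map, then feather the edge with a box blur."""
    mask = (probs >= thresh).astype(np.float32)
    k = 2 * feather + 1
    kernel = np.ones(k, dtype=np.float32) / k
    # separable box blur as a cheap stand-in for Gaussian feathering
    for axis in (0, 1):
        mask = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode='same'), axis, mask)
    return np.clip(mask, 0.0, 1.0)

probs = np.zeros((8, 8), dtype=np.float32)
probs[2:6, 2:6] = 0.9          # a square "hair" region
mask = probs_to_soft_mask(probs)
```

The soft mask in [0, 1] can then be used as a per-pixel blend weight when compositing the dyed color.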

2. Hair recoloring module

This module looks simple, but it is not.

It breaks down into three parts: ① hair color enhancement and correction; ② color-space dyeing; ③ hair detail enhancement.

① Hair color enhancement and correction

Why do we need enhancement and correction first?

Look at the images below: we dye pure-black hair directly in HSV color space with purple as the target color, and the result is:

As you can see, because the hair in the original photo is nearly black, the HSV recoloring is barely visible in the result; there is only a slight color shift.

Why does this happen? The reason is as follows:

Take the RGB and HSV color spaces as an example, and look first at the RGB-to-HSV conversion.

Let (r, g, b) be a color's red, green, and blue coordinates, real numbers in [0, 1]. Let max be the largest of r, g, b and min the smallest. The HSV values (h, s, v), with hue h ∈ [0, 360) in degrees and s, v ∈ [0, 1], are computed as:

  h = 0                                  if max = min
  h = 60°·(g − b)/(max − min) mod 360°   if max = r
  h = 60°·(b − r)/(max − min) + 120°     if max = g
  h = 60°·(r − g)/(max − min) + 240°     if max = b

  s = 0 if max = 0, otherwise s = (max − min)/max
  v = max

Suppose the hair is pure black, R = G = B = 0. The formulas above then give H = S = V = 0.

Now suppose we want to replace the hair color with red (r = 255, g = 0, b = 0).

We first convert the target red to HSV, keep the original black hair's V and take the target's h and s, recombine them into a new hsV triple, and convert back to RGB: that is the recolored result. (h and s are the color attributes and v the brightness attribute; keeping the original brightness while swapping in the target's color attributes is what performs the recoloring.)

HSV converts back to RGB as follows: let hN = ⌊h/60⌋ and f = h/60 − hN, and compute

  p = v·(1 − s),  q = v·(1 − f·s),  t = v·(1 − (1 − f)·s);

then (r, g, b) = (v, t, p), (q, v, p), (p, v, t), (p, q, v), (t, p, v), (v, p, q) for hN = 0, 1, 2, 3, 4, 5 respectively.

For black we computed H = S = V = 0. Since v = 0, we get p = q = t = 0, so whatever h and s the target color contributes, the resulting RGB is always (0, 0, 0): black.

So even though we used red to replace the black hair's color, the result is still black. The conclusion: the HSV/HSL color spaces cannot recolor black.
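This dead end on black hair is easy to verify with Python's standard-library colorsys module (a quick check of the math above, not part of the production pipeline; note colorsys scales hue to [0, 1] rather than degrees):

```python
import colorsys

# black hair pixel and a red target color, both as floats in [0, 1]
hair_rgb = (0.0, 0.0, 0.0)
target_rgb = (1.0, 0.0, 0.0)

_, _, hair_v = colorsys.rgb_to_hsv(*hair_rgb)            # V of black hair: 0.0
target_h, target_s, _ = colorsys.rgb_to_hsv(*target_rgb)

# recombine: target hue/saturation, original value (brightness)
recolored = colorsys.hsv_to_rgb(target_h, target_s, hair_v)
print(recolored)  # (0.0, 0.0, 0.0) -- still black, because V = 0
```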

Below are the purple dyeing results of 天天P图 and 美妆相机 for comparison:

Compared with the plain HSV result above, the 天天P图 and 美妆相机 colors are clearly richer and better looking, and they recolor even near-black hair convincingly.

For this reason we first apply some enhancement to the hair region of the image: brighten it and shift its tone slightly.

This adjustment is typically designed in Photoshop (brightening plus color tuning) and then baked into a LUT for runtime application.
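As an illustration only (the gamma curve below is a hypothetical stand-in for a Photoshop-tuned LUT), a 256-entry per-channel LUT can be precomputed and applied inside the hair mask like this:

```python
import numpy as np

# precompute a 256-entry brightening LUT (gamma < 1 lifts dark tones)
gamma = 0.6
lut = np.clip((np.arange(256) / 255.0) ** gamma * 255.0, 0, 255).astype(np.uint8)

def apply_lut(image, lut, mask):
    """Apply the LUT only inside the hair mask (mask values in [0, 1])."""
    brightened = lut[image]                     # per-channel table lookup
    blended = image * (1.0 - mask[..., None]) + brightened * mask[..., None]
    return blended.astype(np.uint8)

image = np.full((4, 4, 3), 30, dtype=np.uint8)  # dark "hair" pixels
mask = np.ones((4, 4), dtype=np.float32)
out = apply_lut(image, lut, mask)               # dark pixels are lifted
```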

The dyeing result after brightening is shown below:

As you can see, it is now essentially on par with 美妆相机 and 天天P图.

② Recoloring in HSV/HSL/YCbCr color space

This step is simple: keep the luminance component unchanged and replace the remaining color/hue components with those of the target hair color.

Taking HSV as the example:

Suppose we want to dye the hair half cyan and half pink. We build a color MAP like the one shown below:

For each pixel P in the hair region, convert P's RGB to HSV to get H/S/V;

based on P's relative position within the hair region, find the corresponding pixel D in the color MAP, and convert D's RGB to HSV to get the target color's h/s/v;

recombine hsV (the target's h and s with the original V) and convert back to RGB.
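The steps above can be sketched in Python with the stdlib colorsys module (a simplified toy version: here the color MAP is reduced to a single row of target colors indexed by the pixel's horizontal position, and all names are illustrative):

```python
import colorsys

def recolor_pixel(rgb, target_rgb):
    """Keep the pixel's V; take hue and saturation from the target color."""
    _, _, v = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    h, s, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in target_rgb])
    return tuple(int(round(c * 255)) for c in colorsys.hsv_to_rgb(h, s, v))

def recolor_hair(pixels, mask, color_row):
    """pixels: rows of (r,g,b); mask: same shape, 0/1; color_row: target colors."""
    height, width = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(height):
        for x in range(width):
            if mask[y][x]:
                # position-proportional lookup into the color map row
                d = color_row[x * len(color_row) // width]
                out[y][x] = recolor_pixel(pixels[y][x], d)
    return out

# mid-grey "hair" dyed with a cyan-to-pink map
pixels = [[(128, 128, 128)] * 4]
mask = [[1, 1, 1, 1]]
color_row = [(0, 255, 255), (255, 105, 180)]  # cyan, pink
out = recolor_hair(pixels, mask, color_row)
```

The grey pixels keep their brightness but take on the map's hue, so the left half comes out teal-ish and the right half a muted pink.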

The conversion code for this module is as follows:

 
```c
#define MIN2(a, b)     ((a) < (b) ? (a) : (b))
#define MAX2(a, b)     ((a) > (b) ? (a) : (b))
#define CLIP3(x, a, b) ((x) < (a) ? (a) : ((x) > (b) ? (b) : (x)))

// h in [0,360), s in [0,1], v in [0,1]
void RGBToHSV(int R, int G, int B, float* h, float* s, float* v)
{
    float min, max;
    float r = R / 255.0f;
    float g = G / 255.0f;
    float b = B / 255.0f;
    min = MIN2(r, MIN2(g, b));
    max = MAX2(r, MAX2(g, b));
    // else-if chain: when max == min the hue is undefined and the
    // (max - min) divisions below must not be reached
    if (max == min)
        *h = 0;
    else if (max == r && g >= b)
        *h = 60.0f * (g - b) / (max - min);
    else if (max == r && g < b)
        *h = 60.0f * (g - b) / (max - min) + 360.0f;
    else if (max == g)
        *h = 60.0f * (b - r) / (max - min) + 120.0f;
    else // max == b
        *h = 60.0f * (r - g) / (max - min) + 240.0f;

    if (max == 0)
        *s = 0;
    else
        *s = (max - min) / max;
    *v = max;
}

void HSVToRGB(float h, float s, float v, int* R, int* G, int* B)
{
    float q = 0, p = 0, t = 0, r = 0, g = 0, b = 0;
    int hN = 0;
    if (h < 0)
        h = 360 + h;
    if (h >= 360)          // guard h == 360, which would give hN = 6
        h -= 360;
    hN = (int)(h / 60);
    p = v * (1.0f - s);
    q = v * (1.0f - (h / 60.0f - hN) * s);
    t = v * (1.0f - (1.0f - (h / 60.0f - hN)) * s);
    switch (hN)
    {
    case 0: r = v; g = t; b = p; break;
    case 1: r = q; g = v; b = p; break;
    case 2: r = p; g = v; b = t; break;
    case 3: r = p; g = q; b = v; break;
    case 4: r = t; g = p; b = v; break;
    case 5: r = v; g = p; b = q; break;
    default: break;
    }
    *R = (int)CLIP3(r * 255.0f, 0, 255);
    *G = (int)CLIP3(g * 255.0f, 0, 255);
    *B = (int)CLIP3(b * 255.0f, 0, 255);
}
```
The result is shown below:

A comparison of our algorithm against 美妆相机:

③ Hair region enhancement

This step brings out the fine hair strands; a sharpening filter such as Laplacian sharpening or unsharp masking (USM) works well.

The pipeline above essentially reproduces the 美妆相机 dyeing process and is offered for reference. To close, here are some results from our algorithm:

Besides ordinary single-color and blended-color dyeing, we also implemented highlight (streak) dyeing, shown in the bottom group of images.

The idea behind the highlight effect:

Compute the hair texture, use it to select the strands to be highlighted, and then dye those strands separately from the rest of the hair. We will not belabor the details here; the outline above should be enough for readers to work it out.

Finally, the whole algorithm can in principle run in real time: hair segmentation already does, nothing downstream is expensive, and a real-time OpenGL implementation of the dyeing effect is entirely feasible.
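A minimal unsharp-masking sketch in NumPy (assuming a single-channel image, with a box blur standing in for the usual Gaussian; names and parameters are illustrative):

```python
import numpy as np

def box_blur(img, radius=1):
    """Separable box blur with edge padding."""
    k = 2 * radius + 1
    out = img.astype(np.float32)
    for axis in (0, 1):
        padded = np.pad(out, [(radius, radius) if a == axis else (0, 0)
                              for a in (0, 1)], mode='edge')
        out = sum(np.take(padded, range(i, i + out.shape[axis]), axis=axis)
                  for i in range(k)) / k
    return out

def unsharp_mask(img, amount=1.0, radius=1):
    """Sharpen: original + amount * (original - blurred)."""
    blurred = box_blur(img, radius)
    sharpened = img.astype(np.float32) + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

img = np.zeros((5, 5), dtype=np.uint8)
img[:, 2:] = 200                       # a vertical edge, like a hair strand
sharp = unsharp_mask(img)              # contrast across the edge is boosted
```

In the real pipeline the sharpening would be restricted to the hair mask so skin and background stay untouched.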


Reprinted from blog.csdn.net/a1ccwt/article/details/81327412