Reading Notes: Contextual Loss

Copyright notice: https://blog.csdn.net/gwplovekimi/article/details/84631194

Links to the relevant papers:

https://arxiv.org/pdf/1803.02077.pdf

https://arxiv.org/pdf/1803.04626.pdf

Whether in style transfer or in super-resolution reconstruction, a core problem is finding a similarity measure between the features of the generated image and those of the target image. For super-resolution, from a probabilistic point of view, we also want the distribution of the generated image to be as close as possible to that of the target image. The paper therefore proposes an objective function that compares feature distributions rather than just appearance. Comparing appearance alone runs into the well-known problem of the MSE loss: over-smoothing.

The commonly used loss functions for comparing images can be broadly classified into the following types:

Pixel-to-pixel loss functions (defined at the pixel level) — compare pixels at the same spatial coordinates, e.g., L1, L2, and the perceptual loss. These losses place strong demands on the alignment of the input and the ground truth, since matching is done pixel by pixel. They contribute a great deal when PSNR and SSIM are the objective evaluation metrics, but current research shows that such losses alone no longer meet our needs. For example, the SRGAN paper points out that the MSE objective yields reconstructions with high PSNR but lacking high-frequency information, producing overly smooth textures. The perceptual loss was proposed mainly to better preserve the high-frequency information of the image.

Global averaging leads to over-smoothing. See the figure below.
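To make the pixel-to-pixel idea concrete, here is a minimal NumPy sketch (not from the paper) of the L1 and L2/MSE losses. The toy "images" contain the same edge shifted by one pixel; because these losses compare pixels at the same coordinates, the misalignment is heavily penalized even though the content looks identical:

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error, compared pixel-by-pixel at identical coordinates.
    return np.mean(np.abs(pred - target))

def l2_loss(pred, target):
    # Mean squared error (MSE): favours high PSNR, but averaging over all
    # plausible explanations is what produces the over-smoothing above.
    return np.mean((pred - target) ** 2)

# Two 4x4 "images": the prediction is the target edge shifted by one pixel.
target = np.zeros((4, 4)); target[:, 1] = 1.0
pred   = np.zeros((4, 4)); pred[:, 2] = 1.0

# The edge itself is identical, yet both losses are large, because the
# features are not spatially aligned.
print(l1_loss(pred, target), l2_loss(pred, target))  # 0.5 0.5
```

This is exactly the failure mode the Contextual Loss is designed to avoid: it ignores spatial positions when matching features.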

Global loss functions (defined over features of the whole image) — e.g., the perceptual loss and the Gram loss, which successfully captures style and texture by comparing statistics collected over the entire image. Like the perceptual loss, the Gram loss is computed on feature layers; both are computed on feature maps extracted by a VGG network, and they constrain the similarity of global high-frequency features. However, image similarity is generally local, so such constraints are not entirely reasonable: because of its global nature, the Gram loss transfers global characteristics to the whole image and cannot be used to constrain the content of the generated image.

It pays more attention to texture features, so the results are sharper. The remaining problem is that although the output is more vivid, parts of the image can appear skewed. See the figure below.
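The "global statistics" nature of the Gram loss can be illustrated with a small NumPy sketch (an illustrative example, not the paper's code). The Gram matrix contains channel-by-channel inner products summed over all spatial positions, so any spatial rearrangement of the feature map leaves it unchanged; this is why it captures texture/style but cannot constrain content or layout:

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature maps, e.g. from one VGG layer.
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    # Inner products between channels, averaged over all spatial positions.
    # Every position contributes equally: the statistic is global and
    # discards spatial layout entirely.
    return F @ F.T / (H * W)

def gram_loss(pred_feats, target_feats):
    # Squared Frobenius distance between the two Gram matrices.
    return np.mean((gram_matrix(pred_feats) - gram_matrix(target_feats)) ** 2)

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))
# Apply the same spatial permutation to every channel:
shuffled = feats.reshape(8, -1)[:, rng.permutation(256)].reshape(8, 16, 16)

# The Gram matrix is identical, so the Gram loss between the original and
# the spatially scrambled features is zero.
print(np.allclose(gram_matrix(feats), gram_matrix(shuffled)))  # True
```

Two images with completely different layouts can therefore have identical Gram statistics, which is the "overly global" limitation the Contextual Loss addresses.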

Adversarial loss functions (GAN, targeting how "realistic" the generated image is relative to the target). The GAN loss is a common loss function: it simply judges whether the generated image is realistic enough to pass for a real one. However, the mode-collapse problem of GANs still has no good solution to date.

This paper's Contextual Loss is a loss function targeted at non-aligned data. It is based on the similarity between features, ignoring their spatial positions, and this allows the generated image to deform spatially with respect to the target. The Contextual Loss is not overly global (the main limitation of the Gram loss), since it compares features, and therefore regions, based on semantics.

A nice characteristic of the Contextual Loss is its tendency to maintain the appearance of the target image (it prefers to keep the target's look; can this be understood as preserving the target's contours?). This enables generation of images that look real even without using GANs, whose goal is specifically to distinguish between "real" and "fake", and which are sometimes difficult to fine-tune in training.
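The core computation can be sketched in a few lines of NumPy. This is a simplified, single-layer sketch following the paper's recipe (cosine distances, relative normalization, a bandwidth parameter h, then matching by maximum similarity); the parameter values and centering step are assumptions, and the official implementation operates on VGG features across several layers:

```python
import numpy as np

def contextual_loss(X, Y, h=0.5, eps=1e-5):
    # X: (N, C) generated features, Y: (M, C) target features (e.g. VGG
    # feature vectors flattened over spatial positions). Spatial coordinates
    # are ignored: each generated feature is matched to its most
    # "contextually similar" target feature.
    mu = Y.mean(axis=0)                                 # centre by target mean
    Xn = (X - mu) / (np.linalg.norm(X - mu, axis=1, keepdims=True) + eps)
    Yn = (Y - mu) / (np.linalg.norm(Y - mu, axis=1, keepdims=True) + eps)
    d = 1.0 - Xn @ Yn.T                                 # cosine distances d_ij
    d_rel = d / (d.min(axis=1, keepdims=True) + eps)    # relative distances
    w = np.exp((1.0 - d_rel) / h)                       # distances -> similarities
    cx = w / w.sum(axis=1, keepdims=True)               # normalized similarities
    return -np.log(cx.max(axis=1).mean() + eps)         # match, average, -log

rng = np.random.default_rng(1)
Y = rng.standard_normal((32, 8))    # toy "target" features
X = rng.standard_normal((32, 8))    # toy "generated" features
perm = rng.permutation(32)

# Identical feature sets give a near-zero loss; unrelated sets a larger one.
# Permuting the features (a spatial deformation) changes nothing, since
# matching is by feature similarity, not by position.
print(contextual_loss(Y, Y), contextual_loss(X, Y))
```

Note how the loss is invariant to reordering the rows of X: this is precisely what lets the generated image deform spatially with respect to the target without being penalized.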

On the perceptual loss:

https://www.jianshu.com/p/58fd418fcabf
