1. Gradient-Based Low-Rank Method and Its Application in Image Inpainting
This paper addresses the inpainting problem under a low-rank prior: given an observation y, we want to recover the restored image x. The starting point is the classical formulation with a given dictionary and an L1-norm sparsity penalty. The authors then exploit the structure of the data: similar patches can be organized into a matrix, and the problem reduces to a standard low-rank approximation problem. Next, the regularisation function is generalised by introducing a pseudo-matrix ‖⋅‖_{p,q} norm; for a particular choice of p and q the form simplifies, and A can be written as a diagonal matrix times another matrix, which simplifies the whole model once more. The resulting problem is then typically solved alternately. This is the SAIST algorithm.
The essence is twofold: the relaxation of the regularization term, which ultimately becomes the nuclear norm, and the treatment of the matrix part, namely the matrix factorization.
Only then is the paper's central idea introduced: the grad-LR method. This algorithm is related to SAIST, but it considers both the properties of the whole image and the properties of the image gradients. Its optimization model therefore includes both the image itself and the image gradients, combining the two parts and solving the resulting optimization problem.
1 Theory introduction
1.1 Importance of utilizing the priors from gradient domain
Our motivation comes from a fundamental property of regular images: if the matrix of similar patches extracted from an image has low rank, then the matrix of similar patches extracted from the corresponding gradient image also has low rank, at the same order of magnitude or smaller.
1.2 Review of SAIST
The spatially adaptive iterative singular-value thresholding (SAIST) method first assumes that the inpainted/desired image is sparse in some dictionary, which leads to the following minimization problem:

    α̂ = argmin_α ‖y − Uα‖_2² + λ‖α‖_1,

where U is a dictionary and n is the patch size.
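For an orthonormal dictionary (a simplifying assumption not required by SAIST in general), the minimization above has a closed-form solution via element-wise soft thresholding. A minimal sketch, with the helper name `soft` and the random data being my own choices:

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft thresholding: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# If U^T U = I, then  min_a ||y - U a||_2^2 + lam * ||a||_1
# has the closed-form solution  a = soft(U^T y, lam / 2).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # random orthonormal basis
y = rng.standard_normal(8)
lam = 0.5
a = soft(U.T @ y, lam / 2)
```

The intuition is that orthonormality turns the data-fit term into `||U.T @ y - a||²`, which decouples coordinate-by-coordinate.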
In the SAIST method, the author uses group sparsity: a set of similar patches is grouped together (e.g. by finding the k nearest neighbors of an exemplar patch), and a pseudo-matrix norm ‖⋅‖_{p,q} is exploited to define the group sparsity, where the coefficient matrix A is related to the image patches by X = UA. The pseudo-matrix norm is defined by:

    ‖A‖_{p,q} = Σ_i ‖aⁱ‖_q^p,

where aⁱ is the i-th row of matrix A. When p = 1, q = 2,

    ‖A‖_{1,2} = Σ_i ‖aⁱ‖_2 = Σ_i (Σ_j |a_{i,j}|²)^{1/2},

which is (up to a constant factor, for zero-mean rows) the sum of the standard deviations associated with the sparse coefficient vector in each row.
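The ‖⋅‖_{1,2} norm above is simply the sum of the ℓ2 norms of the rows, which is why it promotes row-wise (group) sparsity. A minimal sketch, with the function name and the toy matrix being my own:

```python
import numpy as np

def norm_1_2(A):
    """Pseudo-matrix norm ||A||_{1,2}: the sum of the l2 norms of the rows of A."""
    return np.sqrt((A ** 2).sum(axis=1)).sum()

A = np.array([[3.0, 4.0],    # row l2 norm: 5
              [0.0, 0.0],    # row l2 norm: 0 (a "dead" group)
              [5.0, 12.0]])  # row l2 norm: 13
# norm_1_2(A) -> 18.0
```

Penalizing this quantity drives entire rows of A to zero at once, so similar patches share the same set of active dictionary atoms.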
The main innovation of SAIST is its reformulation under the assumption that the basis U is orthogonal. Specifically, when the pseudo-matrix norm ‖⋅‖_{1,2} is used in simultaneous sparse coding (SSC), the minimization is:

    Â = argmin_A ‖X − UA‖_F² + λ‖A‖_{1,2},

where U is a dictionary and X = [x_1, …, x_m] is a group of a set of similar patches.
By rewriting

    A = Γ Vᵀ,

where Γ = diag{γ_1, …, γ_I} (I = min{n, m}) is a diagonal matrix in ℝ^{I×I} and V is a right-multiplying matrix each column of which has unit ℓ2 norm, any A can be decomposed in this form, and we also have

    ‖A‖_{1,2} = Σ_i γ_i.

Since U is orthogonal, X̂ = UA = UΓVᵀ takes the form of a singular value decomposition, and the model can then be rewritten as:

    X̂ = argmin_Z ‖X − Z‖_F² + λ‖Z‖_*,

which is a standard low-rank approximation problem.
‖⋅‖_* is the nuclear norm (defined as the sum of the singular values), and it is a convex relaxation of the rank function, which counts the number of non-zero singular values.
In practice, S_τ(⋅) denotes the soft-thresholding operator with threshold (regularization parameter) τ, i.e. S_τ(Σ) = max(Σ − τ, 0) applied to the diagonal, and the reconstructed data matrix is conveniently obtained by

    X̂ = U S_τ(Σ) Vᵀ,

where X = UΣVᵀ is the singular value decomposition of X.
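The singular-value thresholding step can be sketched in a few lines. This is a minimal illustration (the function name, τ value, and random test matrix are my own), not the full SAIST iteration:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: soft-threshold the singular values of X.

    Computes X_hat = U @ S_tau(Sigma) @ V^T, the proximal operator of the
    nuclear norm, i.e. the minimizer of  0.5 * ||X - Z||_F^2 + tau * ||Z||_*.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
    return U @ np.diag(s_thr) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))  # rank <= 4
Z = svt(X, tau=1.0)
```

Because small singular values are set exactly to zero, the output Z has rank no larger than that of X, which is what makes this operator the workhorse of nuclear-norm-regularized low-rank approximation.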
2 Proposed grad-LR method
(the same low-rank objective is additionally applied to the gradient images in the objective function)
2.1 Grad-LR method
Model:
This part describes the grad-LR model: as in SAIST, but the objective combines the low-rank penalty on groups of similar image patches with the same penalty applied to the corresponding gradient patches.
2. Gradient-Based Low-Rank Method for Highly Under-Sampled Magnetic Resonance Imaging Reconstruction
The model used in this second paper is similar to the model used in the previous one.