Least Squares Estimation and Singular Value Decomposition (SVD)

I Background

Least Squares Estimation is one of the standard methods for estimating the parameters of a system of equations, and it is widely used in robot vision: it underlies the estimation of the camera calibration matrix, the homography, the fundamental matrix, and the essential matrix. In these settings the estimation procedure is known as the DLT (Direct Linear Transformation) algorithm.
The least squares estimate itself is usually computed with the SVD (Singular Value Decomposition).

II Mathematical Computation

1 Least squares estimation

The DLT algorithm used to estimate a homography has SVD at its core; the basic flow is shown in the figure below. The equation has the form Ah = 0, where A is a 2n×9 matrix, n is the number of sample points, and h is a 9×1 vector.
Figure 1: Flow of the DLT algorithm
As the book [Hartley, 2003] explains, the SVD of A gives A = UDV^T, where D is a diagonal matrix whose diagonal entries are sorted in decreasing order. The conclusion is that the last column of V equals h; in other words, the nine elements of the last column of V are the estimates of the nine elements of h.
When computing the least squares estimate, a full SVD is not actually necessary: it suffices to find the eigenvector of A^T·A associated with its smallest eigenvalue, because that vector is exactly the last column of V described above.
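This shortcut is easy to check numerically. The sketch below is NumPy-based and not part of the original program; the 12×9 matrix is random stand-in data (playing the role of the 2n×9 DLT matrix for n = 6 point pairs). It computes h both ways and confirms the two answers agree up to sign:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 9))   # stands in for the 2n x 9 DLT matrix

# Route 1: full SVD; h is the last column of V, i.e. the last row of V^T
U, D, Vt = np.linalg.svd(A)
h_svd = Vt[-1]

# Route 2: eigenvector of A^T A for the smallest eigenvalue
w, V = np.linalg.eigh(A.T @ A)     # eigh sorts eigenvalues in ascending order
h_eig = V[:, 0]

# Same direction; eigenvectors are only defined up to sign
print(np.allclose(np.abs(h_svd), np.abs(h_eig)))
```

Both vectors are unit-length, so comparing absolute values is enough to show they span the same one-dimensional solution space.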

2 SVD (singular value decomposition)

The singular value decomposition of a matrix is written A = UDV^T, where D is a diagonal matrix whose entries, called the singular values, are the square roots of the eigenvalues of A^T·A; see [Zeng, 2015] for details.
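A quick numerical illustration of this relationship (an assumed example, not from the original text), using the same 2×3 matrix as the demo program further below:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [7., 8., 9.]])

sv = np.linalg.svd(A, compute_uv=False)   # singular values, descending
ev = np.linalg.eigvalsh(A.T @ A)[::-1]    # eigenvalues of A^T A, descending

# The two nonzero singular values match the square roots of the two largest
# eigenvalues; the third eigenvalue is ~0 because A has rank 2, and the clip
# guards against a tiny negative value from floating-point round-off.
print(sv)
print(np.sqrt(np.clip(ev, 0.0, None))[:2])
```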

3 Eigenvalues and eigenvectors

There are four basic algorithms for computing eigenvalues and eigenvectors: the definitional method, the power method, the Jacobi method, and the QR method. The definitional method works directly from the definition of eigenvalues and eigenvectors (see [Lay, 2007]); it applies to error-free data, whereas in practical computer vision the collected data carry measurement error introduced during acquisition, so it is unsuitable. The power method may miss eigenvalues, and since we need the smallest eigenvalue it is unsuitable as well. Both the Jacobi method and the QR method can find all eigenvalues along with all eigenvectors, so both fit our application.
We currently use the Jacobi method; see [Zhou, 2014] for details. One remark on the code shared in that blog post: at its line 28, where dbMax receives its first value, I believe the absolute value of the matrix element should be taken rather than the element itself. Also, note that the Jacobi method requires the input matrix to be real symmetric.
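The core of the Jacobi method is a plane rotation whose angle is chosen to annihilate one off-diagonal pair per step; symmetry is what lets a single angle zero both the (p, q) and (q, p) entries. A tiny standalone sketch (the numeric values are arbitrary, not taken from the referenced blog):

```python
from math import atan2, cos, sin

# arbitrary entries of a symmetric 2x2 block: [[App, Apq], [Apq, Aqq]]
App, Apq, Aqq = 4.0, 2.0, 1.0

# rotation angle that zeroes the off-diagonal pair in one step
theta = 0.5 * atan2(2.0 * Apq, App - Aqq)
c, s = cos(theta), sin(theta)

# rotated off-diagonal entry: 0.5*(Aqq-App)*sin(2t) + Apq*cos(2t)
new_Apq = (Aqq - App) * s * c + Apq * (c * c - s * s)
print(abs(new_Apq) < 1e-12)
```

This is exactly the expression the Jacobi loop below assigns to the (nRow, nCol) entry, so after the rotation that entry is zero up to round-off.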

Python Program

Main function:

import SVD

#------------------------------------------------
# main 
if __name__ == "__main__":
    print("Hello world")

    arr = [1., 2., 3.,
           7., 8., 9.]

    # doing SVD (singular value decomposition)
    eigens_sorted2 = SVD.Doing_svd(arr, 2, 3)

    print(eigens_sorted2)

SVD module:

from math import *

PI = pi   # full-precision constant from the math module

#------------------------------------------------
# Matrix math : dot product and transpose
#------------------------------------------------
def getTranspose(arr, nRow, nCol):
    arrTrans = []

    nRowT = nCol
    nColT = nRow

    i = 0
    j = 0
    while i<nRowT:
        j = 0
        while j<nColT:
            arrTrans.append(arr[j*nCol+i])
            j += 1
        i += 1


    return arrTrans

#------------------------------------------------
def getDotProduct_ATransA(arr, nRow, nCol):
    result = []
    arrTrans = getTranspose(arr, nRow, nCol)

    i = 0
    j = 0

    nColT = nRow
    nRowT = nCol

    while i<nRowT:
        j = 0
        while j<nCol:
            ele = 0.
            k = 0
            while k<nColT:
                # accumulate (A^T)[i][k] * A[k][j]
                ele = ele + (arrTrans[i*nColT+k]*arr[k*nCol+j])
                k += 1
            result.append(ele)
            j += 1
        i += 1

    return result
#------------------------------------------------
# Singular value decomposition
#------------------------------------------------
def getUnitMatrix(dimension):
    unitMatrix = []

    i = 0
    j = 0
    while i<dimension:
        j = 0
        while j<dimension:
            if(i==j):
                ele = 1.
            else:
                ele = 0.
            unitMatrix.append(ele)
            j += 1
        i += 1
    return unitMatrix
#------------------------------------------------
def arcTan(p1, p2):
    # arctan(p1/p2), falling back to +/- pi/2 when p2 == 0
    if p2!=0.:
        return atan(p1/p2)
    elif p1>0:
        return PI/2.
    else:
        return PI/-2.
#------------------------------------------------
def getEigenValue_Jacobi(arr, eps, maxCount):
    # step 1 : make eigen vector matrix ready
    sizeDim = int(sqrt(len(arr)))
    sizeRow = int(sizeDim)
    sizeCol = int(sizeDim)
    eigenVector = getUnitMatrix(sizeDim)

    # step 2 : iterate
    # step 2.1 : find the largest element 
    nCount = 0
    while True:
        # start from the first off-diagonal element, taking its absolute value
        maxEle = abs(arr[1])
        nRow = 0
        nCol = 1

        i = 0
        j = 0
        while i<sizeDim:
            j=0
            while j<sizeDim:
                value = abs(arr[i*sizeRow+j])
                if i!=j:
                    if value>maxEle:
                        maxEle = value
                        nRow = i
                        nCol = j
                j += 1
            i += 1

        # step 2.2 : check
        if maxEle<eps:
            break
        if nCount>maxCount:
            break

        nCount += 1

        # step 3 : calculate
        App = arr[nRow*sizeDim+nRow]
        Apq = arr[nRow*sizeDim+nCol]
        Aqq = arr[nCol*sizeDim+nCol]

        theta = 0.5*arcTan(-2.*Apq,Aqq-App)
        sinTheta = sin(theta)
        cosTheta = cos(theta)
        sin2Theta = sin(2.*theta)
        cos2Theta = cos(2.*theta)

        arr[nRow*sizeDim+nRow] = App*cosTheta*cosTheta + Aqq*sinTheta*sinTheta + 2.*Apq*cosTheta*sinTheta
        arr[nCol*sizeDim+nCol] = App*sinTheta*sinTheta + Aqq*cosTheta*cosTheta - 2.*Apq*cosTheta*sinTheta
        arr[nRow*sizeDim+nCol] = 0.5*(Aqq-App)*sin2Theta + Apq*cos2Theta
        arr[nCol*sizeDim+nRow] = arr[nRow*sizeDim+nCol]

        i = 0
        while i<sizeDim:
            if (i!=nRow)and(i!=nCol):
                u = i*sizeDim + nRow # p
                w = i*sizeDim + nCol # q
                dbMax = arr[u]
                arr[u] = arr[w]*sinTheta + dbMax*cosTheta
                arr[w] = arr[w]*cosTheta - dbMax*sinTheta
            i += 1

        j = 0
        while j<sizeDim:
            if (j!=nRow)and(j!=nCol):
                u = nRow*sizeDim + j # p
                w = nCol*sizeDim + j # q
                dbMax = arr[u]
                arr[u] = arr[w]*sinTheta + dbMax*cosTheta
                arr[w] = arr[w]*cosTheta - dbMax*sinTheta
            j += 1

        # step 4 : get eigen vector
        i = 0
        while i<sizeDim:
            u = i*sizeDim + nRow
            w = i*sizeDim + nCol
            dbMax = eigenVector[u]
            eigenVector[u] = eigenVector[w]*sinTheta + dbMax*cosTheta
            eigenVector[w] = eigenVector[w]*cosTheta - dbMax*sinTheta

            i += 1

    # zero out off-diagonal entries whose magnitude is below eps
    for i in range(sizeDim):
        for j in range(sizeDim):
            if i!=j:
                if abs(arr[i*sizeDim+j])<eps:
                    arr[i*sizeDim+j] = 0.

    eigens_svd = [arr, eigenVector]

    return eigens_svd
#------------------------------------------------
def getEigenValueAndVector_sorting(eigenValueAndVector):
    # get eigen values and eigen vectors from function "getEigenValue_Jacobi"
    eigenValue  = eigenValueAndVector[0]
    eigenVector = eigenValueAndVector[1]

    nDim = int(sqrt(len(eigenValue)))

    if nDim!=3:
        print("the dimension of the eigenvalue matrix is not 3!")
        return

    eigenValue_sorted  = [0,0,0] 
    eigenVector_sorted = getUnitMatrix(3)

    # step 1 : find the largest eigen value
    largestEle  = eigenValue[0*nDim+0]
    position_largestEigenValue = 0
    for i in range(nDim):
        ele = eigenValue[i*nDim+i]
        if ele>largestEle:
            largestEle = ele
            position_largestEigenValue = i

    # copy the matching eigenvector (a column) into column 0
    eigenValue_sorted[0] = largestEle
    eigenVector_sorted[0] = eigenVector[0*nDim+position_largestEigenValue]
    eigenVector_sorted[3] = eigenVector[1*nDim+position_largestEigenValue]
    eigenVector_sorted[6] = eigenVector[2*nDim+position_largestEigenValue]

    # step 2 : find the medium and the smallest eigen values
    eigenValues_rest = [] # flattened (eigen value, original position) pairs
    for i in range(nDim):
        if i!=position_largestEigenValue:
            eigenValues_rest.append(eigenValue[i*nDim+i])
            eigenValues_rest.append(i)

    position_mediumEigenValue = 0
    position_smallestEigenValue = 1
    if eigenValues_rest[0]<eigenValues_rest[2]:
        position_mediumEigenValue = 1
        position_smallestEigenValue = 0

    position_mediumEigenValue_inEigenValue = eigenValues_rest[position_mediumEigenValue*2+1]
    eigenValue_sorted[1] = eigenValues_rest[position_mediumEigenValue*2+0]
    eigenVector_sorted[1] = eigenVector[0*nDim+position_mediumEigenValue_inEigenValue]
    eigenVector_sorted[4] = eigenVector[1*nDim+position_mediumEigenValue_inEigenValue]
    eigenVector_sorted[7] = eigenVector[2*nDim+position_mediumEigenValue_inEigenValue]


    position_smallestEigenValue_inEigenValue = eigenValues_rest[position_smallestEigenValue*2+1]
    eigenValue_sorted[2] = eigenValues_rest[position_smallestEigenValue*2+0]
    eigenVector_sorted[2] = eigenVector[0*nDim+position_smallestEigenValue_inEigenValue]
    eigenVector_sorted[5] = eigenVector[1*nDim+position_smallestEigenValue_inEigenValue]
    eigenVector_sorted[8] = eigenVector[2*nDim+position_smallestEigenValue_inEigenValue]

    eigens_sorted = [eigenValue_sorted] + [eigenVector_sorted]

    return eigens_sorted

#------------------------------------------------
def Doing_svd(arr, nRow, nCol):
    # getDotProduct_ATransA forms A^T internally, so only A is passed in
    arrTTimesarr = getDotProduct_ATransA(arr, nRow, nCol)
    lowerlimit = 1.e-15
    eigens = getEigenValue_Jacobi(arrTTimesarr, lowerlimit, 300)    
    eigens_sorted = getEigenValueAndVector_sorting(eigens)

    return eigens_sorted

#------------------------------------------------
def Test_showing():
    print("Hello, this comes from function \" Test_showing \"")
    return 
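The module's output can be sanity-checked against NumPy. The hedged check below (my addition, not part of the original post) does not import the SVD module; it only recomputes the reference values that Doing_svd's sorted eigenvalue triple should match for the 2×3 demo matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [7., 8., 9.]])

# eigenvalues of A^T A, sorted descending like Doing_svd's first output
w = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(w)  # the third value is ~0 because A has rank 2

# cross-check: the eigenvalues must sum to trace(A^T A)
print(np.isclose(w.sum(), np.trace(A.T @ A)))
```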


References

[Hartley, 2003] Hartley R., Zisserman A. Multiple View Geometry in Computer Vision[M]. Cambridge University Press, 2003.
[Lay, 2007] David C. Lay. Linear Algebra and Its Applications[M]. Chinese edition, trans. 沈复兴, 傅莺莺, et al. Posts & Telecom Press, 2007.
[Zeng, 2015] How to easily conquer SVD (matrix singular value decomposition), with code. http://www.cnblogs.com/whu-zeng/p/4705970.html
[Zhou, 2014] A C/C++ implementation of the Jacobi algorithm for matrix eigenvalues and eigenvectors. http://blog.csdn.net/zhouxuguang236/article/details/40212143

Reprinted from blog.csdn.net/qq_32454557/article/details/76647374