Inverse Perspective Mapping Explained with Code Implementation (Part 2)


Following the principles described in Inverse Perspective Mapping Explained with Code Implementation (Part 1), this post applies the inverse perspective transform to a lane image captured from a vehicle-mounted camera. Before getting to the code, let's look at the results.


First, the original image:


And below, the resulting inverse-perspective (bird's-eye view) image:



Now for the actual implementation.

I. Parameter setup

1. You need the camera parameters, both intrinsic and extrinsic (how to obtain them is covered in other references, so it is not detailed here). We assume the following have already been calibrated or measured:

camera focal length, camera optical center, camera height, camera pitch angle, camera yaw angle, and the size of the captured image.

Notes on the parameters: the pitch and yaw angles determine the rotation matrix described in Part 1 that takes world coordinates to the corresponding camera coordinates. The focal length and optical center come from camera calibration, the camera height has to be measured by hand, and the image size is simply the resolution of the captured image.
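For concreteness, these parameters can be gathered into a single struct. The field names below are the ones the code in this post expects; the numeric values are made-up placeholders that you must replace with your own calibration results (the unit of cameraHeight determines the unit of the ground-plane coordinates):

% Hypothetical example values -- replace with your own calibration results.
cameraInfo.focalLengthX   = 800;    % focal length in pixels, x direction
cameraInfo.focalLengthY   = 800;    % focal length in pixels, y direction
cameraInfo.opticalCenterX = 320;    % optical center u0, in pixels
cameraInfo.opticalCenterY = 240;    % optical center v0, in pixels
cameraInfo.cameraHeight   = 1.2;    % height above the ground (here: meters)
cameraInfo.pitch          = 5;      % pitch angle, in degrees
cameraInfo.yaw            = 0;      % yaw angle, in degrees
cameraInfo.imageWidth     = 640;    % captured image size (for reference only;
cameraInfo.imageHeight    = 480;    % not used directly by the functions below)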

2. Set the parameters of the inverse perspective transform: the size of the IPM output image, the region of the original image to be transformed, and the interpolation method.

Transform region: the part of the original image to be transformed (this region must not extend above the vanishing point; more on this below).

Output image size: the transform region is mapped onto an IPM image of this size.

Interpolation method: because the mapping generally lands between pixel centers, some pixel values have to be estimated; bilinear interpolation is used here.
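As with the camera parameters, these settings can be grouped into a struct. The ipmInfo field names below are the ones used later in this post; the numbers are placeholders only, a minimal sketch rather than the author's actual configuration:

% Region of interest in the original image (pixel coordinates) --
% hypothetical values; the top must lie below the vanishing point (see Section II).
ipmInfo.ipmLeft   = 85;
ipmInfo.ipmRight  = 550;
ipmInfo.ipmTop    = 280;
ipmInfo.ipmBottom = 480;

% Size of the bird's-eye-view output image, and its buffer
% (Section IV reads the output size from this array).
outRow   = 320;
outCol   = 240;
outImage = zeros(outRow, outCol);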


II. Compute the vanishing point from the camera parameters

Since the image is two-dimensional, the vanishing point is a 2D coordinate.

code:

function [ vp ] = GetVanishingPoint( cameraInfo )
%GetVanishingPoint Project the forward direction of the road (a point at
% infinity on the ground plane) into the image to obtain the vanishing point.

% Direction of the road at infinity, expressed in the world frame.
vpp = [sin(cameraInfo.yaw*pi/180)/cos(cameraInfo.pitch*pi/180);
       cos(cameraInfo.yaw*pi/180)/cos(cameraInfo.pitch*pi/180);
       0];

% Rotation about the vertical axis (yaw).
tyawp = [cos(cameraInfo.yaw*pi/180), -sin(cameraInfo.yaw*pi/180), 0;
         sin(cameraInfo.yaw*pi/180),  cos(cameraInfo.yaw*pi/180), 0;
         0, 0, 1];

% Rotation about the horizontal axis (pitch).
tpitchp = [1, 0, 0;
           0, -sin(cameraInfo.pitch*pi/180), -cos(cameraInfo.pitch*pi/180);
           0,  cos(cameraInfo.pitch*pi/180), -sin(cameraInfo.pitch*pi/180)];

transform = tyawp*tpitchp;

% Intrinsic matrix: focal lengths and optical center.
t1p = [cameraInfo.focalLengthX, 0, cameraInfo.opticalCenterX;
       0, cameraInfo.focalLengthY, cameraInfo.opticalCenterY;
       0, 0, 1];
transform = t1p*transform;

vp = transform*vpp;
% Return the image coordinates as a struct so later code can use vp.x / vp.y.
vp = struct('x', vp(1), 'y', vp(2));

end
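A quick usage sketch (hypothetical, not from the original code): compute the vanishing point and make sure the top of the transform region stays below it, since image rows at or above the vanishing point correspond to ground points infinitely far away:

vp = GetVanishingPoint(cameraInfo);

% Keep the ROI strictly below the vanishing point; the +10 pixel margin
% is an arbitrary choice for this sketch.
ipmInfo.ipmTop = max(ipmInfo.ipmTop, vp.y + 10);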

III. Use the vanishing point to obtain the region of interest in the uv (image) plane and the corresponding range on the xy (ground) plane

code:

% Four reference image points (columns are [u; v]): the top row of the ROI at
% the vanishing point's column and at its right/left borders, plus the bottom
% row at the vanishing point's column.
uvLimitsp = [vp.x,           ipmInfo.ipmRight, ipmInfo.ipmLeft, vp.x;
             ipmInfo.ipmTop, ipmInfo.ipmTop,   ipmInfo.ipmTop,  ipmInfo.ipmBottom];

% Project these image points onto the ground plane.
xyLimits = TransformImage2Ground(uvLimitsp, cameraInfo);

function [ xyLimits ] = TransformImage2Ground( uvLimits, cameraInfo )
%TransformImage2Ground Map image points (u, v) onto the ground plane (x, y),
% assuming the ground is flat and lies at z = -cameraHeight.

[~, col] = size(uvLimits);
% Homogeneous image coordinates [u; v; 1].
inPoints3 = [uvLimits; ones(1, col)];

c1 = cos(cameraInfo.pitch*pi/180);
s1 = sin(cameraInfo.pitch*pi/180);
c2 = cos(cameraInfo.yaw*pi/180);
s2 = sin(cameraInfo.yaw*pi/180);
fx = cameraInfo.focalLengthX;
fy = cameraInfo.focalLengthY;
cx = cameraInfo.opticalCenterX;
cy = cameraInfo.opticalCenterY;
h  = cameraInfo.cameraHeight;

% Image-to-ground matrix: rows 1-2 give x and y up to the scale in row 4;
% row 3 reproduces the constant ground height z = -h after normalization.
matp = [-h*c2/fx,  h*s1*s2/fy,  h*c2*cx/fx - h*s1*s2*cy/fy - h*c1*s2;
         h*s2/fx,  h*s1*c2/fy, -h*s2*cx/fx - h*s1*c2*cy/fy - h*c1*c2;
         0,        h*c1/fy,    -h*c1*cy/fy + h*s1;
         0,       -c1/fy,       c1*cy/fy - s1];

inPoints4 = matp*inPoints3;

% Normalize by the homogeneous coordinate (row 4) and keep (x, y).
div = inPoints4(4,:);
inPoints4(1,:) = inPoints4(1,:)./div;
inPoints4(2,:) = inPoints4(2,:)./div;
xyLimits = inPoints4(1:2,:);

end

IV. From the obtained range, build the ground-plane (xy) grid that corresponds one-to-one to the output pixels

% Bounding rectangle of the ROI on the ground plane.
row1 = xyLimits(1,:);
row2 = xyLimits(2,:);
xfMin = min(row1); xfMax = max(row1);
yfMin = min(row2); yfMax = max(row2);

% outImage must already be allocated to the desired IPM size (see Section I).
[outRow, outCol] = size(outImage);
stepRow = (yfMax - yfMin)/outRow;   % ground-plane units per output row
stepCol = (xfMax - xfMin)/outCol;   % ground-plane units per output column

% xyGrid(:, (i-1)*outCol + j) holds the ground coordinates of output pixel
% (i, j), sampled at the center of each grid cell.
xyGrid = zeros(2, outRow*outCol);
y = yfMax - 0.5*stepRow;
for i = 1:outRow
    x = xfMin + 0.5*stepCol;
    for j = 1:outCol
        xyGrid(1,(i-1)*outCol+j) = x;
        xyGrid(2,(i-1)*outCol+j) = y;
        x = x + stepCol;
    end
    y = y - stepRow;
end
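As a side note, the same grid can also be built without the double loop. This is an equivalent, optional vectorized version (not part of the original code):

% Cell-center coordinates along each axis.
xs = xfMin + ((1:outCol) - 0.5) * stepCol;
ys = yfMax - ((1:outRow) - 0.5) * stepRow;
[X, Y] = meshgrid(xs, ys);
% Transpose before reshaping so the column index varies fastest,
% matching the (i-1)*outCol + j layout used above.
xyGrid = [reshape(X', 1, []); reshape(Y', 1, [])];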

V. Map the xy-plane grid back to the uv plane and render the result

% Project every ground-plane grid point back into the image.
uvGrid = TransformGround2Image(xyGrid, cameraInfo);

% R is one channel (grayscale) of the original image; its mean intensity is
% used as the fill value for grid points that fall outside the ROI.
means = mean(R(:))/255;
RR = double(R)/255;
for i = 1:outRow
    for j = 1:outCol
        ui = uvGrid(1,(i-1)*outCol+j);
        vi = uvGrid(2,(i-1)*outCol+j);
        if (ui<ipmInfo.ipmLeft || ui>ipmInfo.ipmRight || vi<ipmInfo.ipmTop || vi>ipmInfo.ipmBottom)
            % Outside the region of interest: fill with the mean intensity.
            outImage(i,j) = means;
        else
            % Bilinear interpolation between the four neighboring pixels.
            x1 = floor(ui); x2 = x1 + 1;
            y1 = floor(vi); y2 = y1 + 1;
            x = ui - x1;    y = vi - y1;
            val = RR(y1,x1)*(1-x)*(1-y) + RR(y1,x2)*x*(1-y) + ...
                  RR(y2,x1)*(1-x)*y + RR(y2,x2)*x*y;
            outImage(i,j) = val;
        end
    end
end
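The TransformGround2Image function used above is not listed in this post (it is included in the downloadable code). For reference, here is a minimal sketch of what it has to do, under the assumption that it simply inverts the same image/ground homography that TransformImage2Ground builds; the matrix in the real code may be written out differently:

function [ uvGrid ] = TransformGround2Image( xyGrid, cameraInfo )
%TransformGround2Image Sketch: project ground-plane points (x, y, z = -h)
% back into image coordinates (u, v) by inverting the image-to-ground
% homography used in TransformImage2Ground.
c1 = cos(cameraInfo.pitch*pi/180);
s1 = sin(cameraInfo.pitch*pi/180);
c2 = cos(cameraInfo.yaw*pi/180);
s2 = sin(cameraInfo.yaw*pi/180);
fx = cameraInfo.focalLengthX;
fy = cameraInfo.focalLengthY;
cx = cameraInfo.opticalCenterX;
cy = cameraInfo.opticalCenterY;
h  = cameraInfo.cameraHeight;

% Rows 1, 2 and 4 of the image-to-ground matrix form a 3x3 homography H
% with [x; y; 1] proportional to H*[u; v; 1].
H = [-h*c2/fx,  h*s1*s2/fy,  h*c2*cx/fx - h*s1*s2*cy/fy - h*c1*s2;
      h*s2/fx,  h*s1*c2/fy, -h*s2*cx/fx - h*s1*c2*cy/fy - h*c1*c2;
      0,       -c1/fy,       c1*cy/fy - s1];

% Going the other way: [u; v; 1] is proportional to inv(H)*[x; y; 1].
uv = H \ [xyGrid; ones(1, size(xyGrid, 2))];
uvGrid = [uv(1,:)./uv(3,:); uv(2,:)./uv(3,:)];
end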

Finally, displaying outImage (for example with imshow(outImage)) gives the inverse-perspective image shown at the beginning of this post.

The complete code can be downloaded here. If you have any questions, feel free to leave a comment!



Also, if you need to calibrate your camera, you can refer to this post:

http://blog.csdn.net/yeyang911/article/details/52382722



Reposted from blog.csdn.net/yeyang911/article/details/51915348