On the Kinect problem where a person segmented via depth-to-color alignment comes out larger than in the color image

In Kinect's built-in green-screen segmentation demo, have you noticed that the segmented person comes out larger than the person in the color image? That demo aligns the two streams with NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution. As a result, the foreground mask I obtain with frame differencing cannot be aligned with the segmented person, so the jitter along the person's edges cannot be repaired. Below I introduce another function, MapColorFrameToDepthFrame, which maps a color frame onto the depth frame; a person segmented this way is the same size as in the color image. Its usage is also rather interesting. See the code:

// Globals assumed to exist elsewhere in the program:
//   INuiSensor*            g_pNuiSensor;
//   HANDLE                 g_pDepthStreamHandle;
//   NUI_DEPTH_IMAGE_POINT* depthPoints;   // 640 * 480 elements, allocated up front
HRESULT ProcessDepth(Mat &depth)
{
        HRESULT hr = S_OK;
        NUI_IMAGE_FRAME imageFrame;

        //fetch the next depth frame
        hr = g_pNuiSensor->NuiImageStreamGetNextFrame(g_pDepthStreamHandle, 0, &imageFrame);

        if (FAILED(hr))
        {
                return hr;
        }

        BOOL nearMode;
        INuiFrameTexture* pColorToDepthTexture = NULL;
        hr = g_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(
                g_pDepthStreamHandle, &imageFrame, &nearMode, &pColorToDepthTexture);

        if (FAILED(hr))
        {
                //release the frame before bailing out, or the stream stalls
                g_pNuiSensor->NuiImageStreamReleaseFrame(g_pDepthStreamHandle, &imageFrame);
                return hr;
        }

        INuiFrameTexture * pTexture = imageFrame.pFrameTexture;
        NUI_LOCKED_RECT LockedRect;
        NUI_LOCKED_RECT ColorToDepthLockRect;

        //lock both textures
        pTexture->LockRect(0, &LockedRect, NULL, 0);
        pColorToDepthTexture->LockRect(0, &ColorToDepthLockRect, NULL, 0);

        if (ColorToDepthLockRect.Pitch != 0)
        {
                INuiCoordinateMapper* pMapper = NULL;
                hr = g_pNuiSensor->NuiGetCoordinateMapper(&pMapper);

                if (SUCCEEDED(hr))
                {
                        //map every color pixel to its coordinate in the depth frame
                        hr = pMapper->MapColorFrameToDepthFrame(
                                NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480,
                                640 * 480, (NUI_DEPTH_IMAGE_PIXEL*)ColorToDepthLockRect.pBits,
                                640 * 480, depthPoints);
                        pMapper->Release();
                }
        }

        //copy pixels into the output image
        if (SUCCEEDED(hr) && LockedRect.Pitch != 0)
        {
                for (int i = 0; i != depth.rows; ++ i)
                {
                        uchar *ptr = depth.ptr<uchar>(i);  //pointer to row i

                        //each depth pixel is 2 bytes: the high 13 bits hold the depth value
                        //and the low 3 bits hold the player index, so the buffer must be
                        //reinterpreted as USHORT and the player index stripped
                        uchar *pBufferRun = (uchar*)(LockedRect.pBits) + i * LockedRect.Pitch;
                        USHORT * pBuffer = (USHORT*) pBufferRun;

                        for (int j = 0; j != depth.cols; ++ j)
                        {
                                USHORT depthValue = NuiDepthPixelToDepth(pBuffer[j]);
                                ptr[j] = 255 - (uchar)(255 * depthValue / 0x0fff);  //nearer -> brighter
                        }
                }
        }

        pColorToDepthTexture->UnlockRect(0);
        //unlock
        pTexture->UnlockRect(0);
        //release the current frame
        g_pNuiSensor->NuiImageStreamReleaseFrame(g_pDepthStreamHandle, &imageFrame);

        return hr;
}

The mapped points are stored in the array NUI_DEPTH_IMAGE_POINT* depthPoints = new NUI_DEPTH_IMAGE_POINT[640 * 480]; which must be allocated before ProcessDepth is called. So how do we read these points back? See the code:

cv::Point TransformRGBtoDepthCoords(cv::Point rgb_coords)
{
        //depthPoints is laid out row-major at 640x480, matching the color frame
        long index = rgb_coords.y * 640 + rgb_coords.x;
        NUI_DEPTH_IMAGE_POINT depthPointAtIndex = depthPoints[index];

        return cv::Point(depthPointAtIndex.x, depthPointAtIndex.y); 
}

The input is a point's coordinates in the color image, and the output is the corresponding coordinates in the depth image. Be careful to bounds-check the input point so the computed index stays inside the 640x480 array.


Reposted from blog.csdn.net/ily6418031hwm/article/details/9838537