Weekly Paper-Reading Notes (4) - Review - Image registration methods: a survey


1. Introduction

Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. It geometrically aligns two images: the reference image and the sensed image. The differences between the images are introduced by the different imaging conditions.

Image registration is a crucial step in all image analysis tasks in which the final information is gained from the combination of various data sources like in image fusion, change detection, and multichannel image restoration.

Typically, registration is required in remote sensing (multispectral classification, environmental monitoring, change detection, image mosaicing, weather forecasting, creating super-resolution images, integrating information into geographic information systems (GIS)), in medicine (combining computer tomography (CT) and NMR data to obtain more complete information about the patient, monitoring tumor growth, treatment verification, comparison of the patient’s data with anatomical atlases), in cartography (map updating), and in computer vision (target localization, automatic quality control), to name a few.

2. Image registration methodology

[four main groups]
In general, image registration applications can be divided into four main groups according to the manner of image acquisition:

Different viewpoints (multiview analysis). Images of the same scene are acquired from different viewpoints. The aim is to gain a larger 2D view or a 3D representation of the scanned scene.

Different times (multitemporal analysis). Images of the same scene are acquired at different times, often on a regular basis, and possibly under different conditions. The aim is to find and evaluate changes in the scene that appeared between the consecutive image acquisitions.
Examples of applications: medical imaging (monitoring of healing therapy, monitoring of tumor evolution).

Different sensors (multimodal analysis). Images of the same scene are acquired by different sensors. The aim is to integrate the information obtained from different source streams to gain a more complex and detailed scene representation.

Scene to model registration. Images of a scene and a model of the scene are registered. The model can be a computer representation of the scene, for instance maps or digital elevation models (DEM) in GIS, another scene with similar content (another patient), an 'average' specimen, etc. The aim is to localize the acquired image in the scene/model and/or to compare them.

[four steps]
Feature detection. Salient and distinctive objects (closed-boundary regions, edges, contours, line intersections, corners, etc.) are manually or, preferably, automatically detected. For further processing, these features can be represented by their point representatives (centers of gravity, line endings, distinctive points), which are called control points (CPs) in the literature.

Feature matching. In this step, the correspondence between the features detected in the sensed image and those detected in the reference image is established. Various feature descriptors and similarity measures, along with spatial relationships among the features, are used for that purpose.

Transform model estimation. The type and parameters of the so-called mapping functions, aligning the sensed image with the reference image, are estimated. The parameters of the mapping functions are computed by means of the established feature correspondences.

Image resampling and transformation. The sensed image is transformed by means of the mapping functions. Image values at non-integer coordinates are computed by an appropriate interpolation technique.
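The four steps map directly onto common library calls. Below is a minimal sketch using OpenCV (not the paper's own code): ORB keypoints stand in for the control points, brute-force Hamming matching establishes the correspondences, RANSAC fits a homography as the mapping function, and warpPerspective resamples the sensed image. The file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names for the two input images.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

# Step 1: feature detection -- ORB keypoints play the role of control points.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_sen, des_sen = orb.detectAndCompute(sen, None)

# Step 2: feature matching -- brute-force Hamming matching of the descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_sen, des_ref), key=lambda m: m.distance)

# Step 3: transform model estimation -- fit a homography to the matched CPs,
# with RANSAC rejecting false matches (outliers).
src = np.float32([kp_sen[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Step 4: image resampling and transformation -- warp the sensed image into
# the reference frame; bilinear interpolation handles non-integer coordinates.
registered = cv2.warpPerspective(sen, H, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR)
```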

3. Step 1: Feature detection

3.1 Area-based methods

No separate feature detection step; area-based methods work directly with the image intensity values.

3.2 Feature-based methods

Region features
Line features
Point features

4. Step 2: Feature matching

4.1 Area-based methods

Correlation-like methods
Fourier methods
Mutual information methods
Optimization methods
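As an illustration of the correlation-like and mutual information items above, here is a small sketch (not from the paper) of two area-based similarity measures computed with NumPy; `a` and `b` are assumed to be two grayscale windows of the same size.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Correlation-like similarity: 1.0 for a perfect linear intensity match."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def mutual_information(a, b, bins=32):
    """Mutual information of the joint grey-level histogram (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of a, shape (bins, 1)
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of b, shape (1, bins)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

An area-based method evaluates such a measure over a search space of window positions or transform parameters and keeps the maximum; mutual information is the usual choice for multimodal pairs because it does not assume a linear relationship between intensities.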

4.2 Feature-based methods

Methods using spatial relations
Methods using invariant descriptors
Relaxation methods
Pyramids and wavelets
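For the "methods using invariant descriptors" item, a minimal sketch assuming OpenCV's SIFT implementation and placeholder file names; the 0.75 threshold in the ratio test is a typical but tunable value.

```python
import cv2

# Placeholder file names, as in the earlier sketch.
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

# SIFT descriptors are largely invariant to scale, rotation and illumination.
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_sen, des_sen = sift.detectAndCompute(sen, None)

# k-nearest-neighbour matching followed by Lowe's ratio test, which keeps a
# correspondence only if its best match clearly beats the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_sen, des_ref, k=2)
        if m.distance < 0.75 * n.distance]
```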

5. Step 3: Transform model estimation
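The survey distinguishes global and local mapping functions. As one concrete example (assuming a global affine model, which is only one of the choices discussed), a least-squares fit from matched control points might look like this:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine mapping from matched control points,
    dst ~= A @ src + t; needs at least 3 non-collinear pairs of (x, y) points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])        # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) solution
    A, t = params[:2].T, params[2]                      # 2x2 matrix and offset
    return A, t
```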

6. Step 4: Image resampling and transformation
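A minimal sketch of the resampling step, assuming the affine model (A, t) and (x, y) point convention from the previous sketch: each output pixel is mapped backward into the sensed image, and its value at the generally non-integer coordinate is obtained by bilinear interpolation (order=1).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_affine(image, A, t, output_shape):
    """Resample `image` into the reference frame under dst ~= A @ src + t,
    using backward mapping and bilinear interpolation."""
    A_inv = np.linalg.inv(A)
    rows, cols = np.indices(output_shape, dtype=float)   # output grid (row, col)
    out_xy = np.stack([cols.ravel(), rows.ravel()])      # (2, N) as (x, y)
    in_xy = A_inv @ (out_xy - t.reshape(2, 1))           # backward mapping
    coords = np.stack([in_xy[1], in_xy[0]])              # back to (row, col)
    values = map_coordinates(image.astype(float), coords, order=1, mode="constant")
    return values.reshape(output_shape)
```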
