A Comprehensive Review of Medical Image Processing

Author: Zhang Wei, editorial member of the public account "Computer Vision Life"

0. Preface

Medical image processing operates on images produced by a variety of imaging mechanisms. The imaging modalities most widely used in the clinic fall into four categories: X-ray imaging (including X-ray CT), magnetic resonance imaging (MRI), nuclear medicine imaging (NMI), and ultrasound imaging (UI). In current diagnostic imaging, lesions are found mainly by inspecting a stack of two-dimensional slice images, a task that relies heavily on the physician's experience. Applying computer image-processing techniques to analyze these slices, segmenting and extracting organs, soft tissue, and lesions, and performing 3D reconstruction and 3D display, can help physicians carry out qualitative and even quantitative analysis of lesions and other regions of interest, greatly improving the accuracy and reliability of medical diagnosis. It can also play an important supporting role in medical teaching, surgical planning, surgical simulation, and many areas of medical research [1,2]. At present, medical image processing is applied mainly in four areas: lesion detection, image segmentation, image registration, and image fusion.

Data analysis with deep learning has grown rapidly and was named one of the 10 breakthrough technologies of 2013. Deep learning is an improvement on artificial neural networks: with more layers, it permits higher levels of abstraction and more informative predictions from data. It has since become the leading machine-learning tool in computer vision, with deep neural networks automatically learning mid- and high-level abstract features from raw data (images). Recent results show that features extracted by CNNs are highly effective for object recognition and localization in natural images. Medical image analysis groups around the world have quickly entered the field and applied CNNs and other deep learning methods to a wide variety of medical image analysis tasks.

In medical imaging, accurate diagnosis and assessment of disease depend on both image acquisition and image interpretation. Image acquisition has improved substantially in recent years, with devices acquiring data at faster rates and higher resolution. Image interpretation, however, has only recently begun to benefit from computer technology. Most medical images are still interpreted by physicians, but human interpretation is limited by subjectivity, large inter-reader variability, cognitive constraints, and fatigue.

A typical CNN for image processing consists of a series of convolutional layers interleaved with data-reduction (pooling) layers. Like low-level visual processing in the human brain, a convolutional network first extracts elementary image features, for example edges that may represent straight lines (e.g., organ boundaries) or circles (e.g., colon polyps), and then extracts higher-order features such as local and global shape and texture [3]. The CNN output is usually one or more class probabilities or labels.
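As an illustration of these building blocks, the following minimal NumPy sketch runs a single convolution, ReLU, and max-pooling step. The hand-set Sobel-like edge kernel stands in for the kernels a real CNN would learn from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; crops edges that do not fit."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# A vertical-edge detector as one hand-set "filter"; in a trained CNN
# such kernels are learned from data rather than set by hand.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                      # step edge down the middle
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], float)

feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)                # (3, 3): 6x6 conv output pooled by 2
```

Stacking many such conv/ReLU/pool stages, with learned kernels and a classifier head, yields the probability or label outputs described above.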

CNNs are highly parallelizable algorithms. Compared with single-core CPU processing, today's graphics processing unit (GPU) chips provide substantial acceleration (roughly 40x). In medical image processing, GPUs were first introduced for segmentation and reconstruction, and later for machine learning. With the emergence of efficient new CNN variants and frameworks optimized for modern GPUs, deep neural networks have also attracted commercial interest. Still, training a deep CNN from scratch is challenging [4]. First, CNNs require large amounts of labeled training data, a requirement that may be hard to meet in medicine, where expert annotation is expensive and some diseases are rare. Second, training a deep CNN demands substantial computing and memory resources; without them, the training process becomes very time-consuming. Third, deep CNN training is complicated by overfitting and convergence problems, which often require repeated adjustment of the architecture or learning parameters to ensure that all layers learn at comparable rates [5]. In view of these difficulties, newer learning schemes known as "transfer learning" and "fine-tuning" have proved popular as ways around these problems.

1. Lesion detection

Computer-aided detection (CAD) is a well-established field of medical image analysis and is well suited to the introduction of deep learning. In the standard CAD pipeline, candidate lesion locations are first detected by supervised methods or by classical image-processing techniques such as filtering and morphological operations. Each candidate is then described by a large number of hand-crafted features, and a classifier maps the feature vector to the probability that the candidate is an actual lesion. The direct way to apply deep learning is instead to train a CNN that operates on image patches centered on the candidate lesions. Setio et al. detect pulmonary nodules in 3D chest CT scans by extracting, for each candidate, 2D patches centered on it in nine different orientations, and using a combination of CNNs to classify each candidate [6]; their CAD system is shown in Figure 1. The reported results show a modest improvement over classical CAD systems previously published for the same task. Roth et al. used CNNs to improve three existing CAD systems, for detecting colonic polyps in CT colonography, sclerotic spine deformations, and enlarged lymph nodes [7]. They kept previously developed candidate detectors and extracted 2D patches in three orthogonal directions plus up to 100 randomly rotated views, a "2.5D" decomposition of the raw 3D data; these views were classified by a CNN and the predictions aggregated to improve detection accuracy. For all three CAD systems, lesion detection accuracy improved by 13-34%, a level of improvement nearly impossible to reach with non-deep-learning classifiers such as support vector machines. CNNs were applied to medical image processing as early as 1996, when Sahiner et al. extracted ROIs of tumor or normal tissue from breast X-ray images and classified them with a CNN comprising an input layer, two hidden layers, and an output layer, trained by backpropagation. In that pre-GPU era, the training time was described only as "computationally intensive", without figures. In 1993 a CNN was applied to the detection of pulmonary nodules, and in 1995 to the detection of microcalcifications in mammography.
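The "2.5D" idea, feeding a classifier several 2D views through a candidate point rather than the full 3D volume, can be sketched as follows. The volume, candidate center, and patch size are toy stand-ins, not the actual parameters of [6] or [7]:

```python
import numpy as np

def orthogonal_patches(volume, center, half=16):
    """Extract the three orthogonal 2-D patches (axial, coronal, sagittal)
    centered on a candidate voxel: a minimal "2.5D" decomposition of a
    3-D volume, suitable for feeding per-view CNN classifiers."""
    z, y, x = center
    axial    = volume[z, y-half:y+half, x-half:x+half]
    coronal  = volume[z-half:z+half, y, x-half:x+half]
    sagittal = volume[z-half:z+half, y-half:y+half, x]
    return axial, coronal, sagittal

# Toy volume standing in for a CT scan; in a real pipeline the candidate
# center would come from an upstream candidate detector.
vol = np.random.rand(64, 64, 64)
views = orthogonal_patches(vol, center=(32, 32, 32), half=16)
print([v.shape for v in views])   # three 32x32 views
```

Randomly rotated views, as in [7], extend this by slicing the volume along arbitrary oblique planes instead of only the three axes.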

img

Figure 1. Overview of the CAD system. (a) Nine 2D patches are sampled from the planes of symmetry of a cube around each candidate; each patch is centered on the candidate within a bounding box of 50 x 50 mm and 64 x 64 px. (b) Candidates are detected by combining detectors specifically designed for solid, subsolid, and large nodules. The false-positive reduction stage is implemented as a combination of multiple ConvNets, each stream processing the 2D patch extracted from one particular view. (c) Different methods for fusing the outputs of the ConvNet streams; orange and gray boxes indicate the connection from the first fully connected layer to the nodule-classification output, combined either with a softmax layer or with a fixed combination (product) rule.

img

Figure 2. Colon polyp detection: FROC curves for polyps of different sizes, observed with the random-view ConvNet on a CT colonography test set of 792 patients.

2. Image segmentation

Medical image segmentation partitions an image into multiple regions according to similarities within regions and differences between them; current work mainly targets cells, tissues, and organs. Conventional segmentation techniques divide into region-based and boundary-based methods. The former rely on local spatial features of the image, such as gray level, texture, and other statistics of pixel uniformity; the latter use gradient information to determine the boundaries of the target. Combined with appropriate theoretical tools, segmentation technology has developed further: for example, a three-dimensional visualization system combining the fast-marching algorithm with the watershed transform can obtain fast and accurate medical image segmentation results [8].
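The region-based criterion (gray-level uniformity around a seed) can be illustrated with a simple seeded region-growing sketch. This shows the principle only; it is not the fast-marching/watershed method of [8], and the tolerance and toy image are illustrative choices:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Minimal region-based segmentation: grow a region from a seed pixel,
    absorbing 4-connected neighbors whose gray level stays within `tol`
    of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(image[ni, nj] - total / count) <= tol:
                    mask[ni, nj] = True
                    total += float(image[ni, nj])
                    count += 1
                    queue.append((ni, nj))
    return mask

# Synthetic image: a bright square "organ" on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
seg = region_grow(img, seed=(16, 16), tol=0.1)
print(seg.sum())   # 256: exactly the 16x16 bright square
```

Boundary-based methods would instead threshold the gradient magnitude of `img` to find the square's edges; practical systems combine both cues.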

img

Figure 3. Principle of the watershed segmentation method

In recent years, with the development of other emerging disciplines, new segmentation approaches have appeared: methods based on statistics, fuzzy theory, neural networks, and wavelet analysis, deformable models such as the snake (active contour) model, and model-fitting and optimization approaches. Although new segmentation methods keep being proposed, the results are still not fully satisfactory. A current research focus is knowledge-based segmentation, that is, introducing prior knowledge into the segmentation process by some means, thereby constraining the computer's segmentation so that the result stays within a known, controllable range. For example, gray levels inside a liver tumor and in normal liver tissue can vary widely, and without such prior knowledge the tumor mass and normal liver tissue may not be separated into two distinct regions.

Research on medical image segmentation shows several salient features. No single existing segmentation algorithm achieves satisfactory results on general images, so more attention is being paid to the effective integration of multiple segmentation algorithms. Because of the systematic and functional complexity of human anatomy, existing research can automatically distinguish the desired organ, tissue, or lesion, but off-the-shelf software packages generally cannot complete segmentation fully automatically, and manual intervention informed by anatomy is still needed [9]. Since the task cannot yet be done entirely by computer, human-machine interactive segmentation has become a research focus. New methods aim mainly at being automatic, accurate, fast, adaptive, and robust, and the integration of classical and modern segmentation techniques is the future direction of development for medical image segmentation [10, 11].

Using a dataset of 2,891 echocardiography studies, Ghesu et al. combined deep learning with marginal space learning for edge detection and segmentation of medical images [12]. Their method couples an efficient exploration of a large parameter space with sparsity in the deep network, is computationally efficient, and, compared with a reference method on the same dataset, reduced the mean segmentation error by 13.5%; detection results for eight patients are shown in Figure 4. Brosch et al. studied segmentation of multiple sclerosis brain lesions on MRI. They developed a 3D deep convolutional encoder network combining convolutional and deconvolutional layers [13] (Figure 5): the convolutional layers learn higher-level features, while the deconvolutional layers produce a pixel-level pre-segmentation. The network was applied to two public datasets and one clinical trial dataset and, compared with five published methods, achieved the best results. Pereira et al. studied brain tumor segmentation on MRI using a deeper architecture, intensity normalization, and data augmentation [14]. They applied different CNN architectures for different tumor grades, segmenting the complete tumor, the core region, and the enhancing region, and obtained the highest score on the 2013 public challenge dataset.

img img img
img img img

Figure 4. Example detection results for images from different patients in the test set. Detected bounding boxes are shown in yellow and reference (ground-truth) boxes in green. The origin of each box's coordinate system is defined at the center of the corresponding segment.

img

Figure 5. Effect of network depth on lesion segmentation performance. True-positive, false-negative, and false-positive voxels are highlighted in green, yellow, and red, respectively. Owing to its larger receptive field, the 7-layer CEN without shortcut connections segments large lesions better than the 3-layer CEN.

A representative prostate segmentation method based on fully convolutional networks is V-Net, proposed by Milletari et al. [15]. A CNN is trained end to end on prostate MRI volumes and can segment the entire volume at once. The authors proposed a new objective function, optimized during training, based on the Dice coefficient; in this way the imbalance between foreground and background voxels is handled directly. They also augmented the training data with random non-linear transformations and histogram matching. Experimental evaluation shows that the method achieves excellent results on an open dataset while greatly reducing processing time.
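The Dice-based objective can be sketched as follows. This is a minimal NumPy version of a soft Dice coefficient in the spirit of [15], not the authors' implementation; a training loss would be 1 minus this value (or its negative):

```python
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice coefficient between a predicted probability map and a
    binary target mask. Because it is a ratio of overlap to total mass,
    it is insensitive to the large number of background voxels, which is
    why it suits foreground/background-imbalanced segmentation."""
    pred, target = pred.ravel(), target.ravel()
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred**2) + np.sum(target**2) + eps)

target = np.zeros((16, 16)); target[4:12, 4:12] = 1.0
perfect = target.copy()
empty = np.zeros_like(target)
print(round(soft_dice(perfect, target), 3))  # 1.0 for a perfect prediction
print(round(soft_dice(empty, target), 3))    # near 0 for an empty prediction
```

A plain per-voxel cross-entropy, by contrast, can be minimized almost entirely by predicting background everywhere when the foreground is small.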

img

Figure 6. Schematic of the network architecture

img

Figure 7. Segmentation results on the PROMISE 2012 dataset.

3. Image registration

Image registration is a prerequisite for image fusion and is recognized as a difficult image-processing problem; it is also a key technology that determines the development of medical image fusion. In clinical diagnosis, a single-modality image often cannot provide all the information a physician needs, so complementary information must be obtained by registering and fusing multiple images of the region of interest, acquired either with several modalities or repeatedly with the same modality. When information from multiple imaging sources is expressed simultaneously in one image, the physician can make a more accurate diagnosis or devise a more appropriate treatment [16]. Medical image registration comprises localization and transformation: a spatial transformation is sought that makes corresponding points of the two images coincide exactly in both spatial position and anatomical structure. Figure 8 illustrates the concept with a simple two-dimensional example. Panels (a) and (b) are two MRI brain images of the same person at the same position, where (a) is a proton-density-weighted image and (b) is a longitudinal-relaxation (T1)-weighted image. The two images differ significantly: first in position, (a) being translated both horizontally and vertically with respect to (b); and second in content, (a) expressing differences in proton content across tissues and (b) expressing differences in longitudinal relaxation. Panel (c) shows the pixel correspondence between the two images: each point fx in (a) is mapped to exactly one point rx in (b). If this mapping is one-to-one, i.e., each point in one image space has exactly one corresponding point in the other, or at least the points of interest for medical diagnosis correspond exactly or almost exactly, we call the images registered [17, 18]. Panel (d) shows image (a) registered to image (b); as can be seen, the pixel positions of (d) and (b) are approximately identical. In a 1993 review, Petra et al. classified two-dimensional registration methods, according to the registration reference used, into registration based on external image features (frame-based) and registration based on internal image features (frameless). The latter, being non-invasive and retrospective, has become the focus of registration algorithm research.
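The idea of "searching for a spatial transform that brings corresponding points together" can be shown with a deliberately tiny brute-force example. Real registration methods optimize far richer transforms and similarity measures than integer shifts and sum-of-squared-differences; this is only a sketch:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Brute-force translation-only registration: try every integer shift
    within +/- max_shift and keep the one minimizing the sum of squared
    differences (SSD) between the images."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.sum((fixed - shifted) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 1.0
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)  # known misalignment
print(register_translation(fixed, moving))  # (3, -2): recovers the shift
```

SSD only works when the two images share the same intensity mapping (mono-modal); the multimodal case, as in Figure 8, needs a measure such as mutual information, discussed below.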

img

​ (a) (b) (c) (d)

Figure 8. Principle of medical image registration

In 2019, researchers proposed a non-rigid multimodal medical image registration method based on PCANet structural representations [19]. Compared with hand-designed feature extraction, PCANet automatically learns intrinsic features of medical images through multiple stages of linear and non-linear transformation. The proposed method provides effective structural representations of multimodal images by exploiting the multi-level image features extracted by the respective PCANet layers. Extensive experiments on the Atlas, RIRE, and BrainWeb datasets show that, compared with the MIND, ESSD, WLD, and NMI methods, the proposed method yields lower target registration error (TRE) values and more satisfactory results.
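The core of a PCANet stage is learning convolution filters as the top principal components of image patches. The following is a minimal sketch of that idea only; the patch size, stride, and filter count are illustrative choices, not those of [19]:

```python
import numpy as np

def pca_filters(images, patch=5, n_filters=4):
    """Learn convolution filters as the top principal components of
    mean-removed image patches (PCA filter bank, the building block of a
    PCANet stage)."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(0, h - patch + 1, 2):         # stride 2 keeps it small
            for j in range(0, w - patch + 1, 2):
                p = img[i:i+patch, j:j+patch].ravel()
                patches.append(p - p.mean())          # remove the patch mean
    X = np.array(patches)                             # (num_patches, patch*patch)
    # Right singular vectors of X = eigenvectors of the patch covariance;
    # the leading ones become the learned filters.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

rng = np.random.default_rng(0)
imgs = [rng.random((16, 16)) for _ in range(8)]
filters = pca_filters(imgs)
print(filters.shape)   # (4, 5, 5): four 5x5 learned filters
```

Convolving an image with these filters, then binarizing and histogramming the responses, yields the structural representation that a PCANet-style method would feed into a similarity measure.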

img

Figure 9. The first row shows the real deformation in the x and y directions; the second row shows the difference between the ground truth and the PSR method in the x and y directions; the third row shows the difference between the ground truth and the MIND method.

img

Figure 10. CT-MR image registration with the PSR, MIND, ESSD, WLD, and NMI methods. (a) PD reference image; (b) floating CT image; (c) PSR method; (d) MIND method; (e) ESSD method; (f) WLD method; (g) NMI method.

In recent years, medical image registration has made new progress through the application of theories and methods from informatics: for example, registration by maximizing a mutual-information criterion and registration based on elastic deformation models have gradually become hot topics [20]. The objects of registration have evolved from two-dimensional images to three-dimensional multimodal medical images. Applications of newer algorithms, such as wavelet transforms, statistical parametric mapping, and genetic algorithms, to medical images are also expanding. Faster and more accurate algorithms, optimization strategies that improve registration, and non-rigid registration are the future directions of development for medical image registration [21, 22].
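The mutual-information criterion mentioned above can be computed from a joint gray-level histogram. A minimal sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint gray-level
    histogram. Higher MI means one image's intensities are more
    predictable from the other's, which is what MI-based registration
    maximizes over candidate spatial transforms."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
remapped = 1.0 - img                    # same structure, inverted contrast
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
print(mutual_information(img, remapped) > mutual_information(img, shuffled))
```

Because MI rewards any consistent intensity relationship, not identical intensities, it suits multimodal pairs such as CT-MR, where the same tissue appears with different gray values in each modality.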

4. Image fusion

The main goals of image fusion are to improve image readability by processing the redundant data shared among multiple images, and to improve image sharpness by processing the complementary information among them. Multimodal medical image fusion combines valuable information on anatomical structure and physiological function and can provide more comprehensive and accurate information for the clinic [23]. Fusion comprises two parts: fusing the image data and displaying the fused image. Current data-fusion methods are either pixel-based or feature-based. The former process the images point by point, taking a weighted sum, the maximum, or the minimum of the gray values of corresponding pixels; the algorithms are relatively simple, but the effect and efficiency are relatively poor, and the fused image may show some blurring. The latter first extract features and segment objects from the images; the algorithms are more complex in principle, but the results are closer to ideal. Conventional fused-image display methods include pseudo-color display, tomographic display, and three-dimensional display. In pseudo-color display, one image serves as the reference and is displayed in gray scale, while the other image is superimposed on it in color. Tomographic display fuses the images into three-dimensional cross-sectional data and shows synchronized axial, coronal, and sagittal slices, which facilitates diagnosis. Three-dimensional display renders the fused data as a three-dimensional image, letting the viewer observe the spatial anatomical position of a focus more directly, which is important in surgical and radiotherapy planning.
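The pixel-based rules described above (weighted sum, maximum, minimum) can be sketched as follows. The two 2x2 patches are toy stand-ins for co-registered CT and MR data; as Figure 11 notes, fusion presupposes that registration has already been done:

```python
import numpy as np

def fuse(a, b, rule="weighted", alpha=0.5):
    """Pixel-level fusion of two co-registered, equally sized gray
    images with the simple point-by-point rules from the text."""
    if rule == "weighted":
        return alpha * a + (1 - alpha) * b     # weighted sum
    if rule == "max":
        return np.maximum(a, b)                # take the larger gray value
    if rule == "min":
        return np.minimum(a, b)                # take the smaller gray value
    raise ValueError(rule)

ct = np.array([[0.9, 0.1], [0.2, 0.8]])       # toy "CT" patch
mr = np.array([[0.3, 0.7], [0.6, 0.4]])       # toy "MR" patch
print(fuse(ct, mr, "weighted"))               # element-wise average
print(fuse(ct, mr, "max"))                    # [[0.9 0.7] [0.6 0.8]]
```

Feature-based fusion would instead segment structures in each image first and combine them region by region, which avoids the blurring that these global point rules can cause.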

img

Figure 11. Overview of the stages of medical image fusion: a two-stage process of image registration followed by image fusion.

In image fusion research, new methods keep appearing; applications of wavelet transforms, non-linear registration, artificial-intelligence techniques, and finite-element analysis will be future focuses and directions of image fusion research. With the development of three-dimensional reconstruction and display technology, three-dimensional image fusion is receiving more and more attention, and the integration and expression of information in three-dimensional images will also be a research focus.

On the basis of computer-aided image processing, diagnostic image-analysis systems, which apply image-processing methods together with imaging characteristics and the invariant features of human diseases to help physicians analyze images or to simulate such analysis, have become an inevitable trend. Software already exists for interactive annotation, automatic measurement, and image analysis, and it can complete some fixed or routine measurement and diagnostic tasks, but it is still far from intelligent analysis and expert systems. Automatic landmark identification and measurement, and the integration of medical image information with text information, are the directions of future development for computer-aided diagnosis.

img

Figure 12. Example of multimodal medical image fusion. Fusing images of two specific modalities can improve medical diagnosis and assessment.

5. Outlook and challenges

1) Data dimensionality, 2D versus 3D: most work to date processes and analyzes 2D images, and it is often asked whether moving to 3D is an important step toward better performance. Several data-handling variants exist between the two, including 2.5D. For example, in the study of Roth et al., axial, coronal, and sagittal images were taken through the center voxel of each lymph node or colon polyp candidate.

2) Learning, unsupervised versus supervised: a look at the literature makes clear that most work has focused on supervised CNNs for classification. Such networks are important for many applications, including detection, segmentation, and labeling. Nevertheless, some work focuses on unsupervised schemes, mainly for image encoding. Unsupervised representation-learning methods such as the restricted Boltzmann machine (RBM) may outperform hand-designed filters because they learn feature descriptions directly from the training data. RBMs are trained with a generative objective, which lets the network learn from unlabeled data but does not necessarily produce the features best suited to classification. Van Tulder et al. investigated combining the advantages of generative and discriminative learning objectives in convolutional RBMs, yielding filters that are good both at describing the training data and at classification. Their results show that the combined objective outperforms a purely generative one.

3) Transfer learning and fine-tuning: obtaining datasets in medical imaging as comprehensively annotated as ImageNet remains a challenge. When there is not enough data, there are several ways to proceed. 1) Transfer learning: CNN models pre-trained (with supervision) on natural-image datasets or on a different medical domain are applied to new medical tasks. In one scheme, the pre-trained CNN is applied to the input image and outputs are extracted from one of its layers; the extracted features are then used to train a separate pattern classifier. 2) Fine-tuning: when a medium-sized dataset exists for the task at hand, the preferred approach is to initialize the network with a pre-trained CNN and then further train some (or all) of its layers, with supervision, on the new task data.
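Scheme 1, a frozen feature extractor plus a separately trained classifier, can be sketched as follows. Here fixed random filters stand in for a layer of a pre-trained CNN, and the two-class patch data are hypothetical toys; only the small logistic-regression head is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, filters):
    """Frozen feature extractor standing in for a pre-trained CNN layer:
    correlate each image with fixed filters and average the rectified
    responses. The filters are never updated."""
    feats = []
    for img in images:
        f = []
        for k in filters:
            kh, kw = k.shape
            resp = [np.maximum(np.sum(img[i:i+kh, j:j+kw] * k), 0.0)
                    for i in range(img.shape[0] - kh + 1)
                    for j in range(img.shape[1] - kw + 1)]
            f.append(np.mean(resp))
        feats.append(f)
    return np.array(feats)

# "Pre-trained" (here: fixed random) 3x3 filters; in practice these would
# come from a network trained on a large source dataset such as ImageNet.
filters = rng.standard_normal((4, 3, 3))

# Hypothetical two-class task: dark patches vs bright patches.
X0 = [rng.random((8, 8)) * 0.2 for _ in range(20)]
X1 = [rng.random((8, 8)) * 0.2 + 0.8 for _ in range(20)]
F = extract_features(X0 + X1, filters)
y = np.array([0] * 20 + [1] * 20)

# Train only a logistic-regression head on the frozen features.
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.1 * F.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)
acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(acc)   # high on this easily separable toy data
```

Fine-tuning (scheme 2) differs in that the extractor's own weights would also be updated by backpropagation on the new task, usually with a small learning rate for the earlier layers.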

4) Data privacy: privacy is shaped by both social and technological issues, and sociology and technology need to work together to address it. When privacy in the health sector is discussed, HIPAA (the Health Insurance Portability and Accountability Act of 1996) comes to mind: it gives patients legal rights over their personally identifiable information and obliges providers to protect it and to restrict its use or disclosure. As health-care data keep growing, researchers face the problem of how to encrypt patient information so that it cannot be misused or disclosed; at the same time, restricting access to data risks losing important information.

6. Conclusion

In recent years, compared with traditional machine-learning algorithms, deep learning has taken a central place in the automation of daily life and made considerable progress. On the strength of this performance, many researchers believe that within the next 15 years deep-learning-based applications will take over most daily human activities. However, compared with other real-world problems, deep learning has developed slowly in health care, and in medical imaging in particular. Deep-learning applications so far have given positive feedback, but because of the sensitivity of the data and the challenges of health care, more sophisticated deep-learning methods are needed to deal effectively with complex medical data. With the rapid development of medical technology and computer science, the requirements on medical image processing grow ever higher. To effectively raise the level of medical image processing technology, the integration of theory across disciplines and communication between medical staff and technical personnel become more and more important. Medical image processing, as a strong foundation for modern medical diagnosis, makes low-risk, minimally invasive surgical options possible and will play an ever greater role in the field of medical informatics.

References

[1] Lin, Jia Qiuxiao. Image analysis techniques in medicine [J]. Journal of Medical College, 2005, 21(3): 311-314.

[2] Zhou Xianshan. Survey of medical image processing [J]. Fujian Computer, 2009(1): 34.

[3]Mcinerney T , Terzopoulos D . Deformable models in medical image analysis: a survey[J]. Medical Image Analysis, 1996, 1(2):91.

[4]Litjens G , Kooi T , Bejnordi B E , et al. A survey on deep learning in medical image analysis[J]. Medical Image Analysis, 2017, 42:60-88.

[5]Deserno T M , Heinz H , Maier-Hein K H , et al. Viewpoints on Medical Image Processing: From Science to Application[J]. Current Medical Imaging Reviews, 2013, 9(2):79-88.

[6]A. Setio et al., “Pulmonary nodule detection in CT images using multiview convolutional networks,” IEEE Trans. Med. Imag., vol. 35, no. 5,pp. 1160–1169, May 2016.

[7]H. Roth et al., “Improving computer-aided detection using convolutional neural networks and random view aggregation,” IEEE Trans.Med. Imag., vol. 35, no. 5, pp. 1170–1181, May 2016

[8] Lin Yao, Tian Jie. Review of medical image segmentation [J]. Pattern Recognition and Artificial Intelligence, 2002, 15 (2).

[9]Ghesu F C , Georgescu B , Mansi T , et al. An Artificial Agent for Anatomical Landmark Detection in Medical Images[C]// International Conference on Medical Image Computing & Computer-assisted Intervention. Springer, Cham, 2016.

[10]Pham D L , Xu C , Prince J L . Current methods in medical image segmentation.[J]. Annual Review of Biomedical Engineering, 2000, 2(2):315-337.

[11]Lehmann T M , Gonner C , Spitzer K . Survey: interpolation methods in medical image processing[J]. IEEE Transactions on Medical Imaging, 1999, 18(11):1049-1075.

[12]Cootes T F , Taylor C J . Statistical Models of Appearance for Medical Image Analysis and Computer Vision[J]. Proceedings of SPIE - The International Society for Optical Engineering, 2001, 4322(1).

[13] T. Brosch et al., “Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation,” IEEE Trans. Med. Imag., vol. 35, no. 5,pp. 1229–1239, May 2016.

[14]Ghesu F C , Krubasik E , Georgescu B , et al. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing[J]. IEEE Transactions on Medical Imaging, 2016, 35(5):1217-1228.

[15]Milletari F , Navab N , Ahmadi S A . V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation[J]. 2016.

[16] Chow N, Luo Shuqian. A human-machine interactive rapid brain image registration system [J]. Beijing Biomedical Engineering, 2002, 21(1): 11-14.

[17] Yang Hu, Ma Binrong, Ren Haiping. Mutual-information-based registration of human brain images [J]. Chinese Journal of Medical Physics, 2001, 18(2): 69-73.

[18] Wang Jiawang, Jiang Xiaotong, et al. Quantitative study of solitary pulmonary nodules [J]. China Medical Imaging Technology, 2003, 19(9): 1218-1219.

[19]Ishihara S , Ishihara K , Nagamachi M , et al. An analysis of Kansei structure on shoes using self-organizing neural networks[J]. International Journal of Industrial Ergonomics, 1997, 19(2):93-104.

[20]Maintz J B , Viergever M A . A Survey of Medical Image Registration[J]. Computer & Digital Engineering, 2009, 33(1):140-144.

[21]Hill D L G , Batchelor P G , Holden M , et al. Medical image registration[J]. Physics in Medicine & Biology, 2008, 31(4):1-45.

[22]Razzak M I , Naz S , Zaib A . Deep Learning for Medical Image Processing: Overview, Challenges and Future[J]. 2017.

[23] Lin, Jia Qiuxiao. Image analysis techniques in medicine [J]. Journal of Medical College, 2005, 21(3): 311-314.



Origin www.cnblogs.com/CV-life/p/11162916.html