Image Processing and Computer Vision -- Chapter 2 -- Imaging and Image Representation -- Summary of Key Knowledge Points

Image Processing and Computer Vision -- Chapter 2 -- Imaging and Image Representation -- 8 Questions
1. What is the spectral wavelength distribution, and what are its imaging characteristics?

Spectral wavelength distribution: the distribution of light intensity or energy over different wavelength ranges. It is usually shown as a spectral plot, with wavelength on the horizontal axis and intensity or energy on the vertical axis.

Imaging characteristics: light of different wavelengths behaves differently in an imaging system. For example, light in the visible range can be seen by the human eye and is suited to conventional photography and visual observation, while infrared and ultraviolet light are useful in specialized applications such as thermal imaging and materials analysis.

2. What is the definition of reflectivity, and how is it computed from an image?

Definition of reflectivity: the ratio of the amount of light an object's surface reflects to the amount of light it receives. In practice it is usually the diffuse reflectivity of a material's surface that we want to measure, since that is the more meaningful quantity.

Image-based calculation: a color image can be regarded as the element-wise product of reflectance (albedo) and shading, i.e. I = A * S. A CNN can therefore be trained on a labeled database of samples to decompose a color image I into its reflectance A and shading S.
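Below is a minimal NumPy sketch of the multiplicative model I = A * S. The arrays are synthetic stand-ins chosen for illustration, not the output of a trained CNN, and the variable names are my own.

```python
import numpy as np

# Minimal sketch of the multiplicative intrinsic-image model I = A * S
# (element-wise). The arrays are synthetic stand-ins, not a learned
# decomposition.
h, w = 4, 4
albedo = np.random.rand(h, w, 3)      # reflectance A in [0, 1], per RGB channel
shading = np.random.rand(h, w, 1)     # grayscale shading S, broadcast over RGB

image = albedo * shading              # forward model: I = A * S

# Given an estimate of the shading, the reflectance can be recovered by
# element-wise division (guarding against division by zero).
recovered_albedo = image / np.clip(shading, 1e-6, None)
print(np.allclose(recovered_albedo, albedo))   # True
```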

3. What are the distortion factors that affect the imaging system?

There are three main types of distortion that affect an imaging system (a small sketch of the radial-distortion model follows the list):

Barrel distortion: the image appears to expand outward in a barrel shape, with straight lines bowing away from the center.

Pincushion distortion: the lens causes the picture to appear to "pinch" inward toward the middle. It is most noticeable when using a telephoto lens or the telephoto end of a zoom lens.

Linear distortion: when photographing tall, straight structures at close range, lines that are parallel in the scene no longer appear parallel in the image; this is linear (perspective) distortion.
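As a sketch, barrel and pincushion distortion are commonly described by a polynomial radial-distortion model applied to normalized image coordinates. The coefficients below are made up for illustration, and the sign convention (k1 < 0 for barrel, k1 > 0 for pincushion) is the usual one, not something stated in the text above.

```python
import numpy as np

def radial_distort(x, y, k1, k2=0.0):
    """Polynomial radial-distortion model on normalized image coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

# Points on a straight vertical line at x = 0.5 (made-up example geometry).
xs = np.full(5, 0.5)
ys = np.linspace(-0.5, 0.5, 5)

# With the usual sign convention, k1 < 0 bows the line as in barrel
# distortion and k1 > 0 as in pincushion distortion.
print(radial_distort(xs, ys, k1=-0.2))
print(radial_distort(xs, ys, k1=+0.2))
```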

4. What are the definitions of multispectral, hyperspectral and ultraspectral?

Multispectral: a given spectral wavelength range is subdivided into roughly 10 to 100 bands. This subdivision is called multispectral remote sensing, and the spectral resolution is on the order of 0.1λ.

Hyperspectral: the range is subdivided into roughly 100 to 1,000 bands. This is called hyperspectral remote sensing, and the spectral resolution is on the order of 0.01λ.

Ultraspectral: the range is subdivided into roughly 1,000 to 10,000 bands. This is called ultraspectral remote sensing, and the spectral resolution is on the order of 0.001λ.
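A back-of-envelope sketch of these orders of magnitude is given below; the 0.4 to 2.5 micrometre wavelength range is an assumed example, not a value from the text.

```python
# Rough band counts implied by the spectral-resolution orders of magnitude
# above, for an assumed 0.4-2.5 um wavelength range.
lambda_min, lambda_max = 0.4, 2.5                  # micrometres (assumed)
center = (lambda_min + lambda_max) / 2

for name, rel_res in [("multispectral", 0.1),
                      ("hyperspectral", 0.01),
                      ("ultraspectral", 0.001)]:
    delta_lambda = rel_res * center                # resolution ~ rel_res * lambda
    n_bands = (lambda_max - lambda_min) / delta_lambda
    print(f"{name}: ~{n_bands:.0f} bands at ~{delta_lambda:.4f} um resolution")
```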

5. What are the mainstream 3D scanning technologies?

Lidar scanning: uses a laser beam to measure distances to points on an object's surface, generating three-dimensional point cloud data.

Structured light scanning: Uses structured light projectors and cameras to capture the three-dimensional shape of object surfaces, often used in industrial measurement and 3D printing.

Time-of-flight (ToF) imaging: calculates distance by measuring the time it takes light to travel from the camera to the object's surface and back, often used in terrain modeling and mapping (see the range-equation sketch after this list).

Medical imaging scanning: medical image data are obtained from density variations of certain chemical elements in the body. The most basic applications use three-dimensional scanning technology to perform bone scans or lung CT scans.
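The time-of-flight item above reduces to the range equation d = c * t / 2, since the light travels to the surface and back. A minimal sketch, using a made-up round-trip time:

```python
C = 299_792_458.0              # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """One-way distance to the surface given the measured round-trip time."""
    return C * round_trip_time_s / 2.0

print(tof_distance(66.7e-9))   # ~10 m for a ~66.7 ns round trip (made-up value)
```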

6. What is the principle of camera calibration?

Camera calibration is the process of determining a camera's intrinsic (internal) and extrinsic (external) parameters so that image coordinates can be converted to world coordinates and vice versa. The calibration principle involves the following steps (a minimal OpenCV sketch follows the list):

Intrinsic calibration: determine the camera's internal parameters, such as the focal length, principal point coordinates and distortion coefficients, usually with a calibration board or calibration object.

Extrinsic calibration: determine the camera's position and orientation (rotation and translation), usually by imaging a target at a known position or capturing views from multiple perspectives.

Camera projection model: Use calibration parameters to build a camera projection model to map image coordinates to world coordinates or vice versa.
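A minimal OpenCV sketch of chessboard-based calibration is shown below; the calib_images/ folder and the 9x6 inner-corner board size are assumptions for illustration, not values from the text.

```python
import glob
import cv2
import numpy as np

# Minimal chessboard-calibration sketch (assumed folder and board size).
pattern_size = (9, 6)                       # inner corners per row/column

# 3D corner coordinates in the board's own frame (Z = 0 plane, square = 1 unit).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):          # hypothetical images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsics (camera matrix K, distortion coefficients) and per-view
# extrinsics (rotation and translation vectors) are estimated together.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

print("RMS reprojection error:", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```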

7. What is the definition of the PGM file format?

PGM (Portable Gray Map) is a simple bitmap image file format used to store grayscale images. The pixel data can be stored in either binary or ASCII form, and the file header records the image width, height and maximum pixel value (a typical example: a 128*128 grayscale image stored as a PGM file of about 48 KB).
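A minimal sketch of the format: write a tiny ASCII (P2) PGM and read its header fields back. The file name and pixel values are made up, and comment lines, which real PGM files may contain, are ignored for simplicity.

```python
# Write a tiny ASCII PGM ("P2") and read back its header fields.
pgm_text = "\n".join([
    "P2",            # magic number: ASCII grayscale
    "4 2",           # width and height
    "255",           # maximum pixel value
    "0 64 128 255",
    "255 128 64 0",
])

with open("tiny.pgm", "w") as f:
    f.write(pgm_text + "\n")

with open("tiny.pgm") as f:
    tokens = f.read().split()    # no comment handling in this sketch

magic = tokens[0]
width, height, maxval = (int(t) for t in tokens[1:4])
pixels = [int(t) for t in tokens[4:]]
print(magic, width, height, maxval, len(pixels))   # P2 4 2 255 8
```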

8. What are the characteristics of the vector file format?

The main characteristics of vector file formats are: the image is not distorted when enlarged and is independent of resolution; files take up little space and their contents can be freely and flexibly recombined; and they can be printed at high resolution. The disadvantage is that it is difficult to reproduce lifelike, richly toned image effects.
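To illustrate the resolution-independence point, the sketch below writes a tiny SVG file from Python: a vector format stores shape descriptions as text rather than pixels, so it can be rendered sharply at any size. The geometry is made up.

```python
# A vector image is a textual description of shapes, not a grid of pixels.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <rect x="10" y="10" width="80" height="80" fill="none" stroke="black"/>
  <circle cx="50" cy="50" r="30" fill="steelblue"/>
</svg>
"""

with open("example.svg", "w") as f:
    f.write(svg)   # renders crisply at any zoom level or print resolution
```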

Origin blog.csdn.net/m0_71819746/article/details/133191684