Digital Image Processing (Gonzalez)

1. What is digital image processing?

     Digital image processing is the set of methods and technologies for denoising, enhancing, restoring, segmenting, and extracting features from an image by computer. An image can be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and f all take finite, discrete values, we call the image a digital image. (0 ≤ f(x, y) < ∞)

Digital image processing serves two purposes: (1) improving image information so that people can interpret it; (2) supporting machine perception, making it easier for machines to understand images automatically.
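As a minimal sketch (using NumPy, and assuming a small grayscale example), a digital image is just a finite 2D array of discrete intensity values:

```python
import numpy as np

# A digital image: a finite 2D array f(x, y) of discrete gray levels.
# Here x indexes rows and y indexes columns (a common convention).
f = np.array([
    [0,   64, 128],
    [32,  96, 160],
    [255, 192, 224],
], dtype=np.uint8)

print(f.shape)    # spatial dimensions M x N
print(f[2, 0])    # intensity (gray level) at coordinates x=2, y=0
```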

2. The difference between digital image processing and computer vision and computer graphics

     Digital Image Processing (DIP): Image -> Image

     Computer Vision (CV): Image -> Understanding

     Computer Graphics (CG): Virtual scene description (3D coordinates) -> Image (2D array)

3. Image acquisition

    The principle of a sensor arrangement that transforms illumination into a digital image: by combining input electrical energy with a sensor material sensitive to the particular type of energy being detected, the input energy is converted into a voltage. The output voltage waveform is the response of the sensor. By digitizing this response, a digital quantity is obtained from each sensor.


    (1) Use a single sensor to acquire images

        To produce a two-dimensional image, there must be relative displacement between the sensor and the imaged area in both the x and y directions. A single sensor mounted on a lead screw provides displacement perpendicular to the direction of rotation, outputting one line of the image per movement.
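A hypothetical sketch of this line-by-line acquisition (the `sense_line` helper and the toy `scene` array are illustrative assumptions; a real scanner would measure continuous illumination):

```python
import numpy as np

def sense_line(scene, row):
    """Simulate one pass of a single sensor: one row of measurements."""
    return scene[row, :]

# A toy "scene" the sensor scans, as a 4 x 5 gradient.
scene = np.linspace(0, 255, 4 * 5).reshape(4, 5)

# Each mechanical displacement yields one image line; stacking the lines
# in order reconstructs the full two-dimensional image.
image = np.stack([sense_line(scene, r) for r in range(scene.shape[0])])
print(image.shape)
```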


    (2) Use a sensor strip to acquire images

      Reconstruction algorithm: an algorithm that reconstructs the image from the given data. Its purpose is to transform the sensed data into a meaningful cross-sectional image.

     The sensor strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction.


    (3) Use a sensor array to acquire images

      The imaging system collects the incident energy and focuses it onto an image plane. The sensor array, coincident with the focal plane, produces outputs proportional to the total amount of light received at each sensor. Digital or analog circuitry sweeps these outputs and converts them to an analog signal, which is then digitized by another part of the imaging system; the output is a digital image.


4. Image sampling and quantization

    Sampling: digitizing the coordinate values is called sampling. That is, a continuous image is spatially divided into an M×N grid, and each grid cell is represented by one brightness value; one grid cell is called a pixel. The values of M and N must satisfy the sampling theorem.


  The larger the sampling interval, the fewer pixels in the resulting image and the lower the spatial resolution; the smaller the sampling interval, the more pixels, the higher the spatial resolution, and the better the image quality, but the larger the amount of data.

   Sampling is divided into upsampling and downsampling.

          Upsampling: enlarges the image, e.g., so it can be displayed on a higher-resolution display device.

          Downsampling: reduces the image, in order to 1. make the image fit the size of the display area, or 2. generate a thumbnail of the image.
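A minimal sketch of both operations, assuming nearest-neighbor interpolation (real pipelines would typically use a library resampler with proper filtering):

```python
import numpy as np

def resample(img, factor):
    """Nearest-neighbor resampling: factor > 1 upsamples, factor < 1 downsamples."""
    M, N = img.shape
    new_m, new_n = int(M * factor), int(N * factor)
    # Map each output coordinate back to its nearest source coordinate.
    rows = (np.arange(new_m) / factor).astype(int)
    cols = (np.arange(new_n) / factor).astype(int)
    return img[np.ix_(rows, cols)]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
up = resample(img, 2)      # 8x8: each pixel replicated
down = resample(img, 0.5)  # 2x2: every other pixel kept
print(up.shape, down.shape)
```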

   Quantization: digitizing the amplitude is called quantization. It is the process of converting the continuous range of brightness at each sampling point into a single discrete number. After quantization, the image is represented as an integer matrix. Each pixel has two attributes: position and gray level. The position is given by row and column; the gray level is an integer representing the brightness at that pixel position. This M×N digital matrix is the object of computer processing. The gray levels generally range over 0-255 (8-bit quantization).
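A minimal sketch of quantizing continuous amplitudes in [0, 1) to k-bit integer gray levels (the mapping shown, scaling then truncating to 2^k levels, is one common choice, not the only one):

```python
import numpy as np

def quantize(samples, k):
    """Map continuous amplitudes in [0, 1) to integers in {0, ..., 2**k - 1}."""
    levels = 2 ** k
    return np.clip((samples * levels).astype(int), 0, levels - 1)

# Continuous brightness values at four sampling points.
samples = np.array([0.0, 0.25, 0.5, 0.999])
print(quantize(samples, 8))  # 8-bit gray levels: 0, 64, 128, 255
```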


In practice, captured images must be discretized into digital images before a computer can recognize and process them.

5. Digital image representation

    After sampling and quantization, the continuous function is transformed into a digital image. The matrix representation of an M×N image is:

    f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                f(1, 0)      f(1, 1)      ...  f(1, N-1)
                ...
                f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of the equation is a digital image by definition. Each element of the array is called a picture element, image element, or pixel. From here on, we use the terms image and pixel to refer to a digital image and its elements.

Digitization process: decisions must be made about the values of M and N and the number of discrete gray levels L. M and N must be positive integers, and the number of gray levels is typically an integer power of 2: L = 2^k.
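From these choices the storage requirement follows directly, since an M×N image with L = 2^k gray levels needs k bits per pixel, i.e. b = M × N × k bits in total:

```python
def storage_bits(M, N, L):
    """Bits needed to store an M x N image with L = 2**k gray levels."""
    k = L.bit_length() - 1            # k such that L == 2**k
    assert 2 ** k == L, "L must be an integer power of 2"
    return M * N * k

# A 1024 x 1024 image with 256 gray levels (k = 8) needs
# 1024 * 1024 * 8 = 8,388,608 bits (1 megabyte).
print(storage_bits(1024, 1024, 256))  # 8388608
```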



Origin: blog.csdn.net/zhangxue1232/article/details/108851573