C++ Graduation Project - Design and Implementation of a License Plate Positioning and Recognition System Based on VC++ and a BP Neural Network (Graduation Thesis + Program Source Code)

Design and implementation of a license plate positioning and recognition system based on VC++ and a BP neural network (graduation thesis + program source code)

Hello everyone, today I will introduce the design and implementation of a license plate positioning and recognition system based on VC++ and a BP neural network. The thesis and source code download address for this graduation project are attached at the end of the article. Friends who need the proposal report PPT template, thesis defense PPT template, and so on can go to my blog homepage and check the self-service download method in the column at the bottom left.

Article directory:

1. Project introduction
2. Resource details
3. Keywords
4. Introduction to the graduation project
5. Resource download

License plate recognition is an important component of modern intelligent transportation systems and is widely used. This article studies the three main technologies in a license plate recognition system: license plate positioning, license plate character segmentation and license plate character recognition.
For license plate positioning in complex environments, this article proposes a color-based BP neural network positioning algorithm. The network is trained on a license plate background-color library (blue backgrounds) so that it can distinguish blue from non-blue, and license plate positioning is achieved on that basis. License plate character segmentation uses an improved vertical projection algorithm, which segments the character positions better and faster. Finally, the character recognition part again uses a BP neural network: a character library is established and the network is trained to distinguish 34 different characters, finally realizing the recognition of license plate characters.


2. Resource details

Project difficulty: medium difficulty
Applicable scenario: graduation project on related topics
Word count of supporting paper: 11,925 words, 24 pages
Contains: full set of source code + completed thesis
Recommended download method for PPT templates such as the proposal report, thesis defense and project report:


Insert image description here


3. Keywords

BP neural network, license plate positioning, license plate recognition, image projection, character segmentation

4. Introduction to the graduation project

Tip: The following is a brief introduction to the graduation thesis. The complete source code of the project and the download address of the complete graduation thesis can be found at the end of the article.

Chapter 1 Introduction
1.1 Research Significance of License Plate Recognition
Vehicle License Plate Recognition (VLPR) is a system that, with the help of computers, uses digital image processing, computer vision, pattern recognition and other technologies to automatically identify vehicles passing on the road and obtain their license plate information. License plate recognition belongs to the fields of computer vision and pattern recognition.
With the demands of modern transportation development, intelligent transportation will be the trend of future transportation systems, and license plate recognition is a critical and important part of any intelligent transportation system.

1. Application of license plate recognition in highways
Highways are infrastructure dedicated to serving automobile traffic, designed to improve the capacity, speed and safety of road transportation. Intelligent transportation, which uses various high and new technologies, especially electronic information technology, to improve management efficiency, traffic efficiency and safety, has become the main direction of current traffic management development. According to surveys, 80% of cars on the highway are speeding, and this large number of speeding behaviors brings great hidden dangers to highway driving safety. "Snail cars" that occupy the passing lane reflect poor driving ethics and disregard for traffic regulations; they not only waste road resources but also endanger road traffic safety, and the problem of such "overbearing cars" is indeed serious. According to relevant statistics, more than 70% of highway traffic accidents are caused by speeding and lane occupation. For law enforcement departments, managing and punishing highway speeding and lane occupation is difficult because evidence is hard to obtain, and it has become the most troublesome problem in enforcement. Moreover, many highway application systems cannot automatically identify the license plate number, the "unique ID card" of the car, leaving opportunities for corruption and fraud. The automatic license plate identifier is a new type of highway electromechanical product that has gradually matured in recent years; its emergence fills a gap in collecting basic traffic information and provides a reliable basis for deepening highway operation management and improving operating efficiency. By installing speeding and lane-occupation monitoring equipment on the roadside or on gantries, vehicle images and license plate information can be collected in real time and real-time alarm reminders provided. Through host management, automatic alarms can be raised at the exit, and managers can handle violations on the spot based on the license plate information. This creates a powerful deterrent, prompting drivers to consciously reduce speeding and lane occupation, reducing high-speed driving risks and producing great social benefits.

2. Application of license plate recognition in vehicle entry and exit management
License plate recognition equipment is installed at the entrance and exit to record each vehicle's license plate number and entry/exit time; combined with automatic door and barrier control equipment, this enables automatic management of vehicles. Applied to parking lots, it can realize automatic timed charging, and it can also automatically count the number of available parking spaces and give prompts, realizing automatic parking-charge management, saving manpower and improving efficiency. Applied to a smart community, it can automatically determine whether an incoming vehicle belongs to the community and implement automatic timed charging for external vehicles. In some units, this application can also be combined with the vehicle dispatching system to automatically and objectively record the departure status of the unit's vehicles.

3. Other applications
For example, in blacklist surveillance, the public security agency can monitor specific suspect vehicles automatically instead of relying on traditional manual surveillance; some agencies use automatic release to identify vehicles belonging to the unit.
In addition, there are applications such as automatic registration of license plates and automatic timed charging in parking lots.
A good license plate recognition system applied to the transportation system can make traffic smoother, improve efficiency, and save a lot of manpower and material resources. In addition, license plate recognition belongs to the field of pattern recognition, and its research helps promote the development of computer vision.

1.2 Current status of license plate recognition systems
License plate recognition technology has been studied extensively since the 1980s. Currently there are many algorithms at home and abroad, and some practical VLPR technology has begun to be used in traffic flow monitoring, access control, electronic toll collection, mobile inspection and other occasions. However, both VLPR algorithms and VLPR products almost always have certain limitations and need continuous improvement to adapt to new requirements. For example, existing systems can hardly overcome the technical obstacles of segmenting, positioning and effectively recognizing multiple license plates in images with complex backgrounds, and they also find it difficult to meet all-weather, complex-environment and high-speed requirements.

License plate character recognition is essentially the recognition of printed characters attached to the license plate. Whether they can be correctly recognized is not only a matter of character recognition technology; the influence of their carrier, the license plate region, must also be considered. License plate character recognition is therefore a comprehensive technology that coordinates text recognition with the license plate image itself. Due to factors such as camera performance, the cleanliness of the plate, lighting conditions, the tilt angle during shooting, and vehicle movement, the characters on the plate may be seriously blurred, skewed, damaged or smudged, which brings difficulty to character recognition.

Current license plate recognition methods are mainly aimed at occasions such as automatic slow-stop toll collection and parking lot management, where the monitored area generally contains a single vehicle and the background is relatively simple. In many practical applications today, however, the monitored area is relatively complex and existing methods cannot be applied directly. For example, in mobile traffic police inspections, highway surveillance and urban arterial-road surveillance, multiple cars generally appear at the same time and the background is relatively complex, containing billboards, trees, buildings, zebra crossings, various background text, and so on. Therefore, this topic innovatively proposes a method for positioning, segmenting and recognizing multiple license plates against a complex background for this situation, and considers color segmentation and ColorLP algorithms, which is also the current development trend of license plate image recognition.

The specific applications of license plate recognition systems are also developing rapidly, from the original stationary-shot scenes such as toll stations and parking lots to real-time monitoring applications such as mobile highway vehicle inspection, automatic violation alarms, and detection of overloading and red-light running, with the addition of neural-network adaptive recognition, learning and training functions. Practical requirements for system response speed, networking, intelligence and recognition success rate are becoming ever higher. With the research and development of the above core technologies, the application fields and functions have also been greatly expanded, but practical research on license plate recognition still has a long way to go.

At present, many companies in the domestic market have mature products in use, such as the Beijing Wentong license plate recognition system, Shenzhen Haichuan license plate recognition, Shenzhen Xinlutong, and others.
According to a report released by the market research organization IMS, automatic license plate recognition technology has been showing a growth trend in the US market. After the 2008 economic crisis, large-scale vehicle recognition projects in the United States have been restarted.
At present, license plate recognition mainly uses the following methods: the template matching method, the feature statistical matching method and the neural network recognition method. The template matching method has a relatively high recognition rate for regular characters, but its recognition ability is limited when characters are deformed. In practical applications, the feature statistical matching method does not give ideal results when characters are broken or partially missing. Neural network recognition can effectively identify license plates with higher resolution and clearer images, and has strong classification ability, fault tolerance, robustness and non-linear mapping ability. Much of the character recognition in car license plates is implemented with neural networks, among which the BP neural network, as the essence of neural networks, is so far the most widely used network algorithm.

1.3 Research content of license plate recognition system
The license plate character recognition technology in this article mainly includes three parts: license plate positioning, character segmentation and character recognition.
After the license plate is captured as an image by electronic devices such as cameras, accurately finding the license plate in this image is the key to the entire automatic license plate recognition system. Current license plates have certain characteristics, such as national standards for the background color and shape of the plate. Based on these standards and on image-processing methods for color and shape (license plates are mainly rectangular), the license plate position can be accurately located.

Due to the influence of lighting, the background color of an actually photographed license plate will be somewhat distorted relative to the national standard. By establishing a license plate color library and training a BP neural network on the distorted background colors, the system gains better adaptability to such distortion and can locate the license plate position more accurately.
After the license plate is located, some preprocessing must be performed on it, such as noise removal and edge refinement, and then the license plate characters are segmented. The character segmentation in this article uses a method based on projected eigenvalues. Since digits and letters are each internally connected characters, segmentation between them can be achieved simply by finding the narrow blank gaps between adjacent characters or digits.

After segmenting the license plate characters, the last step is the recognition of characters and numbers. License plate characters and numbers are standard characters formulated by the country. This article uses a BP neural network algorithm to build a character library and train the network to have the ability to distinguish 34 different characters, and finally realize the recognition of license plate characters.

1.4 Chapter Arrangement
This paper is divided into 5 chapters.
Chapter 1, the introduction, mainly presents the research significance, research status and research content of automatic license plate recognition systems, briefly introduces some common license plate recognition methods, and then outlines the structure of this paper (i.e., the chapter arrangement).
Chapter 2, license plate positioning based on a color BP neural network, introduces the display of color images in BMP format, the conversion from the RGB color space to the Cr Cb color space, then the basic principles of BP neural networks, and finally the method of locating license plates with a BP neural network based on the Cr Cb color space.
Chapter 3, license plate character positioning and segmentation, first introduces image projection technology, then license plate segmentation based on image projection, and finally the implementation of license plate character segmentation based on image projection technology.
Chapter 4, the license plate recognition system based on color and BP neural networks, first introduces the establishment of the license plate character library, then character recognition based on the BP neural network, and finally the implementation based on this technology.
Chapter 5, the last chapter, is the conclusion. After programming and implementing the solutions proposed above and testing them on several license plate images, the feasibility of the method can be roughly assessed and some simple conclusions are drawn. Finally, thanks are expressed to the instructor.

Chapter 2 License Plate Positioning Based on Color and BP Neural Network
2.1 Display of Color Images
Color images are visually closer to the real scene than grayscale images. In a license plate recognition system, the license plate is generally obtained through a video stream, and the frames containing license plates are then converted into color images. Common color image formats mainly include JPEG/JPG, PNG and BMP. Each color image format has its own advantages and disadvantages in different fields; however, on Windows, an image must first be converted to the BMP format before it can be displayed (for example, when opening a JPG image, the system internally restores each pixel into a form similar to BMP).
In this paper I selected the 24-bit true-color BMP format as the original vehicle image format (that is, all original vehicle images to be recognized use the unified standard 640*480 BMP format); other color image formats are not studied, and the image format has nothing to do with the core algorithms of this paper such as license plate positioning, character segmentation and character recognition.
For any color image, we mainly care about the following elements: the size of the image, the size of each pixel, the header information of the image, and so on.
A BMP color image is no exception. What we care most about are the image dimensions (height and width) and the RGB value of each pixel. Once these key values are known (say the image height is hImage, the width is wImage, and the RGB values of each pixel are stored in turn in the array Image[]), displaying the color image becomes very simple: we just display the corresponding pixel at the corresponding position.
Fortunately, the BMP format file structure is not complicated.
A typical BMP image file consists of four parts:
1: The bitmap file header, which contains the BMP file type, display content and other information;
2: The bitmap information header, which contains the width, height, compression method and color definitions of the BMP image;
3: The palette; this part is optional. Some bitmaps require a palette, while others, such as true-color images (24-bit BMP), do not;
4: The bitmap data; the content of this part depends on the bit depth of the BMP bitmap. 24-bit images store RGB directly, while images with fewer than 24 bits store color index values into the palette.
Therefore, by reading each part of the BMP image file, the desired data can be obtained.
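The following is a minimal C++ sketch (not taken from the project source) of how those fields of a 24-bit BMP file can be read so the pixel data becomes available for display and processing; the structure layout follows the standard BMP format, and error handling is reduced to essentials.

```cpp
// Minimal sketch: read the key fields of an uncompressed, bottom-up 24-bit BMP.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct BmpImage {
    int width  = 0;
    int height = 0;
    std::vector<uint8_t> pixels;   // BGR triples, row by row, padding removed
};

static uint16_t readU16(FILE* f) { uint8_t b[2]; fread(b, 1, 2, f); return (uint16_t)(b[0] | b[1] << 8); }
static uint32_t readU32(FILE* f) { uint8_t b[4]; fread(b, 1, 4, f); return b[0] | b[1] << 8 | b[2] << 16 | (uint32_t)b[3] << 24; }

bool loadBmp24(const char* path, BmpImage& img) {
    FILE* f = fopen(path, "rb");
    if (!f) return false;
    if (readU16(f) != 0x4D42) { fclose(f); return false; }   // 'BM' signature
    readU32(f); readU32(f);                                   // file size, reserved
    uint32_t dataOffset = readU32(f);                         // offset of pixel data
    readU32(f);                                               // info header size
    img.width  = (int)readU32(f);
    img.height = (int)readU32(f);
    readU16(f);                                               // color planes
    uint16_t bitCount = readU16(f);
    if (bitCount != 24) { fclose(f); return false; }          // only 24-bit handled here
    fseek(f, (long)dataOffset, SEEK_SET);
    int rowSize = ((img.width * 3 + 3) / 4) * 4;              // rows padded to 4 bytes
    std::vector<uint8_t> row(rowSize);
    img.pixels.resize((size_t)img.width * img.height * 3);
    for (int y = 0; y < img.height; ++y) {                    // rows stored bottom-up
        fread(row.data(), 1, rowSize, f);
        std::copy(row.begin(), row.begin() + img.width * 3,
                  img.pixels.begin() + (size_t)y * img.width * 3);
    }
    fclose(f);
    return true;
}
```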
During the experiment, it is enough to obtain the BMP image data and display the corresponding RGB pixels at the corresponding positions. The picture below is a screenshot of a license plate image in BMP format after it has been displayed.
Insert image description here

2.2 Conversion from RGB to Cr Cb color space

RGB color space has various implementation methods depending on the actual system capabilities of the device used. As of 2006, the most commonly used implementation is the 24-bit implementation, which has 8 bits per channel for red, green, and blue, or 256 color levels. A color space based on such a 24-bit RGB model can represent 256×256×256 ≈ 16.7 million colors. Some implementations use 16 bits per primary color, enabling higher and more accurate color densities within the same range. This is especially important in wide color spaces, where most commonly used colors are arranged relatively closely together. 
In the RGB color space the three components are equally important, so they need to be stored at the same resolution; at most, a format such as RGB565 is used to reduce quantization accuracy, but if the three components are all stored at the same resolution the amount of data is still very large. However, human eyes are more sensitive to brightness than to color, so the brightness information and color information of the image can be separated and stored at different resolutions. In this way the image data can be stored more efficiently with little impact on subjective quality.
The YCbCr color space and its variants (sometimes called YUV) are among the most commonly used and efficient ways of representing color images. Y is the luminance (luma) component of the image, computed as a weighted average of the R, G and B components:
Y = kr·R + kg·G + kb·B
where the k values are weighting factors.
The formula above gives the brightness information. The color information is expressed as color differences (chrominance or chroma), where each color-difference component is the difference between an R, G or B value and the luminance Y:
Cb = B - Y
Cr = R - Y
Cg = G - Y
Since Cb + Cr + Cg is a constant (in fact an expression in Y), only two of these values, together with Y, are needed to recover the original RGB values. Therefore only the luminance and the blue and red color differences are stored, i.e. (Y, Cb, Cr).
Compared with the RGB color space, the YCbCr color space has a significant advantage: Y can be stored at the same resolution as the original picture, while Cb and Cr can be stored at a lower resolution. This takes less space without noticeably degrading image quality. Saving the color information at a lower resolution than the brightness information is therefore a simple and effective image compression method.
In this paper, in order to obtain sufficiently informative pixel features from less data, I convert the RGB image to the YCbCr color space and keep only the red and blue chrominance components (Cr and Cb). The specific formulas are as follows.
Cb = 128 - 37.797·R/255 - 74.203·G/255 + 112·B/255
Cr = 128 + 112·R/255 - 93.786·G/255 - 18.214·B/255
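A small C++ sketch of this conversion is given below, assuming 8-bit R, G, B values in [0, 255]; the function name and the final scaling note are illustrative rather than taken from the thesis code.

```cpp
// Sketch of the RGB -> (Cr, Cb) conversion above, using ITU-R BT.601 coefficients
// with 8-bit R, G, B in [0, 255]. Names and structure are illustrative.
#include <cstdint>

struct CrCb { double cr; double cb; };

inline CrCb rgbToCrCb(uint8_t r, uint8_t g, uint8_t b) {
    double R = r, G = g, B = b;
    CrCb out;
    out.cb = 128.0 - 37.797 * R / 255.0 - 74.203 * G / 255.0 + 112.0  * B / 255.0;
    out.cr = 128.0 + 112.0  * R / 255.0 - 93.786 * G / 255.0 - 18.214 * B / 255.0;
    return out;
}

// The thesis only says the components are "simply processed" into valid network
// inputs in [0, 1]; dividing by 255 (cr / 255.0, cb / 255.0) is one simple option.
```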

2.3 Principle of BP neural network
The BP (Back Propagation) network was proposed in 1986 by a group of scientists headed by Rumelhart and McClelland. It is a multi-layer feedforward network trained with the error back-propagation algorithm and is one of the most widely used neural network models today. A BP network can learn and store a large number of input-output pattern mappings without the mathematical equations describing these mappings being given in advance.
The structural diagram of a neural network is shown below.
Insert image description here

The BP neural network topology includes an input layer, a hidden layer and an output layer. The number of neurons in the input layer is determined by the dimension of the sample attributes, and the number of neurons in the output layer by the number of sample classes; the number of hidden layers and the number of neurons in each hidden layer are specified by the user. Each neuron has a threshold (bias) that modulates its activity, and the arcs in the network carry the weights between neurons of adjacent layers. The input and output of the input-layer units are simply the attribute values of the training sample.
For a unit j in the hidden layer or the output layer, the input is
Ij = Σi wij·Oi + θj
where wij is the weight of the connection from unit i in the previous layer to unit j, Oi is the output of unit i in the previous layer, and θj is the threshold of unit j.
The output of each neuron is computed through an activation function, which models the activity of the neuron. The activation function is usually the sigmoid (logistic) function, so the output of unit j is
Oj = 1 / (1 + e^(-Ij))

In addition to this, there is a concept of learning rate (l) in neural networks, which usually takes a value between 0 and 1 and helps to find the global minimum. If the learning rate is too small, learning will proceed slowly. If the learning rate is too large, swings between inappropriate solutions may occur.
The basic process of the algorithm is:

Insert image description here

1. Initialize the network weights and neuron thresholds (the simplest way is random initialization).
2. Forward propagation: compute, layer by layer, the inputs and outputs of the hidden-layer neurons and output-layer neurons using the formulas above.
3. Backward propagation: correct the weights and thresholds according to the error formulas, and repeat from step 2 until the termination condition is met.
The termination condition can take many forms:
§ All weight updates in the previous cycle are smaller than a specified threshold.
§ The percentage of samples misclassified in the previous cycle is below a certain threshold.
§ A pre-specified number of training epochs has been exceeded.
§ The mean square error between the network output and the actual output is below a certain threshold.
Generally, the last termination condition gives higher accuracy.
In the actual use of a BP neural network there are some practical issues:
1. Sample processing. For the output, if there are only two classes, the targets are 0 and 1; but the sigmoid only reaches exactly 0 or 1 when its input tends to negative or positive infinity, so the conditions can be relaxed appropriately: an output greater than 0.9 is considered 1 and an output less than 0.1 is considered 0. For the input, samples also need to be normalized.
2. Selection of the network structure. This mainly means that the number of hidden layers and neurons determines the size of the network, and the network size is closely related to the learning performance: a large network requires a large amount of computation and may overfit, while a small network may underfit.
3. Selection of initial weights and thresholds. The initial values affect the learning result, so choosing appropriate initial values is also very important.
4. Incremental learning versus batch learning. The algorithms and derivations above are all based on batch learning. Batch learning is suitable for offline learning and gives stable results; incremental learning is used for online learning, is relatively sensitive to noise in the input samples, and is not suitable for drastically changing input patterns.
5. There are other choices for the excitation function and error function.
Generally speaking, the BP algorithm has many options, and there is often a large room for optimization for specific training data.
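To make the forward and backward passes concrete, the following is an illustrative C++ sketch of a small one-hidden-layer BP network with sigmoid activations and incremental weight updates. It is a simplified reconstruction of the formulas above, not the thesis source; the class name, storage layout and default learning rate are assumptions.

```cpp
// Illustrative one-hidden-layer BP network (e.g. 2-4-1 for the color classifier
// described later, or 72-50-34 for character recognition).
#include <cmath>
#include <cstdlib>
#include <vector>

struct BpNet {
    int nIn, nHid, nOut;
    std::vector<double> wIH, wHO;       // weights input->hidden, hidden->output
    std::vector<double> bH, bO;         // thresholds (biases)
    std::vector<double> hid, out;       // layer outputs
    double lr;                          // learning rate, typically in (0, 1)

    BpNet(int ni, int nh, int no, double rate = 0.5)
        : nIn(ni), nHid(nh), nOut(no),
          wIH(ni * nh), wHO(nh * no), bH(nh), bO(no), hid(nh), out(no), lr(rate) {
        for (double& w : wIH) w = (double)rand() / RAND_MAX - 0.5;   // random init
        for (double& w : wHO) w = (double)rand() / RAND_MAX - 0.5;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // Forward propagation: compute hidden- and output-layer activations.
    const std::vector<double>& forward(const std::vector<double>& x) {
        for (int j = 0; j < nHid; ++j) {
            double s = bH[j];
            for (int i = 0; i < nIn; ++i) s += wIH[i * nHid + j] * x[i];
            hid[j] = sigmoid(s);
        }
        for (int k = 0; k < nOut; ++k) {
            double s = bO[k];
            for (int j = 0; j < nHid; ++j) s += wHO[j * nOut + k] * hid[j];
            out[k] = sigmoid(s);
        }
        return out;
    }

    // One incremental back-propagation step for a single (input, target) pair.
    void train(const std::vector<double>& x, const std::vector<double>& t) {
        forward(x);
        std::vector<double> dOut(nOut), dHid(nHid, 0.0);
        for (int k = 0; k < nOut; ++k)
            dOut[k] = (t[k] - out[k]) * out[k] * (1.0 - out[k]);     // output deltas
        for (int j = 0; j < nHid; ++j) {
            double e = 0.0;
            for (int k = 0; k < nOut; ++k) e += dOut[k] * wHO[j * nOut + k];
            dHid[j] = e * hid[j] * (1.0 - hid[j]);                   // hidden deltas
        }
        for (int j = 0; j < nHid; ++j)
            for (int k = 0; k < nOut; ++k) wHO[j * nOut + k] += lr * dOut[k] * hid[j];
        for (int k = 0; k < nOut; ++k) bO[k] += lr * dOut[k];
        for (int i = 0; i < nIn; ++i)
            for (int j = 0; j < nHid; ++j) wIH[i * nHid + j] += lr * dHid[j] * x[i];
        for (int j = 0; j < nHid; ++j) bH[j] += lr * dHid[j];
    }
};
```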

2.4 BP neural network license plate positioning based on Cr Cb
Insert image description here

The picture shows some samples of license plate background colors
Establish a license plate background-color library. In this paper only blue-background, white-character license plates are studied. Each license plate sample is converted from the RGB color space to obtain its Cr and Cb (red and blue chrominance) components, so for every pixel of a sample its Cr and Cb values are available. Only two kinds of pixels matter here: blue pixels and non-blue pixels. For a blue pixel, the desired BP network mapping after conversion to Cr Cb is an output of 1; for a non-blue pixel, the desired output is 0. Since the BP network requires input values between 0 and 1, the Cr and Cb values are simply scaled into that range before being used as inputs.
The BP neural network model used for license plate positioning therefore has an input layer with two neural nodes (the Cr and Cb components of a pixel) and an output layer with a single neural node (whether this pixel looks blue). The middle (hidden) layer is designed as a single layer containing only 4 nodes.
After designing the BP neural network model, the license plate background colors are put into training, and the network can be used once it converges (in fact, during the completion of the paper convergence did not occur; training was stopped after a fixed number of iterations. That number, 2552555, was determined by testing during the experiment and is not necessarily optimal). At this point the network can be considered able to distinguish blue from non-blue (the weights obtained by training are saved in a file, CharBpNet.txt, so that they can be read back directly when restoring the network, saving a lot of training time). Then every pixel of any image containing a license plate is fed into the network. If the network considers it blue (node output in the range 0.8-1.0), the pixel is mapped to 255; if the network considers it non-blue (node output in the range 0-0.8), the pixel is mapped to 0. In this way we obtain a binary image and, at the same time, separate the license plate from the complex natural image (this is of course the ideal case; in practice the plate cannot be separated when the vehicle body is also blue).
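A sketch of this pixel-by-pixel classification step is shown below; it reuses the illustrative BmpImage, rgbToCrCb and BpNet types from the earlier sketches and the 0.8 output threshold described above, and is not the thesis implementation.

```cpp
// Classify every pixel with the trained 2-4-1 network and build a binary image
// (255 = "blue", 0 = "non-blue"). Types come from the earlier sketches.
#include <cstdint>
#include <vector>

std::vector<uint8_t> binarizeByColor(const BmpImage& img, BpNet& net) {
    std::vector<uint8_t> bin((size_t)img.width * img.height, 0);
    for (int y = 0; y < img.height; ++y) {
        for (int x = 0; x < img.width; ++x) {
            size_t p = ((size_t)y * img.width + x) * 3;               // BGR triple
            CrCb c = rgbToCrCb(img.pixels[p + 2], img.pixels[p + 1], img.pixels[p]);
            std::vector<double> in = { c.cr / 255.0, c.cb / 255.0 };  // scaled to [0,1]
            double out = net.forward(in)[0];
            bin[(size_t)y * img.width + x] = (out >= 0.8) ? 255 : 0;  // blue -> white
        }
    }
    return bin;
}
```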
Insert image description here

The left picture is the original natural vehicle image, and the right picture is the binary image obtained after the BP neural network action.
From the above two pictures, you can see that the license plate inside a natural image has been separated after processing. After the license plate is positioned, the general position of the characters is determined.

Chapter 3 License Plate Character Positioning and Segmentation
3.1 Image Projection Technology
Image projection technology is generally divided into horizontal projection and vertical projection. The so-called projection is the statistics of certain features of the image, and then reflects the feature intensity in the form of a histogram. Generally used for binary images, the horizontal projection is the number of non-zero pixel values ​​in each row, here it is 1 or 255, and the vertical projection is the number of non-zero pixel values ​​in each column of image data.
Image projection technology is mainly used in object detection, face detection, license plate positioning, character segmentation, etc. The characteristics of the binary image can be effectively separated by horizontal or vertical or horizontal plus vertical projection, and the desired data can be obtained.
Usually, we use histograms to reflect the data characteristics after projection.
The so-called histogram is a bar graph or mass distribution graph. It is a statistical report graph that represents the distribution of data by a series of vertical stripes or line segments of varying heights. Histograms generally have the following types: stable type, island type, bimodal type, folded tooth type, steep wall type, flat top type, etc.
3.2 License plate segmentation based on image projection technology
Since the license plate characters are on the license plate, this step can be regarded as the rough positioning of the characters, or equivalently the precise positioning of the plate. Algorithmically it works as follows: project the binary image in the horizontal and vertical directions to obtain its histogram distributions, then draw straight lines at the projection peaks. The two horizontal and two vertical lines intersect to form a rectangle, which is the approximate location of the license plate.
The binary image to be projected horizontally and vertically is as shown below:
Insert image description here

For a binary image for license plate location, after ideal horizontal or vertical projection, the expected histogram should be low on both sides and suddenly high in the middle.

The following introduces the horizontal and vertical projection techniques respectively.
Insert image description here

Accumulate the gray values of the pixels row by row to obtain the horizontal projection; the abscissa is the accumulated gray sum and the ordinate is the row number. From this the upper and lower edges of the license plate area can be determined. The upper-left figure shows the overall distribution of the projection values of the binary image. As shown there, the lower edge of the license plate area is the row corresponding to the first projection peak encountered from bottom to top; similarly, the upper edge is the row corresponding to the first projection peak encountered from top to bottom. The license plate area therefore lies between these two row numbers. The horizontal analysis algorithm is as follows:

  1. Scan the image line by line from bottom to top, and record the number of pixels with a grayscale value of 255 in each line;
  2. Find the first row in which the number of pixels with a gray value of 255 exceeds a certain threshold (and whose neighboring rows also exceed it), and record the row number; this is the bottom edge of the vehicle license plate;
  3. Continue scanning and find the first row in which the number of pixels with a gray value of 255 falls below the threshold (and whose neighboring rows also fall below it), and record the row number; this is the top edge of the vehicle license plate;
  4. At this time, scanning is no longer continued, and the original image is cropped based on the two recorded line numbers;
  5. The cropped image is obtained, which is the license plate image area positioned in the horizontal direction.
    Its vertical projection algorithm is also similar.

License plate vertical projection renderings

Accumulate the gray values of the pixels column by column to obtain the vertical projection; the abscissa is the accumulated gray sum and the ordinate is the column number. From this the left and right edges of the license plate area can be determined. The figure above shows the overall distribution of the projection values. The right edge of the license plate area is the column corresponding to the first projection peak encountered from right to left; similarly, the left edge is the column corresponding to the first projection peak encountered from left to right. The license plate area therefore lies between these two column numbers. The vertical analysis algorithm is as follows (a code sketch of both scans is given after the list):

  1. Scan the image column by column from left to right, and record the number of pixels with a gray value of 255 in each column;
  2. Find the first column in which the number of such pixels exceeds a certain threshold (and whose neighboring columns also exceed it), and record the column number; this is the left edge of the vehicle license plate;
  3. Continue scanning and find the first column in which the number of such pixels falls below the threshold (and whose neighboring columns also fall below it), and record the column number; this is the right edge of the vehicle license plate;
  4. Stop scanning and crop the original image based on the two recorded column numbers;
  5. The cropped image is the license plate area positioned in the vertical direction.
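Both scans can be expressed with the same projection-and-band-search helpers; the following C++ sketch is an illustrative reconstruction of the procedure above (threshold values and the minimum-run length are assumptions to be tuned).

```cpp
// Count white pixels per row/column of the binary image, then find the first
// band whose counts rise above a threshold and stay there long enough.
#include <cstdint>
#include <vector>

// Count 255-valued pixels in each row (byRow = true) or each column.
std::vector<int> project(const std::vector<uint8_t>& bin, int w, int h, bool byRow) {
    std::vector<int> counts(byRow ? h : w, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (bin[(size_t)y * w + x] == 255)
                ++counts[byRow ? y : x];
    return counts;
}

// Find the first band [first, last) whose counts stay above `thresh`
// for at least `minRun` consecutive indices. Returns false if none exists.
bool findBand(const std::vector<int>& counts, int thresh, int minRun,
              int& first, int& last) {
    int n = (int)counts.size();
    for (int i = 0; i < n; ++i) {
        if (counts[i] <= thresh) continue;
        int j = i;
        while (j < n && counts[j] > thresh) ++j;
        if (j - i >= minRun) { first = i; last = j; return true; }
        i = j;
    }
    return false;
}

// Usage sketch: rows give the bottom/top plate edges, columns the left/right.
// int top, bottom, left, right;
// findBand(project(bin, w, h, true),  20, 5, bottom, top);
// findBand(project(bin, w, h, false), 10, 5, left,   right);
```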

The effects after projection respectively are as follows:
Insert image description here

It is worth mentioning that not all natural vehicle images are so perfect after BP neural network operation, which is why a threshold is set when finding each edge of the license plate (and several adjacent rows satisfy the requirement that the number is greater than a certain threshold). As shown below
Insert image description here

After horizontal and vertical projection of the binary image of the license plate, we have obtained the precise position of the license plate, that is, the area where the characters are located. At this point, we can focus our attention from the original binary image to the local area of ​​the license plate in the binary image (that is, how to segment each character from the license plate).

After the rough position of the characters is determined, the license plate characters can be segmented.
The characters on license plates must be standardized, as shown in the picture below.
Insert image description here

In addition to the proportional method for segmentation, this article adopts a more adaptable projection-based segmentation technology.
The character segmentation processing uses a method based on projected eigenvalues. Since digits and letters are each internally connected characters, segmentation between them can be achieved simply by finding the narrow blank gaps between adjacent characters or digits (this is, of course, also the ideal situation).
Insert image description here

Obviously, to segment the characters in the picture above, it is enough to make a vertical projection of the character pixels within the located license plate (the white rectangle). The specific method is as follows:

Insert image description here

Accumulate the pixel values column by column to obtain the vertical projection; the abscissa is the accumulated sum and the ordinate is the column number. From this the left and right edges of each character area can be determined.
The vertical analysis algorithm is as follows:

  1. Scan the plate image column by column from left to right, and record the number of pixels with a gray value of 0 in each column;
  2. Find the first column in which the number of such pixels exceeds a certain threshold (and whose neighboring columns also exceed it), and record the column number; this is the left edge of a character;
  3. Continue scanning and find the first column in which the number of such pixels falls below the threshold (and whose neighboring columns also fall below it), and record the column number; this is the right edge of that character;
  4. Keep scanning in the same way, recording the column-number pairs in sequence, and crop the original image accordingly;
  5. The cropped images are the vertically positioned character areas.
    In this way each character is obtained; at this point it is enough to save the area coordinates of each character. A code sketch of this column-run segmentation is given below.
    Insert image description here

The effect is as shown above
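The following C++ sketch illustrates that column-run segmentation; it is an assumed helper written to match the steps above, not the thesis source, and the thresholds are placeholders.

```cpp
// Segment the characters inside the located plate region by vertical projection
// of the character pixels (gray value 0). Returns [left, right) column pairs.
#include <cstdint>
#include <utility>
#include <vector>

std::vector<std::pair<int,int>> segmentCharacters(const std::vector<uint8_t>& plate,
                                                  int w, int h,
                                                  int thresh = 2, int minWidth = 3) {
    // Count character (0-valued) pixels in every column of the plate image.
    std::vector<int> colCount(w, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (plate[(size_t)y * w + x] == 0) ++colCount[x];

    std::vector<std::pair<int,int>> segments;
    int x = 0;
    while (x < w) {
        while (x < w && colCount[x] <= thresh) ++x;      // skip blank gap
        int left = x;
        while (x < w && colCount[x] > thresh) ++x;       // run of character columns
        if (x - left >= minWidth) segments.push_back({left, x});
    }
    return segments;                                     // one pair per character
}
```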

Chapter 4 License Plate Recognition System Based on Color and BP Neural Network
4.1 Establishment of License Plate Character Library
The character library is established to prepare for character recognition. It requires each character to be saved according to uniform standards, such as the same storage format, the same size specification and the same number of samples per character. In this paper only digits and letters were studied, not Chinese characters, so there are 34 different character classes in total (the digit 0 and the letter O, and the digit 1 and the letter I, are each treated as the same character). Ten samples of each character are obtained with a program. The character library provides the training samples for the BP neural network.
4.1.1 Image scaling technology
 In computer image processing, image scaling refers to the process of adjusting the size of digital images. Image scaling is a non-trivial process that requires a trade-off between processing efficiency and the smoothness and clarity of the results. As the size of an image increases, the pixels that make up the image become more visible, making the image appear "softer". Conversely, shrinking an image will enhance its smoothness and clarity.
The main image scaling algorithms include nearest neighbor interpolation and bilinear interpolation.
In this thesis research, I used the nearest neighbor interpolation method to normalize each character to a unified specification of 6*12.
Nearest neighbor interpolation, informally, maps each output pixel back to the nearest source pixel and copies its value unchanged (when an image is enlarged, each original pixel is simply replicated into a small block of identical pixels). See the effect below.
Each character before normalization, and each character after normalization to the unified 6*12 size, are shown in the corresponding figures.
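A minimal C++ sketch of this nearest-neighbor normalization to 6*12 is shown below; the function name and image layout are illustrative.

```cpp
// Nearest-neighbor scaling of a single-channel binary image stored row by row.
#include <cstdint>
#include <vector>

std::vector<uint8_t> resizeNearest(const std::vector<uint8_t>& src, int sw, int sh,
                                   int dw = 6, int dh = 12) {
    std::vector<uint8_t> dst((size_t)dw * dh);
    for (int y = 0; y < dh; ++y) {
        int sy = y * sh / dh;                       // nearest source row
        for (int x = 0; x < dw; ++x) {
            int sx = x * sw / dw;                   // nearest source column
            dst[(size_t)y * dw + x] = src[(size_t)sy * sw + sx];
        }
    }
    return dst;
}
```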
4.1.2 Saving characters
First number the characters: the digits 0-9 are numbered 00-09, and the letters are numbered 10-33 (O and I are merged with 0 and 1, leaving 24 distinct letters). In this paper the characters are saved as .raw raw-image data files named 000.raw to 339.raw,
that is, a three-digit number below 340 followed by the .raw extension. The first two digits give the character number, and the third digit n means it is the nth sample of that character. For example, a file named 089.raw represents the digit 8 and is the 9th picture of that character in the library (counting from the 0th picture). Similarly, 330.raw represents the character Z and is its 0th picture.
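As a small illustration (an assumed helper, not from the thesis), the file name can be built from the character number and sample index like this:

```cpp
// Build the .raw file name described above, e.g. (8, 9) -> "089.raw".
#include <cstdio>
#include <string>

std::string rawName(int charIndex, int sampleIndex) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%02d%d.raw", charIndex, sampleIndex);
    return buf;
}
```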
In order to keep the characters in the character library as consistent as possible with the characters to be recognized, the library characters are generated by the program using the same processing. The procedure is:

  1. Split each character in the license plate.
  2. Normalize each character one by one to the 6*12 unified standard
  3. Just name and save the normalized characters according to certain standards.
    Finally, 340 .raw files in 34 classes were obtained. Displaying this character library with a program gives the figure below.
    Insert image description here

4.2 Character recognition based on BP neural network
After establishing the character library, design the BP neural network for character recognition.
First design the input layer and output layer. Each specimen in the character library is a standard 6*12 binary image. For character features I use the pixel method: when a pixel of a character is 255, the corresponding input node is set to 0.9, and when the pixel value is 0 the input node is set to 0.1. The input layer therefore has 6*12 = 72 nodes. The output layer uses 34 nodes; for example, when the teacher signal is the character A (number 10), the 10th output node is set to 0.9 and all other nodes to 0.1 (replacing 0 and 1 with 0.1 and 0.9 has been shown by many scholars to give better accuracy).

At this point, the BP neural network model used for character recognition has an input layer containing 72 neural nodes (one per pixel of the character) and an output layer containing 34 neural nodes (indicating which of the 34 characters the input belongs to). The middle (hidden) layer is designed as a single layer containing only 50 nodes (an empirical value obtained during the experiment and not guaranteed to be optimal).
After designing the BP neural network model, the library characters are put into training, and the network can be used once it converges (in fact convergence did not occur during the completion of the paper; training was stopped after a fixed number of iterations. That number, 10000*72, was determined by testing during the experiment and is likewise not guaranteed to be optimal). At this point the network can be considered able to distinguish the 34 different characters (the weights obtained by training are saved in the file CharBpNet.txt, so they can be read back directly when restoring the network, saving a lot of training time). Then any character obtained from the license plate segmentation is normalized and fed into the network; the output is compared against each of the 34 target patterns (numbers 0 to 33), the error for each is computed, and the number with the minimum of the 34 errors is taken as the recognized character. For example, if the minimum-error number for a segmented character is 10, that character is recognized as A.
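An illustrative recognition step consistent with this description is sketched below; it reuses the BpNet type from the earlier sketch, encodes pixels as 0.9/0.1 as described, and picks the output node with the smallest error from the 0.9 target.

```cpp
// Feed the 72 pixel inputs of a normalized 6*12 character into the trained
// 72-50-34 network and return the index (0-33) of the best-matching class.
#include <cmath>
#include <cstdint>
#include <vector>

int recognizeCharacter(const std::vector<uint8_t>& charImg /* 6*12 = 72 pixels */,
                       BpNet& net /* trained 72-50-34 network from the sketch above */) {
    std::vector<double> in(72);
    for (int i = 0; i < 72; ++i)
        in[i] = (charImg[i] == 255) ? 0.9 : 0.1;     // pixel encoding from the text

    const std::vector<double>& out = net.forward(in);
    int best = 0;
    double bestErr = 1e9;
    for (int k = 0; k < 34; ++k) {
        double err = std::fabs(0.9 - out[k]);        // distance from the "active" target
        if (err < bestErr) { bestErr = err; best = k; }
    }
    return best;   // 0-9 = digits, 10-33 = letters (O and I merged with 0 and 1)
}
```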
Feeding all the characters segmented from the license plate into the network in turn yields the license plate number string. The effect is as follows:
Insert image description here

Chapter 5 Conclusion
This paper, "License Plate Positioning and Recognition System Based on Color and BP Neural Network", has been basically completed, and the experiments have met expectations.
In terms of license plate positioning, because a color-based positioning method is used, the image of the plate to be located must be clear and the plate background must be blue or not far from the standard blue. Under these conditions the BP network can distinguish blue from non-blue very effectively and thus locate the plate. But there are also cases where the plate cannot be located: when the car body is also blue, the BP network is unable to separate the license plate from the image.
In terms of character segmentation, the method based on vertical projection eigenvalues can segment characters effectively. But segmentation errors also occur. For example, when the license plate positioning is not accurate enough or the located plate contains noise, it may be impossible to find a clean blank gap (narrow region) between characters or digits; in such cases segmentation often goes wrong (one character may be divided into several pieces, and noise may be taken for characters).
In terms of character recognition, the experiments in this paper found that when both license plate positioning and character segmentation give good results, the characters can be recognized effectively in most cases. However, when character segmentation is incorrect (or inaccurate), when there are many noise points in the character to be recognized, or when the characters are severely tilted, misrecognition or even rejection often occurs; for example, B may be mistaken for 8 or for R.




5. Resource download

The source code and complete paper of this project are as follows. Friends in need can click to download. If the link does not work, you can click on the card below to scan the code and download it yourself.

Serial number | Complete set of graduation project resources (click to download)
Source code of this project | Design and implementation of license plate positioning and recognition system based on VC++++BP neural network + license plate recognition (source code + documentation)_VC++_BP_license plate positioning and recognition system.zip

