Design of Traffic Light Control System Based on Machine Vision


Abstract

With the development of industrial automation and the automobile industry, the number of cars has grown rapidly, and traffic congestion in cities occurs more and more frequently. Even though roads are widened year by year, the existing problems remain unsolved and the traffic environment continues to deteriorate. To address this problem, this paper designs and studies a traffic light timing controller and proposes a traffic light control method based on machine vision. The traffic lights can be timed intelligently according to real-time traffic flow information, thereby reducing the unnecessary retention of vehicles at intersections and improving traffic efficiency.
This paper presents a detailed design of both the hardware structure and the software algorithms of the intelligent traffic light. The hardware platform of the machine-vision-based intelligent traffic light is coordinated by an STM32 as its control core. Functionally, the system consists of an image acquisition module, an image processing module and a power supply module. In the image acquisition module, images of the intersection are captured by two cameras. The image processing module, built around OpenCV running on a Raspberry Pi, applies median-filter denoising, background extraction and updating, and a background-difference algorithm to the images, and uses thresholding to segment a binarized foreground image of the moving vehicles. Using an improved weighted-area method, traffic flow information at the intersection, including the presence or absence of vehicles and their number, is counted from the binarized foreground image. The traffic flow information of each approach is then integrated to compute the optimal timing of the traffic lights at the intersection. The power module provides the system with stable and reliable working voltages. Finally, a physical model is built to verify the functions of the intelligent traffic light. The results show that the image acquisition module works normally, the system can correctly extract vehicle information at the intersection through image processing, and under typical road conditions the intelligent traffic light performs reasonable signal timing.
Keywords: machine vision; image processing; OpenCV; Raspberry Pi; traffic light; STM32; serial communication


Introduction

In earlier traffic signal management systems, the signal lights operated in isolation: they had to be controlled at the traffic scene, the traffic situation of the whole urban area could not be fed back in time, and there was no capability for balanced, city-wide signal management, so traffic efficiency kept decreasing. Most of the traffic lights in many cities were built years ago, at an early stage of urban development when both pedestrian and vehicle flows were relatively small.
In recent years, however, central cities have developed rapidly and the flow of people and vehicles has increased sharply. The original traffic light settings can no longer meet demand, so the traffic lights at some intersections are clearly unreasonable. In addition, the number of vehicles keeps rising sharply every year, which affects road management and in particular the setting of traffic lights. Many traffic authorities have realized that, in view of this increasingly prominent problem, the traffic departments must carry out overall research on the traffic lights of the whole district, investigate thoroughly, analyze comprehensively, and regularly check the lights at each intersection. Timing based on the analysis of road traffic flow can ensure smooth traffic in the main direction. With social and economic development, a series of problems such as traffic jams, frequent traffic accidents, traffic-related pollution and traffic disorder have appeared, which seriously affect economic development and people's lives. At the same time, the construction of traffic facilities generally lags behind, so solving these problems is both necessary and urgent. This is where an intelligent traffic signal management system plays an important role.
Current intelligent traffic light control systems can change the passing time according to predetermined time periods, or change the signal switching time by measuring vehicle dwell time with piezoelectric sensors. There is also a method of controlling the switching of traffic lights by laying inductive loops in the ground to detect traffic flow. These methods either waste green time or are hindered by high construction costs. Using video detection technology to measure the traffic flow of each lane at an intersection, summarizing the vehicle information of each lane, and adjusting the switching cycle of the traffic lights in time through an intelligent control system minimizes vehicle waiting and can effectively improve the traffic efficiency of urban roads to a certain extent. In addition, the system can be deployed directly on the existing traffic infrastructure, and it is easy to install and cheap to develop.
Using a Raspberry Pi as the main controller makes it possible to process and analyze images effectively.
Using an STM32 as the control chip for the simulated traffic lights makes it possible to simulate the state of the traffic lights at a real intersection more intuitively.
Using the open-source and free OpenCV platform together with the Python programming language reduces the amount of code and makes it easy to read and write.
Combining the above software and hardware platforms completes the design of the intelligent system, and through individual tests of each module and an overall system test, the system shows a certain degree of stability and accuracy.
This article covers the subject background, demand analysis, a brief introduction of each framework, an explanation of each module, the functional design and the algorithm design. The intelligent traffic light control system based on machine vision can adjust the traffic light time autonomously and alleviate the problems at urban intersections.

1 Subject background

This chapter explains the research background of the project, introduces the methods of traffic light control in the traditional mode and in a machine-vision-based control system and the differences between them, analyzes the future application scenarios and development needs of a traffic control system based on machine vision, and describes the tasks to be completed in the project and the structure of this thesis.

1.1 Overview

Transportation is a highly comprehensive industry involving many aspects, including roads, vehicles, people, energy and the environment. It has played a vital role in urban economic development; it may be said that convenient and harmonious traffic is a necessary condition for the modernization of a city. Studies have shown that the automobile industry can promote the development of upstream industries at a ratio of about 1:2, and in the more developed cities around the world the automobile industry drives almost all related midstream and downstream industries [2]. Economic development in turn promotes the development of the automobile industry. With the rapid development of science, technology, industry and the economy, our country has just experienced more than a decade of its fastest growth in motor vehicles, and the number of motor vehicles, especially private cars, has increased significantly [3]. Most cities in our country have entered the automobile society, which has brought more and more traffic problems and urban congestion, directly leading to poor vehicle operating conditions, low traffic timeliness, energy waste and excessive exhaust emissions, and causing huge economic losses. The main reasons for this series of traffic problems include the following three points:
(1) Poor intersection conditions, such as narrow roads and uneven road surfaces, which directly restrict the traffic capacity;
(2) Excessive traffic flow: in some prosperous commercial areas the traffic volume is huge and an ordinary intersection cannot handle it;
(3) Unreasonable traffic light timing, which leaves vehicles stranded unnecessarily.
In recent years our country has been vigorously improving its roads, and the continuous expansion of road infrastructure has improved the traffic situation to a certain extent. However, as most cities enter the automobile society, the growth in the number of automobiles has far exceeded the speed of road construction, and simply widening roads and building elevated roads on limited urban land can no longer satisfy the demand for road traffic. The traffic situation in domestic cities is still deteriorating day by day. The government has also introduced policies to reduce private car travel, such as odd-even license plate restrictions and limits on license plate issuance. To a certain extent these policies temporarily relieve traffic pressure, but they cannot solve the problem fundamentally and also restrict the development of the automobile industry.
Regarding unreasonable traffic light timing, with the rapid development of high-tech fields such as computer technology, sensor technology, electronic technology and intelligent control, the Intelligent Transport System (ITS), which manages and schedules traffic resources by means of information technology, provides an effective way to solve current road traffic problems [4]. The traffic lights at almost all intersections in China use fixed-time allocation. In the suburbs or at night when traffic is light, the following situations often occur: there are no vehicles waiting in the cross direction, or they have all been released, yet vehicles in the current lane must still stop unnecessarily and wait for the green light; or, when there are no vehicles in the other three approaches of the intersection and a car is about to pass through normally, the light suddenly turns red, and the vehicle has to stop and wait through a whole red-light cycle. Intelligent timing of intersection traffic lights based on real-time traffic flow information combined with intelligent control algorithms can effectively solve this problem.
Traffic flow detection technology is the basis for decision-making in intelligent transportation systems and the foundation of intelligent traffic lights. After decades of research by experts and scholars, the traditional traffic flow detection methods mainly include magnetic induction coils, ultrasonic, microwave, audio and infrared detectors [5]. With the rapid development of image sensors and image processing technology, however, methods of obtaining traffic flow information based on machine vision are gradually replacing the traditional methods.
Vision-based intelligent traffic lights collect and store image information of traffic intersections through image sensors, obtain traffic flow information from these images using digital image processing methods, and finally integrate the traffic conditions of each approach in a microprocessor whose control algorithm intelligently computes the optimal timing of the traffic lights. This approach has the following advantages [6]:
1) The system only involves the image sensor and processing unit, so the installation and maintenance are easy. During the installation and maintenance period, there is no need to overhaul and damage the road, and it will not affect the normal driving of the vehicle;
2) It can provide image monitoring and various traffic data information at the same time;
3) The detection range is large, the reliability is high, and the real-time performance is good;
4) Long service life; green, environmentally friendly and pollution-free.

1.2 Traditional traffic lights at intersections

Traditional intersections often use a fixed-time traffic light mode, that is, the duration of each signal phase is always the same, and the timing can only be modified manually, which is very troublesome whenever a change is needed. Such a scheme clearly has difficulty meeting the traffic demand at intersections under today's increasingly complex traffic conditions.
Traditional intersections are also unable to deal with emergencies. For example, there may be no vehicles, or only a few, in one direction while the traffic pressure in the other direction is very high, yet because the fixed signal times are set in advance the waiting vehicles must still wait, which inevitably causes a sharp increase in traffic pressure. In addition, at current intersections, special vehicles on duty (ambulances, fire trucks, police cars, etc.) can only get through by relying on the drivers ahead giving way, because the signal lights will not change by themselves, and this delays rescue. It is therefore necessary to change the traditional intersection traffic lights.

1.3 Traffic light control intelligent system based on machine vision

1.3.1 Research Status Abroad

Automobileization abroad began decades earlier than in our country and traffic problems appeared earlier, so some foreign researchers began to study image-based intelligent transportation systems as early as the end of the 1970s. With the development of the computer and electronic information industries, the Jet Propulsion Laboratory in Pasadena, California first proposed the new method of using machine vision to detect traffic information and predicted its good development prospects. At the same time, Europe and Japan also started extensive research on video vehicle detection technology [7]. In 1987 the first video detection system prototype, AUTOSCOPE, developed by the American company ISS (Image Sensing Systems), was born; it adopted the video detection algorithm of the Department of Transportation Engineering of the University of Minnesota [8]. AUTOSCOPE is a representative product of machine vision applied to traffic detection. The hardware core of the system consists mainly of a 286 (or 386) microcomputer, multiple circuit modules and an ordinary industrial TV camera [9]. In this video detection system, the microcomputer performs digital image processing and analysis on the road images collected by the camera and can simultaneously detect the traffic information of multiple lanes, including traffic flow, vehicle presence, speed, vehicle type, queuing and so on, and the images can be stored on videotape for offline analysis [10]. In 1989 ISS officially launched its first-generation product, the AUTOSCOPE 2002 video vehicle detection system, for field vehicle detection, and in 1993 the AUTOSCOPE 2003 achieved all-weather traffic information detection. The AUTOSCOPE system has very high accuracy and reliability, as later pointed out in the test results delivered by the US SRF Consulting Group to the Minnesota Department of Transportation [1]. The AUTOSCOPE system not only proved the feasibility and reliability of visual detection of traffic information, but also showed the great advantages of this method over traditional detection methods and the direction of future traffic detection.

1.3.2 Representative Foreign Products

After more than 20 years of development, the AUTOSCOPE system of the American company ISS has grown into a family of successful products and has gradually become mature and reliable. It is compatible with various industrial standards, performs well in use, and has become the most widely installed and used video vehicle detection system in the world. Its products are used in intelligent transportation systems in many countries in Europe, America and Asia and have won wide praise from professionals. In addition, common video vehicle detection systems on the current market include Vantage from the American company Iteris, IDS from the British company PEEK, and the cetrae TMS2000 from the Singapore Electronic Technology Company; these systems are also relatively mature and perform well [12].

1.3.3 Current mainstream methods

At this stage most systems use PCs as the processing core, mainly because PCs currently offer the highest computing power and the most mature technology. Many universities and research institutions in China are also conducting in-depth research on traffic video detection. Most of them study digital image processing algorithms on PC platforms and how best to extract moving objects, and hence traffic flow information, from traffic videos; there is also research on traffic flow video detection systems based on TI's C6000 series DSPs. The domestic automobile and computer industries started relatively late. At present, domestic companies such as Hikvision and Zhejiang University Supcon have launched traffic detection systems based on video surveillance [13], but the functions and performance of these systems are generally inferior to foreign products. Most traffic intersections in China are now equipped with video surveillance systems, which can transmit the traffic images at the intersection to the server of the traffic control center and automatically detect license plates and traffic violations in the video; however, tasks such as handling traffic accidents and intelligent traffic dispatching still require the cooperation of staff. There is still a gap between domestic and foreign work in the field of traffic video detection, but as a popular research direction with growing investment it is developing steadily and rapidly and tending towards maturity.

1.4 Task Analysis

This topic addresses the current situation of this problem. This article mainly introduces a low-cost, easy-to-install and concise design and implementation of a vision-based intelligent traffic light. The intelligent traffic light takes hardware control and image processing as its core and can operate without a PC and without laying communication lines. The system can independently detect traffic flow information at intersections in real time and intelligently set the traffic light timing according to how vehicles are stranded. The vision-based intelligent traffic light system designed in this paper has a series of advantages such as small size, low cost, low power consumption, and easy installation and maintenance.
Specifically complete the following tasks:

Hardware parts:

  1. Vehicles on the two intersecting roads, the north-south lane and the east-west lane, run alternately. The red light time is initially set to 23 seconds and the green light time to 20 seconds; these times can be set and modified.
  2. When the green light turns to red, the yellow light must be on for 3 seconds before the running lane changes;
  3. The traffic light times can be adjusted according to the information transmitted by the Raspberry Pi, mainly in the following situations:
    (1) There are more vehicles in one direction than in the other: in the next traffic light cycle, the red light time in that direction is reduced and the green light time increased.
    (2) There are cars in one direction but none in the other: the light for the direction with cars turns green directly so that the vehicles can pass, and the light for the other direction turns red directly.
  4. In addition to the red, yellow and green lights for the east-west and north-south lanes, digital tube displays are used to show the countdown.
  5. When there are special vehicles (controlled by switches K1 and K2 in the experiment), the traffic light control system immediately allows that direction to pass, and the direction without special vehicles is prohibited from passing.
  6. In the event of an emergency (controlled by switch K3 in the experiment), the traffic police can manually set all approaches of the intersection to a no-passing state (that is, all directions show a red light).

Image Processing:

  1. It can statically identify vehicles (using model cars instead of real vehicles), count the number of vehicles in two directions and send different codes to STM32 according to different situations.
  2. It can count the number of vehicles in the actual road video to realize dynamic recognition.

Communication:

  1. Realize serial communication between the STM32 and the Raspberry Pi.
  2. The Raspberry Pi can send corresponding information to the STM32, and the STM32 can send feedback to confirm receipt.

1.5 Paper structure

The structure of this paper is detailed as follows:
Abstract: A summary of the full text of this paper.
Introduction: describes the overall idea of the subject and gives an overview of the whole design.
Chapter 1, Subject background: explains the research background, the significance of the project, the research status at home and abroad, and the structure of this article.
Chapter 2, Development environment and related technologies: introduces the visual processing and the hardware design respectively.
Chapter 3, Demand analysis: discusses the subject mainly from the aspects of feasibility and functionality.
Chapter 4, Outline design: transforms the requirement analysis into a system prototype.
Chapter 5, Detailed design: describes the specific implementation of the entire project comprehensively and in detail.
Chapter 6, System test: tests the deployed system to check whether it fulfills the tasks this subject needs to achieve.
Chapter 7, Summary and prospects: the conclusions and gains of this subject, and the prospects of this project in the future.
References.

2 Introduction of development environment and related technologies

This chapter mainly introduces the development technology and tools used in the traffic light control intelligent system based on machine vision, from visual detection and hardware control respectively.

2.1 Visual inspection

2.1.1 Raspberry Pi

The Raspberry Pi is developed by the Raspberry Pi Foundation, a charitable organization registered in the UK, with Eben Upton as the project leader. In March 2012, Eben Upton of Cambridge University officially launched the world's smallest desktop computer, also known as a card computer, which is only the size of a credit card but has all the basic functions of a computer; this is the Raspberry Pi computer board, translated into Chinese as "raspberry pie".
Since its inception it has been sought after by many computer enthusiasts and makers, and at one time it was hard to get hold of one. Despite its small appearance, its "heart" is very strong: it supports video, audio and other functions, truly "small but complete". Since its advent, the Raspberry Pi has evolved through the A, A+, B, B+, 2B, 3B, 3B+ and 4B models. On June 25, 2019, the Raspberry Pi Foundation announced the release of the Raspberry Pi 4B. The functional layout of the development board is shown in the figure.


The Raspberry Pi 4B development board adopts the popular Type-C power supply interface and is easy to use. It provides a wireless network card for connecting to Wi-Fi and can be accessed through the remote desktop service Xrdp, so that development can also be done from a Windows environment, saving the cost of an external display. It provides four USB interfaces in total, two high-speed USB 3.0 ports and two USB 2.0 ports, and the USB interface is backward compatible, which is very suitable for the USB 2.0 cameras used in this project.


The figure above shows the GPIO pinout of the Raspberry Pi 4B: 40 pins in total, arranged in a 2×20 array, including two 5V and two 3.3V constant-voltage power pins and eight GND pins. In principle every GPIO pin of the Raspberry Pi can be used for input and output, and the remaining 28 pins are normally used as general-purpose signal pins.
A pin has several modes: normal input (INPUT), normal output (OUTPUT) and PWM output (PWM_OUTPUT), and a CLOCK output mode is also supported. When used as an output, setting it to the high level (HIGH) provides a constant 3.3V, and setting it to the low level (LOW) gives a voltage close to 0V. Because the output power and drive capability of the IO ports are limited, they usually cannot power components directly, which can easily burn out the pins; for example, driving a motor requires an external driver board and power supply.
When used as a PWM output, the frequency and pulse width of the PWM signal can be configured in software, which makes it very simple and effective to control a servo. Note, however, that except for the GPIO 18 pin, the Raspberry Pi's PWM output is generated in software and its drive capability is weak.
When used as an input, the pin can monitor high and low level changes, respond to signals sent by sensors, and call a callback function to perform the corresponding operation.
After the pins are initialized in different modes in software, they are controlled by different code, and the program must keep running while a PWM signal is being output.
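The pin-mode names above follow the wiringPi convention; as a hedged illustration only (not code from the original project), the same three modes can be exercised from Python with the commonly used RPi.GPIO library, assuming BCM numbering and arbitrarily chosen example pins:

```python
import time
import RPi.GPIO as GPIO  # standard GPIO library shipped with Raspberry Pi OS

GPIO.setmode(GPIO.BCM)                       # BCM (GPIO) numbering, not board pin numbers
LED_PIN, BUTTON_PIN, PWM_PIN = 17, 27, 18    # example pins; GPIO 18 supports hardware PWM

# Normal output: drive an LED high for one second, then low
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.output(LED_PIN, GPIO.HIGH)   # ~3.3 V on the pin
time.sleep(1)
GPIO.output(LED_PIN, GPIO.LOW)    # ~0 V

# Normal input: read a button wired to ground, with the internal pull-up enabled
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
print("button pressed" if GPIO.input(BUTTON_PIN) == GPIO.LOW else "button released")

# PWM output: 50 Hz signal with a 7.5 % duty cycle (a typical hobby-servo midpoint)
GPIO.setup(PWM_PIN, GPIO.OUT)
pwm = GPIO.PWM(PWM_PIN, 50)       # frequency in Hz
pwm.start(7.5)                    # duty cycle in percent; the program must keep running
time.sleep(2)
pwm.stop()

GPIO.cleanup()
```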

2.1.2 OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision library originally sponsored by Intel. It consists of a series of C functions and a small number of C++ classes, provides frame-grabbing and many standard image processing functions for various kinds of image and video sources (such as bitmap images, video files and live cameras), and implements many general algorithms of image processing and computer vision. Its important features include:
(1) A cross-platform set of mid- and high-level APIs consisting of more than 300 C functions. It does not depend on other external libraries, although some external libraries can also be used.
(2) Free for both non-commercial and commercial applications.
(3) A transparent interface to the Integrated Performance Primitives (IPP). This means that if IPP libraries optimized for a specific processor are present, OpenCV will automatically load them at runtime. The figure below shows the composition of the OpenCV modules.


The advantages of choosing opencv for this course design are:

  1. Open source, with a Python interface.
  2. Rich functions and powerful image and matrix computing capabilities: OpenCV provides basic structures such as arrays, sequences, matrices and trees, includes many advanced mathematical functions such as differential equation solving, Fourier analysis, integral operations and special functions, and offers various image processing operations and advanced vision functions such as object tracking, camera calibration and 3D reconstruction (a short example follows this list).
  3. Platform independence. Programs developed with OpenCV can be ported directly between Windows, Unix, Linux, macOS, Solaris, HP-UX and other platforms without modifying the code.
  4. Convenient and flexible user interface. As an open computer vision library, OpenCV is not as convenient as an interpreted environment such as MATLAB, but SoftIntegration has combined Ch with OpenCV to launch Ch OpenCV, which removes this bottleneck in use.
  5. Embeddability: unlike a C/C++ compiler, Ch can be embedded in C/C++ applications and hardware scripts, relieving the user of the heavy burden of developing and maintaining large amounts of machine code, with unified structure and function definitions and code optimized for the Intel processor instruction set. As a basic open-source project for image processing, computer vision and pattern recognition, OpenCV is therefore an ideal tool for secondary development in many fields.
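As a small illustration (not code from this project) of the image and matrix capabilities mentioned in point 2, OpenCV images in Python are ordinary NumPy arrays, so image operations and matrix mathematics mix freely; the file name below is hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread("intersection.jpg")        # hypothetical input image
if img is None:
    raise SystemExit("image not found")

print(img.shape, img.dtype)                 # e.g. (1080, 1920, 3) uint8
small = cv2.resize(img, (640, 480))         # compress to a fixed working size
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

# the image is a plain NumPy matrix, so matrix maths applies directly
print("mean brightness:", float(np.mean(gray)))
```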

2.1.3 USB camera

In computer vision the camera acts as the eyes of the machine and directly acquires images. It usually consists of a lens and an image acquisition device, which together convert optical signals into digital signals the machine can recognize, compress them and upload them to the host. The appropriate focal length, resolution, frame rate and so on must be selected for different application scenarios.
This subject uses the AX-2930-176V1.0 USB high-definition wide-dynamic 1080P camera module. The camera output interface is USB 2.0 and the lens focal length is 3.6 mm, which suits shooting distances of 0 to 3 m. The physical appearance of the camera module is shown in the figure below.


Table 2.1 AX-2930-176V1.0 camera parameters

Category Parameter
Resolution 640×480 / 1280×720 / 1280×1024 / 1920×1080
Pixels 2 megapixels
Frame rate 30 fps
SNR 39 dB
Output format MJPG/UVC/YUY2 (YUYV)
Interface type USB 2.0, driver-free
Power 2 W
Operating voltage 5 V
Supported operating systems Windows/Android/Linux

The camera parameters found from the documentation are shown in Table 2.1. Several important parameters are analyzed below in relation to the shooting environment of this subject.

  1. Frame rate: since the system only needs to perform a detection roughly every 3 seconds, a frame rate of 30 fps is sufficient.
  2. Interface type: the Raspberry Pi provides 4 USB interfaces, ensuring that the cameras can be connected at the same time.
  3. Operating voltage: the system uses a 5 V Type-C power supply, so the camera's 5 V operating voltage meets the system requirements.
  4. Supported operating systems: the camera works under Windows/Android/Linux, which is convenient for debugging on a computer in the early stage of development.
  5. Resolution: the camera supports multiple resolutions, so the resolution can easily be adjusted to meet the requirements of the system (a short configuration sketch follows this list).
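As a hedged illustration of how such parameters might be requested in software (not the project's own code), OpenCV's VideoCapture properties can ask a UVC camera for a particular resolution and frame rate:

```python
import cv2

cap = cv2.VideoCapture(0)                      # first USB camera enumerated by the system
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)        # request one of the supported resolutions
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)                  # the module's nominal frame rate

ok, frame = cap.read()
if ok:
    print("captured frame:", frame.shape)      # e.g. (720, 1280, 3) if the request was honoured
cap.release()
```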

2.2 Hardware Control Technology

2.2.1 STM32

The STM32 is a family of microcontrollers designed by STMicroelectronics. It features low power consumption, low cost and high performance and is suitable for embedded applications. It adopts an ARM Cortex-M core and is divided into several product series according to the core architecture: the current mainstream products include the STM32F0, STM32F1 and STM32F3, and the ultra-low-power products include the STM32L0, STM32L1 and STM32L4. Because the core used in STM32 microcontrollers has an advanced architecture, it performs strongly in terms of both performance and power control, offers a high level of integration, and is convenient to develop with, so products based on it can be developed and brought to market very quickly. This type of microcontroller is very common on the current market, comes in many variants from basic to advanced, and is widely used [1]. In this course design we combined the actual situation with the available materials and finally chose the STM32F103 chip. A schematic diagram of the STM32F103 chip architecture is shown in the figure below.


As can be seen from the block diagram of the STM32F103 system, the APB2 and APB1 buses extended from the AHB bus carry the various special peripherals of the STM32. The peripherals we often mention, such as GPIO, serial ports, I2C and SPI, are mounted on these two buses, and they are the focus of learning the STM32. In this course design we need to use these peripherals to simulate the actual changes of the traffic lights at an intersection.


2.2.2 74HC595 digital tube module

The 74HC595 digital tube module is built around the SM74HC595. The SM74HC595 is a silicon-gate CMOS device compatible with low-voltage TTL circuits; it is an 8-bit serial shift register with a storage register and three-state outputs, with separate clocks for the shift and storage registers, and it complies with JEDEC standards. Its main function is that of an 8-bit serial-in, parallel-out shift register. Its package diagram is shown in the figure below.


The description corresponding to each pin symbol of SM74HC595 is shown in Table 2.2:
Table 2.2 SM74HC595 Pin Function Table

Symbol | Pin | Description
Q0–Q7 | Pin 15, pins 1–7 | 8-bit parallel data outputs
GND | Pin 8 | Ground
Q7' | Pin 9 | Serial data output for cascading, connected to the DS pin of the next 595
MR | Pin 10 | Active low; clears the data in the shift register. Generally unused, so it is tied high
SH_CP (SCK) | Pin 11 | Shift register clock. On a rising edge the contents of the shift register shift by one position and a new bit is accepted from DS
ST_CP (RCK) | Pin 12 | Storage register clock. On a rising edge, data is copied from the shift register into the storage register
OE | Pin 13 | Active-low output enable, so it is connected to GND
DS | Pin 14 | Serial data input
VCC | Pin 16 | Power supply

The 74HC595 digital tube module is an 8-bit serial-input, parallel-output shift register with three-state parallel outputs. On each rising edge of SCK (SH_CP), one data bit is shifted from the DS input into the internal 8-bit shift register, with the oldest bit shifted out through Q7'; on a rising edge of RCK (ST_CP), the contents of the 8-bit shift register are latched into the 8-bit parallel output buffer. When the output-enable pin OE is held low, the parallel outputs present the value stored in the output buffer; when OE is high, the outputs are turned off and remain in a high-impedance state. In short: serial in, parallel out.
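The shift-then-latch sequence just described can be illustrated with a short bit-banging sketch. This is only an illustration under assumed wiring: in this design the 74HC595 is actually driven by the STM32, and the three GPIO numbers below are hypothetical.

```python
import RPi.GPIO as GPIO

DS, SH_CP, ST_CP = 23, 24, 25   # hypothetical data, shift-clock and latch-clock pins

GPIO.setmode(GPIO.BCM)
GPIO.setup([DS, SH_CP, ST_CP], GPIO.OUT, initial=GPIO.LOW)

def shift_out(byte):
    """Clock one byte into the 74HC595, MSB first, then latch it to the outputs."""
    for i in range(7, -1, -1):
        GPIO.output(DS, (byte >> i) & 1)   # present one data bit on DS
        GPIO.output(SH_CP, GPIO.HIGH)      # rising edge shifts the bit into the register
        GPIO.output(SH_CP, GPIO.LOW)
    GPIO.output(ST_CP, GPIO.HIGH)          # rising edge copies the shift register to Q0..Q7
    GPIO.output(ST_CP, GPIO.LOW)

shift_out(0b00111111)   # e.g. the segment pattern for "0" on a common-cathode display
GPIO.cleanup()
```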
In the actual course design, four groups of two-digit digital tubes are needed to simulate the intersection. Using ordinary digital tubes would occupy a large number of the hardware's IO resources and make the wiring very complicated, which is not conducive to completing this course design. Although using the 74HC595 digital tube module increases the budget, it guarantees our team's follow-up work. The picture below shows the 74HC595 digital tube module we use.


2.2.3 Signal lights

In the actual crossroad traffic process, there are not only digital tube countdowns, but also traffic lights. In order to better simulate the situation of the intersection, we purchased the LED traffic light module to simulate the traffic light situation of the actual road. The module is powered by 5V voltage, and the common cathode red, yellow and green lights are controlled separately. Its actual figure is shown in the figure below


3 Demand analysis

This chapter mainly analyzes the needs of the topic, and discusses the actual market demand of the designed intelligent traffic light system based on machine vision from multiple dimensions such as economy, technology, functionality and non-functionality.

3.1 Feasibility analysis

3.1.1 Technical Feasibility Analysis

The intelligent traffic light system based on machine vision designed in this system is mainly divided into two parts: the hardware control structure and the vision algorithm part.
For the visual algorithm part, the Raspberry Pi 4B development board and the camera module are used and the algorithms are implemented with OpenCV. The project team members have practical experience with these modules and algorithms, and abundant usage notes and documentation are available, which is enough to support this course design, so the algorithm part is technically feasible.
For hardware control, we use the STM32 as the control chip and use the 74HC595 digital tube module and the LED traffic light module to simulate the actual traffic signals. This requires input and output control of the IO ports, a digital tube countdown driven by a timer (TIM3), serial communication between the Raspberry Pi and the STM32, and button interrupts for emergencies. These basic microcontroller techniques were covered in previous courses, and there are many materials and documents for reference, so the hardware control structure is also technically feasible.

3.1.2 Economic Feasibility Analysis

This project aims to provide a machine vision-based intelligent traffic light system to solve the problem that the time of traffic lights at urban intersections cannot cope with the actual situation. The main cost of this project is on the hardware chip. The specific costs are shown in Table 3.1 below.
Table 3.1 Subject Cost

Item Price (unit: yuan)
Raspberry Pi 4b 818
STM32F103ZET6 398
camera*2 174
74HC595*4 4.6
LED red and green module*4 14
Car model*several 20
total cost 1425.6

The main cost of this project comes from the control chips, the Raspberry Pi and the STM32. Since the project requires real-time recognition and detection and high accuracy of the processing results, the performance requirements for the chips are relatively high, but in later stages other chips that meet the requirements could replace the Raspberry Pi and the STM32, which would reduce the cost. At real intersections, traffic lights, cameras and other equipment already exist, and only a chip capable of running the vision algorithms is needed to achieve intelligent control of the traffic lights. Compared with adjusting the lights through traffic police or a dispatch center, this saves enormous human resource costs.
To sum up, the project is economically feasible.

3.1.3 Social Feasibility Analysis

With the continuous improvement of living standards, many people have started to buy cars, so there are more and more cars in today's society, yet they move very slowly on the current roads because the number of cars keeps increasing and the roads become more and more congested. Whenever we go out we encounter traffic lights; they are signals that direct traffic and are usually set up at intersections so that congested traffic can flow in an orderly way. The road with the most traffic lights in China has 30 sets of lights per kilometer, which shows how large the number of signalized intersections in our country is.
There are a large number of signalized intersections, and the existing fixed-time traffic lights are inadequate. Under such social conditions there is therefore a huge demand for a vision-based intelligent traffic light system. In today's fast-paced life, time is productivity, and the vision-based intelligent traffic system presented in this course design can greatly improve the efficiency of passing through intersections, so the project is socially feasible.

3.2 Functional requirements

3.2.1 Overview

This topic aims to design a vision-based intelligent traffic light system that identifies the vehicles waiting at the intersection, analyzes the demand of each direction for passing through the intersection, and adjusts the traffic light times accordingly. At the same time, the state of the traffic lights can also be controlled manually to handle emergencies.

  1. traffic light simulation
  2. Image Acquisition
  3. Image Processing
  4. Analyze crossing demand

3.2.2 Traffic light control

The system will simulate the way of the actual traffic lights at the crossing, with digital tube countdown, indicator lights with red, yellow and green lights and so on. The change time control of traffic lights is determined by the number of vehicles identified at each intersection, and the passing time of vehicles in the direction with more vehicles will increase accordingly. Conversely, if there are fewer road vehicles in this direction, the travel time will decrease accordingly. Finally, consider fast-tracking methods for special situations such as special vehicles.
Therefore, the traffic light control designed in this project is highly similar to the traffic logic structure of the actual road, which greatly restores the actual traffic intersection.

3.2.3 Image Acquisition

This part is a key part of the whole system. The quality of the collected picture data frame directly affects the next part of the algorithm, which is the analysis and processing of the picture. Therefore, obtaining high-quality photos is necessary for the entire system to accurately determine the passing needs of each intersection.
We all know that light has a huge impact on the camera reading photos, so it is necessary to maintain the consistency of the light source during the image recognition process. This can greatly increase the accuracy of recognition.

3.2.4 Image processing algorithm

To facilitate image processing and observation, the images of waiting vehicles collected by the camera at each approach are first compressed to a specified size. A series of image processing steps then distinguishes the color of the cars from the color of the road. Since there is a certain distance between vehicles, the processed image preserves the contour of each vehicle, so we only need to count the contours to determine how many cars are waiting to pass at the intersection, and from this we can judge how urgent the demand for passage in that direction is.
The Raspberry Pi can complete the image processing quickly, so it can meet the requirements of real-time monitoring; detection can also be performed once per predetermined time interval.
In summary, the designed algorithm meets the requirements of the subject.

3.2.5 Analyzing the needs of crossings

From the above we already have the traffic flow data of each approach, and we analyze the difference in traffic flow between the different directions, for example between the north-south and the east-west direction. The difference between the compared values reflects which direction carries the heavier traffic flow and therefore needs more passing time to relieve it. Matching the difference against preset thresholds yields the final passing time for the intersection, which greatly improves its traffic efficiency.
To sum up, the basic requirements of the intelligent traffic light system are met through the demand of such an intersection.

3.3 Non-functional requirements

3.3.1 Environmental requirements

The project implements a visual intelligent traffic light system on real hardware, so an environment must be set up on the hardware to complete this project, which combines software and hardware.
We need to configure a python-opencv environment on the Raspberry Pi, and also set up a remote desktop service such as VNC so that the Raspberry Pi can be controlled visually.

3.3.2 System Improvement Requirements

The design of the system cannot be completed in one step, and it is necessary to continuously make its own modifications and update the system according to the different actual application occasions. Therefore, we need to constantly improve and update our system.

4 Outline Design

Based on the requirement analysis, this chapter transforms the requirements into a system prototype. It briefly introduces the overall design, including traffic light control, image acquisition, image processing and demand analysis, without going into the specific implementation.

4.1 Overall Design

4.1.1 Overview

This topic aims to design a vision-based intelligent traffic light system composed mainly of traffic light control, image acquisition, image processing and demand analysis:
Traffic light control mainly simulates the traffic light states of an actual intersection.
Image acquisition is mainly responsible for collecting data on the vehicles waiting on the road as the input of the algorithm.
Image processing obtains the vehicle information at the intersection after processing the input images.
Demand analysis calculates the passing time of each direction based on the analysis of the vehicles waiting to pass at each approach.

4.1.2 Traffic light control

Figure 4-1 Hardware control flow chart

As shown in the flow chart in Figure 4-1, the traffic light control part mainly completes the following functions:

  1. Vehicles on the two intersecting roads, the north-south direction (arterial road) and the east-west direction (branch road), run alternately. The initial passing time of the main and branch roads is set to 30 seconds, and the times are then revised according to the traffic demand of each approach obtained from the recognition analysis.
  2. When the green light turns to red, the yellow light must be on for 5 seconds before the running lane changes;
  3. When the yellow light is on, it must flash once per second.
  4. In addition to the red, yellow and green lights for the east-west and north-south lanes, digital tube displays are used to show the countdown.
  5. When there are special vehicles (controlled by switches K1 and K2 in the experiment), the traffic light control system immediately lets their lane pass, and the lanes without special vehicles are prohibited from passing.
  6. In the case of an emergency (controlled by switch K3 in the experiment), the traffic police can set a state in which all vehicles are prohibited from traveling and pedestrians are allowed to cross the intersection.

4.1.3 Image Acquisition

The key part of this project, the quality of image acquisition, is directly related to the accuracy of the next graphics processing process. The image acquisition process is as follows:
Figure 4-2 Image acquisition flow chart


The figure above is a schematic diagram of the image acquisition process and data flow. First the camera is opened; if it cannot be opened, an error is reported and "failed to open the camera" is printed. The camera then captures the current frame, which is saved locally and passed to the image processing part. If frame acquisition fails, a corresponding message is printed and the camera is closed.

4.1.4 Image processing

In the machine-vision-based intelligent traffic light system studied in this project, the vision part mainly performs recognition and target extraction of the vehicle models. The image processing flow is shown in Figure 4-3:

Figure 4-3 Image processing flow chart


This topic obtains the contours of the vehicles mainly by binarizing the image, and some interference is eliminated by further judging the contour information. First, a color image carries too much information and the computation time for vehicle detection would be too long, while a grayscale image is sufficient for this experiment, so the color image is converted into a grayscale image.
The grayscale image still contains a lot of other feature information, so for better recognition it is converted into a binary image with an appropriate threshold interval.
After the binarization is completed, some interference information needs to be processed to exclude information such as noise on the image.
Finally, count the number of contours of the vehicle.

4.1.5 Demand Analysis

Demand analysis refers to estimating how urgently each direction of the intersection needs to pass, that is, counting the traffic flow waiting to pass in each direction. The direction with a larger traffic flow has a greater demand for passage, and vice versa. The demand analysis considers the following cases.

  1. When there are no cars in the east and west directions, there are a large number of vehicles in the south and north directions that need to pass. At this time, the demand from the south and the north reaches the maximum.
  2. When there are no cars in the south and north directions, a large number of vehicles need to pass in the east and west directions. At this time, the demand of east and west reaches the maximum value.
  3. When there are no vehicles in the four directions of east, west, south and north, it indicates that there is no need to pass through the intersection in any direction at this time.
  4. When the traffic flow in the east and west is greater than the traffic flow in the south and north, it means that the traffic in the east and west directions is greater than that in the south and north directions at this time, and the size of the demand depends on the size of the difference in vehicle volume between the two.
  5. When the traffic flow in the south and north is greater than the traffic flow in the east and west, it means that the traffic in the south and north directions is greater than that in the east and west directions at this time, and the size of the demand depends on the difference in the amount of vehicles between the two.

5 Detailed design

Detailed design is an essential step in system development, which is the refinement of outline design. This chapter will further explain the implementation of each module and take into account the details and various issues in the implementation process.

5.1 System hardware control design

5.1.1 Traffic lights

The whole system simulates real traffic lights at an intersection. Under normal circumstances the red light time in each direction is 23 seconds, the green light time is 20 seconds, and the yellow light time is 3 seconds. The specific logic is shown in Table 5.1 below:
Table 5.1 Traffic light states under normal conditions

Signal state | East-west direction | North-south direction
State 1 | red | green
State 2 | red | yellow
State 3 | green | red
State 4 | yellow | red

At the same time, the timing of the traffic lights can be adjusted. Based on the information transmitted by the Raspberry Pi, the time adjustment is applied in the next traffic light cycle, mainly in the situations listed in Table 5.2:
Table 5.2 Traffic light adjustments

Situation description | Operation performed
Fewer east-west vehicles, more north-south vehicles | East-west green time decreases by 5 s and red time increases by 5 s; north-south green time increases by 5 s and red time decreases by 5 s
More east-west vehicles, fewer north-south vehicles | East-west green time increases by 5 s and red time decreases by 5 s; north-south green time decreases by 5 s and red time increases by 5 s
No cars in the east-west direction, cars waiting at a red light in the north-south direction | The east-west direction turns red directly, and the north-south direction turns green directly
Cars waiting at a red light in the east-west direction, no cars in the north-south direction | The east-west direction turns green directly, and the north-south direction turns red directly

5.1.2 Interrupts for emergencies

Based on the actual situation, the system provides handling methods for special situations, controlled by push buttons, mainly covering two cases: when special vehicles (110 police cars, 120 ambulances, 119 fire trucks, etc.) pass through the intersection, and when an emergency occurs at the intersection. Details are shown in Table 5.3:
Table 5.3 Emergency handling modes

Special situation | Handling | After handling
A special vehicle passes in the east-west direction while the east-west light is red | The east-west direction turns green directly and the north-south direction turns red; the digital tube displays "99" and the countdown stops | The east-west direction resumes the green-light countdown
A special vehicle passes in the north-south direction while the north-south light is red | The north-south direction turns green directly and the east-west direction turns red; the digital tube displays "99" and the countdown stops | The north-south direction resumes the green-light countdown
An emergency occurs at the intersection | Both the east-west and north-south directions turn red; the digital tube displays "99" and the countdown stops | The east-west direction resumes the green-light countdown

5.1.3 Serial communication

Control is performed by sending a line of characters; the position of the character "n" within the string determines which operation the signal lights carry out. The specific logic is shown in Table 5.4 (a sketch of the Raspberry Pi side of this protocol follows the table):
Table 5.4 Serial communication protocol

String content | Intersection situation represented | Operation performed by the STM32
"#n1111123" | Few east-west vehicles, many north-south vehicles | East-west green time decreases by 5 s and red time increases by 5 s; north-south green time increases by 5 s and red time decreases by 5 s
"#1n111123" | Many east-west vehicles, few north-south vehicles | East-west green time increases by 5 s and red time decreases by 5 s; north-south green time decreases by 5 s and red time increases by 5 s
"#11n11123" | No cars in the east-west direction, cars waiting at a red light in the north-south direction | The east-west direction turns red directly and the north-south direction turns green directly
"#111n1123" | Cars waiting at a red light in the east-west direction, no cars in the north-south direction | The east-west direction turns green directly and the north-south direction turns red directly
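A minimal sketch of the Raspberry Pi side of this protocol is given below. It assumes the pyserial package, the Pi's UART device /dev/ttyAMA0 and a baud rate of 115200, none of which are stated in the original text; the STM32 is expected to send something back as the acknowledgement mentioned in the task analysis.

```python
import serial

# port name and baud rate are assumptions; the baud rate must match the STM32 USART setup
ser = serial.Serial("/dev/ttyAMA0", 115200, timeout=1)

def send_intersection_state(case):
    """Send one of the protocol strings from Table 5.4 and wait for the STM32's reply."""
    frames = {
        "ns_heavier": "#n1111123",   # few east-west vehicles, many north-south vehicles
        "ew_heavier": "#1n111123",   # many east-west vehicles, few north-south vehicles
        "ew_empty":   "#11n11123",   # no east-west vehicles, north-south waiting at red
        "ns_empty":   "#111n1123",   # east-west waiting at red, no north-south vehicles
    }
    ser.write((frames[case] + "\n").encode("ascii"))
    ack = ser.readline().decode("ascii", errors="ignore").strip()
    return ack                       # empty string if the STM32 did not answer in time

print(send_intersection_state("ns_heavier"))
```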

5.2 Visual algorithm design

5.2.1 Image acquisition

Because of budget constraints, this course design uses two cameras to obtain information about the two approaches of the intersection. The two cameras are attached to the Raspberry Pi and driven with OpenCV, which can open the cameras, obtain the video stream and grab the current frame.

Figure 5-1 Model cars simulating the road


As shown in Figure 5-1, car models are used to simulate the vehicles on the road; OpenCV calls the camera to grab pictures and returns the result. Since the camera needs to be fixed at a relatively high position, we used a cardboard box as the camera bracket, so shadows can appear on the captured frames, which may introduce errors during the experiment.
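A minimal acquisition sketch along these lines is given below; it is an illustration assuming two USB cameras at device indices 0 and 2 (indices and file names vary by system) rather than the project's exact code.

```python
import cv2

def grab_frame(index, save_path):
    """Open one USB camera, grab the current frame and save it locally."""
    cap = cv2.VideoCapture(index)
    if not cap.isOpened():
        print("failed to open the camera", index)
        return None
    ok, frame = cap.read()
    cap.release()
    if not ok:
        print("frame acquisition failed on camera", index)
        return None
    cv2.imwrite(save_path, frame)
    return frame

# two approaches of the model intersection; the camera indices are assumptions
east_west   = grab_frame(0, "east_west.jpg")
north_south = grab_frame(2, "north_south.jpg")
```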

5.2.2 Image processing

After acquiring an image we need to apply a series of processing steps so that the contours of the vehicles on the road can be recognized.
First the image captured by the camera is read in, as shown in Figure 5-2:

Figure 5-2 Image captured by the camera


To obtain better output the image needs further conversion: it is first converted into a grayscale image and then into a binarized image. The effect is shown in Figure 5-3:

Figure 5-3 Binarized image


As the image shows, there are still many black dots caused by external interference, and there are also many disconnected parts inside the vehicle contours. Therefore, to improve the final detection result, a series of further processing steps is needed. The first step is Gaussian filtering, which smooths the edge information of the picture; here a 3×3 Gaussian kernel with a standard deviation of 5 is used. The second step is erosion, which removes the small, disconnected black dots and thin lines. The third step is dilation, which joins the disconnected parts in the center back together. Finally a closing operation is applied. The result is shown in Figure 5-4:

Figure 5-4 Processed image


The processed image shows that the small interference items have been eliminated and the contour information of the vehicles is much more distinct, which guarantees the subsequent extraction of contour information. This is also the most critical step of this course design.
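A sketch of this processing chain in OpenCV is given below. The 3×3 Gaussian kernel with standard deviation 5 comes from the text; the threshold value and the morphological kernel size are assumptions, and the blur is applied before thresholding here, which is the usual order.

```python
import cv2
import numpy as np

img = cv2.imread("east_west.jpg")                     # frame saved by the acquisition step
if img is None:
    raise SystemExit("input frame not found")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # color image -> grayscale
blur = cv2.GaussianBlur(gray, (3, 3), 5)              # 3x3 Gaussian kernel, sigma = 5
# threshold value 127 is an assumption; in practice it is tuned to the lighting
_, binary = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

kernel = np.ones((3, 3), np.uint8)                    # assumed structuring element
eroded  = cv2.erode(binary, kernel, iterations=1)     # remove small dots and thin lines
dilated = cv2.dilate(eroded, kernel, iterations=1)    # reconnect broken contour parts
closed  = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)  # final closing operation

cv2.imwrite("processed.jpg", closed)
```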

5.2.3 Data statistics and analysis

After obtaining the processed image, the contour information of the vehicles needs to be extracted. The OpenCV library provides the function findContours for this purpose; its input is a binary image and its output is the set of contour points of every region.
Once the contour information is returned, it has to be filtered, because not every contour is a vehicle contour, so contours that do not belong to vehicles must be removed. We decide whether a contour belongs to a car by checking the size of the returned contour. On an actual road, cars are the largest objects, so we only need to set a threshold and check whether the contour area is larger than it to judge whether the contour is a car. The contours that meet this requirement are then counted, and the count is the number of vehicles on the road. The effect is shown in Figure 5-5.

Figure 5-5 Detection with a video as input


To make it easier to confirm whether the counted contours are correct, the center point of each detection in the above test is printed out, which helps us analyze the reliability of the system, as shown in Figure 5-6:

Figure 5-6 Detected vehicle (x, y) coordinates

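Continuing the sketch above, the contour counting and center-point printing could be written roughly as follows; the minimum-area threshold is an assumed value that would be tuned for the model cars, and the two-value return of findContours assumes OpenCV 4.

```python
import cv2

closed = cv2.imread("processed.jpg", cv2.IMREAD_GRAYSCALE)   # binary image from the previous step
if closed is None:
    raise SystemExit("processed image not found")

# findContours returns one point set per connected region (OpenCV 4 signature)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

MIN_CAR_AREA = 500          # assumed area threshold separating cars from noise
cars = [c for c in contours if cv2.contourArea(c) > MIN_CAR_AREA]
print("vehicles detected:", len(cars))

for c in cars:              # print each vehicle's center point, as in Figure 5-6
    m = cv2.moments(c)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    print("center:", (cx, cy))
```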

After obtaining the vehicle information of both approaches, we compare the difference in traffic flow between the two, output the result, and transmit this information to the STM32, as shown in Figure 5-7:

Figure 5-7 Determining the timing according to the traffic flow

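A sketch of this comparison step is given below; the protocol strings come from Table 5.4, while the vehicle-difference threshold and the helper function itself are illustrative assumptions rather than the project's own code.

```python
def choose_frame(ew_count, ns_count, ew_red, ns_red, diff_threshold=2):
    """Map the two approach counts to one of the protocol strings in Table 5.4.

    ew_red / ns_red indicate which direction is currently held at a red light;
    diff_threshold is an assumed value for "noticeably more traffic".
    Returns None when no adjustment is needed.
    """
    if ew_count == 0 and ns_count > 0 and ns_red:
        return "#11n11123"            # empty east-west, north-south waiting at red
    if ns_count == 0 and ew_count > 0 and ew_red:
        return "#111n1123"            # empty north-south, east-west waiting at red
    if ns_count - ew_count >= diff_threshold:
        return "#n1111123"            # north-south noticeably heavier
    if ew_count - ns_count >= diff_threshold:
        return "#1n111123"            # east-west noticeably heavier
    return None

frame = choose_frame(ew_count=1, ns_count=4, ew_red=False, ns_red=True)
print(frame or "keep current timing")   # the string would then be sent to the STM32 over serial
```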

6 System test

This chapter tests the system to check whether it meets the design requirements of the subject and whether it runs as expected.

6.1 Hardware module test

This mainly tests the functions of the digital tube, traffic light and camera modules.

6.1.1 Normal conditions

Table 6.1 Traffic light test under normal conditions

Test number  Expected east-west signal  Expected north-south signal  Actual east-west signal  Actual north-south signal  Digital tube displays correctly
1 green green
2
3 green green
4 green green
5
6 green green
7
8 green green
9 green green
10 green green

The tests show that under normal conditions the lights in the two directions coordinate well, the lighting order changes in time, and the discrepancy between the lights and the digital tubes is less than one second. Pictures taken during the actual test are shown in Figures 6-1-1 and 6-1-2.

Figure 6-1-1 East-west direction


Figure 6-1-2 North-south direction


6.1.2 Emergency conditions

Three situations are tested here: special vehicles in the east-west direction, special vehicles in the north-south direction, and an emergency at the intersection, as shown in Tables 6.2, 6.3 and 6.4.
Table 6.2 Special vehicles in the east-west direction

Test number  East-west traffic light color  North-south traffic light color  Digital tube display
1 green red “99”
2 green red “99”
3 green red “99”
4 green red “99”
5 green red “99”

Table 6.3 There are special vehicles in the north-south direction

test number east-west direction traffic light color North-south direction traffic light color digital tube display numbers
1 red green “99”
2 red green “99”
3 red green “99”
4 red green “99”
5 red green “99”

Table 6.4 Emergency situations at intersections

test number east-west direction traffic light color North-south direction traffic light color digital tube display numbers
1 red red “99”
2 red red “99”
3 red red “99”
4 red red “99”
5 red red “99”

The tests show that when the buttons are pressed to trigger the different special situations, the system changes the traffic light states correctly and completes the task to the greatest extent. The test process is shown in Figures 6-2-1, 6-2-2 and 6-2-3.

Figure 6-2-1 There are special vehicles in the east-west direction


Figure 6-2-2 There are special vehicles in the north-south direction


Figure 6-2-3 Emergency situation at intersection


6.1.3 Changing traffic flow at intersections

By changing the information of the simulated vehicle, test whether the system identification is correct, the test cases and results are shown in Table 6.5.
Table 6.5 Vehicle Change Test Form

test number The number of vehicles placed in the east-west direction The number of vehicles placed in the north-south direction The number of vehicles identified in the east-west direction The number of vehicles identified in the north-south direction Whether the recognition result is the same as the actual result
1 0 0 0 0 yes
2 0 1 0 1 yes
3 1 1 1 1 yes
4 1 2 1 2 yes
5 2 2 2 2 yes
6 2 3 2 3 yes
7 3 3 3 3 yes
8 3 4 3 4 yes
9 4 4 4 4 yes
10 4 0 4 0 yes

The tests show that when different numbers of vehicles are present in the two directions, the system can identify them and adjust the traffic light time according to the number of vehicles. The process is shown in Figures 6-3-1 to 6-3-4.

Figure 6-3-1 Shooting results and processing of east-west cameras


Figure 6-3-2 Shooting results and processing of north-south cameras


Figure 6-3-3 Output result


Figure 6-3-4 Signal light adjustment results


6.2 Dynamic identification module test

Through the test, the system can basically count the vehicles passing the set line in the video and display it in the middle of the screen, basically achieving the set goal. The process is shown in Figure 6-4.

Figure 6-4 Dynamic Vehicle Identification


Code Link
Link: Design of Traffic Light Control System Based on Machine Vision


Origin blog.csdn.net/weixin_46627856/article/details/129593099