AR-HUD

Table of contents

1. C-HUD, W-HUD, AR-HUD

1. Consider the environmental information and relative position outside the vehicle in real time

2. Consider the position of the driver's eyes in real time

2. ADAS

1. Definition

2. ADAS key nodes

3. Main functions

A. Information assistance

The first category: traffic monitoring

The second category: hazard warning

The third category: driving convenience

Overall list of IA information assistance functions

B. Control assistance

The first category: emergency response

The second category: driving convenience

The third category: lane keeping

The fourth category: intelligent lighting

Overall list of CA control assistance functions

4. Detailed explanation of the eight subsystems of ADAS

1. Lane departure warning system + lane keeping

2. Pre-collision system + emergency braking system

3. Blind spot monitoring system

4. Automatic parking system

5. Adaptive cruise system

6. Driver fatigue monitoring system

7. Adaptive lighting control

8. Night vision system

3. Four mainstream AR-HUD imaging solutions

1.TFT

2.DLP

3.LCOS

4. LBS-MEMS laser projection

5. Comparison of several imaging methods



1. C-HUD, W-HUD, AR-HUD

Why are C-HUD and W-HUD called traditional HUDs, while AR-HUD is deliberately set apart? What is the fundamental difference?

Another question: apart from the light emission, reflection and imaging that every HUD shares, does AR-HUD have anything unique in its technical principle?

AR-HUD has one major peculiarity that greatly increases its technical difficulty and also ties it closely to ADAS functions: an AR-HUD must take into account, in real time, the environmental information and relative positions outside the vehicle, and at the same time the position of the driver's eyes.

1. Consider the environmental information and relative position outside the vehicle in real time

The arrows drawn along the lane and the warning markers above pedestrians' heads must be fused with the environment outside the car, otherwise the display becomes a mess. C-HUD and W-HUD simply do not show this kind of information.

2. Consider the position of the driver's eyes in real time

As we all know, light travels in straight lines. For the various symbols projected by the AR-HUD, if you want to give the driver the illusion that a marker sits "right under that car" or "just above that pedestrian", you must adjust the projected position of the marker in real time according to the position of the driver's eyes.
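
A minimal geometric sketch of this idea (not from the original article): given the eye position and the target position reported by ADAS perception, the overlay symbol must be drawn where the straight line from the eye to the target crosses the HUD's virtual image plane. The coordinate convention, the helper name hud_marker_position and the fixed virtual-image distance are illustrative assumptions.

```python
import numpy as np

def hud_marker_position(eye, target, virtual_image_z):
    """Return the (x, y) point on the HUD virtual image plane (z = virtual_image_z)
    where a symbol must be drawn so that, seen from `eye`, it appears to sit on
    `target`. Assumed vehicle frame: x right, y up, z forward."""
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(target, dtype=float)
    # Point along the eye->target ray: p = eye + t * (target - eye), with p.z = virtual_image_z
    t = (virtual_image_z - eye[2]) / (target[2] - eye[2])
    p = eye + t * (target - eye)
    return p[0], p[1]

# A pedestrian 30 m ahead and slightly to the right, virtual image roughly 7.5 m out:
print(hud_marker_position(eye=(0.0, 1.2, 0.0), target=(1.5, 1.7, 30.0), virtual_image_z=7.5))
```

Whenever the driver's head moves, the eye position changes and the marker has to be recomputed, which is exactly why AR-HUD needs real-time eye tracking.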

Sensing the environment outside the vehicle and sensing the driver's state: isn't that exactly the basic job of ADAS?

Therefore, the relationship between AR-HUD and ADAS is very close, and the interaction is reflected in two points:

  1. The working principle of AR-HUD depends on the perception functions of the ADAS system.
  2. ADAS functions add to the driver's cognitive load, and using AR-HUD can improve interaction efficiency and make driving safer and more comfortable.

Generally speaking, never mind AR-HUD: even the penetration rate of traditional HUDs is not high, far from mainstream.

With the advancement of technology, AR-HUD can carry more and more interactive information and become a main channel of human-machine interaction. However, because of its sensitivity to ambient light intensity, it cannot yet replace the instrument cluster.

2. ADAS

1. Definition

ADAS (Advanced Driving Assistance System) is an advanced driver assistance system that uses the various sensors installed on the car to collect environmental data inside and outside the vehicle in real time, and to identify, detect and track static and dynamic objects, so as to provide information assistance, early warning, auxiliary control and active safety functions that make driving easier.

The sensors used in ADAS mainly include cameras, radar, laser (lidar) and ultrasonic sensors, which can detect light, heat, pressure or other variables used to monitor the state of the car. They are usually located in the front and rear bumpers, the side mirrors, inside the steering column or on the windshield.

Early ADAS technology was mainly based on passive warnings: when the vehicle detected a potential danger, it issued an alert to draw the driver's attention to abnormal vehicles or road conditions. Proactive intervention is also common in the latest ADAS technologies.

Advanced driver assistance systems can be divided into 2 major categories and 36 subcategories of functions.

Put in plain language: the car uses its various on-board sensors to collect data, combines it with map data for system calculations, and thereby pre-judges possible dangers for the driver to ensure driving safety.

Here we need to clarify a concept: ADAS is not the much-discussed autonomous driving. The research focuses of the two are quite different: ADAS is assisted driving, whose core is environment perception, while autonomous driving centers on artificial intelligence, and the systems differ greatly.

However, ADAS can also be regarded as a prerequisite for self-driving cars. Let's look at a staging chart of vehicle automation released by the U.S. National Highway Traffic Safety Administration (NHTSA):

ADAS reaches level 3, while autonomous driving reaches level 4. Moving from level 3 to level 4 requires more cars equipped with autonomous driving technology, cooperating road infrastructure (roadside cameras, clear lane lines), and vehicle-to-vehicle and vehicle-to-phone interconnection; it is a very large project.

2. ADAS key nodes

We can picture the entire process ADAS needs to perform its task: perception, judgment and execution; from this it is not hard to derive a rough picture of the industrial chain.

1. Perception

Cars do not have sense organs the way humans do; they perceive environmental data through various sensors. The more sensors there are, the more information the car can collect.

At present, the main sensors used in ADAS are cameras, radar, laser (lidar) and ultrasonic sensors; they detect light, heat, pressure or other variables used to monitor the state of the car, and are typically mounted in the bumpers, side mirrors, steering column or on the windshield.

Most ADAS setups combine camera + radar so that radar ranging and camera image recognition complement each other. For infrared night vision, active and passive systems are the two mainstream technical routes: the active type uses a CCD to image the sensitive spectrum reflected by the object, while the passive type uses an infrared focal-plane detector to image the object's own infrared radiation. Each has its advantages, and the two will coexist for a long time.
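
As a simplified illustration of that complementarity (a sketch, not the algorithm used by any particular ADAS), the snippet below matches radar tracks to camera detections by bearing, so the fused object carries the radar's range together with the camera's class label. The dictionary layout and the 3-degree association gate are assumptions made for the example.

```python
def fuse(radar_tracks, camera_detections, max_bearing_diff_deg=3.0):
    """Naively associate radar tracks with camera detections by bearing.

    radar_tracks: list of dicts {"range_m": float, "bearing_deg": float}
    camera_detections: list of dicts {"bearing_deg": float, "label": str}
    Returns fused objects combining radar range with the camera label."""
    fused = []
    for det in camera_detections:
        best = min(radar_tracks,
                   key=lambda t: abs(t["bearing_deg"] - det["bearing_deg"]),
                   default=None)
        if best and abs(best["bearing_deg"] - det["bearing_deg"]) <= max_bearing_diff_deg:
            fused.append({"label": det["label"],
                          "range_m": best["range_m"],
                          "bearing_deg": det["bearing_deg"]})
    return fused

print(fuse([{"range_m": 42.0, "bearing_deg": 1.8}],
           [{"bearing_deg": 2.1, "label": "car"}]))
```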

2. Judgment

As mentioned earlier, sensors let the car perceive like a human; the core that lets the car make judgments is the algorithm. Based on the input data from the sensors, the trip computer can issue control instructions on the driver's behalf.

Algorithms are the decisive factor in the reliability and accuracy of an ADAS system, covering camera/radar ranging, pedestrian recognition, traffic sign recognition and so on. Factory-installed applications have high reliability requirements and need a large number of scene tests and calibrations, among which radar calibration has the highest threshold.

3. Execution

The ADAS system acquires data through sensors; after the main chip completes its judgment, entry-level applications warn the driver through sound, images and vibration, and once combined with electronic control functions the system gradually evolves toward automatic control of the vehicle.

3. Main functions

A. Information assistance

The first category is IA (Information Assist) functions, 21 in total; these functions do not involve control of driving behavior.

These features can be further broken down into three categories:

The first category: traffic monitoring

In the vehicle's various usage scenarios, the system collects and displays road information such as traffic signs, speed limits and surround-view images for the driver's convenience.

The second category: hazard warning

By obtaining status information inside the vehicle (the driver) and outside the vehicle (vehicles, pedestrians, obstacles, speed limits, road conditions), the vehicle judges or predicts various potential dangers and gives early warnings or reminders. The low-speed driving assistance here is a composite function that includes several other basic functions; its definition is as follows:

MALSO low-speed driving assistance: while the vehicle is moving, it detects surrounding obstacles and provides images or warning information to the driver when the vehicle gets close to an obstacle.

The third category: driving convenience

These functions improve the convenience and safety of driving in specific scenarios such as darkness or low light, reducing how often the driver has to look down, and reversing.

Overall list of IA information assistance functions

(Information Assist)

  • DFM (Driver Fatigue Monitoring) driver fatigue monitoring
  • DAM (Driver Attention Monitoring) driver attention monitoring
  • TSR (Traffic Signs Recognition) traffic sign recognition
  • ISLI (Intelligent Speed Limit Information) intelligent speed limit prompt
  • CSW (Curve Speed Warning) curve speed warning
  • HUD (Head Up Display) head up display
  • AVM (Around View Monitoring) holographic image monitoring
  • NV (Night Vision) night vision
  • FDM (Front Distance Monitoring) forward distance monitoring
  • FCW (Front Collision Warning) forward collision warning
  • RCW (Rear Collision Warning) rear collision warning
  • LDW (Lane Departure Warning) lane departure warning
  • LCW (Lane Changing Warning) lane change collision warning
  • BSD (Blind Spot Detection) blind spot detection
  • SBSD (Side Blind Spot Detection) side blind spot detection
  • STBSD (Steering Blind Spot Detection) steering blind spot detection
  • RTCA (Rear Traffic Cross Alert) rear traffic crossing alert
  • FTCA (Front Traffic Cross Alert) ahead traffic crossing reminder
  • DOW (Door Open Warning) door open warning
  • RCA (Reversing Condition Assist) reversing assistance
  • MALSO (Maneuvering Aid For Low Speed Operation) low-speed driving assistance

B. Control assistance

The second category is CA (Control Assist) functions, 15 in total. The difference from the first category is that these functions will, under certain circumstances, intervene in the control of the vehicle's driving behavior.

These functions can be further subdivided into four categories according to differences in control behavior:

The first category: emergency response

In an emergency, the vehicle performs actions such as deceleration, braking and steering to avoid a collision or other dangerous outcome.

The first two, AEB and EBA, look very similar on paper; the specific difference can be seen in their detailed functional definitions below.

AEB, Automatic Emergency Braking: monitors the driving environment ahead in real time and automatically activates the vehicle's braking system to slow the vehicle when a collision risk arises, so as to avoid the collision or reduce its consequences.

EBA, Emergency Brake Assist: monitors the driving environment ahead in real time, takes measures in advance to shorten the braking response time when a collision risk arises, and helps increase the braking pressure when the driver brakes, so as to avoid the collision or reduce its consequences.

Seen this way it is clear: the first automatically triggers braking in dangerous situations, while the second shortens the driver's braking response time and enhances the braking effect. The first emphasizes automation, the second emphasizes assistance. Note that both functions exist to avoid or mitigate collisions; do not try to trigger them deliberately.
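
The distinction can be sketched in a few lines of illustrative code (the time-to-collision criterion and the thresholds are assumptions for the example, not values from any standard): AEB requests braking on its own when the time-to-collision becomes too small, while EBA only amplifies braking that the driver has already started.

```python
def ttc(distance_m, closing_speed_mps):
    """Time-to-collision; infinite if the gap is not closing."""
    return float("inf") if closing_speed_mps <= 0 else distance_m / closing_speed_mps

def aeb_command(distance_m, closing_speed_mps, ttc_threshold_s=1.4):
    """AEB: autonomously request full braking when TTC drops below a threshold."""
    return 1.0 if ttc(distance_m, closing_speed_mps) < ttc_threshold_s else 0.0

def eba_command(driver_brake, distance_m, closing_speed_mps,
                ttc_threshold_s=2.0, boost=1.5):
    """EBA: only amplify the driver's own braking input when a collision risk exists."""
    if driver_brake > 0 and ttc(distance_m, closing_speed_mps) < ttc_threshold_s:
        return min(1.0, driver_brake * boost)
    return driver_brake

print(aeb_command(20.0, 15.0))        # TTC ~1.33 s -> AEB requests full braking
print(eba_command(0.4, 30.0, 20.0))   # driver brakes at 0.4 -> boosted to 0.6
```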

Likewise, the difference between AES and ESA can be understood by analogy with the difference between the two functions above.

The second category: driving convenience

Through these assistance functions, convenience is achieved in scenarios such as speed limiting, parking, cruising, congested traffic and accidental pedal misapplication, and driving safety is also partly improved.

The third category: lane keeping

Strictly speaking these also belong to driving convenience, but since they all fall under lane control, listing them separately makes their function easier to understand.

The fourth category: intelligent lighting

As vehicle intelligence deepens, more and more headlamp-related technologies keep emerging. Adaptive high beams and adaptive front lights belong to the lighting-related driver assistance functions.

In addition, more advanced ISD (intelligent interactive light) and DLP (digital signal light) technologies have also appeared; it seems the lamps can now take over some of the information display and interaction functions that screens used to handle.

(Source: Luokung Technology on Zhihu: https://www.zhihu.com/question/419176306/answer/2401611862)

Overall list of CA control assistance functions

(Control Assist)

  • AEB (Automatic Emergency Braking) automatic emergency braking
  • EBA (Emergency Braking Assist) emergency brake assist
  • AES (Automatic Emergency Steering) automatic emergency steering
  • ESA (Emergency Steering Assist) emergency steering assist
  • ISLC (Intelligent Speed Limit Control) intelligent speed limit control
  • LKA (Lane Keeping Assist) lane keeping assist
  • LCC (Lane Centering Control) lane centering control
  • LDP (Lane Departure Prevention) lane departure suppression
  • IPA (Intelligent Parking Assist) intelligent parking assist
  • ACC (Adaptive Cruise Control) adaptive cruise control
  • FSRA (Full Speed Range Adaptive Cruise Control) full-speed-range adaptive cruise control
  • TJA (Traffic Jam Assist) traffic jam assistance
  • AMAP (Anti-maloperation for Accelerator Pedal) accelerator pedal anti-misstepping
  • ADB (Adaptive Driving Beam) adaptive high beam
  • AFL (Adaptive Front Light) adaptive front light

4. Detailed explanation of the eight subsystems of ADAS

Looking more closely, ADAS contains many different technologies, such as ACC adaptive cruise, AEB automatic emergency braking, TSR/TSI traffic sign recognition, BSD/BLIS blind spot detection, LCA/LCMA lane change assist and LDW lane departure warning. Let's go through the main subsystems in detail below.

1. Lane departure warning system + lane keeping

The system consists of a camera, sensors and a controller. The principle: a camera mounted on the side of the vehicle body or near the rearview mirror samples the markings of the current driving lane, and image processing yields the car's position within the lane. When the car drifts out of the lane, the controller sends out a warning signal. From sensing to warning takes only about 0.5 seconds, alerting the driver in real time to avoid accidents.
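
A minimal sketch of the warning decision (the geometry, thresholds and function name are assumptions; a real system works on the lane model fitted from camera images): warn when a front wheel gets too close to a detected lane marking while no turn signal is active.

```python
def lane_departure_warning(left_line_m, right_line_m, vehicle_half_width_m=0.9,
                           warn_margin_m=0.2, turn_signal_on=False):
    """left_line_m / right_line_m: lateral distance from the vehicle centerline to
    each lane marking (positive values), e.g. taken from the camera's lane fit.
    Warn when a wheel gets within warn_margin_m of a marking and no turn
    signal is active."""
    if turn_signal_on:
        return False
    edge_to_left = left_line_m - vehicle_half_width_m
    edge_to_right = right_line_m - vehicle_half_width_m
    return min(edge_to_left, edge_to_right) < warn_margin_m

print(lane_departure_warning(1.0, 1.8))   # drifting toward the left marking -> True
```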

2. Pre-collision system + emergency braking system

Radar, cameras and other sensors installed on the car measure the vehicle's own speed as well as the distance to and speed of vehicles ahead. At first a warning tone reminds the driver to watch the gap; if the driver does not respond and the distance keeps shrinking, the system intervenes and applies partial braking to prevent a collision.

3. Blind spot monitoring system

A driver's blind spots are the areas that cannot be seen in the left, right and interior rearview mirrors. Most drivers know them all too well, and they are a common cause of accidents. The blind spot monitoring system uses radar and other sensors to watch the zones beside and behind the vehicle; when an approaching vehicle is detected in a blind spot, it alerts the driver, helping to minimize the chance of an accident.
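
A toy version of the decision logic, using a hypothetical rectangular blind-spot zone in vehicle coordinates (the zone dimensions are made up for illustration):

```python
def blind_spot_alert(rel_x_m, rel_y_m,
                     zone_x=(-5.0, 1.0), zone_y=(1.0, 4.0)):
    """rel_x_m: longitudinal position of the other vehicle relative to ours
    (negative = behind); rel_y_m: lateral offset toward the monitored side.
    Alert when the target sits inside the assumed blind-spot rectangle."""
    return zone_x[0] <= rel_x_m <= zone_x[1] and zone_y[0] <= rel_y_m <= zone_y[1]

print(blind_spot_alert(-2.0, 1.5))   # vehicle diagonally behind in the next lane -> True
```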

4. Automatic parking system

The APA automatic parking system uses infrared or ultrasonic sensors to detect objects around the car. These sensors emit a signal that bounces back when it hits obstacles around the body; the on-board computer uses the time taken to receive the echo to determine each obstacle's position. The computer system then takes over the steering wheel, turning it through the power steering system to steer the car fully into the parking space.
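
The distance estimate behind this kind of echo ranging is simple time-of-flight arithmetic; the sketch below assumes an ultrasonic sensor and a speed of sound of about 343 m/s in air:

```python
SPEED_OF_SOUND_MPS = 343.0   # in air at roughly 20 degrees C

def echo_distance_m(round_trip_time_s):
    """Ultrasonic ranging: the echo travels to the obstacle and back,
    so the one-way distance is half the round-trip path."""
    return SPEED_OF_SOUND_MPS * round_trip_time_s / 2.0

print(echo_distance_m(0.0058))   # ~1 m to the obstacle
```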

5. Adaptive cruise system

The ACC adaptive cruise system mainly uses a distance sensor mounted at the front of the vehicle to continuously scan the road ahead and determine the speed of and distance to the vehicle in front. While driving it keeps detecting that vehicle's speed and adjusts its own speed accordingly to maintain a safe following distance and reduce the chance of a collision.
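
A rough illustration of the control idea, as a simple proportional time-gap law (the gain, the 1.8 s time gap and the set speed are made-up values, not a production controller):

```python
def acc_target_speed(own_speed_mps, lead_speed_mps, gap_m,
                     time_gap_s=1.8, set_speed_mps=33.3, k=0.5):
    """Track the lead vehicle while keeping a constant time gap,
    never exceeding the driver's set speed."""
    desired_gap = time_gap_s * own_speed_mps
    speed_cmd = lead_speed_mps + k * (gap_m - desired_gap)
    return max(0.0, min(set_speed_mps, speed_cmd))

# Gap too small (40 m instead of ~50 m) -> command a speed below the lead vehicle's:
print(acc_target_speed(own_speed_mps=27.8, lead_speed_mps=25.0, gap_m=40.0))
```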

6. Driver fatigue monitoring system

Most current systems use a camera to watch the driver's face and judge concentration and signs of drowsiness. Some systems use the frequency with which the driver's eyes open and close to estimate an alertness level and give appropriate warnings or assist with the maneuver. If the driver's facial expression barely changes, or the eyes even close, the vehicle warns the driver with sound and light to reduce accidents.
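
Eye open/close frequency is often summarized with a PERCLOS-style metric, the fraction of time the eyes are closed over a sliding window. A minimal sketch (the 0.2 alert threshold is an illustrative assumption):

```python
def perclos(eye_closed_flags):
    """Fraction of frames in the window with the eyes judged closed."""
    return sum(eye_closed_flags) / len(eye_closed_flags) if eye_closed_flags else 0.0

def drowsy(eye_closed_flags, threshold=0.2):
    """Flag drowsiness when the closed-eye fraction exceeds an illustrative threshold."""
    return perclos(eye_closed_flags) > threshold

print(drowsy([0, 0, 1, 1, 0, 0, 0, 0, 1, 0]))   # 30% closed -> True
```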

7. Adaptive lighting control

The system automatically adjusts the headlamps' illumination range and angle according to road conditions, environment, vehicle speed and weather, extending the illuminated distance without dazzling other road users, thus providing safer and more comfortable lighting for the driver and oncoming traffic.

8. Night vision system

It helps the driver automatically identify obstacles or large foreign objects at night, in poor visibility or in bad weather, and warns the driver about the road ahead to avoid accidents. The recognition method uses infrared to sense differences in heat, distinguishing people, animals and vehicles from the environment and converting the result into an image, so that objects that were originally hard to see are presented clearly before the driver's eyes, reducing driving risk.

Each ADAS system mainly includes three levels:

The first is the sensing layer. That is, information gathering. Different systems need to use different types of vehicle sensors (including cameras, radars, etc.) to collect the vehicle's operating status and changes in the surrounding environment, and feed them back to the control unit in time.

Second, the control layer. Analyze and process the information collected by the sensor, and then output the control signal to the controlled equipment.

Third, the execution layer. According to the signal output by the trip computer, let the car complete the specified action.
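
In code terms, every such subsystem can be seen as the same three-step loop. The skeleton below only illustrates that structure; the class and method names are invented for the example.

```python
class AdasFunction:
    """Skeleton of the three layers described above (illustrative only)."""

    def sense(self):
        # Sensing layer: gather raw data from cameras, radar, etc.
        raise NotImplementedError

    def decide(self, data):
        # Control layer: turn sensor data into a warning or actuation command.
        raise NotImplementedError

    def act(self, command):
        # Execution layer: warn the driver or actuate brakes/steering/lights.
        raise NotImplementedError

    def step(self):
        # One cycle of the perceive -> judge -> execute loop.
        self.act(self.decide(self.sense()))
```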

3. Four mainstream AR-HUD imaging solutions

1.TFT

TFT here is short for TFT-LCD. This is currently the most common and most mature projection technology in the HUD industry. The principle: light emitted by an LED backlight passes through the liquid crystal cells and projects the information onto the screen.

Thin Film Transistor Liquid Crystal Display

English: thin-film transistor liquid crystal display, usually abbreviated TFT-LCD.

TFT-LCD is one of the most common types of LCD; it uses thin-film transistor technology to improve image quality. Although often referred to simply as an LCD, it is an active-matrix LCD, used in televisions, flat-panel displays and projectors.

Simply put, a TFT-LCD panel can be regarded as a layer of liquid crystal sandwiched between two glass substrates. The upper glass substrate carries a color filter, while the lower glass has the transistors embedded in it. When current passes through a transistor the electric field changes, causing the liquid crystal molecules to deflect and change the polarization of the light; a polarizer then determines whether the pixel appears light or dark. Because the upper glass is bonded to the color filter, each pixel contains red, green and blue sub-pixels, and together these pixels make up the image on the panel.

TFT LCD optical path:
1. LED light, 2. Lens, 3. TFT LCD, 4. Reflector 1, 5. Reflector 2, 6. Windshield, 7. Eye box

Advantages:

The solution is mature and the cost is relatively low.

Disadvantages:

1. The longer the projection distance, the harder it is to handle sunlight backflow (sunlight concentrated back into the optical path).
2. The output is polarized light, which causes problems with polarized sunglasses.
3. The optical efficiency is low, so product brightness is lacking.

2.DLP

DLP is the abbreviation of Digital Light Processing; it uses TI's patented DMD chip (Digital Micromirror Device). The DMD consists of millions of individual highly reflective aluminum micromirrors, each acting as an ultra-small digital light switch whose tilt angle can be controlled. These switches accept data bytes represented by electrical signals and produce optical outputs.

Digital Light Processing (DLP) is a display technology used in projectors and rear-projection TVs. DLP was first developed by Texas Instruments, which remains the main supplier of this technology to this day. The Fraunhofer Institute in Dresden, Germany, also produces special-purpose digital light processors, which it calls spatial light modulators (SLM).

Micronic Laser Systems of Sweden, for example, uses spatial light modulators from Fraunhofer to generate ultraviolet images in its Sigma series of photomask writers.

In a DLP projector, the image is generated by a DMD (Digital Micromirror Device). A DMD is a matrix of micromirrors (precise, miniature mirrors) fabricated on a semiconductor chip, and each micromirror controls one pixel of the projected image. The number of micromirrors matches the resolution of the projected image; 800×600, 1024×768, 1280×720 and 1920×1080 (HDTV) are some common DMD sizes.

These micromirrors change their angle rapidly under digital drive signals. When the corresponding signal is received, a micromirror tilts by 10°, changing the direction in which the incident light is reflected. A micromirror in the projecting state is "on" and tilted +10°; a micromirror in the non-projecting state is "off" and tilted -10°. In the "on" state the reflected light passes through the projection lens and lands on the screen; in the "off" state the light reflected from the micromirror is absorbed by a light absorber.

Essentially, a micromirror's angle has only two states, "on" and "off". By varying how often (and for how long) a micromirror switches between the two states, the light reflected from the DMD can take on any grayscale between black (mirror "off") and white (mirror "on"). DLP projectors produce color images mainly through two methods, used in single-chip and three-chip DLP projectors respectively.
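
Because each mirror is binary, gray levels come from how long the mirror spends in the "on" state within a frame (pulse-width modulation). A small sketch of that arithmetic, with an assumed 8-bit depth and frame time:

```python
def mirror_on_fraction(gray_level, bit_depth=8):
    """Binary PWM: a DMD mirror is either fully 'on' or fully 'off', so a gray
    level corresponds to the fraction of the frame the mirror spends 'on'."""
    return gray_level / (2 ** bit_depth - 1)

def on_time_us(gray_level, frame_time_us=8333, bit_depth=8):
    """On-time in microseconds for an assumed ~120 Hz sub-frame."""
    return frame_time_us * mirror_on_fraction(gray_level, bit_depth)

print(on_time_us(128))   # mid-gray -> roughly half the frame spent 'on'
```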

 Schematic diagram of DLP image source structure

Advantages:

  1. Compared with TFT, DLP achieves high brightness more easily; TI's official figures quote more than 15,000 cd/m². DLP can also maintain consistent image quality across different temperatures.
  2. Thanks to its materials and structure, DLP copes better with the sunlight backflow problem.
  3. DLP does not use polarized light, so the display remains visible even through polarized sunglasses.
  4. It supports AR-HUD designs based on optical waveguides and holography.

Disadvantages:

  1. The cost is relatively high, although TI has developed lower-cost automotive-grade parts.
  2. It is a Texas Instruments (TI) patented product, so there is only one supplier.

   

3.LCOS

LCoS (Liquid Crystal on Silicon), also rendered as "liquid crystal attached to silicon", is a relatively small reflective matrix liquid-crystal display device. The matrix is fabricated on a silicon chip using CMOS technology. It is a newer reflective micro-LCD projection technology.

Advantages:

  1. High resolution: DLP reaches a true 1920×1080, while LCoS can reach a true 4K or even 8K.
  2. Small size: a 0.69-inch LCoS panel can already match a 3.1-inch TFT.
  3. Moderate cost.
  4. Compared with DLP there are more chip suppliers, so delivery and cost reduction are more achievable; more importantly, the chip can be sourced domestically.
  5. Lower power consumption.

Disadvantages:

  1. The light source is polarized, so the polarized-sunglasses problem remains.
  2. With an LED light source the optical efficiency is low, so brightness is insufficient; a laser light source will be needed later, but automotive-grade laser sources currently have few manufacturers and are expensive.
  3. Speckle problem.

4. LBS-MEMS laser projection

LBS is the abbreviation of Laser Beam Scanning, i.e. the "MEMS micro-laser projection" solution. It is a projection display technology that combines an RGB three-color laser module with a micro-electro-mechanical system (MEMS). From the drive point of view, MEMS micro-laser projection is a scanned display: a two-dimensional MEMS micro-scanning mirror and the RGB lasers form the image by laser scanning, and the output resolution depends on the scanning frequency of the MEMS micromirror.
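
The claim that resolution is bounded by the mirror's scanning frequency can be made concrete with a back-of-the-envelope calculation (bidirectional raster scanning assumed; the 27 kHz fast-axis frequency and 60 fps frame rate are illustrative numbers, not any product's specification):

```python
def max_scan_lines(fast_axis_hz, frame_rate_hz=60, bidirectional=True):
    """Raster-scanned LBS: with bidirectional scanning the fast-axis mirror draws
    one line per half period, so the line count per frame is bounded by the
    mirror frequency divided by the frame rate."""
    lines_per_cycle = 2 if bidirectional else 1
    return int(fast_axis_hz * lines_per_cycle / frame_rate_hz)

print(max_scan_lines(27_000))   # ~900 lines at 60 fps, i.e. roughly 720p-class vertical resolution
```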

Structure diagram of MEMS micro-laser projection equipment

Advantages:

  1. The optical engine is greatly simplified, so the volume can be reduced.
  2. The contrast ratio is high and easily reaches 7000:1, far exceeding DLP.
  3. High brightness.
  4. Wide color gamut (>150%), low power consumption (<4-6 W) and low heat generation.
  5. The lasers can be switched off when displaying pure black pixels, which makes high contrast easier and eliminates the screen-door effect.

MEMS projection principle

Disadvantages:

  1. The resolution is not high, around 720p; increasing it is relatively costly, although costs can come down with mass production.
  2. Laser diodes are sensitive to temperature, and difficulties remain in reaching automotive grade. At present only Nichia is relatively mature in laser light sources.

5. Comparison of several imaging methods


Origin: blog.csdn.net/m0_65075758/article/details/127863199