Application of Embedded Vision Technology in Automotive Industry

From 2018, regulations in some countries will require new cars to be equipped with rear-view cameras that help drivers see what is behind the vehicle. Embedded vision technology solution provider Long Ruizhike (www.loongv.com) believes that, given the rapid growth of electronics in the automotive field, the automotive market is the area with the greatest potential for embedded machine vision applications.

One emerging automotive application is the driver monitoring system, which uses vision to track the driver's head and body movements and recognize signs of fatigue. Another is a vision system that improves operating safety by watching for potential driver distractions such as reading text messages or eating and drinking. Lane departure warning systems combine video with lane detection algorithms to assess the car's position within its lane (see the sketch below for the basic idea). Market demand has also driven features such as traffic sign reading, impact mitigation, blind spot detection, automatic parking, and reversing assistance. All of these features help make driving safer.

The development of automotive vision and sensing systems lays the foundation for true autonomous driving. For example, Cadillac will integrate its embedded vision subsystem into the 2018 CT6 sedan for Super Cruise, the industry's first hands-free driving technology. The system makes driving safer by continuously analyzing the driver and the road, drawing on an accurate LIDAR map database for road geometry, and using advanced cameras, sensors, and GPS to reflect dynamic road conditions in real time.

Overall, automakers have reached a consensus that ADAS in modern vehicles will require front-facing cameras for lane detection, pedestrian detection, traffic sign recognition, and emergency braking, plus side and rear cameras to support park assist, blind-spot detection, and cross-traffic alert. One challenge automakers face is the limited I/O available in existing automotive electronics.
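As a concrete illustration of the lane detection idea mentioned above, here is a minimal sketch that uses OpenCV's Canny edge detector and probabilistic Hough transform to pull candidate lane-line segments out of a road-facing frame. It is a hypothetical example, not the pipeline of any production lane departure warning system; the function name, region of interest, and thresholds are assumptions.

```python
# Minimal lane-detection sketch (hypothetical): edge detection plus a
# probabilistic Hough transform over a road-facing camera frame.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Return candidate lane-line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where lane markings appear.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi_edges = cv2.bitwise_and(edges, mask)

    # The Hough transform finds straight segments -- lane-line candidates.
    lines = cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Usage: frame = cv2.imread("road.jpg"); print(detect_lane_lines(frame))
```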





At present, mainstream processors typically provide only two camera interfaces, yet many ADAS systems require up to eight cameras to meet image quality requirements. Design engineers therefore need solutions that give them co-processing resources to stitch together the video streams from multiple cameras, perform image processing functions such as white balance, fisheye correction, and dehazing on the camera input, and send the result to an application processor (AP) as a single data stream.
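To make two of those preprocessing steps concrete, here is a minimal software sketch of gray-world white balance and fisheye correction using OpenCV; an FPGA or co-processor would implement equivalent operations in hardware. The calibration matrices K and D are placeholder values standing in for real camera calibration.

```python
# Software sketch of white balance and fisheye correction (assumed placeholder
# calibration; a real system derives K and D from camera calibration).
import cv2
import numpy as np

def gray_world_white_balance(img_bgr):
    """Gray-world white balance: scale each channel toward the common mean."""
    img = img_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / channel_means
    return np.clip(img, 0, 255).astype(np.uint8)

def undistort_fisheye(img_bgr, K, D):
    """Remove fisheye distortion using OpenCV's fisheye camera model."""
    h, w = img_bgr.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img_bgr, map1, map2, interpolation=cv2.INTER_LINEAR)

# Placeholder intrinsics for a hypothetical 1280x800 fisheye camera.
K = np.array([[500.0, 0.0, 640.0], [0.0, 500.0, 400.0], [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])
```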
For example, many automakers offer a bird's-eye view feature in their ADAS systems, in which the driver sees live video as if looking down from about 20 feet above the vehicle. The system produces this view by stitching data from four or more cameras into a single wide-field image.
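The stitching step can be sketched in software by warping each camera frame onto a shared ground-plane canvas with a perspective transform and compositing the warped views. The corner correspondences and canvas size below are hypothetical placeholders for what would come from each camera's extrinsic calibration; a production system would also blend seams and mask the vehicle body rather than composite naively.

```python
# Sketch of a top-down (bird's-eye) composite from several cameras, assuming
# per-camera ground-plane correspondences obtained from calibration.
import cv2
import numpy as np

CANVAS_SIZE = (800, 800)  # pixels covering roughly a 20 ft x 20 ft ground patch

def warp_to_ground_plane(frame, src_pts, dst_pts):
    """Project one camera view onto the shared ground-plane canvas."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, CANVAS_SIZE)

def compose_birdseye(warped_views):
    """Naive compositing: keep the brightest pixel where views overlap."""
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), np.uint8)
    for view in warped_views:
        canvas = np.maximum(canvas, view)
    return canvas

# Usage with hypothetical calibration data for the front camera:
# front_warp = warp_to_ground_plane(front_frame,
#     src_pts=[(200, 480), (1080, 480), (1280, 720), (0, 720)],  # ground patch in the image
#     dst_pts=[(100, 0), (700, 0), (700, 300), (100, 300)])      # where it lands on the canvas
# birdseye = compose_birdseye([front_warp, rear_warp, left_warp, right_warp])
```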
Previously, design engineers used one processor to drive each display. Now a single FPGA can replace those multiple processors: it aggregates all of the camera data, stitches the images together, performs pre- and post-processing functions, and sends the result to the system processor.
Figure 3: Vehicle 360-degree bird's-eye view camera system
Figure 3 shows a simplified architecture for a 360-degree bird's-eye view car camera system that collects data from four cameras (front, rear, and both sides) positioned around the car. A single FPGA performs the various preprocessing and postprocessing functions and stitches the video data, providing a 360-degree view of the vehicle's surroundings. In this design, one FPGA replaces multiple processors.
