Are 30 enough? A summary of autonomous driving simulation frameworks & simulators

Author | eyesighting Editor | Autobot

Original link: https://zhuanlan.zhihu.com/p/674193516


Preface

For many years, autonomous driving simulation has relied mainly on traditional simulation software. Car companies long ago completed closed-loop toolchains and methods around it, but their scope of application is largely limited to active-safety functions, with few applications for higher-level L3/L4/NOP/NOA functions.

In recent years, new techniques such as world models, reinforcement learning, NeRF, diffusion models, LLMs, and agents have entered autonomous driving simulation. More and more suppliers have begun building emerging autonomous driving simulation frameworks and simulators on these techniques and gradually moving away from traditional simulation software, bringing new opportunities for simulation.

This article summarizes traditional simulation software, new simulation frameworks, simulation platforms, optical simulation tools, and simulation engines; it can serve as reference material for learning, research, and development.

1. Simulation engine

Unity

Unity is a cross-platform game engine developed by Unity Technologies. It was first announced and released as a Mac OS X game engine at the Apple Worldwide Developers Conference in June 2005. The engine has since been expanded to support a variety of desktop, mobile, console and virtual reality platforms. Unity is particularly popular for iOS and Android mobile game development, is considered easy to use for beginner developers, and is popular for indie game development. The engine can be used to create three-dimensional (3D) and two-dimensional (2D) games, as well as interactive simulations and other experiences.

Unity home page: https://unity.com/cn

Unity Doc:https://docs.unity.com/

Unity API:https://docs.unity3d.com/ScriptReference/

Unity Man:https://docs.unity3d.com/Manual/index.html

Unity code: https://github.com/Unity-Technologies

Unity Wiki:https://en.wikipedia.org/wiki/Unity_(game_engine)

UnityTechnologies Wiki: https://en.wikipedia.org/wiki/Unity_Technologies

UnrealEngine

Unreal Engine (UE) is a series of 3D computer graphics game engines developed by Epic Games, which debuted in the 1998 first-person shooter game Unreal. It was originally developed for PC first-person shooters and has since been used in various types of games and adopted by other industries, especially the film and television industries. Written in C++, Unreal Engine is highly portable and supports a wide range of desktop, mobile, console and virtual reality platforms.

The latest generation, Unreal Engine 5, was launched in April 2022. Its source code is available on GitHub, and commercial use is granted on a royalty model, with Epic charging 5% of revenue over $1 million; games released on the Epic Games Store are exempt from this fee. Epic has folded features from acquired companies such as Quixel into the engine, which is seen as having benefited from Fortnite revenue. In 2014, Unreal Engine was named the world's "most successful video game engine" by Guinness World Records.

UnrealEngine home page: https://www.unrealengine.com

UnrealEngine code: https://github.com/folgerwang/UnrealEngine

UnrealEngine code: https://github.com/20tab/UnrealEnginePython

UnrealEngine Wiki:https://en.wikipedia.org/wiki/Unreal_Engine

EpicGames:https://en.wikipedia.org/wiki/Epic_Games

Cognata

Cognata provides an end-to-end simulation platform for ADAS and autonomous vehicle development, spanning 3D scene and traffic generation, sensor modeling, and large-scale closed-loop testing.

Cognata homepage: https://www.cognata.com/

Cognata introduction: https://www.cognata.com/simulation/

Cognata introduction: https://www.cognata.com/autonomous-vehicles/

OptiX

Nvidia OptiX (OptiX Application Acceleration Engine) is a ray tracing API first developed around 2009. Computation is offloaded to the GPU through low-level or high-level APIs built on CUDA, which works only with Nvidia graphics hardware.

Nvidia OptiX is part of Nvidia GameWorks. OptiX is a high-level, or "algorithmic", API: it is designed to encapsulate the entire algorithm of which ray tracing is a part, not just the ray tracing itself. This allows the OptiX engine to execute the larger algorithm with great flexibility and without application-side changes.


OptiX homepage: https://developer.nvidia.com/rtx/ray-tracing/optix

OptiX download: https://developer.nvidia.com/designworks/optix/download

OptiX Wiki:https://en.wikipedia.org/wiki/OptiX

2. Simulation software

VTD

VTD is the world's most widely used open platform for creating, configuring and animating virtual environments and scenarios for the training, testing and validation of ADAS and autonomous vehicles. VTD provides open interfaces for third-party components and a plug-in concept with APIs for third-party modules. It is mainly applied in vehicle control, perception, driver training, training-data generation for AI systems, and vehicle test benches.

VTD has strong capabilities in sensor simulation, complex scene creation, vehicle and pedestrian modeling, and vehicle dynamics.

VTD home page: https://hexagon.com/products/virtual-test-drive

VTD ASAM:https://www.asam.net/members/product-directory/detail/virtual-test-drive-vtd/

VTD Huawei Cloud Octopus: https://support.huaweicloud.com/usermanual-octopus/octopus-03-0011.html

CARLA

CARLA is an open source simulator for autonomous driving research, developed to support the development, training and validation of autonomous driving systems. In addition to open source code and protocols, CARLA provides open digital assets (city layouts, buildings, vehicles) created for this purpose that can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions.


CARLA homepage: https://carla.org/

CARLA code: https://github.com/carla-simulator/carla

CARLA documentation: https://carla.readthedocs.io/en/latest/start_introduction/

CARLA documentation: https://carla.readthedocs.io/en/latest/

CARLA paper: https://arxiv.org/abs/1711.03938
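As a quick orientation to CARLA's Python API, here is a minimal sketch that connects to a running server, spawns a vehicle with an RGB camera, and hands control to the autopilot; the Tesla blueprint and the map's first spawn point are illustrative choices, not requirements.

```python
import carla

# Connect to a CARLA server assumed to be listening on localhost:2000.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle at one of the map's predefined spawn points.
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera and stream frames to disk asynchronously.
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))

# Hand control to the built-in autopilot.
vehicle.set_autopilot(True)
```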

CarSim

CarSim is simulation software designed specifically for vehicle dynamics. A CarSim model runs 3-6 times faster than real time and simulates the vehicle's response to driver, road-surface and aerodynamic inputs. It is mainly used to predict and simulate a vehicle's handling stability, braking, ride comfort, power and economy, and is also widely used in the development of modern automotive control systems. CarSim makes it convenient and flexible to define the test environment and test procedure, and to define in detail the characteristic parameters and property files of each vehicle subsystem.


CarSim home page: https://www.carsim.com/

CarSim homepage: https://www.carsim.com/products/carsim/index.php

CarSim introduction: https://www.carsim.com/products/carsim/

CarSim Introduction: https://www.carsim.com/downloads/pdf/CarSim_Introduction.pdf

CarSim documentation: https://carsim.readthedocs.io/en/latest/

BikeSim

BikeSim provides the most accurate, detailed and efficient way to simulate the performance of two- and three-wheeled vehicles. With more than two decades of field-proven experience, BikeSim has become the universal tool of choice for analyzing motorcycle dynamics, developing active controllers, calculating overall system performance, and designing next-generation active safety systems.


BikeSim home page: https://www.carsim.com/products/bikesim/index.php

TruckSim

TruckSim is vehicle dynamics simulation software that can be coupled with Simulink for vehicle control. Its advantage is that building a dynamics model is simple and vehicle parameters can be configured parametrically, so models can be built more rigorously; its disadvantages are that it is not very flexible and that it lacks a motor-based dynamics model for new energy vehicles.

TruckSim home page: https://www.carsim.com/products/trucksim/index.php

TruckSim introduction: https://www.carsim.com/users/pdf/release_notes/trucksim/TruckSim2024_New_Features.pdf

TruckSim introduction: https://blog.csdn.net/qq_31239495/article/details/86679859

SuspensionSim

SuspensionSim is applied to quasi-static kinematics and compliance (K&C) testing of suspension systems. It differs from the vehicle simulation products BikeSim, CarSim, and TruckSim in several ways: instead of using a predefined parametric program with a specific multibody model, SuspensionSim uses a multibody solver that builds the model from a user data set.

SuspensionSim home page: https://www.carsim.com/products/suspensionsim/index.php

Introduction to SuspensionSim: https://www.carsim.com/downloads/pdf/SuspensionSim_Handout_Letter.pdf

VehicleSim

Mechanical Simulation Corporation produces and distributes software tools for simulating and analyzing the dynamic behavior of motor vehicles in response to steering, braking, throttle, road, and aerodynamic inputs. The VS SDK is a software development kit: it contains the tools, libraries, documentation, and sample projects needed to work on VehicleSim projects with as little configuration as possible.

VehicleSim home page: https://www.carsim.com/products/supporting/vehiclesim/vs_api.php

VehicleSim SDK:https://www.carsim.com/users/vs_sdk/index.php

CarMaker

The simulation solution CarMaker is designed for the development and seamless testing of cars and light vehicles across all development stages (MIL, SIL, HIL, VIL). The open integration and test platform enables virtual test scenarios for application areas such as autonomous driving, ADAS, powertrain, and vehicle dynamics. The high-resolution 3D visualization tool MovieNX provides photorealistic image quality, and a broad set of supported standards and interfaces ensures smooth integration with existing tool environments.


CarMaker homepage: https://ipg-automotive.com/cn/products-solutions/software/carmaker/

CarMaker Tutorials: https://ipg-automotive.com/en/know-how/multimedia/online-tutorials/

CarMaker works with Simulink: https://www.mathworks.com/products/connections/product_detail/carmaker.html

TruckMaker

TruckMaker simulation solutions are tailored specifically to the development and testing requirements of heavy vehicles such as trucks, construction vehicles, buses, medium and heavy trucks, and specialty vehicles. TruckMaker accurately models real-world test scenarios in the virtual world and increases the agility of the development process. In line with automotive systems engineering methods, virtual vehicle testing with TruckMaker enables seamless development, calibration, testing, and validation of the complete system of the entire vehicle in realistic scenarios.

TruckMaker homepage: https://ipg-automotive.com/cn/products-solutions/software/truckmaker/

MotorcycleMaker

Virtual test drives help meet today’s vehicle development challenges. MotorcycleMaker is specifically tailored to the requirements of developing and testing motorized two-wheelers such as motorcycles, e-bikes or scooters. MotorcycleMaker enables accurate modeling of real-world testing scenarios in the virtual world and increases the agility of the development process. Based on automotive systems engineering methods, MotorcycleMaker's Virtual Test Drive enables seamless development, calibration, testing and validation of the entire system of the entire vehicle in real-life scenarios.

MotorcycleMaker homepage: https://ipg-automotive.com/en/products-solutions/software/motorcyclemaker/

AirSim

AirSim (Aerial Informatics and Robotics Simulation) is an open source, cross-platform simulator for drones, ground vehicles such as cars, and various other objects, built on Epic Games' Unreal Engine 4 as a platform for AI research. Developed by Microsoft, it can be used to experiment with deep learning, computer vision and reinforcement learning algorithms for self-driving vehicles, allowing autonomy solutions to be tested without fear of real-world damage.

AirSim provides approximately 12 kilometers of roads and 20 city blocks, together with APIs to retrieve data and control vehicles in a platform-independent manner. The APIs are accessible from a variety of programming languages, including C++, C#, Python, and Java. AirSim supports hardware-in-the-loop with driving wheels and flight controllers (e.g. PX4) for physically and visually realistic simulation. The platform also supports common robotics frameworks such as the Robot Operating System (ROS). AirSim is developed as an Unreal plugin that can be dropped into any Unreal environment, and an experimental Unity plugin has also been released.


AirSim homepage: https://microsoft.github.io/AirSim/

AirSim documentation: https://amov-wiki.readthedocs.io/zh-cn/latest/docs/AirSim%E4%BB%BF%E7%9C%9F.html

AirSim code: https://github.com/microsoft/AirSim

AirSim Wiki:https://en.wikipedia.org/wiki/AirSim
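To illustrate the platform-independent API mentioned above, here is a minimal Python sketch against a running AirSim car environment; the camera id "0" and the throttle value are illustrative.

```python
import airsim

# Connect to a running AirSim instance and take API control of the car.
client = airsim.CarClient()
client.confirmConnection()
client.enableApiControl(True)

# Apply simple throttle/steering commands.
controls = airsim.CarControls()
controls.throttle = 0.5
controls.steering = 0.0
client.setCarControls(controls)

# Fetch one RGB frame from the front camera ("0").
responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)]
)
print("received", len(responses[0].image_data_uint8), "bytes")
```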

PreScan

Simcenter Prescan is a physics-based simulation platform used in automation-related industries for the development of Advanced Driver Assistance Systems (ADAS) and Autonomous Driving Systems (ADS) based on sensor technologies such as radar, lidar, cameras, ultrasonic sensors and GPS.

Simcenter Prescan is also used to test vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication applications. Simcenter Prescan provides access to validated sensor models and material physics responses as part of a range of meaningful fidelity levels. Additionally, accurate vehicle dynamics models are included and traffic can be easily populated. Simcenter Prescan helps integrate the system under test into the simulation loop and can deploy it at scale into a cluster or cloud, providing the necessary coverage for verification.

Simcenter Prescan provides an editor to define scenarios and a runtime environment to execute scenarios.

PreScan homepage: https://plm.sw.siemens.com/en-US/simcenter/autonomous-vehicle-solutions/prescan/

PreScan Mathworks:https://www.mathworks.com/products/connections/product_detail/prescan.html


LGSVL

SVL Simulator (originally LGSVL Simulator) is an open-source autonomous driving simulator developed by the LG Electronics America R&D Lab on Unity, offering end-to-end simulation with out-of-the-box integration for autonomy stacks such as Apollo and Autoware; LG suspended active development of the simulator in 2022. A minimal session with its Python API is sketched below the links.

LGSVL home page: https://www.svlsimulator.com/

LGSVL code: https://github.com/lgsvl/simulator

LGSVL documentation: https://www.svlsimulator.com/docs/

LGSVL paper: https://arxiv.org/abs/2005.03778
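For reference, a minimal session with the SVL Python API might look like the following sketch; the map ("BorregasAve") and vehicle names are illustrative and vary across simulator versions.

```python
import lgsvl

# Connect to a running SVL Simulator instance on the default API port.
sim = lgsvl.Simulator(address="127.0.0.1", port=8181)
if sim.current_scene == "BorregasAve":
    sim.reset()
else:
    sim.load("BorregasAve")

# Place an ego vehicle at the first predefined spawn point.
state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Lincoln2017MKZ", lgsvl.AgentType.EGO, state)

# Advance the simulation for 10 seconds of simulated time.
sim.run(10.0)
```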

Omniverse

NVIDIA Omniverse is a modular development platform for building 3D workflows, tools, applications and services. Based on Pixar's Universal Scene Description (OpenUSD), NVIDIA RTX and NVIDIA AI technology, developers use Omniverse to build real-time 3D simulation solutions for industrial digitization and perception AI applications.


Omniverse home page: https://developer.nvidia.com/omniverse

Omniverse code: https://github.com/NVIDIA-Omniverse

Omniverse Doc: https://docs.omniverse.nvidia.com/

Omniverse Guide: https://docs.omniverse.nvidia.com/dev-guide/latest/index.html

Isaac Sim Python API: https://docs.omniverse.nvidia.com/isaacsim/latest/reference_python_api.html
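As a flavor of scripting Omniverse-based simulation, the sketch below starts Isaac Sim (an Omniverse app) headless and steps an empty world. It must be run with the Python environment bundled with Isaac Sim, and the API names follow recent Isaac Sim releases.

```python
from omni.isaac.kit import SimulationApp

# Launch Isaac Sim without a UI.
simulation_app = SimulationApp({"headless": True})

# Omniverse modules can only be imported after the app exists.
from omni.isaac.core import World

world = World()
world.scene.add_default_ground_plane()
world.reset()

# Step the physics simulation a few hundred frames.
for _ in range(300):
    world.step(render=False)

simulation_app.close()
```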

Panosim

PanoSim is an intelligent driving simulation software platform developed by a Chinese startup in cooperation with universities such as Jilin University and Beihang University. The software targets full-stack simulation of intelligent driving vehicles, with complete scene, sensor, and vehicle models, and can be used for the rapid development and verification of intelligent driving algorithms.


Panosim homepage: https://www.panosim.com/

Introduction to PanoSim autonomous driving simulation test platform: https://www.zhihu.com/column/c_1510268115455426560

Introduction to the development tutorial of autonomous valet parking AVP system based on PanoSim5.0 virtual simulation platform: https://blog.csdn.net/Countery/article/details/120577528

SUMMIT

SUMMIT (Simulator of Urban Driving in Massively Mixed Traffic) is an open source simulator focused on generating high-fidelity interactive data for unregulated, dense urban traffic on complex real-world maps. It is used with map data in the form of OSM files and SUMO networks to generate a heterogeneous population of traffic agents with complex and realistic unregulated behavior. SUMMIT can use map data obtained from online sources, providing an almost unlimited source of complex environments.

SUMMIT also exposes interfaces for interacting with contextual information provided by map data, and provides a powerful set of geometry utilities for use by external programs. Through these, SUMMIT aims to enable applications in a wide range of areas such as perception, vehicle control and planning, and end-to-end learning. SUMMIT is built on the very successful CARLA, and updates to CARLA are continually merged into SUMMIT so that SUMMIT users have access to the high-quality work that comes with CARLA, such as its high-fidelity physics, rendering, and sensors; note, however, that not all SUMMIT components are compatible with CARLA components, because they are designed for different use cases.


SUMMIT Doc:https://adacompnus.github.io/summit-docs/

SUMMIT code: https://github.com/AdaCompNUS/summit

SUMO

Simulation of Urban MObility (SUMO) is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large networks. It allows intermodal simulation, including pedestrians, and comes with a host of tools for scenario creation. It is developed primarily by employees of the Institute of Transportation Systems at the German Aerospace Center.


SUMO homepage: https://eclipse.dev/sumo/

SUMO documentation: https://sumo.dlr.de/docs/index.html

SUMO paper: https://elib.dlr.de/127994/

SUMO code: https://github.com/eclipse-sumo/sumo
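A typical way to drive SUMO from Python is the bundled TraCI client; the sketch below steps a scenario and reads vehicle states (`scenario.sumocfg` is a placeholder for your own configuration file).

```python
import traci

# Start SUMO headless with a scenario configuration file.
traci.start(["sumo", "-c", "scenario.sumocfg"])

# Step until all vehicles have left the network.
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    for veh_id in traci.vehicle.getIDList():
        x, y = traci.vehicle.getPosition(veh_id)
        speed = traci.vehicle.getSpeed(veh_id)

traci.close()
```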

OpenCDA

OpenCDA is a simulation tool that integrates a prototype collaborative driving automation (CDA; see SAE J3216) pipeline with conventional autonomous driving components (e.g., perception, localization, planning, control). The tool combines autonomous driving simulation (CARLA), traffic simulation (SUMO) and co-simulation (CARLA + SUMO).

OpenCDA is built on a standard Autonomous Driving System (ADS) platform and focuses on the exchange and cooperation of various types of data between vehicles, infrastructure, and other road users (such as pedestrians). OpenCDA is written entirely in Python, with the aim of letting researchers rapidly prototype, simulate, and test CDA algorithms and functionality. With these simulation tools, users can easily conduct task-specific evaluations (e.g., object detection accuracy) and pipeline-level evaluations (e.g., traffic safety) of their customized algorithms.


OpenCDA documentation: https://opencda-documentation.readthedocs.io/en/latest/index.html

OpenCDA introduction: https://opencda-documentation.readthedocs.io/en/latest/md_files/introduction.html

OpenCDA code: https://github.com/ucla-mobility/OpenCDA

OpenCDA paper: https://ieeexplore.ieee.org/document/9564825

OpenCDA paper: https://arxiv.org/abs/2301.07325

OpenCDA-ROS

OpenCDA-ROS builds on the strengths of the open-source OpenCDA framework and the Robot Operating System (ROS), seamlessly integrating the real-world deployment capabilities of ROS with OpenCDA's mature CDA research framework and simulation-based evaluation to fill the gap between simulation and deployment.

OpenCDA-ROS will leverage the strengths of ROS and OpenCDA to facilitate the prototyping and deployment of key CDA capabilities in simulation and the real world, specifically collaborative sensing, mapping and digital twins, collaborative decision-making and motion planning, and smart infrastructure services.

OpenCDA-ROS paper: https://ieeexplore.ieee.org/document/10192346

RoadRunner

RoadRunner is an interactive editor that allows developers to design 3D scenes for simulating and testing autonomous driving systems. Developers can customize road scenes by creating region-specific road signs and markings. Signs, lights, guardrails and road damage can be inserted, as well as foliage, buildings and other 3D models.

RoadRunner provides tools for setting up and configuring intersection traffic signal timing, phasing, and vehicle routing. RoadRunner supports the visualization of lidar point clouds, aerial imagery, and GIS data. Developers can use OpenDRIVE to import and export road networks. 3D scenes built with RoadRunner can be exported in FBX, glTF, OpenFlight, OpenSceneGraph, OBJ and USD formats. Exported scenes can be used in autonomous driving simulators and game engines, including CARLA, Vires VTD, NVIDIA DRIVE Sim, rFpro, Baidu Apollo, Cognata, Unity and Unreal Engine.


RoadRunner home page: https://www.mathworks.com/products/roadrunner.html


51SimOne

51World homepage: https://wdp.51aes.com/

51World introduction: https://www.51vr.com.au/technology/city

The open source version of 51SimOne has been officially released to help build a domestic independent intelligent driving simulation platform: https://zhuanlan.zhihu.com/p/475293607

PTV-Vissim

PTV Vissim is a microscopic multimodal traffic flow simulation software package developed by PTV Planung Transport Verkehr AG in Karlsruhe, Germany. PTV Vissim was first developed in 1992 and today is the global market leader.

PTV-Vissim homepage: https://www.ptvgroup.com/en/products/ptv-vissim

PTV-Vissim Wiki: https://en.wikipedia.org/wiki/PTV_VISSIM

PTV-Visum

PTV Visum is the world's leading traffic planning software. It is the standard for macro-simulation and macro-modeling of transport networks and traffic needs, public transport planning, and the development of transport strategies and solutions. With PTV Visum, developers can create traffic models that provide insights for long-term strategic planning and short-term operational use.

PTV-Visum homepage: https://www.ptvgroup.com/en/products/ptv-visum

PTV-Flows

PTV Flows enables traffic operators to easily monitor and predict traffic in real-time. By leveraging machine learning, state-of-the-art algorithms and automated alerts, PTV Flows enables cities and road authorities to optimize their traffic management without requiring extensive resources or complex infrastructure.

PTV Flows comes with automatically updated network maps and Floating Car Data (FCD) from a wide range of major providers. The software can be run from a browser or integrated into existing systems via API.

PTV-Flows homepage: https://www.ptvgroup.com/en/products/ptv-flows

Dyna4

DYNA4 is an open simulation environment for virtual test drives of passenger cars and commercial vehicles. Physical models cover vehicle dynamics, powertrain, combustion engine, electric motors, sensors and traffic. Virtual test drives with DYNA4 enable safe and efficient function development and testing. Closed-loop simulation on a PC runs faster than real time, for example in early development stages (MIL, SIL), or can be performed on a hardware-in-the-loop (HIL) system once the ECU is available.

DYNA4's 3D environment simulation of road infrastructure and traffic provides a virtual testing ground for assisted and autonomous driving where environmental awareness plays a key role.

Dyna4 ASAM:https://www.asam.net/members/product-directory/detail/dyna4/

Dyna4 Vector:https://www.vector.com/int/en/products/products-a-z/software/dyna4/

3. Simulation framework

PGDrive

To better evaluate and improve the generalization ability of end-to-end driving, PGDrive, an open and highly configurable driving simulator, is introduced, following the key feature of procedural generation. Diverse road networks are first generated by sampling basic road blocks with the proposed generation algorithm. They are then turned into interactive training environments in which nearby traffic vehicles are presented with realistic kinematics.


PGDrive homepage: https://www.pgdrive.com/

PGDrive introduction: https://decisionforce.github.io/pgdrive/

PGDrive Doc:https://pgdrive.readthedocs.io/en/latest/

PGDrive code: https://github.com/decisionforce/pgdrive

PGDrive paper: https://arxiv.org/abs/2012.13681

MetaDrive

MetaDrive is a driving simulator with the following main features:

Compositional: it supports generating an unlimited number of scenarios with various road maps and traffic settings for research on generalizable reinforcement learning.

Lightweight: easy to install and run, reaching up to 1,000+ FPS on a standard PC.

Realistic: accurate physics simulation and multiple sensory inputs, including lidar, RGB images, top-down semantic maps, and first-person-view images.


MetaDrive home page: https://metadriverse.github.io//metadrive/

MetaDrive code: https://github.com/metadriverse/metadrive

MetaDrive paper: https://arxiv.org/abs/2109.12674
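MetaDrive exposes a standard Gym-style interface; a minimal random-policy rollout looks like the sketch below. The config keys follow recent releases and may differ in older versions, which also returned a 4-tuple from `step`.

```python
from metadrive import MetaDriveEnv

# Procedurally generate 100 scenarios with light traffic, no rendering.
env = MetaDriveEnv(dict(num_scenarios=100, traffic_density=0.1, use_render=False))

obs, info = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```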

SimulationCity

SimulationCity (The Verge): https://www.theverge.com/2021/7/6/22565448/waymo-simulation-city-autonomous-vehicle-testing-virtual

SimulationCity (Waymo blog): https://waymo.com/blog/2021/06/SimulationCity.html

CarCraft

Waymo simulations are teaching self-driving cars valuable skills: https://www.engadget.com/2017-09-11-waymo-self-driving-car-simulator-intersection.html

Enter the secret world of WAYMO training self-driving cars: https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/

Simulate how one flashing yellow light turns into thousands of hours of experience: https://medium.com/waymo/simulation-how-one-flashing-yellow-light-turns-into-thousands-of-hours-of-experience-a7a1cb475565

UniSim

UniSim is a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.

UniSim builds neural feature grids to reconstruct the static background and the dynamic actors in the scene, and composes them to simulate LiDAR and camera data at new viewpoints (with actors added or removed, and at new locations). To better handle extrapolated views, learnable priors for dynamic objects are incorporated and a convolutional network is used to complete unseen regions. Experiments show that UniSim can simulate real sensor data with a small domain gap on downstream tasks.


UniSim project: https://waabi.ai/unisim/

UniSim paper: https://arxiv.org/abs/2308.01898

UniSim interpretation: https://zhuanlan.zhihu.com/p/636695025

MARS

An autonomous driving simulator based on neural radiance fields (NeRF) is proposed. Compared with existing work, it has three distinctive features:

1). Instance-aware: This simulator uses separate networks to model foreground instances and background environments separately so that static (e.g., size and appearance) and dynamic (e.g., trajectory) properties of instances can be controlled independently.

2). Modularity: The simulator allows flexible switching between different modern NeRF-related backbones, sampling strategies, input modes, etc. This modular design can promote academic progress and industrial deployment of NeRF-based autonomous driving simulations.

3). Realistic: with the best module selections, the simulator achieves new state-of-the-art photorealism.
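As background for MARS and the other NeRF-based simulators below, recall the standard volume-rendering formulation NeRF uses to composite a pixel color along a camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$:

$$
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
\qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)
$$

where $\sigma$ is the learned volume density and $\mathbf{c}$ the view-dependent color; in practice the integral is discretized into alpha-composited samples along each ray.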


MARS paper: https://arxiv.org/abs/2307.15058

MARS code: https://github.com/OPEN-AIR-SUN/mars

MARS project: https://open-air-sun.github.io/mars/

MARS Author: https://sites.google.com/view/fromandto

MARS interpretation: https://zhuanlan.zhihu.com/p/653536221

MagicDrive

MagicDrive is a novel street scene generation framework that provides multiple 3D geometry controls, including camera poses, road maps, and 3D bounding boxes, as well as textual descriptions, via customized encoding strategies. The design further incorporates a cross-view attention module to ensure consistency across multiple camera views. MagicDrive achieves high-fidelity street scene synthesis that captures nuanced 3D geometry and diverse scene descriptions, thereby enhancing tasks such as BEV segmentation and 3D object detection.


MagicDrive paper: https://arxiv.org/abs/2310.02601

MagicDrive code: https://github.com/cure-lab/MagicDrive

MagicDrive project: https://gaoruiyuan.com/magicdrive/

MagicDrive Interpretation: https://zhuanlan.zhihu.com/p/663261335

DrivingGaussian

DrivingGaussian is an efficient and effective framework for dynamic autonomous driving scenes. For complex scenes with moving objects, the static background of the entire scene is first modeled sequentially and incrementally with incremental static 3D Gaussians. A composite dynamic Gaussian graph then handles the multiple moving objects, reconstructing each object individually and recovering their accurate positions and occlusion relationships in the scene.

LiDAR priors are further used for Gaussian splatting to reconstruct the scene in greater detail and maintain panoramic consistency. DrivingGaussian outperforms existing methods in driving scene reconstruction and enables realistic surround-view synthesis with high fidelity and multi-camera consistency.
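For context, 3D Gaussian splatting (the representation DrivingGaussian builds on) renders each pixel by depth-sorted alpha blending of the projected Gaussians:

$$
C = \sum_{i=1}^{N} \mathbf{c}_i\,\alpha_i \prod_{j=1}^{i-1} \big(1-\alpha_j\big)
$$

where $\mathbf{c}_i$ is the color of Gaussian $i$ and $\alpha_i$ its opacity after projecting the 3D covariance into screen space.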


DrivingGaussian paper: https://arxiv.org/abs/2312.07920

DrivingGaussian project: https://pkuvdig.github.io/DrivingGaussian/

NeuRAD

NeuRAD is a robust, novel view synthesis method tailored to dynamic AD data. Its performance is verified on five popular AD datasets, achieving state-of-the-art results across the board. The method features a simple network design, extensive sensor modeling for both camera and lidar (including rolling shutter, beam divergence, and ray dropping), and works on multiple datasets out of the box.


NeuRAD paper: https://arxiv.org/abs/2311.15260

NeuRAD code: https://github.com/georghess/NeuRAD

NeuRAD interpretation: https://zhuanlan.zhihu.com/p/673873117

EmerNeRF

EmerNeRF is based on NeRF and self-supervisedly captures the geometry, appearance, motion, and semantics of in-the-wild scenes simultaneously. EmerNeRF decomposes the scene into a static field and a dynamic field. Building on Instant-NGP's spatial hashing, it improves the rendering accuracy of dynamic objects at multiple scales. By combining the static, dynamic, and (scene) flow fields, EmerNeRF can represent highly dynamic scenes without relying on supervised dynamic-object segmentation or optical flow estimation, achieving state-of-the-art performance.


EmerNeRF paper: EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision

EmerNeRF project: https://emernerf.github.io/

EmerNeRF code: https://github.com/NVlabs/EmerNeRF

EmerNeRF interpretation: https://zhuanlan.zhihu.com/p/674024253

Panacea

Panacea is an innovative approach for generating panoramic and controllable videos of driving scenes, capable of producing an unlimited number of diverse, annotated samples, which is critical for advancing autonomous driving. Panacea addresses two key challenges: consistency and controllability. Consistency ensures coherence across time and across views, while controllability ensures the generated content aligns with the corresponding annotations. The approach integrates novel 4D attention and a two-stage generation pipeline for consistency, supplemented by the ControlNet framework for fine-grained control via Bird's-Eye-View (BEV) layouts.

Extensive qualitative and quantitative evaluations of Panacea on the nuScenes dataset demonstrate its effectiveness in generating high-quality multi-view driving scene videos. This work significantly advances the field of autonomous driving by effectively enhancing training data sets for advanced BEV perception technology.


Panacea paper: https://arxiv.org/abs/2311.16813

Panacea Interpretation: https://zhuanlan.zhihu.com/p/671567561

LimSim

LimSim, a long-term interactive multi-scenario traffic simulator, aims to provide continuous long-term simulation capability on urban road networks. LimSim can simulate fine-grained dynamic scenarios and focuses on the diverse interactions between multiple vehicles in the traffic flow. The paper introduces the framework and features of LimSim in detail and demonstrates its performance through case studies and experiments.


LimSim paper: https://arxiv.org/abs/2307.06648

LimSim interpretation: https://zhuanlan.zhihu.com/p/657727848

GenSim

GenSim automatically generates rich simulation environments and expert demonstrations by exploiting the grounding and coding capabilities of large language models (LLMs). The approach has two modes: goal-directed generation, in which a target task is given to the LLM and the LLM proposes a task curriculum to solve it; and exploratory generation, in which the LLM bootstraps from previous tasks and iteratively proposes novel tasks that help solve more complex ones.


GenSim paper: https://arxiv.org/abs/2310.01361

GenSim code: https://github.com/liruiw/GenSim

GenSim project: https://huggingface.co/spaces/Gen-Sim/Gen-Sim

GenSim introduction: https://zhuanlan.zhihu.com/p/661690326

UNav-Sim

UNav-Sim is the first simulator to incorporate the efficient, high-detail rendering of Unreal Engine 5 (UE5). It is open source and includes a vision-based autonomous navigation stack. By supporting standard robotics tools such as ROS, UNav-Sim enables researchers to efficiently develop and test algorithms for underwater environments.


UNav-Sim paper: https://arxiv.org/abs/2310.11927

UAV-Sim

UAV-Sim leverages recent advances in neural rendering to improve static and dynamic novel-view UAV-based image synthesis, especially capturing salient scene attributes from high altitudes. Considerable performance gains are achieved when state-of-the-art detection models are optimized primarily on a hybrid set of real and synthetic data rather than on real or synthetic data alone.


UAV-Sim paper: https://arxiv.org/abs/2310.16255

PegasusSimulator

Pegasus Simulator is a modular framework implemented as an extension to NVIDIA Isaac Sim that simulates multiple multirotor vehicles in real time in realistic environments, while providing integration with the widely adopted PX4-Autopilot and ROS 2 through its modular implementation and intuitive GUI.


PegasusSimulator paper: https://arxiv.org/abs/2307.05263

MTR

A simple yet effective autoregressive method is proposed to simulate multi-agent behavior, based on the well-known multimodal motion prediction framework Motion Transformer (MTR) [5] with a post-processing algorithm applied. The submission, titled MTR+++, achieved 0.4697 on the realism meta-metric of WOSAC 2023. In addition, an improved MTR-based model, MTR_E, was proposed after the challenge, scoring 0.4911 and ranking third on the WOSAC leaderboard as of June 25, 2023.


MTR paper: https://arxiv.org/abs/2306.15914

MVTA

The MultiVerse Transformer (MVTA), proposed for agent simulation, effectively leverages Transformer-based motion prediction and is tailored specifically for closed-loop simulation of agents. To produce highly realistic simulations, novel training and sampling methods are designed and a receding-horizon prediction mechanism is employed. Furthermore, a variable-length history aggregation method is introduced to mitigate the compounding errors that can arise during closed-loop autoregressive execution.


MVTA paper: https://arxiv.org/abs/2306.11868

MVTA project: https://multiverse-transformer.github.io/sim-agents/

SimOnWheels

Sim-on-Wheels is a safe and realistic vehicle-in-the-loop framework for testing the performance of autonomous vehicles in real-world, safety-critical scenarios. Sim-on-Wheels runs on autonomous vehicles operating in the real world.

It creates virtual traffic participants with risky behaviors and seamlessly inserts the virtual events into images perceived from the physical world in real time. The processed images are fed into the autonomy system, allowing the self-driving vehicle to react to the virtual events. The complete pipeline runs on an actual vehicle interacting with the physical world, yet the safety-critical events it sees are virtual. Sim-on-Wheels is safe, interactive, realistic, and easy to use. Experiments demonstrate its potential to facilitate testing of autonomous driving stacks in challenging real-world scenarios with high fidelity and low risk.


SimOnWheels paper: https://arxiv.org/abs/2306.08807

AutoVRL

AutoVRL is an open-source, high-fidelity simulator built on the Bullet physics engine. It uses OpenAI Gym and Stable Baselines3 in PyTorch to train AGV DRL agents for sim-to-real policy transfer.

AutoVRL comes with sensor implementations for GPS, IMU, LiDAR and cameras, actuators for AGV control and realistic environments, with scalability for new environments and AGV models. The simulator provides access to state-of-the-art DRL algorithms, leveraging a Python interface for simple algorithm and environment customization and simulation execution.


AutoVRL paper: https://arxiv.org/abs/2304.11496
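Since AutoVRL trains agents through OpenAI Gym and Stable Baselines3, a typical training loop would look like the sketch below; the environment id "AutoVRL-v0" is hypothetical and stands in for an AutoVRL-registered Gym environment.

```python
import gym
from stable_baselines3 import PPO

# "AutoVRL-v0" is a hypothetical id for an AutoVRL Gym environment.
env = gym.make("AutoVRL-v0")

# Standard Stable Baselines3 PPO training loop.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("agv_policy")
```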

AptSim2Real

The approximate-pairing method AptSim2Real exploits the fact that a simulator can generate scenes roughly similar to real-world scenes in lighting, environment, and composition. The novel training strategy yields significant qualitative and quantitative improvements, improving the FID score by 24% compared with state-of-the-art unpaired image-translation methods.


AptSim2Real paper: https://arxiv.org/abs/2303.12704

AdaptSim

AdaptSim proposes task-driven simulation adaptation for sim-to-real transfer: rather than tuning simulator parameters to best match real-world dynamics, it optimizes them directly for downstream task performance in the target environment.

AdaptSim project: https://irom-lab.github.io/AdaptSim/

AdaptSim paper: https://arxiv.org/abs/2302.04903


Waymax

Waymax is a lightweight, multi-agent, JAX-based simulator for autonomous driving research built on the Waymo Open Motion Dataset. Waymax is designed to support all aspects of behavior research in autonomous driving: from closed-loop simulation for planning and sim-agent studies to open-loop behavior prediction. Objects (e.g., vehicles, pedestrians) are represented as bounding boxes rather than raw sensor outputs in order to distill behavior research into its simplest form. Since all components are written entirely in JAX, Waymax can easily be distributed and deployed on hardware accelerators such as GPUs and TPUs.


Waymax paper: https://arxiv.org/abs/2310.08710

Waymax code: https://github.com/waymo-research/waymax

Waymax homepage: https://waymo.com/intl/zh-cn/re
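Because Waymax is pure JAX, scenario loading and stepping compose with standard JAX tooling. The sketch below follows the public Waymax README; names such as `WOD_1_1_0_TRAINING` assume registered access to the Waymo Open Motion Dataset and may change across releases.

```python
from waymax import config as waymax_config
from waymax import dataloader

# Stream simulator states built from the Waymo Open Motion Dataset
# (requires registered access to the dataset).
data_config = waymax_config.WOD_1_1_0_TRAINING
scenarios = dataloader.simulator_state_generator(config=data_config)
scenario = next(scenarios)
```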

4. Simulation platform

Huawei-Octopus

Huawei's autonomous driving cloud service (Octopus) is a fully managed platform for car companies and research institutes. On Huawei Cloud it provides autonomous driving data, annotation, training, and simulation cloud services, along with configuration management, to help car companies and research institutes quickly develop autonomous driving products.


Octopus homepage: https://support.huaweicloud.com/octopus/index.html

What is Octopus: https://support.huaweicloud.com/productdesc-octopus/octopus-01-0001.html

Introduction to simulation service: https://support.huaweicloud.com/usermanual-octopus/octopus-03-0009.html

Baidu-ApolloCloud

ApolloCloud homepage: https://apollocloud.baidu.com/

Apollo simulation platform scene editor: https://apollo.baidu.com/community/article/120

Cloud simulation testing solution: https://apollocloud.baidu.com/solution/test

Apollo simulation platform: https://developer.apollo.auto/platform/simulation_cn.html

Tencent-TADSim


Intelligent network connection solution: https://cloud.tencent.com/solution/intelligent-vehicle-road-cooperation

TADSim Encyclopedia: https://baike.baidu.com/item/TAD%20Sim/63745889?fr=ge_ala

Tencent releases autonomous driving simulation platform TAD Sim 2.0: https://zhuanlan.zhihu.com/p/150694950

TADSim paper: https://dl.acm.org/doi/10.1145/2699715

Ali-IoVCC

IoVCC: https://www.aliyun.com/product/iovcc

NVIDIA-DriveSim

NVIDIA DRIVE Sim is an end-to-end simulation platform built from the ground up to run large-scale, physics-based, multi-sensor simulations. It is open, extensible, and modular, supporting AV development and validation from concept to deployment, improving developer productivity and accelerating release times.

DriveSim home page: https://developer.nvidia.com/drive/simulation

DriveSim introduction: https://www.nvidia.com/en-sg/self-driving-cars/simulation/

5. Optical simulation

3DOptix

3DOptix official website: https://www.3doptix.com/

3DOptix homepage: https://www.3doptix.com/design-simulation-software/

OpticStudio

Ansys Zemax OpticStudio optical workflow and design software home page: https://www.ansys.com/products/optics/ansys-zemax-opticstudio

LightTools

LightTools enables you to quickly create lighting designs that work on the first try, reducing prototype iterations. Increase your productivity and reduce time to market with LightTools' smart, easy-to-use tools.

LightTools home page: https://www.synopsys.com/optical-solutions/lighttools.html

Summary

In fact, many ADAS companies do not pay much attention to simulation, focusing instead on data replay (resimulation) and road testing. Even when they build a simulation toolchain, it plays a very small role in mass-production projects and is only used to analyze and reproduce problems/bugs.

For mass-production projects and function development, there are still no fully mature solutions for sensor data & signal simulation, vehicle kinematics & dynamics simulation, joint perception & control simulation, or digital-twin data mining; a full-stack, integrated WorldSim-style simulator is urgently needed. It would be even better if chip hardware simulation and software algorithm simulation could be done together.
