[Simulation] Carla's quick tutorial on collecting data (with complete code)

First, a visual demo of the collection process; then on to the main text:

References and Preface

Seeing that the simulation group has a lot of demand for this type of task (collecting data with CARLA and then training on it, etc.), I am writing this up right away. First, think clearly about what collecting data involves:

  1. What data is collected and what data format is required

  2. The timestamps of the different data streams must be synchronized, which means having some awareness of CARLA's time settings

    [Simulation] Time in Carla's world[2]

  3. Normally, when collecting data we tend to let the car drive itself. Sometimes we may also want it to ignore traffic lights, speed up, and so on. This means having a certain understanding of the Traffic Manager

    [Simulation] Carla's Traffic Manager [3]

I always assumed my CARLA column had already explained clearly how to use all this, but... everyone seems to prefer diving straight in, which is why the following questions have come up more than once in our group:

  1. How to ensure synchronization between sensors → set synchronous mode
  2. Why does my CARLA seem to be lagging → check whether your GPU can keep up

Next, we will complete the following task: while the vehicle drives, collect images from the two front cameras and the point cloud from the roof-mounted lidar, and save the vehicle's own IMU and GNSS data at the same time (note that GNSS is different from the location taken directly from CARLA! The GNSS data needs to be converted to match the CARLA location).
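
For reference, a minimal sketch of that conversion, assuming the map is georeferenced at latitude/longitude (0, 0), which holds for the default towns (gnss_to_carla_xy is a hypothetical helper, not from the repo):

python

import math

EARTH_RADIUS = 6378137.0  # meters, the radius used by the Mercator projection

def gnss_to_carla_xy(lat, lon):
    # Convert GNSS latitude/longitude back to CARLA map coordinates via a
    # Mercator projection. CARLA's coordinate system is left-handed, hence
    # the sign flip on y.
    x = EARTH_RADIUS * math.radians(lon)
    y = -EARTH_RADIUS * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y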

Some of the following parts are very basic. If you don't feel like reading the text, you can read the code directly. Code address: tutorial/collect_data.py at Zhang Congming/CarlaPythonAPI - Gitee.com

Relevant reference links and tutorials are collected here in the preface and will not be repeated later:

  1. Zhihu CARLA tutorial column: Xiaofei Autopilot Series Sharing - Zhihu

  2. The blogger's own CSDN tutorial column: https://blog.csdn.net/qq_39537898/category_11562137.html

  3. Best of all: the official CARLA documentation!!! Everyone, please check the official docs!! PS: remember to match them to your CARLA version

    CARLA Simulator

    Key points; each of the following has a corresponding section in the official docs:

    1. How the time in the CARLA world works and is stipulated: Synchrony and time-step - CARLA Simulator
    2. Which sensors are available inside: Sensors reference - CARLA Simulator

0. World settings

Sync Time Settings

Note that CARLA's synchronous mode must be turned on to collect data, and if you want to use the Traffic Manager while synchronous mode is on, the Traffic Manager must also be set to synchronous. Links covering this are in the preface.

Let’s take a look at the time setting: CARLA time setting

The following is an excerpt; see the preface for the complete code:

python

def main(args):
    # We start creating the client
    client = carla.Client(args.host, args.port)
    client.set_timeout(5.0)
    
    # world = client.get_world()
    world = client.load_world('Town01')
    blueprint_library = world.get_blueprint_library()
    try:
        original_settings = world.get_settings()
        settings = world.get_settings()

        # We set CARLA synchronous mode
        settings.fixed_delta_seconds = 0.05
        settings.synchronous_mode = True
        world.apply_settings(settings)
        spectator = world.get_spectator()

        # manually specify the spawn point
        # transform_vehicle = carla.Transform(carla.Location(0, 10, 0), carla.Rotation(0, 0, 0))
        # or pick a spawn point automatically
        transform_vehicle = random.choice(world.get_map().get_spawn_points())
        ego_vehicle = world.spawn_actor(random.choice(blueprint_library.filter("model3")), transform_vehicle)
        actor_list.append(ego_vehicle)

  1. The client connects to the server
  2. get_world gets the map CARLA currently has loaded; load_world lets you load one of the built-in towns instead
  3. Enable synchronous mode
  4. Spawn a Tesla

Enabling Autopilot

For simplicity, I don't implement special rules or use CARLA's behavior agent; I simply use the Traffic Manager to put the vehicle in autopilot mode. For more settings, see the official documentation, for example the following: ignoring traffic lights, and the speed-limit offset.

python

# Set traffic manager 
tm = client.get_trafficmanager(args.tm_port) 
tm.set_synchronous_mode(True) 
# Whether to ignore traffic lights 
# tm.ignore_lights_percentage(ego_vehicle, 100) 
# If the speed limit is 30km/h -> 30*(1-10%) =27km/h 
tm.global_percentage_speed_difference(10.0) 
ego_vehicle.set_autopilot(True, tm.get_port())

Note that when synchronous mode is on, the Traffic Manager must also be set to synchronous, and both must be set back when the actors are destroyed. When I first wrote this tutorial I couldn't find that bug for a long time: all I saw was the car not moving. The latter point came from helping a classmate debug: if one script sets the Traffic Manager to synchronous but never sets CARLA itself to synchronous, all the NPCs get stuck. A sketch of the cleanup is below.
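
As a reference, a minimal sketch of that cleanup, assuming the variables from the setup excerpt above (original_settings, tm, sensor_list, actor_list):

python

try:
    pass  # ... the collection loop from section 2 runs here ...
finally:
    # restore asynchronous mode on both the world and the Traffic Manager,
    # otherwise the server and its NPCs stay frozen after this script exits
    world.apply_settings(original_settings)
    tm.set_synchronous_mode(False)
    for sensor in sensor_list:
        sensor.destroy()
    client.apply_batch([carla.command.DestroyActor(x) for x in actor_list])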


If synchronous mode is not set, rendering will stutter on a weaker GPU. I plotted this a long time ago (the second trace clearly shows uneven frame times and frame drops):

1. Arranging the sensors

Here we follow CARLA's built-in example (thanks to Mr. Li for the reminder hhh). At first I planned something more brute-force: I figured that since everything is synchronized there should be no need for a queue at all, but to be safe it is better to match sensor data by frame number:

python

#-------------------------- Sensor setup --------------------------#
sensor_queue = Queue()
cam_bp = blueprint_library.find('sensor.camera.rgb')
lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
imu_bp = blueprint_library.find('sensor.other.imu')
gnss_bp = blueprint_library.find('sensor.other.gnss')

# set some attributes of the camera
cam_bp.set_attribute("image_size_x", "{}".format(IM_WIDTH))
cam_bp.set_attribute("image_size_y", "{}".format(IM_HEIGHT))
cam_bp.set_attribute("fov", "60")
# cam_bp.set_attribute('sensor_tick', '0.1')

cam01 = world.spawn_actor(cam_bp, carla.Transform(carla.Location(z=args.sensor_h),carla.Rotation(yaw=0)), attach_to=ego_vehicle)
cam01.listen(lambda data: sensor_callback(data, sensor_queue, "rgb_front"))
sensor_list.append(cam01)

cam02 = world.spawn_actor(cam_bp, carla.Transform(carla.Location(z=args.sensor_h),carla.Rotation(yaw=60)), attach_to=ego_vehicle)
cam02.listen(lambda data: sensor_callback(data, sensor_queue, "rgb_left"))
sensor_list.append(cam02)

lidar_bp.set_attribute('channels', '64')
lidar_bp.set_attribute('points_per_second', '200000')
lidar_bp.set_attribute('range', '32')
lidar_bp.set_attribute('rotation_frequency', str(int(1/settings.fixed_delta_seconds))) # one full rotation per simulation step

lidar01 = world.spawn_actor(lidar_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
lidar01.listen(lambda data: sensor_callback(data, sensor_queue, "lidar"))
sensor_list.append(lidar01)

imu01 = world.spawn_actor(imu_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
imu01.listen(lambda data: sensor_callback(data, sensor_queue, "imu"))
sensor_list.append(imu01)

gnss01 = world.spawn_actor(gnss_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
gnss01.listen(lambda data: sensor_callback(data, sensor_queue, "gnss"))
sensor_list.append(gnss01)
#-------------------------- Sensor setup complete --------------------------#

The above mainly does three things:

  1. Find the sensor blueprints in the blueprint library
  2. Configure the sensors, e.g. the camera's FOV or the lidar's channel count
  3. Attach each sensor to the car! Hence the attach_to=ego_vehicle

The main thing to pay attention to is the setting of the lidar:

  1. points_per_second: the more points, the denser the cloud; it also interacts with the number of lidar channels (the available options, if I remember correctly: 32, 64, 128)

  2. Be sure that rotation_frequency matches your fixed_delta_seconds, i.e. rotation_frequency = 1/fixed_delta_seconds (here 1/0.05 = 20 Hz), so the lidar completes a full sweep within one simulation step; otherwise only part of the scan is collected per frame, like in this picture:

2. Collect data

This mainly follows sensor_synchronization.py from CARLA's official examples; the following is an excerpt of the while loop:

python

while True:
    # Tick the server
    world.tick()

    # make the spectator camera in the CARLA window follow the vehicle
    loc = ego_vehicle.get_transform().location
    spectator.set_transform(carla.Transform(carla.Location(x=loc.x,y=loc.y,z=35),carla.Rotation(yaw=0,pitch=-90,roll=0)))

    w_frame = world.get_snapshot().frame
    print("\nWorld's frame: %d" % w_frame)
    try:
        rgbs = []

        for _ in range(len(sensor_list)):
            s_frame, s_name, s_data = sensor_queue.get(True, 1.0)
            print("    Frame: %d   Sensor: %s" % (s_frame, s_name))
            sensor_type = s_name.split('_')[0]
            if sensor_type == 'rgb':
                rgbs.append(_parse_image_cb(s_data))
            elif sensor_type == 'lidar':
                lidar = _parse_lidar_cb(s_data)
            elif sensor_type == 'imu':
                imu_yaw = s_data.compass
            elif sensor_type == 'gnss':
                gnss = s_data
        
        # visualization only; can be commented out
        rgb = np.concatenate(rgbs, axis=1)[..., :3]
        cv2.imshow('vizs', visualize_data(rgb, lidar, imu_yaw, gnss))
        cv2.waitKey(100)
    except Empty:
        print("    Some of the sensor information is missed")

def sensor_callback(sensor_data, sensor_queue, sensor_name):
    # Do stuff with the sensor_data data like save it to disk
    # Then you just need to add to the queue
    sensor_queue.put((sensor_data.frame, sensor_name, sensor_data))
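
The parsing helpers used in the loop (_parse_image_cb, _parse_lidar_cb) live in the full repo; here is a minimal sketch of what they might look like, based on how CARLA lays out raw_data:

python

import numpy as np

def _parse_image_cb(image):
    # CARLA delivers camera frames as BGRA uint8 bytes; reshape to (H, W, 4)
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    return np.reshape(array, (image.height, image.width, 4)).copy()

def _parse_lidar_cb(lidar_data):
    # each lidar point is (x, y, z, intensity) as float32
    points = np.frombuffer(lidar_data.raw_data, dtype=np.float32)
    return np.reshape(points, (-1, 4)).copy()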

At this point the data-collection part is complete; running the full code should produce something like the following animation:

3. Save data

Here is the corresponding saving code, with the resulting output shown below:

python

if rgb is not None and args.save_path is not None:
    # make sure a folder exists for each sensor type
    mkdir_folder(args.save_path)

    filename = args.save_path + 'rgb/' + str(w_frame) + '.png'
    cv2.imwrite(filename, np.array(rgb[..., ::-1]))
    filename = args.save_path + 'lidar/' + str(w_frame) + '.npy'
    np.save(filename, lidar)
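
mkdir_folder comes from the full script; a minimal sketch of it, assuming it only needs to create one subfolder per sensor type:

python

import os

def mkdir_folder(path):
    # create a subfolder per sensor type if it does not exist yet
    for sensor_type in ['rgb', 'lidar']:
        os.makedirs(os.path.join(path, sensor_type), exist_ok=True)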

For point clouds, if you want to do further operations, open3d is recommended, for example:

python

import numpy as np
import open3d as o3d

# load a saved frame, keep only x, y, z, and visualize it
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.load('217.npy')[:, :3])
o3d.visualization.draw_geometries([pcd])

Summary

The above is a simple implementation of a basic data-collection script in CARLA. Taking the longer view:

  1. Know what you want to use CARLA for
  2. Read the official docs more; many of the API explanations there are thorough
  3. Look at the official examples; many of them are treasures hhh

In addition, the complete code is in the Gitee repository linked above.

Reprinted from [Simulation] Carla's quick tutorial on data collection (with complete code) [7] - Kin_Zhang - 博客园

Origin blog.csdn.net/weixin_48936263/article/details/124253467