Interpretation of the paper: High-Resolution Radar-Based Occupancy Grid Mapping and Free Space Detection

Summary

High-resolution radar sensors are able to sense the vehicle's surroundings very accurately by detecting thousands of reflection points per measurement cycle. This paper proposes a new occupancy grid mapping method for modeling static environments. The reflection amplitudes of all detection points are compensated, normalized, and converted into detection probabilities based on a predefined radar sensor model. Based on the motion of the test vehicle, the posterior occupancy probability after several measurement cycles is calculated and an occupancy grid map is established. This occupancy grid map is then converted into a binary grid map in which grid cells containing obstacles are marked as occupied. The occupied grid cells are clustered by a connected component labeling algorithm, and all outliers consisting of only a few grid cells are removed. The boundaries of the clustered grid cells are then identified with the Moore-neighbor tracing algorithm. Based on these boundaries, Bresenham's line algorithm is used to determine the free space with an interval-based model. The occupancy grid map created from the radar measurement data and the free space detection results agree well with the actual road scene.

1 Introduction

Due to their all-weather ruggedness and relatively low cost, radar sensors are widely used in the automotive industry, especially in the field of advanced driver assistance systems (ADAS). For example, in adaptive cruise control (ACC) systems, radar sensors can detect objects over a wide range. After obtaining the target distance value, the vehicle can be automatically accelerated or decelerated by the ACC system.

The development of automated driving assistance control systems continues to increase the requirements for high-resolution radar sensors. To handle complex applications and traffic situations, radar sensors require high angular and distance resolution to capture sufficient environmental information. Additionally, high-resolution radar requires data fusion with lidar or camera sensors at the pixel level.

A fast chirp continuous wave radar system with an antenna array (chirp sequence radar) has proven to be one of the most suitable solutions. The radar system provides high-resolution situational awareness based on thousands of reflection points detected within a single measurement cycle.

In the field of environment modeling with high-resolution data, one of the commonly used methods is the occupancy grid map, which originated in probabilistic robotics [2,3]. This method divides the environment into a uniform pattern of grid cells and assigns the detection points to the corresponding grid cells. Grid cells are tracked over time instead of individual points, so measurement noise and uncertainty are reduced. At the same time, the probability that each grid cell is occupied is calculated. Reflection points from static objects are detected at the same physical location in successive measurement cycles, resulting in a stable occupancy grid map that is well suited for modeling static environments.

From the occupancy grid map, free space areas can be identified. During vehicle trajectory planning, free space should be estimated as precisely as possible, otherwise collisions with nearby obstacles may occur, especially after evasive maneuvers [4].

The paper is organized as follows: Section 2 presents the state of the art in occupancy grid mapping and free space detection. Section 3 describes the measurement setup and data preparation, such as the radar sensor and the coordinate systems used. Section 4 describes the occupancy grid mapping method using the data of a single front high-resolution radar. Section 5 then presents occupancy grid maps with several complementary, fused radar sensors. Based on the occupancy grid map, the algorithms required to detect free space regions are presented in Section 6. Finally, the paper is concluded and future prospects are given.

2. Related work

This section describes work related to occupancy grid mapping and free space detection.

2.1. Bayes' Theorem

Based on Bayes' theorem, the new data of the current measurement cycle is combined with the previous data to calculate the posterior probability p(m|R1:t,V1:t) of the map, where m is the grid map, R1:t is the sensor measurement data from time 1 to t, and V1:t is the vehicle position data from time 1 to t.

p(m|R1:t,V1:t) = [1 + ((1 − p(m|Rt,Vt)) / p(m|Rt,Vt)) · ((1 − p(m|R1:t−1,V1:t−1)) / p(m|R1:t−1,V1:t−1)) · (p(m) / (1 − p(m)))]^(−1)    (1)

 The log odds ratio of the posterior probability ℓt in equation (1) can be calculated as

ℓt = ℓt−1 + log( p(m|Rt,Vt) / (1 − p(m|Rt,Vt)) ) − ℓ0,  with ℓt = log( p(m|R1:t,V1:t) / (1 − p(m|R1:t,V1:t)) )    (2)

where p(m|Rt, Vt) is the detection probability obtained from the current measurement with sensor data Rt and vehicle data Vt. The log odds ratio ℓ0 before the first measurement is processed is usually set to 0, since nothing is known about the surroundings at that point.
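
As a small illustration, the following Python sketch applies the log odds update of Equation (2) to a single grid cell over several measurement cycles; the function names and the example detection probabilities are chosen for illustration and are not taken from the paper.

```python
import math

def log_odds(p):
    """Log odds ratio of a probability p."""
    return math.log(p / (1.0 - p))

def inv_log_odds(l):
    """Probability corresponding to a log odds ratio l."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Prior: nothing known about the cell, so l0 = 0 (p = 0.5).
l0 = 0.0
l_t = l0

# Detection probabilities p(m | R_t, V_t) of one cell over five cycles (example values).
detections = [0.7, 0.8, 0.9, 0.6, 0.5]

for p_det in detections:
    # Equation (2): accumulate the log odds of the current detection.
    l_t = l_t + log_odds(p_det) - l0

print("posterior occupancy probability:", inv_log_odds(l_t))
```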

2.2. Occupancy Grid Mapping

Occupancy grid mapping was previously realized with lidar sensors [5] and camera sensors [6]. Using an inverse sensor model, the reflection data of the lidar sensor are converted into occupancy probabilities, which serve as the detection probabilities in the Bayesian framework [7]. If the lidar sensor detects an object, the grid cell in which the object is located is marked as occupied (see Figure 1). Between the occupied grid cell and the lidar sensor, grid cells within a certain radial distance from the sensor are marked as free. Around the distance threshold, the occupancy probability of a grid cell is calculated as a linear function of the distance between the grid cell and the target. Grid cells without any measurement information (gray in Figure 1) are marked as unknown.

Figure 1: Laser sensor model

Since radar sensors are able to sense objects behind obstacles, a different sensor model is required to calculate the occupancy probabilities. In [8], Degerman et al. extract the signal-to-noise ratio (SNR) and calculate the detection probability together with the Swerling 1 model. Using a static radar, Clarke et al. calculated the occupancy probability as a function of the reflected power and the fast Fourier transform (FFT) bin numbers of range and bearing [9]. Werber et al. developed an amplitude-based method that uses radar cross section (RCS) information for occupancy grid mapping [10]. Considering the different characteristics and modulation schemes of radar sensors, a general radar sensor model can be established by converting the reflection intensity of the detection points into an occupancy probability.

Since earlier automotive radar sensors provide only little reflection data, mainly at the object level, occupancy grid maps are usually created by simultaneous localization and mapping (SLAM) algorithms from multiple measurements over a limited area. By combining all measurement results, an occupancy grid map of the entire measurement area is established, which helps to localize the vehicle. Grid mapping is also used to classify objects stored at the cell level [11]. However, this approach is not suitable for occupancy grid mapping within real-time measurements.

2.3. Free space detection

Based on occupancy grid maps, free space detection capabilities have been developed in some previous work using laser and vision sensors.

For the lidar sensor model, free space is defined as a function of the distance between the sensor and the target [12]. Further work focuses on road boundary recognition with classification based on camera data [13,14]. Konrad et al. proposed a method for estimating the road course using a multi-layer laser scanner [15]. Lundquist et al. employed a curve fitting method to detect road boundaries on highways [16]. Schreier et al. developed a parametric free space map that describes the B-spline contour of an arbitrarily shaped outer free space boundary around the ego vehicle, with additional attributes for the boundary type [17]. In complex traffic environments, a large number of curve parameters has to be estimated.

Due to noise and uncertainty in radar-specific data, the created occupancy grid map needs to be adjusted accordingly before free-space detection. Since radar detections can only cover a limited area, it is necessary to develop a free-space model that focuses on the area along the future vehicle trajectory.

3. Measurement configuration and data preparation

The high-performance radar system described below was installed on the test vehicle, and measurement data were recorded. The vehicle motion model is computed from the vehicle dynamics data on the Controller Area Network (CAN) bus. The coordinate systems of the vehicle and of the grid map are adapted to each other.

3.1. Radar sensor

A 77 GHz FMCW experimental high-performance radar system was developed and installed at the front of the vehicle (see Figure 2). A bandwidth of B = 2.4 GHz, an observation cycle time of T = 50 ms and a 16-channel receiving antenna array are used.

Figure 2: Experimental radar sensor and FPGA development board

The raw data of one measurement consist of 4096 samples, 1024 ramps, and 16 channels. The signal processing is implemented on a field programmable gate array (FPGA) development board. An FFT over the samples yields the range of the detection points; a second FFT over the ramps yields their radial velocity. Chebyshev windows are used in both dimensions. An ordered-statistics constant false alarm rate (OS-CFAR) algorithm generates a threshold for target extraction from the resulting 2D range-Doppler spectrum. Detections above the threshold are processed further, and their direction of arrival (azimuth angle) is calculated with a deterministic maximum likelihood (DML) algorithm.

A velocity threshold is applied to select the reflection points that belong to the static environment. The range and angle of each reflection point in the radar polar coordinate system are converted to Cartesian coordinates xr,i and yr,i, with the middle of the vehicle's rear axle defined as the origin of the coordinate system. The reflection amplitude Ar,i of each point is obtained from the signal processing chain described above. The reflection points Rt at time t can therefore be expressed as

Rt = { (xr,i, yr,i, Ar,i) | i = 1, …, N }    (3)

where N is the number of reflection points.

3.2. Vehicle motion model

Figure 3 shows the vehicle coordinate system defined in ISO 8855:2011. From the CAN bus, vehicle dynamics data such as the speed v, the acceleration a, and the yaw rate φ̇ are recorded. The ego motion is calculated based on the constant turn rate and acceleration (CTRA) model [19]

(4)

Figure 3: Vehicle motion model 

By integrating Equation (4), the position and orientation of the ego vehicle are obtained as

(5)

 Based on the location of the ego vehicle, the grid map is tracked.
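
The paper integrates the CTRA model in closed form (Equation (5)); as a rough illustration, the sketch below instead uses a simple forward-Euler step of the CTRA kinematics. The function name, the step size, and the example inputs are assumptions made for this post, not values from the paper.

```python
import math

def ctra_step(x, y, phi, v, a, omega, dt):
    """One Euler integration step of the CTRA kinematics (illustrative sketch)."""
    x   += v * math.cos(phi) * dt
    y   += v * math.sin(phi) * dt
    phi += omega * dt
    v   += a * dt
    return x, y, phi, v

# Example: drive 1 s with v = 10 m/s, a = 0.5 m/s^2, yaw rate 0.1 rad/s.
state = (0.0, 0.0, 0.0, 10.0)
for _ in range(100):                      # 100 steps of 10 ms
    state = ctra_step(*state, a=0.5, omega=0.1, dt=0.01)
print("ego pose after 1 s:", state[:3])
```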

3.3. Grid map coordinate system

In general, the coordinate system of an occupancy grid map can be defined in two ways:

1) Ground-fixed coordinate system. The ego vehicle moves through this coordinate system. This method is suitable for measurements in confined areas such as parking lots; otherwise a very large grid map is required to ensure that the ego vehicle always remains inside the map.

2) Vehicle-fixed coordinate system. The grid map is moved and rotated so that its origin stays at the midpoint of the vehicle's rear axle. However, unwanted offsets can occur during translation and rotation: after the ego vehicle moves, one grid cell of the previous map may overlap several grid cells of the shifted and rotated map, which makes the grid map unstable and inaccurate.

Figure 4: Grid map coordinate system

In order to model and visualize the environment around the vehicle at any location, the grid map coordinate system has to move with the ego vehicle as in method 2. At the same time, the procedure is modified to avoid the offset problem: the grid map is only shifted by integer numbers of rows and columns in the x and y directions according to the vehicle position. The remaining differences between the grid map origin and the ego position, xv' and yv', are preserved (see Figure 4). The orientation of the grid map is fixed to the ego orientation of the first measurement. During vehicle motion the grid map is not rotated; instead, the current ego orientation φv is stored. These values are used to transform the detection points into the grid map coordinate system. In this way the grid map can be tracked and shifted without offset errors.

The length and width of the entire grid map are adapted to the detection range of the radar sensor. The size of a single grid cell is comparable to the resolution of the radar sensor.

The coordinates of the radar detection points in the vehicle coordinate system are converted to the grid map coordinate system by the following formula

(6)
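
Equation (6) is not reproduced in this post. As a hedged sketch of the idea described above, the following Python function rotates a point by the preserved ego orientation φv, adds the residual offsets xv' and yv', and discretizes the result into integer cell indices; the cell size and all numeric values below are assumptions.

```python
import math

CELL_SIZE = 0.2   # m per grid cell (assumed value)

def vehicle_to_grid(x_r, y_r, x_off, y_off, phi_v):
    """Map a point from vehicle coordinates to grid cell indices (sketch).

    x_off, y_off are the residual offsets x_v', y_v' between the grid
    origin and the ego position; phi_v is the preserved ego orientation.
    The grid itself is only shifted by whole cells, never rotated.
    """
    # Rotate by the ego orientation (the grid keeps its initial orientation).
    x_g = x_r * math.cos(phi_v) - y_r * math.sin(phi_v) + x_off
    y_g = x_r * math.sin(phi_v) + y_r * math.cos(phi_v) + y_off
    # Discretize into integer row/column indices.
    return int(round(x_g / CELL_SIZE)), int(round(y_g / CELL_SIZE))

print(vehicle_to_grid(12.3, -4.1, x_off=0.07, y_off=-0.12, phi_v=0.05))
```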

 4. Occupancy grid map

Depending on their positions, the radar reflection points are assigned to the corresponding grid cells. At each time step, the occupancy grid is updated taking into account the current measurement of the radar sensor and the previous values of the grid. This reduces measurement uncertainty and errors, since true obstacles are usually detected in consecutive measurement cycles and are mapped to the same grid cells over time.

The reflection intensity of each new point is converted into a normalized value. The values of all points falling into a single cell are combined to calculate the detection probability of that cell. In each cycle, these probabilities are calculated and combined with the previous ones to obtain the posterior probability and build the final occupancy grid map. The next subsections present the calculation of the detection probability and of the posterior probability.

4.1. Detection probability

Figure 5 shows an image of one measurement cycle of a parking lot, and a bird's-eye view of the raw radar data is shown in Figure 6. In the next part, the reflection amplitudes at all detection points are converted to detection probabilities in each grid cell.

Free space loss compensation. Free space loss describes the reduction of the power density of electromagnetic waves during propagation in free space, in accordance with the distance law, without considering additional attenuation factors (such as rain or fog). The magnitude of the reflection decreases with distance from the radar sensor.

Free-space loss is compensated so that the reflection amplitude of an obstacle, and thus the derived detection probability, becomes independent of distance. The relationship between the reflection amplitude and the radial distance of each point is given in Equation (7). The amplitudes of all points are converted to equivalent values Ar,i^N at a reference distance d^N from the radar sensor.

(7)

where dr,i denotes the radial distance of point i from the radar sensor.

 Figure 5: Real parking lot scene image

  Figure 6: Aerial view of radar reflection point

Antenna gain compensation. The reflection amplitude of the points is also affected by the angle between the target and the radar sensor, which depends on the antenna gain. The different antenna gains are compensated to obtain reflection amplitudes that are independent of the angle of arrival. To determine the relationship between amplitude and angle, a corner reflector was placed at the same distance from the radar sensor but at different angles, and its reflection amplitude was measured for each angle (see Figure 7). With this antenna pattern, the amplitudes of all points are converted to an isotropic value, eliminating the angular dependence.

Figure 7: Antenna Gain Empirical Characteristic Curve

Normalization of reflection amplitudes. The reflection amplitude is a relative value and varies with the signal processing algorithms and parameters. However, the ratios between the amplitudes of different points still express their relative reflection strength. Therefore, the compensated amplitudes are normalized to values between 0 and 1. For each measurement cycle, all points are sorted by their amplitude (see Figure 8).

 Figure 8: Distribution and normalization of reflection amplitudes

Simply mapping the maximum amplitude to 1 and the minimum to 0 would yield an unsuitable scale, because a few points have extreme values. Therefore, the largest 10% of the amplitudes are mapped to 1 and the smallest 10% to 0; the amplitudes in between are converted with a linear function. In this way the reflection intensities of all points are normalized (see Figure 9).
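
The following Python sketch combines the free-space-loss compensation and the 10%/90% normalization described above. The distance exponent, the reference distance, and the omission of the antenna gain compensation are simplifying assumptions of this sketch, not values from the paper (Equation (7) defines the exact compensation).

```python
import numpy as np

def normalize_amplitudes(amps, dists, d_ref=10.0, exponent=2.0):
    """Compensate free-space loss and normalize amplitudes to [0, 1].

    The distance exponent and reference distance d_ref are assumptions;
    the antenna gain compensation is omitted in this sketch.
    """
    amps = np.asarray(amps, dtype=float)
    dists = np.asarray(dists, dtype=float)

    # Free-space loss compensation: scale each amplitude to the reference distance.
    comp = amps * (dists / d_ref) ** exponent

    # Robust normalization: largest 10% -> 1, smallest 10% -> 0, linear in between.
    lo, hi = np.percentile(comp, [10, 90])
    return np.clip((comp - lo) / (hi - lo), 0.0, 1.0)

print(normalize_amplitudes([50, 80, 200, 30, 120], [5, 10, 20, 8, 15]))
```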

    

Figure 9: Normalized reflection magnitude
Figure 10: Detection probability (ego vehicle near origin)

Detection probability of a single grid cell. After the reflection amplitudes have been compensated and normalized, the points are assigned to the grid cells. Each grid cell can be occupied by several points with different reflection intensities. The detection probability of a single grid cell could be calculated from the reflection intensity of a single point, from the reflection intensities of all points, or from the number of points. Within a grid cell, some points with high reflection intensity originate from one object, while other points with low reflection intensity are caused by a nearby object through antenna side lobes. The influence of the points with lower reflection intensity should be ignored; otherwise, averaging over all points of a grid cell would yield a detection probability that is too low. Moreover, the number of points per grid cell depends heavily on the grid cell size.

For these reasons, only the 20% of points with the highest reflection intensity in each grid cell are considered in the calculation. Their mean reflection intensity is defined as the detection probability of the grid cell. Figure 10 depicts the detection probabilities of all grid cells for one measurement cycle.
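
A minimal sketch of this per-cell rule, assuming the cell already holds the normalized amplitudes of its assigned points:

```python
import numpy as np

def cell_detection_probability(norm_amps, top_fraction=0.2):
    """Detection probability of one grid cell from its normalized amplitudes.

    Only the strongest 20% of the points (at least one point) are averaged,
    following the rule described above.
    """
    a = np.sort(np.asarray(norm_amps, dtype=float))[::-1]   # descending
    k = max(1, int(np.ceil(top_fraction * len(a))))
    return float(a[:k].mean())

# Example: one strong reflector plus weaker side-lobe points in the same cell.
print(cell_detection_probability([0.95, 0.9, 0.2, 0.15, 0.1, 0.05]))
```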

4.2. Posterior probability

The radar sensor model converts reflection intensity into a detection probability and thus differs from the lidar sensor model, so Equation (2) is modified.

First, the detection probability is rescaled to a value between 0.5 and 1 using Equation (8); otherwise, detection probabilities below 0.5, which also originate from obstacles, would decrease the log odds ratio of the posterior probability.

 (8)

However, since the detection probability is rescaled, the posterior probability would increase every time the data of a new measurement cycle is processed. To solve this problem, a degradation factor k is introduced, and the log odds ratio of the posterior probability is then calculated as

 (9)

As the ego vehicle moves, the grid cells and their occupancy probability values move with it. Each grid cell therefore combines the detection probability from the current radar data with the occupancy probability of the previous cycle. Older radar data should have less influence on the final occupancy probability than new data. With the degradation factor k, the log odds ratio of the previous occupancy probability ℓt−1 decreases over time. In each cycle, the occupancy probability of a grid cell is therefore first reduced by the degradation factor and then increased by the current detection probability.

The log odds ratio ℓt of a grid cell is normalized to a value between 0 and 1, which represents the posterior occupancy probability. The upper and lower limits are determined by a thought experiment: an object is located in a grid cell and is detected with the same detection probability Pth in every cycle. After n measurement cycles, the grid cell is assumed to be 100% occupied. The corresponding log odds ratio is set as the upper bound ℓth,max, which corresponds to a posterior probability of 1. ℓth,max is calculated by the following formula

 (10)

In the following m cycles, no reflection points are detected in this grid cell, and the grid cell is considered free again. The corresponding log odds ratio is defined as the lower bound ℓth,min, which corresponds to a posterior probability of 0. ℓth,min is calculated by the following formula

(11)

Log odds values between the upper and lower bounds are mapped to values between 0 and 1. Figure 11 shows the predicted occupancy probability over the measurement cycles (Pth = 0.9, n = m = 10). The occupancy probability reaches its maximum in the 10th cycle, then decreases and reaches its minimum in the 20th cycle.
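
The exact forms of Equations (8) to (11) are not reproduced here. The following Python sketch therefore assumes a linear rescaling of the detection probability to [0.5, 1] and a multiplicative damping of the previous log odds by the degradation factor k, which reproduces the qualitative behaviour of Figure 11 (rise over the first n cycles, decay over the next m cycles); all parameter values are illustrative.

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def update_cell(l_prev, p_det, k=0.8):
    """One posterior update of a grid cell (sketch, assumed forms of Eqs. (8)/(9))."""
    p_scaled = 0.5 + 0.5 * p_det             # assumed rescaling to [0.5, 1]
    return k * l_prev + log_odds(p_scaled)   # damped previous log odds + new evidence

def occupancy(l, l_min, l_max):
    """Map the log odds linearly to a posterior occupancy in [0, 1]."""
    return min(1.0, max(0.0, (l - l_min) / (l_max - l_min)))

# Reproduce the thought experiment of Figure 11: 10 cycles with P_th = 0.9,
# then 10 cycles without any detection.
l = 0.0
history = []
for t in range(20):
    l = update_cell(l, 0.9 if t < 10 else 0.0)
    history.append(l)
l_max, l_min = history[9], history[19]       # bounds after n = m = 10 cycles
print([round(occupancy(x, l_min, l_max), 2) for x in history])
```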

Figure 11: Predicted Occupancy Probability Variation Curve

 4.3. Results

The posterior probability represents the final occupancy probability in each cycle. In Figure 12, the occupancy grid map measured in the parking lot where several trucks and vans are parked (see Figure 5) is shown. In the occupancy grid map, the outlines of the trucks are recognizable even though they are parked close to each other. The areas where the trucks are located have an occupancy probability of almost 1, and the grid cells between them have an occupancy probability of 0. The occupancy grid map thus correctly represents the static environment.

Figure 12: Occupancy grid map of the parking lot

4.4. Amplitude grid map

The amplitude grid map is another commonly used grid mapping method, in which the maximum reflection amplitude over time in each grid cell is normalized and used as the occupancy probability. Figure 13 shows an example of an amplitude grid map. In contrast to the occupancy grid map, measurement noise is not filtered out and remains visible in the grid map, since only the maximum value is considered and the duration over which a value is measured is ignored. Due to this measurement noise, higher occupancy probabilities are calculated in actually free areas, which interferes with free space detection. Therefore, the methods described in Sections 4.1 and 4.2 are used in the following sections.

Figure 13: Example of an amplitude grid map

5. Occupancy Grid Map Fusion

To extend the field of view (FoV), three high-resolution radar sensors are mounted around the vehicle. After the sensor configuration is introduced, the processing of the data from the three radar sensors in a common occupancy grid map is described. Using the method described above, a stable grid map with a large field of view is obtained.

5.1. Sensor configuration

The azimuth aperture of the radar sensor (indicated by FC radar, see Fig. 2) installed in the center of the front spoiler is about ±50°. This means that most detection points are located in front of the vehicle, but the surroundings on the sides are not well sensed. More radar sensors are needed to extend the sensing area.

Due to the lack of installation space on the left side of the vehicle (windshield washer reservoir and exhaust pipe), the other two radar sensors are installed at the front right corner of the vehicle (denoted FR radar) and at the rear right corner (denoted RR radar), see Figure 14. The installation position and orientation of each sensor are described in the vehicle coordinate system (see Figure 3), for example (xFR, yFR, φFR) for the FR radar.

Figure 14: Three radar sensors are installed around the vehicle (the front center radar is shown in Figure 2)

 5.2. Data Fusion

The three radar sensors detect different reflection points in their fields of view, which partially overlap (see Figure 15). In order to process and store the data of the individual radar sensors simultaneously and synchronously, data fusion between them is required.

Sensor data fusion can be classified into low-level and high-level fusion according to the processing stage at which the fusion is performed. High-level data fusion means that the detections and their information from each radar sensor are first preprocessed separately up to the object level. The objects from the different sensors are then merged and fused. High-level data fusion is time-efficient, but some information is lost or ignored during the preprocessing before fusion. For example, when an object produces only a few reflection points in each sensor's field of view, the points from a single sensor may not be sufficient to form an object during preprocessing. Since these points cannot be associated with an object, their data are not passed on to later processing stages and cannot be used for fusion. In contrast, low-level data fusion directly combines the raw data of all sensors (i.e. the detection points) into fused raw data. In the example above, the object can then be identified from the fused, more informative data.

 Figure 15: FOV of three radars

Occupancy grid maps are a suitable method for low-level fusion of radar data from different sensors. A grid map around the vehicle can be created to which the reflection points of all radar sensors are assigned. In Figure 16, the reflection points detected by the three radar sensors in a single measurement cycle are shown in different colors.

Figure 16: Detection points from three radar sensors (instantaneous, single-shot recording)

Using the known sensor mounting positions and orientations, the radar-specific detection points are first transformed into the vehicle coordinate system by Equation (12) and then into the common grid map coordinate system by Equation (6). Afterwards, they are assigned to the corresponding grid cells.

xr,i = xra,i · cos(φs) − yra,i · sin(φs) + xs
yr,i = xra,i · sin(φs) + yra,i · cos(φs) + ys    (12)

where xra,i and yra,i are the coordinates of the detection point in the respective radar sensor coordinate system and (xs, ys, φs) is the mounting position and orientation of that sensor in the vehicle coordinate system, e.g. (xFR, yFR, φFR) for the FR radar.

The probability of detection is calculated by considering the assigned points with high reflection intensity for each individual grid cell, and the occupancy grid map is built using the method mentioned in Sections 4.1 and 4.2 above.
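
A hedged sketch of this low-level fusion step: the detections of the three sensors are transformed into the vehicle frame and collected in one common grid of detection points. The mounting poses, the cell size, and the example detections below are made-up values, since the paper does not state them numerically.

```python
import math

# Assumed mounting poses (x_s, y_s, phi_s) in the vehicle frame; the real
# values of the FC, FR and RR radars are not given numerically here.
MOUNTING = {
    "FC": (3.6,  0.0,  0.0),
    "FR": (3.5, -0.9, -0.8),
    "RR": (-0.9, -0.9, -2.4),
}

def sensor_to_vehicle(x_ra, y_ra, sensor):
    """Transform a detection from sensor coordinates to vehicle coordinates."""
    x_s, y_s, phi_s = MOUNTING[sensor]
    x_v = x_ra * math.cos(phi_s) - y_ra * math.sin(phi_s) + x_s
    y_v = x_ra * math.sin(phi_s) + y_ra * math.cos(phi_s) + y_s
    return x_v, y_v

# Low-level fusion: all sensors fill the same grid of detection points.
grid = {}
detections = [("FC", 12.0, 1.5, 0.8), ("FR", 4.0, -2.0, 0.6), ("RR", 6.0, 0.5, 0.7)]
for sensor, x_ra, y_ra, amp in detections:
    cell = tuple(int(round(c / 0.2)) for c in sensor_to_vehicle(x_ra, y_ra, sensor))
    grid.setdefault(cell, []).append(amp)   # amplitudes later yield the detection probability
print(grid)
```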

5.3. Results

Figure 17 shows the occupancy grid map of a street with several intersections. Through the data fusion of the three radar sensors, the street outlines are clearly visible in front of and to the right of the vehicle. The occupancy probability of the grid cells in the road area is 0. The measurements show that intersections, road boundaries, and free driving space can be identified well with an occupancy grid map fused from high-resolution radar data.

Figure 17: Occupancy grid map on a street with multiple intersections

 6. Free space detection

Free space cannot be detected everywhere around the vehicle, since some areas lie outside the detection range and aperture of the radar sensors or are hidden behind larger obstacles. For vehicle motion planning, the field of interest (FoI) is the region along possible trajectories. First, the occupancy status of all grid cells is determined to create a binary grid map. Using a clustering method, small occupied areas caused by measurement errors or strong nearby reflectors are identified and re-labeled as free space. Based on a boundary recognition algorithm, the boundaries of the occupied areas are detected, and free space detection along the vehicle trajectory is realized.

6.1. Determination of Occupancy Status

Before detecting free space, it should be determined whether a grid cell is occupied. The simplest approach is to use a constant occupancy probability threshold. The occupancy status of the grid cells is determined so that the occupancy grid map can be converted to a binary grid map (see Figure 18).

Figure 18: Binary grid map with occupancy probability threshold (red: occupied grid cells, white: free grid cells)

However, due to the characteristics of the radar sensor and the OS-CFAR algorithm, the occupancy status of some grid cells does not match reality. Many reflection points of an object are detected and assigned to different grid cells. Some of these points have low reflection amplitudes, so the occupancy probability of their grid cells is close to zero. These grid cells are classified as free space although they actually belong to obstacles. Two methods are proposed to identify grid cells that belong to obstacles but have a low occupancy probability.

1) Grid cells whose occupancy probability is below a threshold are considered. The number of neighboring grid cells whose occupancy probability is much higher than that of the selected grid cell is counted (see grid cells N in the left image of Figure 19). If this number exceeds a threshold, the selected grid cell (grid cell C in Figure 19) is set to occupied. With this method, grid cells with low occupancy probability inside obstacles and in their boundary areas are identified as occupied.

 Figure 19: Adjacent grid cells (C: central grid cell. N: adjacent grid cells).

2) Grid cells whose occupancy probability is zero are treated separately. If two grid cells enclosing the selected cell on opposite sides (see grid cells N in the middle and right images of Figure 19) have a high occupancy probability and are declared occupied, the selected grid cell is also set to occupied. In this way, grid cells with an occupancy probability of zero inside obstacles in particular are detected as occupied.

Using the method described above, the occupancy status of all grid cells can be determined. An example of this result is shown in Figure 20.
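
A compact sketch of the two neighborhood rules, with illustrative thresholds (the paper does not give numeric values); rule 1 is simplified here to counting occupied neighbors of a low-probability cell.

```python
import numpy as np

def refine_binary_map(p_occ, thr=0.7, n_required=4):
    """Post-process a binary grid map with the two neighborhood rules above.

    thr and n_required are illustrative parameters, not values from the paper.
    """
    occ = p_occ >= thr
    rows, cols = p_occ.shape
    refined = occ.copy()
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if occ[r, c]:
                continue
            neigh = occ[r - 1:r + 2, c - 1:c + 2]
            # Rule 1 (simplified): enough occupied neighbors -> occupied.
            if np.count_nonzero(neigh) >= n_required:
                refined[r, c] = True
            # Rule 2: a zero-probability cell enclosed on opposite sides -> occupied.
            elif p_occ[r, c] == 0.0 and (
                (occ[r - 1, c] and occ[r + 1, c]) or (occ[r, c - 1] and occ[r, c + 1])
            ):
                refined[r, c] = True
    return refined

demo = np.array([[0.9, 0.9, 0.9],
                 [0.9, 0.0, 0.9],
                 [0.9, 0.9, 0.9]])
print(refine_binary_map(demo))
```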

 Figure 20: Processed binary grid map

6.2. Clustering Binary Grid Cells

Random measurement noise is already filtered by the occupancy grid mapping. However, some reflection points are caused by strong nearby reflectors or measurement errors. In the binary grid map, such points usually form small occupied areas outside of real obstacles, called outliers. Outliers are filtered using a threshold on the size of the connected occupied area.

In order to calculate the size of the connected occupied area, the binary grid cells need to be grouped first. This article discusses three popular clustering algorithms:

1) K-means [21]. The grid cells are partitioned into a predefined number of clusters, where each grid cell belongs to the cluster with the nearest mean. Since the environment around the vehicle changes constantly, a predefined number of clusters is not practical.

2) Density-based spatial clustering of applications with noise (DBSCAN) [22]. The grid cells are grouped and divided into core, border, and noise grid cells according to the number of neighboring grid cells. Noise grid cells are considered outliers. To filter noise grid cells accurately, a relatively low threshold for the distance between grid cells and a relatively high threshold for the minimum number of grid cells are chosen. However, the computation time is long, since in the worst case it grows quadratically with the number of grid cells.

3) Connected component labeling (CCL) [23,24]. Connected occupied grid cells in the binary grid map are detected and clustered. No parameters need to be predefined, and the computational burden is significantly lower than that of DBSCAN. Therefore, CCL is chosen as the clustering algorithm here.

The number of grid cells in each cluster is counted. Clusters below a size threshold are identified as outliers, and their grid cells are marked as free again. This processing step is important because some outliers lie directly in front of the vehicle and belong to the FoI. Figure 21 shows the clustering result of the CCL algorithm; the grid cells inside the black circles are identified as outliers and defined as free again.
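
As a sketch of this step, connected component labeling and the size-based outlier removal can be expressed with scipy's ndimage.label; the minimum cluster size used below is an assumed value.

```python
import numpy as np
from scipy import ndimage

def remove_outliers(binary_map, min_cells=5):
    """Cluster occupied cells with CCL and free clusters smaller than min_cells.

    scipy's ndimage.label performs the connected component labeling; min_cells
    is an illustrative threshold, not a value from the paper.
    """
    labels, n_clusters = ndimage.label(binary_map)
    sizes = np.bincount(labels.ravel())          # cluster sizes, index 0 = background
    keep = np.zeros_like(sizes, dtype=bool)
    keep[1:] = sizes[1:] >= min_cells            # keep only large clusters
    return keep[labels]                          # boolean map without outliers

demo = np.zeros((6, 8), dtype=bool)
demo[1:4, 1:4] = True      # a real obstacle (9 cells)
demo[4, 6] = True          # a single-cell outlier
print(remove_outliers(demo).astype(int))
```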

Figure 21: Clustering with CCL algorithm (different colors represent different clusters)

6.3. Boundary recognition

For free space detection, mainly the boundaries of the clustered occupied grid cells are relevant. The Moore-neighbor tracing (MNT) algorithm is used to identify the boundaries of the occupied regions [25]. Figure 22 illustrates the algorithm: starting from an occupied grid cell B1, the Moore neighborhood is searched in clockwise direction for the next occupied grid cell B2. The iteration terminates when the initial grid cell is visited a second time.

Figure 22: MNT algorithm (B: boundary grid cells)

All visited grid cells are marked as boundary grid cells, which are then used to detect the free space along the trajectory. Figure 23 shows an example of the boundary recognition result.
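
The following Python sketch traces the outer boundary of one occupied region in the Moore neighborhood, using the simple stop-on-revisit criterion described above; it is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

# Moore neighborhood in clockwise order, starting to the west of the current cell.
NEIGHBORS = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def moore_boundary(occ):
    """Trace the outer boundary of one occupied region (sketch of the MNT idea)."""
    rows, cols = occ.shape
    start = tuple(int(v) for v in np.argwhere(occ)[0])   # B1: first occupied cell
    boundary = [start]
    current, search_from = start, 0
    while True:
        for k in range(8):                    # scan the Moore neighborhood clockwise
            d = (search_from + k) % 8
            r = current[0] + NEIGHBORS[d][0]
            c = current[1] + NEIGHBORS[d][1]
            if 0 <= r < rows and 0 <= c < cols and occ[r, c]:
                current = (r, c)
                search_from = (d + 5) % 8     # continue next to the backtrack cell
                break
        if current == start:                  # stop when B1 is revisited
            break
        boundary.append(current)
    return boundary

demo = np.zeros((5, 5), dtype=bool)
demo[1:4, 1:4] = True
print(moore_boundary(demo))                   # the 8 cells around the center
```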

 Figure 23: Boundary recognition (black: boundary grid cells, gray: occupied grid cells)

6.4. Interval-based free-space models

The free space along the vehicle trajectory is defined by the smallest distance between the possible future positions of the vehicle and the boundary of the occupied areas.

First, the trajectory of the ego vehicle is predicted from the current dynamic data using the CTRA model, and the position and orientation of the vehicle along the trajectory are calculated. Alternatively, any planned maneuver can be used to define the trajectory. The vehicle trajectory serves as the baseline; taking the vehicle orientation at each position into account, a sector-like region around it defines the FoI along the trajectory (see Figure 24).

 Figure 24: FoI and intervals along the trajectory

The FoI is then divided into intervals of a certain length along the trajectory. Each interval is oriented perpendicular to the vehicle orientation at the corresponding position. The length of a single interval is defined as a function of the vehicle speed, since a larger free space is required at higher speeds.

To implement the interval-based free space model, the grid cells containing the predicted vehicle positions within the FoI are selected as baseline grid cells. From each baseline grid cell, the grid cells along the interval, i.e. perpendicular to the vehicle orientation at that position, are visited using Bresenham's line algorithm (see Figure 25).

 Figure 25: Free space detection within an interval (blue: baseline grid cells, green: free space grid cells, black: boundary grid cells, gray: occupied grid cells).

For each baseline grid cell, the occupied grid cell with the smallest distance is searched. This distance defines the width of the free space interval, and the grid cells of the interval that are closer to the baseline grid cell are marked as free space. In the same way, the widths of all intervals are calculated, which yields the free space along the vehicle trajectory.
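
A hedged sketch of the interval search: Bresenham's line algorithm enumerates the grid cells of one interval, and the free width is the number of cells visited before the first boundary cell. Function names, the step direction, and the maximum search length are illustrative assumptions.

```python
def bresenham(r0, c0, r1, c1):
    """Grid cells on the line from (r0, c0) to (r1, c1) (Bresenham's algorithm)."""
    cells = []
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr, sc = (1 if r1 > r0 else -1), (1 if c1 > c0 else -1)
    err = dr - dc
    r, c = r0, c0
    while True:
        cells.append((r, c))
        if (r, c) == (r1, c1):
            break
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
    return cells

def interval_width(baseline_cell, direction, occupied, max_cells=20):
    """Free width of one interval: cells visited before the first boundary cell.

    direction is a (dr, dc) unit step perpendicular to the vehicle orientation;
    occupied is a set of boundary grid cells. Names are illustrative only.
    """
    r1 = baseline_cell[0] + direction[0] * max_cells
    c1 = baseline_cell[1] + direction[1] * max_cells
    for i, cell in enumerate(bresenham(*baseline_cell, r1, c1)):
        if cell in occupied:
            return i          # number of free cells before the boundary
    return max_cells

boundary_cells = {(5, 8)}
print(interval_width((5, 0), (0, 1), boundary_cells))   # -> 8 free cells
```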

6.5. Results

Figure 26 shows an example of free space detection in the parking lot scenario. There is more free space to the front left of the vehicle than to the front right, which means that an evasive trajectory to the left is more feasible than one to the right. In addition, the parking spaces between the trucks are detected as free space, which can support the generation of parking maneuvers.

Figure 26: Example of free space detection

Figures 27 and 28 show another example on a public road. Several warning posts on the left side of the road are each detected as obstacles in the map, and the gaps between them are recognized as free space.


Figure 27: Measured image on a public road
Figure 28: Free space detection on a public road

7. Conclusion and Outlook

To model the static environment around the vehicle with a high-resolution radar sensor, a method based on the occupancy grid map concept is proposed, from which the free driving space is then detected.

The radar sensor detects the positions and reflection amplitudes of the target points, which serve as input data for the occupancy grid map. First, the reflection amplitudes are compensated for free-space loss and antenna gain and then normalized. Second, the detection points are assigned to the corresponding grid cells according to their positions. Third, the detection probability of a single grid cell is calculated as a function of the reflection intensities of the assigned points. Fourth, as the ego vehicle moves, the grid cell values are degraded and the new measurement data are incorporated to calculate the posterior occupancy probability. In this way, the occupancy grid map is created and updated. The data of the three radar sensors are fused within the same occupancy grid map.

Then, the occupancy grid map is converted into a binary grid map. In addition, grid cells in obstacle regions are marked as occupied according to the states of their neighboring grid cells. To remove outliers, the connected occupied grid cells are clustered with the CCL algorithm. With the MNT algorithm, the boundaries of the clustered, occupied grid cells are identified. Finally, the interval-based free driving space is detected using Bresenham's line algorithm. As demonstrated in the paper, the determined free space and the detected roadside obstacles match the real driving scenarios very well.

In future work, it is planned to also incorporate height information from the radar detections into the occupancy grid map. This requires further development of the high-resolution radar sensor so that both azimuth and elevation angles can be estimated. Further applications of occupancy grid maps, such as vehicle localization and SLAM, can also be developed.
