Comparison of MultiTask V3, HybridNets and YOLOP, an autonomous driving multi-tasking framework


  • Object detection and segmentation are two core modules of an autonomous vehicle's perception system. They must deliver high efficiency and low latency while keeping computational complexity low. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms.

  • In the scenario of self-driving cars, embedded platforms with limited computing power are mostly used, which makes it difficult to meet the above requirements. However, network complexity can be reduced through an appropriate choice of architecture, numerical representation (reduced precision, quantization, pruning), and computing platform.
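To make the "representation" lever concrete, the sketch below simulates symmetric linear quantization of a weight tensor, i.e. mapping float weights to low-bit integers and back. This is a generic illustration of the accuracy/precision trade-off, not code from the paper, and the function name is my own:

```python
import numpy as np

def quantize_dequantize(weights: np.ndarray, num_bits: int = 8):
    """Simulate symmetric linear quantization of a weight tensor.

    Maps float weights to signed integers with `num_bits` of precision,
    then reconstructs floats, mimicking post-training quantization.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for int8
    scale = np.abs(weights).max() / qmax           # one scale per tensor
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return codes, codes.astype(np.float32) * scale # integer codes, reconstruction

weights = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
codes, restored = quantize_dequantize(weights)     # small reconstruction error
```

In practice, frameworks such as PyTorch or TensorRT perform this per-layer (often per-channel) and also quantize activations, which is where the actual speed-up on embedded hardware comes from.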

  • In this paper, the authors focus on the first factor: the use of so-called detection-segmentation networks as an integral part of the perception system. They consider the segmentation of drivable areas and lane markings, combined with the detection of selected objects (pedestrians, traffic lights, and obstacles), and compare the performance of three architectures described in the literature: MultiTask V3, HybridNets, and YOLOP.
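The common idea behind all three architectures is a single shared backbone whose features feed several task-specific heads, so one forward pass serves detection and both segmentation tasks. A minimal NumPy shape-level sketch (the `conv_stub` projection is a hypothetical stand-in for real conv blocks, not any of the compared networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(x: np.ndarray, out_channels: int) -> np.ndarray:
    """Stand-in for a conv block: random 1x1 projection + ReLU (illustrative)."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    y = w @ x.reshape(x.shape[0], -1)
    return np.maximum(y, 0.0).reshape(out_channels, *x.shape[1:])

# One input image (C, H, W) passes through the shared backbone once
image = rng.standard_normal((3, 32, 32))
features = conv_stub(image, 16)

# Task-specific heads reuse the same features -> cheaper than three networks
det_head      = conv_stub(features, 5)   # e.g. box params + objectness per cell
drivable_head = conv_stub(features, 2)   # drivable / background logits
lane_head     = conv_stub(features, 2)   # lane marking / background logits
```

The channel counts here are arbitrary; the point is that the expensive backbone computation is shared, which is what makes multi-task networks attractive on embedded platforms.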

  • The authors conducted experiments on a custom dataset consisting of about 500 images annotated for drivable area and lane markings and 250 images annotated for object detection. Of the three methods analyzed, MultiTask V3 proved the best, achieving 99% mAP50 for detection, 97% mIoU for drivable area segmentation, and 91% mIoU for lane marking segmentation, while running at 124 fps on an RTX 3060.
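For reference, the mIoU metric quoted above averages, over classes, the overlap between predicted and ground-truth masks. A minimal sketch of the computation on integer label maps (my own helper, not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection-over-Union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
score = mean_iou(pred, target, num_classes=2)  # 0.775: IoU 0.75 for class 0, 0.8 for class 1
```

mAP50 for the detection task is computed differently (precision-recall over boxes at IoU threshold 0.5), which is why the two tasks are reported with different metrics.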

Code: https://github.com/vision-agh/MMAR_2023


Origin blog.csdn.net/weixin_38346042/article/details/131797479