Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/31057
Title: | Research on a Lightweight Panoramic Perception Algorithm for Electric Autonomous Mini-Buses |
Authors: | Liu, Y; Li, G; Hao, L; Yang, Q; Zhang, D |
Keywords: | deep learning;model lightweighting;attention mechanism;depth-wise separable convolution;YOLOP;electric autonomous mini-bus |
Issue Date: | 8-Jul-2023 |
Publisher: | MDPI |
Citation: | Liu, Y. et al. (2023) 'Research on a Lightweight Panoramic Perception Algorithm for Electric Autonomous Mini-Buses', World Electric Vehicle Journal, 14 (7), 179, pp. 1 - 14. doi: 10.3390/wevj14070179. |
Abstract: | Autonomous mini-buses are low-cost passenger vehicles that travel along designated routes in industrial parks. To operate autonomously, they must implement functionalities such as lane-keeping and obstacle avoidance. Deploying deep learning algorithms that detect environmental information on low-performance computing units is difficult and often fails to meet real-time requirements. To address this challenge, a lightweight algorithm called YOLOP-E, based on the YOLOP algorithm, is proposed. (The letter ‘E’ stands for EfficientNetV2; YOLOP-E denotes the algorithm obtained by replacing the backbone of the original model with EfficientNetV2 and optimizing the whole network.) The algorithm is optimized and improved in three respects. Firstly, the YOLOP backbone network is reconstructed using the lightweight EfficientNetV2 backbone, and depth-wise separable convolutions replace regular convolutions. Secondly, the hybrid attention mechanism CBAM is employed to enhance the model’s feature-representation capability. Finally, the Focal-EIoU and smoothed cross-entropy loss functions are used to improve detection accuracy. YOLOP-E is the result of these optimizations. Experimental results on the BDD100K dataset show that the optimized algorithm achieves a 3.5% increase in mAP50 and a 4.1% increase in mIoU. In real-world vehicle testing, the detection rate reaches 41.6 FPS, meeting the visual-perception requirements of the autonomous shuttle bus while maintaining a lightweight design and improving detection accuracy. |
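The lightweighting described in the abstract rests on swapping regular convolutions for depth-wise separable ones. As a rough illustration (not the paper's code; the layer sizes below are hypothetical), the parameter savings can be computed directly:

```python
# Parameter-count comparison: standard convolution vs. depth-wise separable
# convolution (bias terms ignored for simplicity).

def regular_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution over c_in -> c_out channels."""
    return c_in * c_out * k * k

def dw_separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise k x k convolution (one spatial filter per input channel)
    followed by a 1 x 1 point-wise convolution mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 128 input channels, 256 output channels, 3x3 kernel.
c_in, c_out, k = 128, 256, 3
reg = regular_conv_params(c_in, c_out, k)       # 128 * 256 * 9 = 294,912
sep = dw_separable_conv_params(c_in, c_out, k)  # 128 * 9 + 128 * 256 = 33,920
print(f"regular: {reg}, separable: {sep}, ratio: {reg / sep:.1f}x")
```

For a typical 3×3 layer the separable form needs roughly an order of magnitude fewer weights, which is the mechanism behind the model-size reduction the abstract claims.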
Description: | Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request. |
URI: | https://bura.brunel.ac.uk/handle/2438/31057 |
DOI: | https://doi.org/10.3390/wevj14070179 |
Other Identifiers: | ORCiD: Gang Li https://orcid.org/0000-0003-4501-7431; ORCiD: Dong Zheng https://orcid.org/0000-0002-4974-4671; Article number: 179 |
Appears in Collections: | Dept of Mechanical and Aerospace Engineering Research Papers |
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| FullText.pdf | Copyright © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | 7.65 MB | Adobe PDF |
This item is licensed under a Creative Commons License