Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/31714
Title: YOLOv8-LiDAR Fusion: Increasing Range Resolution Based on Image Guided-Sparse Depth Fusion in Self-Driving Vehicles
Authors: Yildiz, AS; Meng, H; Swash, MR
Keywords: LiDAR-camera fusion; YOLOv8; perception enhancement; sparse point cloud; object detection; self-driving vehicles
Issue Date: 30-Dec-2024
Publisher: Springer
Citation: Yildiz, A.S., Meng, H. and Swash, M.R. (2025) 'YOLOv8-LiDAR Fusion: Increasing Range Resolution Based on Image Guided-Sparse Depth Fusion in Self-Driving Vehicles', in M.N. Huda, M. Wang and T. Kalganova (eds.) Towards Autonomous Robotic Systems. TAROS 2024. (Lecture Notes in Computer Science, vol. 15051 LNAI). Cham: Springer, pp. 383-396. doi: 10.1007/978-3-031-72059-8_32.
Series/Report no.: Lecture Notes in Computer Science; vol. 15051 LNAI
Abstract: Self-driving vehicles are significant in industrial and commercial applications, driven primarily by advances in environmental awareness systems. The need for real-time object recognition, segmentation, perception, projection, and positioning has grown significantly in object and lane tracking, obstacle avoidance, and route planning. The primary sensors used are high-resolution cameras, Light Detection and Ranging (LiDAR), and high-precision GPS/IMU inertial navigation systems. Among these sensors, LiDARs and cameras play a vital role in perception and scene understanding. LiDAR provides precise depth information, but its resolution is constrained; cameras, in contrast, provide rich semantic information but cannot precisely measure the distance to objects. This work incorporates YOLOv8, a state-of-the-art object detection method, into the fusion process. We specifically investigate camera-LiDAR projection and explain in detail how LiDAR point clouds are projected onto the image coordinate frame, using transformation matrices that establish the relationship between the LiDAR and the camera. This project aims to improve the range resolution and perception capabilities of autonomous driving systems by combining YOLOv8-based object detection with LiDAR point cloud data, evaluated on the KITTI object detection benchmark.
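For illustration only: the camera-LiDAR projection described in the abstract follows the standard KITTI calibration chain, in which a homogeneous LiDAR point x maps to pixel coordinates y via y = P2 · R0_rect · Tr_velo_to_cam · x. The sketch below is not code from the paper; it assumes NumPy, assumes R0_rect and Tr_velo_to_cam have already been padded to 4x4 homogeneous form, and the function name project_lidar_to_image is purely illustrative.

```python
import numpy as np

def project_lidar_to_image(points_velo, P2, R0_rect, Tr_velo_to_cam):
    """Project LiDAR points (N, 3) into the camera image plane.

    Standard KITTI calibration chain:
        y = P2 @ R0_rect @ Tr_velo_to_cam @ x
    where P2 is the 3x4 camera projection matrix, and R0_rect and
    Tr_velo_to_cam are assumed pre-padded to 4x4 homogeneous form.
    """
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])  # (N, 4) homogeneous

    # LiDAR frame -> rectified camera frame
    pts_cam = R0_rect @ Tr_velo_to_cam @ pts_h.T       # (4, N)

    # Discard points behind the camera (non-positive depth)
    in_front = pts_cam[2, :] > 0.0
    pts_cam = pts_cam[:, in_front]

    # Rectified camera frame -> pixel coordinates
    pts_img = P2 @ pts_cam                             # (3, N)
    uv = pts_img[:2, :] / pts_img[2, :]                # perspective divide

    return uv.T, pts_cam[2, :]                         # (N, 2) pixels, (N,) depths
```

In a fusion pipeline of the kind the abstract outlines, the depths returned for projected points that fall inside each YOLOv8 bounding box would then yield a range estimate for the corresponding detected object.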
Description: Conference paper presented at the 25th TAROS (Towards Autonomous Robotic Systems) Conference 2024, Brunel University London, Uxbridge, UK, 21-23 August 2024.
URI: https://bura.brunel.ac.uk/handle/2438/31714
DOI: https://doi.org/10.1007/978-3-031-72059-8_32
ISBN: 978-3-031-72058-1 (pbk); 978-3-031-72059-8 (ebk)
Other Identifiers: ORCiD: Ahmet Serhat Yildiz https://orcid.org/0000-0002-2957-7394; ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382; ORCiD: Mohammad Rafiq Swash https://orcid.org/0000-0003-4242-7478; Chapter 32
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers
Files in This Item:
File | Description | Size | Format
---|---|---|---
FullText.pdf | Embargoed until 30 December 2025. Copyright © 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG. This is a pre-copyedited, author-produced version of a book chapter accepted for publication in Towards Autonomous Robotic Systems. TAROS 2024. (Lecture Notes in Computer Science, vol. 15051), following peer review. The final authenticated version is available online at https://doi.org/10.1007/978-3-031-72059-8_32 (see: https://www.springernature.com/gp/open-research/policies/book-policies). | 10.81 MB | Adobe PDF
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.