Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31714
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yildiz, AS | -
dc.contributor.author | Meng, H | -
dc.contributor.author | Swash, MR | -
dc.contributor.editor | Huda, MN | -
dc.contributor.editor | Wang, M | -
dc.contributor.editor | Kalganova, T | -
dc.coverage.spatial | Uxbridge, UK | -
dc.date.accessioned | 2025-08-07T07:24:32Z | -
dc.date.available | 2025-08-07T07:24:32Z | -
dc.date.issued | 2024-12-30 | -
dc.identifier | ORCiD: Ahmet Serhat Yildiz https://orcid.org/0000-0002-2957-7394 | -
dc.identifier | ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382 | -
dc.identifier | ORCiD: Mohammad Rafiq Swash https://orcid.org/0000-0003-4242-7478 | -
dc.identifier | Chapter 32 | -
dc.identifier.citation | Yildiz, A.S., Meng, H. and Swash, M.R. (2025) 'YOLOv8-LiDAR Fusion: Increasing Range Resolution Based on Image Guided-Sparse Depth Fusion in Self-Driving Vehicles', in M.N. Huda, M. Wang and T. Kalganova (eds.) Towards Autonomous Robotic Systems. TAROS 2024 (Lecture Notes in Computer Science, vol. 15051 LNAI). Cham: Springer, pp. 383-396. doi: 10.1007/978-3-031-72059-8_32. | en_US
dc.identifier.isbn | 978-3-031-72058-1 (pbk) | -
dc.identifier.isbn | 978-3-031-72059-8 (ebk) | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31714 | -
dc.description | Conference paper presented at the 25th TAROS (Towards Autonomous Robotic Systems) Conference 2024, Brunel University London, Uxbridge, UK, 21-23 August 2024. | -
dc.description.abstract | Self-driving vehicles are significant in industrial and commercial applications, driven primarily by the development of environmental awareness systems. The need for real-time object recognition, segmentation, perception, projection, and positioning has increased significantly in object and lane tracking, obstacle avoidance, and route planning. The primary sensors used are high-resolution cameras, Light Detection and Ranging (LiDAR), and high-precision GPS/IMU inertial navigation systems. Among these sensors, LiDAR and cameras play a vital role in perception and comprehensive situational awareness. Although LiDAR is capable of providing precise depth information, its resolution is constrained. Cameras, on the other hand, provide abundant semantic information but do not offer precise measurements of the distance to objects. This work presents the incorporation of YOLOv8, an advanced object detection method, into the fusion process. We specifically investigate the notion of Camera-LiDAR projection and provide a thorough explanation of the process of projecting LiDAR point clouds onto the image coordinate frame, achieved by utilizing transformation matrices that establish the relationship between the LiDAR and the camera. This project aims to improve the range resolution and perception capabilities of autonomous driving systems by combining YOLOv8-based object recognition with LiDAR point cloud data, using the KITTI object detection benchmark. | en_US
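For readers unfamiliar with the projection step described in the abstract, the sketch below illustrates how LiDAR points are conventionally projected onto the image plane using the KITTI calibration matrices (Tr_velo_to_cam, R0_rect and P2, as named in the KITTI devkit). It is a minimal Python example under those assumptions, not the authors' implementation; the function name and array conventions are illustrative only.

    import numpy as np

    def project_lidar_to_image(points_velo, Tr_velo_to_cam, R0_rect, P2):
        """Project LiDAR points onto the image plane via KITTI calibration.

        points_velo    : (N, 3) x, y, z coordinates in the Velodyne frame
        Tr_velo_to_cam : (3, 4) rigid transform, LiDAR frame -> camera frame
        R0_rect        : (3, 3) rectifying rotation of the reference camera
        P2             : (3, 4) projection matrix of the left colour camera
        Returns (M, 2) pixel coordinates and (M,) depths for the points
        that lie in front of the camera.
        """
        # Homogeneous LiDAR coordinates: (N, 4)
        pts_h = np.hstack([points_velo, np.ones((points_velo.shape[0], 1))])

        # LiDAR frame -> rectified camera frame: (3, N)
        pts_cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)

        # Discard points behind the camera (non-positive depth)
        in_front = pts_cam[2, :] > 0
        pts_cam = pts_cam[:, in_front]

        # Rectified camera frame -> pixel coordinates (perspective divide)
        uvw = P2 @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
        uv = (uvw[:2, :] / uvw[2, :]).T   # (M, 2) pixel coordinates

        return uv, pts_cam[2, :]          # depths in metres

The resulting per-pixel depths can then be associated with YOLOv8 detection boxes, which is the general idea behind the image-guided sparse depth fusion the chapter describes.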
dc.description.sponsorship | This work is supported in part by the Horizon Europe COVER project under grant number 101086228, via UKRI grant EP/Y028031/1, and Ahmet Serhat Yildiz’s PhD is sponsored by the Ministry of National Education of Türkiye. | en_US
dc.format.extent | 383-396 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Springer | en_US
dc.relation.ispartofseries | Lecture Notes in Computer Science; vol. 15051 LNAI | -
dc.rights | Copyright © 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG. This is a pre-copyedited, author-produced version of a book chapter accepted for publication in Towards Autonomous Robotic Systems. TAROS 2024 (Lecture Notes in Computer Science, vol. 15051), following peer review. The final authenticated version is available online at https://doi.org/10.1007/978-3-031-72059-8_32 (see: https://www.springernature.com/gp/open-research/policies/book-policies). | -
dc.rights.uri | https://www.springernature.com/gp/open-research/policies/book-policies | -
dc.source | The 25th TAROS (Towards Autonomous Robotic Systems) Conference 2024 | -
dc.subject | LiDAR-camera fusion | en_US
dc.subject | YOLOv8 | en_US
dc.subject | perception enhancement | en_US
dc.subject | sparse point cloud | en_US
dc.subject | object detection | en_US
dc.subject | self-driving vehicles | en_US
dc.title | YOLOv8-LiDAR Fusion: Increasing Range Resolution Based on Image Guided-Sparse Depth Fusion in Self-Driving Vehicles | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | https://doi.org/10.1007/978-3-031-72059-8_32 | -
dc.relation.isPartOf | Lecture Notes in Computer Science | -
pubs.finish-date | 2024-08-23 | -
pubs.place-of-publication | Cham | -
pubs.publication-status | Published | -
pubs.start-date | 2024-08-21 | -
pubs.volume | 15051 LNAI | -
dc.rights.holder | The Author(s), under exclusive license to Springer Nature Switzerland AG | -
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 30 December 2025; copyright and licensing as stated in dc.rights above. | 10.81 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.