Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32162
Full metadata record
dc.contributor.advisor: Huda, N
dc.contributor.advisor: Khan, A
dc.contributor.author: Galvao, Luiz Goldman
dc.date.accessioned: 2025-10-16T16:23:17Z
dc.date.available: 2025-10-16T16:23:17Z
dc.date.issued: 2025
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/32162
dc.description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London (en_US)
dc.description.abstract: Intelligent vehicles (IVs) have the potential to revolutionise transport by enhancing safety, reducing congestion, and improving efficiency. A critical component of IVs is perception: the ability to accurately interpret the environment for informed driving decisions. While multi-sensor systems are widely used, camera-only solutions offer a cost-effective alternative but face challenges in recognising the intentions of diverse traffic agents in complex urban environments. This thesis addresses limitations in current mono-camera approaches, which often overlook explicit behavioural cues and lack integrated pipelines for discrete intention behaviour recognition.

A literature review identified key gaps: the absence of methods for predicting discrete intention behaviours across various traffic agents, a shortage of public datasets capturing complex urban scenes for intention prediction, and the underutilisation of explicit cues such as vehicle light signals and object orientation. In addition, existing studies rarely consider integrated pipelines that combine detection, tracking, and behaviour recognition while accounting for error propagation across stages.

To address these gaps, a monocular traffic hazard dataset was developed, capturing diverse traffic agents and the explicit behavioural cues relevant to hazard recognition. Deep learning models, including Vision Transformers and convolutional networks, were designed to leverage these features, demonstrating improved accuracy in recognising complex traffic behaviours from single images. Experiments exploring different input features, observation horizons, and class granularities revealed that combining explicit and implicit cues enhances recognition performance. A complete hazard recognition pipeline was implemented, integrating detection, tracking, and behaviour recognition to assess system-level performance. Results highlighted the challenge of error propagation across modules while demonstrating the feasibility of end-to-end monocular pipelines for complex traffic behaviour recognition.

The key contributions of this thesis are the development of a targeted monocular dataset for behaviour recognition, the creation of models that exploit underexplored visual cues, and the integration of these models into a unified hazard recognition pipeline for camera-only IV systems. This work demonstrates the potential of monocular approaches to traffic behaviour recognition and provides a foundation for cost-effective, scalable intelligent vehicle solutions. Future research should focus on expanding dataset diversity, improving model robustness, and incorporating multi-agent interactions to enhance real-world applicability. (en_US)
dc.description.sponsorship: UK Research and Innovation and the EPSRC DTP Scholarship (en_US)
dc.publisher: Brunel University London (en_US)
dc.relation.uri: https://bura.brunel.ac.uk/handle/2438/32162/1/FulltextThesis.pdf
dc.subject: traffic road user behaviour recognition (en_US)
dc.subject: potential road hazard recognition (en_US)
dc.subject: full pipeline for potential road hazard recognition (en_US)
dc.subject: overlook features (en_US)
dc.subject: lane change recognition for vehicles (en_US)
dc.title: Deep learning algorithms for complex traffic behaviour recognition in mono-camera intelligent vehicles (en_US)
dc.type: Thesis (en_US)
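The abstract above describes an integrated architecture that chains detection, tracking, and behaviour recognition in a mono-camera system. As a rough illustration only, and not code from the thesis, the sketch below shows how such a pipeline could be wired together; the Detection and Track types, the greedy IoU tracker, and the toy lateral-drift behaviour head are all hypothetical stand-ins for the learned components the abstract describes.

```python
# Illustrative sketch of a mono-camera detect -> track -> behaviour-recognise
# pipeline. Every component here is a hypothetical stand-in, not the thesis
# implementation: a real system would use learned detectors and classifiers.
from dataclasses import dataclass, field

@dataclass
class Detection:
    box: tuple   # (x1, y1, x2, y2) in pixels
    label: str   # e.g. "car", "pedestrian"

@dataclass
class Track:
    track_id: int
    history: list = field(default_factory=list)  # per-frame Detections

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, next_id, iou_thresh=0.3):
    """Greedy IoU association: extend the best-matching track or start a new one."""
    for det in detections:
        best = max(tracks, key=lambda t: iou(t.history[-1].box, det.box), default=None)
        if best and iou(best.history[-1].box, det.box) >= iou_thresh:
            best.history.append(det)
        else:
            tracks.append(Track(next_id, [det]))
            next_id += 1
    return tracks, next_id

def classify_behaviour(track):
    """Toy behaviour head: lateral drift over the observation horizon stands in
    for a learned classifier using explicit cues (light signals, orientation)."""
    if len(track.history) < 2:
        return "unknown"
    dx = track.history[-1].box[0] - track.history[0].box[0]
    return "lane_change" if abs(dx) > 20 else "keep_lane"

# Fake per-frame detections; a real pipeline would call a detector per frame.
frames = [
    [Detection((100, 200, 160, 260), "car")],
    [Detection((115, 200, 175, 260), "car")],
    [Detection((130, 200, 190, 260), "car")],
]
tracks, next_id = [], 0
for dets in frames:
    tracks, next_id = update_tracks(tracks, dets, next_id)
for t in tracks:
    print(f"track {t.track_id} ({t.history[-1].label}): {classify_behaviour(t)}")
```

Chaining the stages this way also makes the error-propagation problem the abstract mentions concrete: a missed detection or identity switch in the tracker corrupts the observation history the behaviour head relies on.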
Appears in Collections:
Electronic and Electrical Engineering
Dept of Electronic and Electrical Engineering Theses

Files in This Item:
FulltextThesis.pdf (Adobe PDF, 28.2 MB)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.