Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31718
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Savin, D | -
dc.contributor.advisor | Maischak, M | -
dc.contributor.author | Grublys, Mykolas | -
dc.date.accessioned | 2025-08-08T12:39:39Z | -
dc.date.available | 2025-08-08T12:39:39Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/31718 | -
dc.description | This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London | en_US
dc.description.abstract | Deep learning has become a popular way to accurately recognise, classify, generate and forecast data in various complex scenarios. It is commonly based on artificial neural networks, a type of machine learning architecture inspired by the human brain, consisting of multiple layers of neurons with non-linear activations. Their inner workings can be considered a ‘black box’ because it is difficult, if not impossible, to truly understand “how” or “why” such networks work. Furthermore, their training relies on gradient descent methods, which are inherently difficult and computationally expensive. Reservoir Computing has recently emerged as a new paradigm aimed at alleviating such difficulties. In this thesis, we study a particular realisation of it, known as Echo State Networks (ESNs), which show promise in many tasks, particularly in forecasting the dynamics of chaotic systems. The thesis provides a detailed discussion of the ESN architecture, hyperparameters, and implementation. We introduce a new performance metric that rewards networks maintaining small errors for the longest duration and use it in a grid search to optimise ESN hyperparameters. We study the statistical properties of the correlation matrix formed during training, an approach not previously applied to ESNs or other types of recurrent networks. Our extensive numerical studies reveal universal results, consistent across tests and different chaotic training signals. Although analytical studies are limited, we introduce a simple ‘toy model’ to qualitatively describe some of the observed properties. We conclude with a study of the statistics of the trained output weights, which also exhibit universal characteristics. These universalities provide deeper insights into the inner workings and behaviour of ESNs, enhancing our understanding of how information spreads throughout the network. | en_US
dc.description.sponsorship | EPSRC DPT PhD studentship scheme | en_US
dc.publisher | Brunel University London | en_US
dc.subject | Reservoir Computing | en_US
dc.subject | Deep Learning | en_US
dc.subject | Random Matrix Theory | en_US
dc.subject | Multifractality | en_US
dc.subject | Local Statistics | en_US
dc.title | Echo state networks in forecasting chaotic dynamics and emergent universalities | en_US
dc.type | Thesis | en_US
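The abstract describes training Echo State Networks to forecast dynamical signals: a fixed random reservoir is driven by the input, and only a linear readout is trained. As an illustration of that mechanism only, here is a minimal sketch; it is not the thesis's implementation, the hyperparameter names and values (`n_reservoir`, `spectral_radius`, `ridge`) are assumptions, the readout is a standard ridge regression, and a simple sine wave stands in for the chaotic training signals (e.g. Lorenz-type systems) studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed hyperparameters for this sketch.
n_in, n_reservoir = 1, 200
spectral_radius, ridge = 0.9, 1e-6

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collecting the states x_t."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy signal: predict the value one step ahead (chaotic signals in the thesis).
t = np.linspace(0, 20 * np.pi, 2000)
dt = t[1] - t[0]
u = np.sin(t[:-1])        # input sequence
Y = np.sin(t[:-1] + dt)   # one-step-ahead targets

X = run_reservoir(u)      # collected reservoir states, shape (T, n_reservoir)

# Ridge-regularised readout: solve (X^T X + ridge * I) W_out = X^T Y.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)

pred = X @ W_out
mse = float(np.mean((pred[100:] - Y[100:]) ** 2))  # skip an initial washout
print(mse)
```

The point of the paradigm, as the abstract notes, is that this replaces gradient descent through the recurrent weights with a single linear solve for `W_out`; the correlation matrix `X.T @ X` appearing in that solve is the object whose statistics the thesis studies.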
Appears in Collections:Dept of Mathematics Theses
Mathematical Sciences

Files in This Item:
File | Description | Size | Format
FulltextThesis.pdf |  | 17.44 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.