Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29823
Full metadata record
dc.contributor.author: Roy, G
dc.contributor.author: Chakrabarty, D
dc.date.accessioned: 2024-09-25T13:57:09Z
dc.date.available: 2024-09-25T13:57:09Z
dc.date.issued: 2024-04-18
dc.identifier: ORCiD: Dalia Chakrabarty https://orcid.org/0000-0003-1246-4235
dc.identifier.citation: Roy, G. and Chakrabarty, D. (2024) 'A New Reliable & Parsimonious Learning Strategy Comprising Two Layers of Gaussian Processes, to Address Inhomogeneous Empirical Correlation Structures', arXiv:2404.12478v1 [stat.ML], pp. 1 - 28. doi: 10.48550/arXiv.2404.12478.
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/29823
dc.description: MSC classes: Probability theory and stochastic processes: 60-XX; Stochastic processes: 60Gxx; Gaussian processes: 60G15; Generalised stochastic processes: 60G20
dc.description: This is a preprint version of the article. It has not been certified by peer review.
dc.description.abstract: We present a new strategy for learning the functional relation between a pair of variables, while addressing inhomogeneities in the correlation structure of the available data, by modelling the sought function as a sample function of a non-stationary Gaussian Process (GP) that nests within itself multiple other GPs, each of which we prove can be stationary, thereby establishing the sufficiency of two GP layers. Specifically, a non-stationary kernel is envisaged, with each hyperparameter dependent on the sample function drawn from the outer non-stationary GP, such that a new sample function is drawn at every pair of input values at which the kernel is computed. Such a model cannot be implemented directly, so we substitute it by recalling that the average effect of drawing different sample functions from a given GP is equivalent to that of drawing a sample function from each of a set of GPs that are rendered distinct, as these are updated during the equilibrium stage of the undertaken MCMC-based inference. The kernel is fully non-parametric, and it suffices to learn one hyperparameter per GP layer, for each dimension of the input variable. We illustrate this new learning strategy on a real dataset. (A minimal illustrative sketch of this nested construction is given below, after the metadata record.)
dc.description.sponsorship: GR is funded by an EPSRC DTP studentship.
dc.format.extent: 1 - 28
dc.format.medium: Electronic
dc.language: English
dc.language.iso: en_US
dc.publisher: Cornell University
dc.rights: https://creativecommons.org/licenses/by/4.0/
dc.rights: Attribution 4.0 International
dc.subject: nested Gaussian processes
dc.subject: covariance kernel parametrisation
dc.subject: non-parametric non-stationary kernel
dc.subject: Markov chain Monte Carlo
dc.subject: probabilistic machine learning
dc.title: A New Reliable & Parsimonious Learning Strategy Comprising Two Layers of Gaussian Processes, to Address Inhomogeneous Empirical Correlation Structures
dc.type: Article
dc.identifier.doi: https://doi.org/10.48550/arXiv.2404.12478
dc.relation.isPartOf: arXiv
dc.identifier.eissn: 2331-8422
dc.rights.license: https://creativecommons.org/licenses/by/4.0/legalcode.en
dc.rights.holder: The Author(s)
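
Illustrative sketch of the nested construction described in the abstract. This is a minimal example, not the authors' implementation: it assumes a squared-exponential kernel for the outer stationary GP and a Gibbs-type non-stationary kernel for the inner GP, and it omits the MCMC-based inference described in the preprint. All function and variable names are illustrative.

import numpy as np

def sq_exp_kernel(x1, x2, length_scale=2.0, variance=1.0):
    # Stationary squared-exponential kernel for the outer GP.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gibbs_kernel(x1, x2, ell1, ell2, variance=1.0):
    # Non-stationary kernel whose length-scale ell(x) varies with the input.
    l1, l2 = ell1[:, None], ell2[None, :]
    s = l1 ** 2 + l2 ** 2
    d = x1[:, None] - x2[None, :]
    return variance * np.sqrt(2.0 * l1 * l2 / s) * np.exp(-(d ** 2) / s)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
jitter = 1e-8 * np.eye(x.size)

# Outer (stationary) GP: draw one sample function over the inputs.
K_outer = sq_exp_kernel(x, x) + jitter
outer_sample = rng.multivariate_normal(np.zeros(x.size), K_outer)

# Map the outer sample function to strictly positive, input-dependent
# length-scales; these act as the hyperparameter of the inner kernel.
ell = np.exp(outer_sample)

# Inner (non-stationary) GP: its kernel is parametrised by the outer
# sample function, so the correlation structure varies across inputs.
K_inner = gibbs_kernel(x, x, ell, ell) + jitter
inner_sample = rng.multivariate_normal(np.zeros(x.size), K_inner)

print(inner_sample[:5])

Drawing a fresh outer sample function whenever the inner kernel is evaluated, and averaging over such draws, is the step the abstract says cannot be implemented directly and is replaced by a set of distinct GPs updated during the equilibrium stage of the MCMC; that inferential machinery is not reproduced in this sketch.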
Appears in Collections: Dept of Mathematics Research Papers

Files in This Item:
File: Preprint.pdf
Description: Copyright © 2024 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Size: 3.53 MB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.