Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/20105
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, K | - |
dc.contributor.author | Chakrabarty, D | - |
dc.date.accessioned | 2020-01-24T14:27:24Z | - |
dc.date.available | 2020-01-24T14:27:24Z | - |
dc.date.issued | 2018-04-16 | - |
dc.identifier | https://arxiv.org/abs/1803.04582v2 | - |
dc.identifier | ORCiD: Dalia Chakrabarty https://orcid.org/0000-0003-1246-4235 | - |
dc.identifier.citation | Wang, K. and Chakrabarty, D. (2018) 'Deep Bayesian Supervised Learning given Hypercuboidally-shaped, Discontinuous Data, using Compound Tensor-Variate & Scalar-Variate Gaussian Processes', arXiv:1803.04582v2 [stat.ME], pp. 1 - 43. doi: 10.48550/arXiv.1803.04582. | - |
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/20105 | - |
dc.description.abstract | We undertake Bayesian learning of the high-dimensional functional relationship between a system parameter vector and an observable that is in general tensor-valued. The ultimate aim is Bayesian inverse prediction of the system parameters at which test data are recorded. We attempt such learning given hypercuboidally-shaped data that displays strong discontinuities, rendering learning challenging. We model the sought high-dimensional function with a tensor-variate Gaussian Process (GP), and use three independent ways of learning the covariance matrices of the resulting likelihood, which is Tensor-Normal. We demonstrate that the discontinuous data demand that the implemented covariance kernels be non-stationary; this is achieved by modelling each kernel hyperparameter as a function of the sample function of the invoked tensor-variate GP. Each such function can be shown to be temporally evolving, and is treated as a realisation from a distinct scalar-variate GP, with covariance described adaptively by collating information from a historical set of samples of chosen sample size. We prove that deep learning using two "layers" suffices, where the outer layer comprises the tensor-variate GP, compounded with multiple scalar-variate GPs in the inner layer, and we undertake inference with Metropolis-within-Gibbs. We apply our method to a cuboidally-shaped, discontinuous, real dataset, and subsequently perform forward prediction to generate data from our model given our results, in order to perform model-checking. | en_US |
dc.format.extent | 1 - 43 | - |
dc.language.iso | en | en_US |
dc.publisher | Cornell University | en_US |
dc.rights | Copyright © The Authors 2018 under an arXiv.org perpetual, non-exclusive license 1.0. This license gives limited rights to arXiv to distribute the article, and also limits re-use of any type from other entities or individuals (see: https://info.arxiv.org/help/license/index.html). | - |
dc.rights.uri | https://info.arxiv.org/help/license/index.html | - |
dc.subject | compound tensor-variate scalar-variate GPs | en_US |
dc.subject | covariance kernel parametrisation | en_US |
dc.subject | Lipschitz continuity | en_US |
dc.subject | deep learning | en_US |
dc.title | Deep Bayesian Supervised Learning given Hypercuboidally-shaped, Discontinuous Data, using Compound Tensor-Variate & Scalar-Variate Gaussian Processes | en_US |
dc.type | Article | en_US |
dc.identifier.doi | https://doi.org/10.48550/arXiv.1803.04582 | - |
dc.rights.holder | The Authors | - |
Appears in Collections: Dept of Mathematics Research Papers
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | Copyright © The Authors 2018 under an arXiv.org perpetual, non-exclusive license 1.0 (see dc.rights above). | 747.79 kB | Adobe PDF | View/Open |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.
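The abstract describes a likelihood that is Tensor-Normal, with covariance matrices learnt along each mode of the data. As a purely illustrative aside (not the authors' implementation), the two-mode special case of this distribution is the matrix-normal MN(M, U, V), from which one can sample via Cholesky factors of the row covariance U and column covariance V; the function name and example covariances below are hypothetical choices for the sketch:

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng):
    """Draw one sample from MN(M, U, V), the 2-D special case of the
    tensor-normal distribution: X = M + A Z B^T with U = A A^T, V = B B^T."""
    A = np.linalg.cholesky(U)          # row-covariance factor: U = A @ A.T
    B = np.linalg.cholesky(V)          # column-covariance factor: V = B @ B.T
    Z = rng.standard_normal(M.shape)   # i.i.d. standard-normal entries
    return M + A @ Z @ B.T

rng = np.random.default_rng(0)
n, p = 4, 3
M = np.zeros((n, p))                   # mean matrix
U = np.eye(n) + 0.5 * np.ones((n, n))  # example SPD row covariance (assumed)
V = np.eye(p)                          # example column covariance (assumed)
X = sample_matrix_normal(M, U, V, rng)
print(X.shape)  # (4, 3)
```

Equivalently, vec(X) is multivariate normal with covariance V ⊗ U, which is why separate per-mode covariance matrices suffice to specify the full joint covariance of the tensor-valued observable.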