Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/21904
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ghoshal, B | -
dc.contributor.author | Tucker, A | -
dc.contributor.author | Sanghera, B | -
dc.contributor.author | Wong, WL | -
dc.date.accessioned | 2020-11-23T22:50:43Z | -
dc.date.available | 2020-11-23T22:50:43Z | -
dc.date.issued | 2020-10-22 | -
dc.identifier.citation | Ghoshal, B, Tucker, A, Sanghera, B, Lup Wong, W. Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection. Computational Intelligence. 2020; 1–34. | en_US
dc.identifier.issn | 0824-7935 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/21904 | -
dc.description.abstract | Deep learning (DL), which involves powerful black-box predictors, has achieved remarkable performance in medical image analysis, such as segmentation and classification for diagnosis. In spite of these successes, however, these methods focus exclusively on improving the accuracy of point predictions without assessing the quality of their outputs. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. In this article, we propose an uncertainty estimation framework, called MC-DropWeights, which approximates Bayesian inference in DL by imposing a Bernoulli distribution on the incoming or outgoing weights of the model, including neurones. We demonstrate this by decomposing the predictive probabilities into the two main types of uncertainty, aleatoric and epistemic, using the Bayesian Residual U-Net (BRUNet) in image segmentation. Approximation methods in Bayesian DL suffer from the “mode collapse” phenomenon in variational inference; to address this problem, we propose a model which ensembles Monte-Carlo DropWeights by varying the DropWeights rate. In segmentation, we introduce a predictive uncertainty estimator, which takes the mean of the standard deviations of the class probabilities associated with every class. In classification, however, an alternative approach is needed, since the predictive probabilities from a single forward pass through the model do not capture uncertainty. The entropy of the predictive distribution is a measure of uncertainty, but its plug-in estimate depends on the sample size, and the plug-in estimate of the mutual information is likewise subject to sampling bias. We propose Jackknife resampling to correct for this sampling bias, which improves the quality of the uncertainty estimates in image classification. We demonstrate that our deep ensemble MC-DropWeights method, using the bias-corrected estimator, produces equally good or better results than approximate Bayesian neural networks in practice, both in quantified uncertainty estimation and in the quality of the uncertainty estimates. | en_US
dc.format.extent | 1 - 29 (29) | -
dc.language | English | -
dc.language.iso | en | en_US
dc.publisher | Wiley | en_US
dc.subject | Bias-corrected uncertainty estimation | en_US
dc.subject | Classification | en_US
dc.subject | Deep learning | en_US
dc.subject | Dropweights | en_US
dc.subject | Ensembles | en_US
dc.subject | Medical image segmentation | en_US
dc.title | Estimating Uncertainty in Deep Learning for Reporting Confidence to Clinicians in Medical Image Segmentation and Diseases Detection | en_US
dc.type | Article | en_US
dc.relation.isPartOf | Computational Intelligence | -
pubs.publication-status | Accepted | -
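
The abstract above describes MC-DropWeights as Bernoulli masks imposed on a network's weights at test time, with the segmentation uncertainty taken as the mean, over classes, of the standard deviation of each class probability across stochastic forward passes. The Python sketch below illustrates that estimator on a toy one-layer classifier; the layer sizes, the 10% drop rate, and the random weights are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def dropweights_pass(x, W, b, drop_rate, rng):
    # One stochastic forward pass: each weight is zeroed independently
    # with probability drop_rate (a Bernoulli mask on weights, not units).
    mask = rng.random(W.shape) >= drop_rate
    logits = x @ (W * mask) + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax class probabilities

def mc_dropweights(x, W, b, T=50, drop_rate=0.1, rng=rng):
    # T stochastic passes -> a (T, n_classes) matrix of probabilities.
    return np.stack([dropweights_pass(x, W, b, drop_rate, rng)
                     for _ in range(T)])

def mean_std_uncertainty(probs):
    # The abstract's segmentation estimator: the mean, over classes, of
    # the standard deviation of each class probability across passes.
    return probs.std(axis=0).mean()

# Toy usage: random weights stand in for a trained model.
W, b = rng.normal(size=(16, 3)), np.zeros(3)
x = rng.normal(size=16)
probs = mc_dropweights(x, W, b)
print(mean_std_uncertainty(probs))

Repeating this with several drop_rate values and pooling the passes would give the ensemble-of-MC-DropWeights variant the abstract proposes against mode collapse.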
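For classification, the abstract replaces the plug-in entropy of the predictive distribution, which is biased at finite sample sizes, with a Jackknife-corrected estimate. A minimal sketch of that correction, assuming standard leave-one-out resampling over the T Monte-Carlo passes:

import numpy as np

def plug_in_entropy(probs):
    # probs: (T, n_classes) class probabilities from T stochastic passes.
    p = probs.mean(axis=0)                 # predictive distribution
    return -(p * np.log(p + 1e-12)).sum()  # biased plug-in entropy

def jackknife_entropy(probs):
    # Jackknife bias correction: H_jk = T*H - (T-1)*mean(H_loo),
    # where H_loo are the leave-one-out plug-in estimates.
    T = probs.shape[0]
    loo = np.array([plug_in_entropy(np.delete(probs, i, axis=0))
                    for i in range(T)])
    return T * plug_in_entropy(probs) - (T - 1) * loo.mean()

# Toy usage with a random (T, n_classes) probability matrix.
rng = np.random.default_rng(0)
raw = rng.random((50, 3))
probs = raw / raw.sum(axis=1, keepdims=True)
print(plug_in_entropy(probs), jackknife_entropy(probs))
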
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf |  | 8.73 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.