Full metadata record

dc.contributor.author: Lane, PCR
dc.contributor.author: Sykes, A
dc.contributor.author: Gobet, F
dc.identifier.citation: Proceedings of the European Cognitive Science Conference 2003 (pp. 205-210). Mahwah, NJ: Erlbaum.
dc.description.abstract: The ability of humans to reliably perceive and recognise objects relies on an interaction between information seen in the visual image and prior expectations. We describe an extension to the CHREST computational model which enables it to learn and combine information from multiple input modalities. Simulations demonstrate the presence of quantitative effects on recognition ability due to cross-modal interactions. Our simulations with CHREST illustrate how expectations can improve classification accuracy, reduce classification time, and enable words to be reconstructed from noisy visual input.
dc.format.extent: 84443 bytes
dc.publisher: Proceedings of the European Cognitive Science Conference 2003
dc.subject: Computational modelling
dc.subject: Perceptual modality
dc.subject: Perceptual learning
dc.subject: Noisy visual input
dc.title: Combining low-level perception with expectations in CHREST
dc.type: Research Paper
Appears in Collections: Psychology; Dept of Life Sciences Research Papers

Files in This Item:
lane_EuroCogSci_03.pdf (82.46 kB, Adobe PDF)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.