Title: One's own soundtrack: Affective music synthesis
Authors: Khan, A
Fuschi, D
Keywords: Affective;Emotive;Music synthesis;Emotion;CALLAS
Issue Date: 2009
Citation: European and Mediterranean Conference on Information Systems (EMCIS2009), Izmir, Turkey, 2009
Abstract: Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be greatly improved. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR), and fuzzy values acting on close-to-affect-target musical notes retrieved from the CBR per context. It modifies music pieces according to an interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that makes for an engaging environment in which to experiment with and learn about expression in music.
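A minimal sketch of the retrieval idea described in the abstract: the eight octants of PAD temperament space are stored as base reference cases, and a new PAD reading is matched to its nearest stored case, whose parameter mapping then steers synthesis. The octant labels follow Mehrabian's PAD temperament naming; the parameter names and values (`tempo_scale`, `mode`) are illustrative assumptions, not taken from the paper.

```python
import math

# Base reference cases: one per octant of PAD space, keyed by the sign of
# each dimension (Pleasure, Arousal, Dominance). The affectivisation
# parameters attached to each case are hypothetical placeholders.
OCTANT_CASES = {
    (+1, +1, +1): {"tempo_scale": 1.2, "mode": "major"},  # Exuberant
    (+1, +1, -1): {"tempo_scale": 1.1, "mode": "major"},  # Dependent
    (+1, -1, +1): {"tempo_scale": 0.9, "mode": "major"},  # Relaxed
    (+1, -1, -1): {"tempo_scale": 0.8, "mode": "major"},  # Docile
    (-1, +1, +1): {"tempo_scale": 1.2, "mode": "minor"},  # Hostile
    (-1, +1, -1): {"tempo_scale": 1.1, "mode": "minor"},  # Anxious
    (-1, -1, +1): {"tempo_scale": 0.9, "mode": "minor"},  # Disdainful
    (-1, -1, -1): {"tempo_scale": 0.7, "mode": "minor"},  # Bored
}

def retrieve_case(pad):
    """Return the stored octant case nearest to a PAD reading in [-1, 1]^3."""
    nearest = min(OCTANT_CASES, key=lambda octant: math.dist(pad, octant))
    return OCTANT_CASES[nearest]

# A mildly pleasant, aroused, submissive state falls in the (+P, +A, -D) octant.
params = retrieve_case((0.6, 0.3, -0.2))
```

In the full system the retrieved case would be adapted (the CBR "reuse" step) with fuzzy values rather than applied verbatim; this sketch shows only the nearest-octant lookup.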
ISBN: 978-1-902316-69-7
Appears in Collections:Business and Management
Brunel Business School Research Papers

Files in This Item:
File          Size       Format
Fulltext.pdf  249.67 kB  Adobe PDF

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.