Title: Feature extraction for speech and music discrimination
Authors: Zhou, H
Sadka, A H
Jiang, M
Issue Date: 2008
Publisher: IEEE
Citation: The Sixth International Workshop on Content-Based Multimedia Indexing, London, UK, 18-20 June 2008
Abstract: Driven by the demands of information retrieval, video editing, and human-computer interfaces, this paper proposes a novel spectral feature for music and speech discrimination. The scheme attempts to simulate a biological model using the averaged cepstrum, based on the observation that human perception tends to pick up areas of large cepstral change. Cepstrum data far from the mean value are exponentially reduced in magnitude. We conduct music/speech discrimination experiments, comparing the classification performance of the proposed feature with that of previously proposed features. Classification based on dynamic time warping verifies that the proposed feature achieves the best music/speech classification quality on the test database.
Appears in Collections:Electronic and Electrical Engineering
Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: Feature extraction for speech and music discrimination.pdf
Size: 380.67 kB
Format: Adobe PDF

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.