Title: Modelling faces dynamically across views and over time
Authors: Li, Y
Gong, S
Liddell, H
Keywords: Solid modeling;Computational geometry;Video sequences;Image sequence analysis
Issue Date: 2001
Publisher: IEEE
Citation: Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), 7-14 July 2001, vol. 1, pp. 554-559
Abstract: This paper presents a novel multi-view dynamic face model that addresses two challenging problems in face recognition and facial analysis: modelling faces under large pose variation and modelling faces dynamically in video sequences. The model consists of a sparse 3D shape model learnt from 2D images, a shape-and-pose-free texture model, and an affine geometrical model. Model fitting is performed by optimising (1) a global fitting criterion on the overall face appearance as it changes across views and over time, (2) a local fitting criterion on a set of landmarks, and (3) a temporal fitting criterion between successive frames of a video sequence. By estimating the model parameters temporally over an input sequence, the identity information and the geometrical information of a face are extracted separately: the former is crucial to face recognition and facial analysis, while the latter is used to aid tracking and aligning faces. We demonstrate successful application of this model to faces with large variation of pose and expression over time.
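The three fitting criteria described in the abstract combine naturally into a single weighted objective minimised per frame. The following is a minimal sketch of that idea only, not the authors' implementation: the shape, texture, and geometrical models are replaced by hypothetical linear stand-ins (`A`, `L`), and the combined quadratic cost is minimised in closed form via the normal equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the paper's models (illustration only):
A = rng.standard_normal((50, 5))   # params -> overall face appearance vector
L = rng.standard_normal((8, 5))    # params -> landmark coordinates

def fitting_cost(p, appearance, landmarks, p_prev,
                 w_g=1.0, w_l=1.0, w_t=0.1):
    """Weighted sum of the three criteria from the abstract."""
    e_global = np.sum((A @ p - appearance) ** 2)   # (1) overall appearance
    e_local = np.sum((L @ p - landmarks) ** 2)     # (2) landmark fit
    e_temporal = np.sum((p - p_prev) ** 2)         # (3) frame-to-frame smoothness
    return w_g * e_global + w_l * e_local + w_t * e_temporal

def fit_frame(appearance, landmarks, p_prev, w_g=1.0, w_l=1.0, w_t=0.1):
    """Closed-form minimiser of the quadratic objective (normal equations)."""
    H = w_g * A.T @ A + w_l * L.T @ L + w_t * np.eye(A.shape[1])
    b = w_g * A.T @ appearance + w_l * L.T @ landmarks + w_t * p_prev
    return np.linalg.solve(H, b)

# Synthetic frame: noisy observations generated from known parameters
p_true = rng.standard_normal(5)
appearance = A @ p_true + 0.01 * rng.standard_normal(50)
landmarks = L @ p_true + 0.01 * rng.standard_normal(8)
p_prev = p_true + 0.1 * rng.standard_normal(5)   # estimate from previous frame

p_hat = fit_frame(appearance, landmarks, p_prev)
# The minimiser never costs more than carrying over the previous estimate:
print(fitting_cost(p_hat, appearance, landmarks, p_prev)
      <= fitting_cost(p_prev, appearance, landmarks, p_prev))  # True
```

In the actual paper the appearance and landmark mappings are nonlinear (pose-dependent), so fitting would be iterative rather than a single linear solve; the sketch only illustrates how the global, local, and temporal terms enter one objective.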
ISBN: 0-7695-1143-0
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf (Adobe PDF, 187.06 kB)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.