Title: Accelerated hardware video object segmentation: From foreground detection to connected components labelling
Authors: Appiah, K
Hunter, A
Dickinson, P
Meng, H
Keywords: Background differencing; Image segmentation; Connected component labelling; Object extraction; FPGA
Issue Date: 2010
Publisher: Elsevier
Citation: Computer Vision and Image Understanding, 114(11): 1282-1291, Nov 2010
Abstract: This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models, and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized, as the number of run-lengths is typically less than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
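To make the run-length-based connected components idea concrete, the following is a minimal software sketch in Python, not the authors' FPGA pipeline: it run-length encodes a binary foreground mask row by row and then merges runs on adjacent rows whose column ranges touch or overlap (8-connectivity assumed here) using union-find. All function names and the example mask are illustrative assumptions.

# Minimal software sketch of run-length-based connected component labelling.
# This is NOT the paper's FPGA implementation; connectivity choice and names
# are illustrative.

def run_length_encode(mask):
    """Encode a 2D binary mask as per-row runs: (row, col_start, col_end)."""
    runs = []
    for r, row in enumerate(mask):
        c, w = 0, len(row)
        while c < w:
            if row[c]:
                start = c
                while c < w and row[c]:
                    c += 1
                runs.append((r, start, c - 1))  # inclusive end column
            else:
                c += 1
    return runs

def label_runs(runs):
    """Union-find over runs: merge runs on adjacent rows whose column ranges
    overlap or touch diagonally (8-connectivity). Returns one label per run."""
    parent = list(range(len(runs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for i, (r1, s1, e1) in enumerate(runs):
        for j in range(i + 1, len(runs)):
            r2, s2, e2 = runs[j]
            if r2 > r1 + 1:
                break  # runs are sorted by row; no further neighbours possible
            if r2 == r1 + 1 and s2 <= e1 + 1 and e2 >= s1 - 1:
                union(i, j)  # column ranges overlap or touch diagonally

    # Compress union-find roots into consecutive labels starting at 1.
    labels, next_label, out = {}, 1, []
    for i in range(len(runs)):
        root = find(i)
        if root not in labels:
            labels[root] = next_label
            next_label += 1
        out.append(labels[root])
    return out

if __name__ == "__main__":
    mask = [
        [0, 1, 1, 0, 0, 1],
        [0, 0, 1, 0, 0, 1],
        [1, 0, 0, 0, 0, 0],
    ]
    runs = run_length_encode(mask)
    print(list(zip(runs, label_runs(runs))))

Labelling runs rather than pixels is what keeps the sequential portion short: the number of merge decisions scales with the number of runs per frame, which is typically far smaller than the number of pixels.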
Description: This is the preprint version of the article. Copyright © 2010 Elsevier.
ISSN: 1077-3142
Appears in Collections: Electronic and Computer Engineering; Dept of Electronic and Computer Engineering Research Papers

Files in This Item:
File: Appiah2010CVIUAcceleratedHardwareObjectExtraction.pdf (579.3 kB, Adobe PDF)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.