Please use this identifier to cite or link to this item:
Title: Super depth-map rendering by converting holoscopic viewpoint to perspective projection
Authors: Alazawi, E
Abbod, M
Aggoun, A
Swash, MR
Fatah, OA
Fernandez, J
Keywords: Depth-map;Feature descriptors;Holoscopic 3D image;Integral image;Orthographic projection;Perspective projection;Viewpoints image
Issue Date: 2014
Publisher: IEEE Computer Society
Citation: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON): 1 - 4, (2-4 July 2014)
Abstract: The expansion of 3D technology will enable observers to perceive 3D without any eye-wear devices. Holoscopic 3D imaging technology offers natural 3D visualisation of real 3D scenes that can be viewed by multiple viewers independently of their position. However, the creation of a super depth-map and reconstruction of the 3D object from a holoscopic 3D image are still in their infancy. The aim of this work is to build a high-quality depth map of a real 3D scene from a holoscopic 3D image through extraction of multi-view high-resolution Viewpoint Images (VPIs) that compensate for the poor resolution of individual VPIs. To achieve this, we propose a reconstruction method based on the perspective formula that converts sets of directional orthographic low-resolution VPIs into perspective projection geometry. Following that, we implement an Auto-Feature point algorithm that partitions the synthesized VPIs into distinctive Feature-Edge (FE) blocks, localizing an individual feature detector in each block that is responsible for integrating 3D information. Detailed experiments demonstrate the reliability and efficiency of the proposed method, which outperforms state-of-the-art methods for depth-map creation.
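The extraction step the abstract refers to — collecting one pixel from the same position under every micro-lens to form a low-resolution orthographic viewpoint image — can be sketched as below. This is a minimal illustration of the standard VPI extraction technique for integral images, not the paper's implementation; the function name, the square-lens-pitch assumption, and the 2D (grayscale) input are all illustrative choices.

```python
import numpy as np

def extract_viewpoint_images(holoscopic, lens_pitch):
    """Extract low-resolution orthographic viewpoint images (VPIs)
    from a 2D holoscopic (integral) image.

    Assumes a square micro-lens pitch of `lens_pitch` pixels, so each
    elemental image is lens_pitch x lens_pitch. VPI (u, v) gathers
    pixel (u, v) from every elemental image.
    """
    h, w = holoscopic.shape
    n_rows, n_cols = h // lens_pitch, w // lens_pitch
    # Crop to a whole number of elemental images.
    cropped = holoscopic[:n_rows * lens_pitch, :n_cols * lens_pitch]
    # Axes: (elemental row, pixel row u, elemental col, pixel col v)
    elemental = cropped.reshape(n_rows, lens_pitch, n_cols, lens_pitch)
    # Reorder so vpis[u, v] is one orthographic VPI of shape (n_rows, n_cols).
    vpis = elemental.transpose(1, 3, 0, 2)
    return vpis
```

Each of the `lens_pitch**2` resulting VPIs has only one pixel per micro-lens, which is why they are low-resolution and directional (orthographic); the paper's contribution is the subsequent conversion of these into perspective-projection geometry.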
Appears in Collections:Dept of Electronic and Computer Engineering Research Papers

Files in This Item:
File: Fulltext.docx (12.65 MB, format: Unknown)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.