Abstract
One of the key aspects of 3D visualization is the computation of depth maps. Depth maps enable synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct an artificial stereo pair for that frame. There are many known methods for computing the optical flow between adjacent video frames. Their drawback is that they require extensive computational power and are therefore not well suited to high-quality real-time 3D rendering. One efficient method for computing depth maps is the extraction of motion-vector information from standard video encoders. In this paper we present methods to improve the quality of 3D visualization acquired from compression codecs by means of spatial/temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatio-temporal local order statistics, such as the median, together with local adaptive filtering in the 3D-DCT domain, can substantially improve the quality of depth maps, and consequently of the 3D video, while retaining real-time rendering. Real-time performance is achieved by exploiting multi-core technology through standard parallelization algorithms and libraries (OpenMP, IPP).
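To make the abstract's filtering step concrete, below is a minimal sketch (not the authors' implementation) of a spatio-temporal median filter applied to a short stack of depth-map frames and parallelized with OpenMP, as the abstract describes. The frame layout, window size, and the function name `spatioTemporalMedian` are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Depth maps are assumed to be 8-bit, row-major, one vector per frame.
using DepthMap = std::vector<uint8_t>;

// Median over a (2*radius+1)^2 spatial window taken across all frames in the
// temporal stack (e.g. 3 consecutive depth maps). Rows are independent, so
// they are distributed across cores with a single OpenMP pragma.
DepthMap spatioTemporalMedian(const std::vector<DepthMap>& frames,
                              int width, int height, int radius)
{
    const int t = static_cast<int>(frames.size());
    DepthMap out(static_cast<size_t>(width) * height);

    #pragma omp parallel for schedule(static)
    for (int y = 0; y < height; ++y) {
        std::vector<uint8_t> window;
        window.reserve(static_cast<size_t>(t) * (2 * radius + 1) * (2 * radius + 1));
        for (int x = 0; x < width; ++x) {
            window.clear();
            // Gather the spatio-temporal neighborhood, clamping at the borders.
            for (int f = 0; f < t; ++f)
                for (int dy = -radius; dy <= radius; ++dy)
                    for (int dx = -radius; dx <= radius; ++dx) {
                        int yy = std::clamp(y + dy, 0, height - 1);
                        int xx = std::clamp(x + dx, 0, width - 1);
                        window.push_back(frames[f][static_cast<size_t>(yy) * width + xx]);
                    }
            // Median = middle element of the partially sorted window.
            std::nth_element(window.begin(),
                             window.begin() + window.size() / 2,
                             window.end());
            out[static_cast<size_t>(y) * width + x] = window[window.size() / 2];
        }
    }
    return out;
}
```

With a temporal window of 3 frames and radius 1, each output pixel is the median of 27 neighboring depth values, which suppresses the isolated outliers typical of motion-vector-derived depth maps while preserving depth edges; the paper's actual pipeline additionally uses local adaptive filtering in the 3D-DCT domain and IPP primitives, which are not shown here.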
| Original language | English |
| --- | --- |
| Article number | 72440J |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 7244 |
| DOIs | |
| State | Published - 2009 |
| Event | Real-Time Image and Video Processing 2009 - San Jose, CA, United States. Duration: 19 Nov 2009 → 20 Nov 2009 |
Keywords
- 3D
- 3D-DCT
- Depth-maps
- Real-time