TY - JOUR
T1 - Wide Baseline Matching between Unsynchronized Video Sequences
AU - Wolf, Lior
AU - Zomet, Assaf
PY - 2006/6
Y1 - 2006/6
AB - 3D reconstruction of a dynamic scene from features in two cameras usually requires synchronization and correspondences between the cameras. These may be hard to achieve due to occlusions, differences in orientation, differences in scale, etc. In this work we present an algorithm for reconstructing a dynamic scene from sequences acquired by two uncalibrated, non-synchronized, fixed affine cameras. It is assumed that (possibly) different points are tracked in the two sequences. The only constraint relating the two cameras is that every 3D point tracked in one sequence can be described as a linear combination of some of the 3D points tracked in the other sequence. Such a constraint is useful, for example, for articulated objects: we may track some points on an arm in the first sequence, and some other points on the same arm in the second sequence. At the other extreme, this model can be used for generally moving points tracked in both sequences without knowing the correct permutation. In between, this model can cover non-rigid bodies with local rigidity constraints. We present linear algorithms for synchronizing the two sequences and for reconstructing the 3D points tracked in both views. Outlier points are automatically detected and discarded. The algorithm can handle both 3D objects and planar objects in a unified framework, thereby avoiding numerical problems that exist in other methods.
KW - Structure from motion
KW - Video synchronization
KW - Wide base-line matching
UR - http://www.scopus.com/inward/record.url?scp=33646587333&partnerID=8YFLogxK
U2 - 10.1007/s11263-005-4841-0
DO - 10.1007/s11263-005-4841-0
M3 - Article
AN - SCOPUS:33646587333
SN - 0920-5691
VL - 68
SP - 43
EP - 52
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 1
ER -