Bibtex:Maidi06b

An article from Wiki-evr@.

abstract = {In this paper, we present a robust fiducial tracking method for real-time Augmented Reality systems. Our approach identifies the target object with an internal barcode on the fiducial and extracts its 2D feature points. Given the 2D feature points and a 3D object model, pose estimation consists in recovering the position and orientation of the object with respect to the camera. We present two methods for recovering the pose: the Extended Kalman Filter and the Orthogonal Iteration algorithm. The former is a sequential estimator that predicts and corrects the state vector, while the latter uses the object-space collinearity error and derives an iterative algorithm to compute orthogonal rotation matrices. Due to lighting or contrast conditions, or occlusion of the target object by another object, the tracking may fail. Therefore, we extend our tracking method with a RANSAC algorithm to deal with occlusions. The algorithm is tested from different camera viewpoints under various image conditions and proves to be accurate and robust.},
pdf = {Maidi06b.pdf}
}
</bibtex>

Version of 12 April 2010 at 18:18

M. Maidi, F. Ababsa, M. Mallem - Robust Augmented Reality Tracking based Visual Pose Estimation

3rd International Conference on Informatics in Control, Automation and Robotics (ICINCO 2006) pp. 346-351, Setúbal (Portugal), August 2-5, 2006