Research

constantly under construction…

Datasets

New!!! Occluded Articulated Human Body Dataset
An annotated dataset for human body pose extraction and tracking under occlusions.


Human Body Pose Estimation

A model-based approach for markerless articulated full body pose extraction and tracking in RGB-D sequences. A cylinder-based model is employed to represent the human body. For each body part a set of hypotheses is generated and tracked over time by a Particle Filter. To evaluate each hypothesis, we employ a novel metric that considers the reprojected Top View of the corresponding body part. The latter, in conjunction with depth information, effectively copes with difficult and ambiguous cases, such as severe occlusions.
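The per-part hypothesis tracking described above can be illustrated with a minimal particle filter. This is only a toy sketch, not the paper's method: the Top View reprojection metric with depth information is replaced here by an invented Gaussian likelihood over a single scalar parameter (e.g. one joint angle).

```python
import math
import random

def likelihood(hypothesis, observation, sigma=0.5):
    # Toy stand-in for the paper's evaluation metric: hypotheses
    # closer to the observation receive a higher weight.
    d = hypothesis - observation
    return math.exp(-(d * d) / (2 * sigma * sigma))

def particle_filter_step(particles, observation, noise=0.1):
    # 1. Predict: diffuse each hypothesis with process noise.
    predicted = [p + random.gauss(0.0, noise) for p in particles]
    # 2. Weight: evaluate every hypothesis against the observation.
    weights = [likelihood(p, observation) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample: draw a new hypothesis set proportional to weight.
    return random.choices(predicted, weights=weights, k=len(predicted))

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(200)]
for obs in [0.2, 0.4, 0.5, 0.5]:   # simulated joint-angle readings
    particles = particle_filter_step(particles, obs)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))   # the particle cloud concentrates near the observations
```

In the actual system one such hypothesis set is maintained per body part, and the weighting step is where the Top View reprojection and depth cues enter.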

Contributors: M. Sigalas, M. Pateraki, P. Trahanias

Selected publications:
Full-body Pose Tracking – the Top View Reprojection Approach. Sigalas M., Pateraki M., Trahanias P., 2015. IEEE Transactions on Pattern Analysis and Machine Intelligence. [doi]
Robust Articulated Upper Body Pose Tracking under Severe Occlusions. Sigalas M., Pateraki M. and Trahanias P., 2014. In Proc. of the IEEE/RSJ Intl. Conference on Intelligent Robots and Systems (IROS), 14-18 September, Chicago, USA. [doi] [pdf] [bib]

Estimation of attentive cues: torso pose and head pose

Torso and head pose, as forms of nonverbal communication, support the derivation of people’s focus of attention, a key variable in the analysis of human behaviour in HRI paradigms encompassing social aspects. Towards this goal, we have developed a model-based approach for torso and head pose estimation that overcomes key limitations of free-form interaction scenarios, including partial intra- and inter-person occlusions.


Contributors: M. Pateraki, M. Sigalas, P. Trahanias

Selected publications:
Visual estimation of attentive cues in HRI: The case of torso and head pose. Sigalas M., Pateraki M. and Trahanias P., 2015. In Computer Vision Systems, Lecture Notes in Computer Science, Volume 9163, pp. 375-388. Proc. of the 10th Intl. Conference on Computer Vision Systems (ICVS), 6-9 July, Copenhagen, Denmark. [doi] [bib]

Multi-hypothesis 3D object pose tracking

3D object pose tracking from monocular cameras. Data association is performed via a variant of the Iterative Closest Point (ICP) algorithm, which makes the method robust to noise and other artifacts. The hypothesis space is re-initialised based on the re-projection error between hypothesised models and observed image objects. The combination of multiple hypotheses and correspondence refinement leads to a robust framework.
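The scoring of competing pose hypotheses by correspondence error can be sketched as follows. This is an illustrative 2D toy, not the published implementation: each hypothesis is a candidate translation of a model contour, correspondences are nearest-neighbour matches as in one ICP iteration, and the hypothesis with the lowest residual (a stand-in for the re-projection error) is kept.

```python
def residual(model_pts, observed_pts):
    # Nearest-neighbour data association: for each model point, find
    # the closest observed point and accumulate the squared distance.
    err = 0.0
    for mx, my in model_pts:
        err += min((mx - ox) ** 2 + (my - oy) ** 2 for ox, oy in observed_pts)
    return err

def translate(pts, dx, dy):
    return [(x + dx, y + dy) for x, y in pts]

model = [(0, 0), (1, 0), (1, 1), (0, 1)]      # object contour points
observed = translate(model, 0.5, 0.2)         # simulated image observation

# Hypothesis space: candidate poses (here, plain translations).
hypotheses = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]
best = min(hypotheses, key=lambda h: residual(translate(model, *h), observed))
print(best)   # → (0.5, 0.2): the hypothesis matching the observed shift wins
```

In the full 6-DoF problem the hypotheses are rigid-body transforms and the residual is measured after projecting the 3D model into the image, but the select-by-lowest-error structure is the same.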

Contributors: G. Chliveros, M. Pateraki, H. Baltzakis, P. Trahanias

Selected publications:
Robust multi-hypothesis 3D object pose tracking. Chliveros G., Pateraki M., Trahanias P., 2013. In Proc. of the 9th Intl. Conference on Computer Vision Systems (ICVS), LNCS 7963, pp. 234-243, Springer-Verlag, St. Petersburg, Russia, July 16-18, 2013. [doi] [pdf] [bib]
A framework for 3D object identification and tracking. Chliveros G., Figueiredo R.P., Moreno P., Pateraki M., Bernardino A., Santos-Victor, J. and Trahanias P., 2014. In Proc. of the 9th International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP2014), 5-7 January, Lisbon, Portugal. [doi] [pdf] [bib]
Application of dynamic distributional clauses for multi-hypothesis initialization in model-based object tracking. Nitti D., Chliveros G., Pateraki M., De Raedt L., Hourdakis E. and Trahanias P., 2014. In Proc. of the 9th International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP2014), 5-7 January, Lisbon, Portugal. [doi] [pdf] [bib]

Detection and tracking of human hands, faces and facial features

An integrated method for tracking hands and faces in image sequences. Hand and face regions are detected as solid blobs of skin-colored, foreground pixels and they are tracked over time using a propagated pixel hypotheses algorithm. A novel incremental classifier is further used to maintain and continuously update a belief about whether a tracked blob corresponds to a facial region, a left hand or a right hand. For the detection and tracking of specific facial features within each detected facial blob, an appearance-based detector and a feature-based tracker are combined. The proposed approach is mainly intended to support natural interaction with autonomously navigating robots and to provide input for the analysis of hand gestures and facial expressions that humans utilize while engaged in various conversational states with a robot.
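The "maintain and continuously update a belief" step can be pictured as a recursive Bayesian update over the three possible blob labels. This is a hedged sketch only: the per-frame evidence values below are invented for illustration, and the paper's incremental classifier is richer than a plain likelihood product.

```python
LABELS = ("face", "left_hand", "right_hand")

def update_belief(belief, evidence):
    # Multiply the prior belief by the per-frame likelihoods, then
    # renormalise so the posterior over the three labels sums to one.
    posterior = {l: belief[l] * evidence[l] for l in LABELS}
    total = sum(posterior.values())
    return {l: p / total for l, p in posterior.items()}

belief = {l: 1.0 / 3 for l in LABELS}   # uninformative prior for a new blob
frames = [                               # hypothetical per-frame evidence
    {"face": 0.7, "left_hand": 0.2, "right_hand": 0.1},
    {"face": 0.6, "left_hand": 0.3, "right_hand": 0.1},
    {"face": 0.8, "left_hand": 0.1, "right_hand": 0.1},
]
for ev in frames:
    belief = update_belief(belief, ev)
print(max(belief, key=belief.get))   # → "face"
```

The appeal of an incremental scheme like this is that a single ambiguous frame cannot flip the label; evidence has to accumulate over time before the belief changes.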

Contributors: H. Baltzakis, M. Pateraki, P. Trahanias

Selected publications:
Visual tracking of hands, faces and facial features of multiple persons. Baltzakis H., Pateraki M., Trahanias P., 2012. Machine Vision and Applications. [doi] [pdf] [bib]
Tracking of facial features to support human-robot interaction. Pateraki M., Baltzakis H., Kondaxakis P., Trahanias P., 2009. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 12-17 May, Kobe, Japan. [doi] [pdf] [bib]