Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation


Pateraki M., Baltzakis H., Trahanias P., 2014. Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation. Computer Vision and Image Understanding.

Abstract:

In this paper we address an important issue in human–robot interaction, that of accurately deriving pointing information from a corresponding gesture. Based on the fact that in most applications it is the pointed object rather than the actual pointing direction which is important, we formulate a novel approach which takes into account prior information about the location of possible pointed targets. To decide about the pointed object, the proposed approach uses the Dempster–Shafer theory of evidence to fuse information from two different input streams: head pose, estimated by visually tracking the off-plane rotations of the face, and hand pointing orientation. Detailed experimental results are presented that validate the effectiveness of the method in realistic application setups.
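To illustrate the kind of evidence fusion the abstract refers to, the sketch below shows Dempster's rule of combination applied to two mass functions over a small set of candidate targets, one derived from head pose and one from hand orientation. This is only a minimal, hypothetical illustration of Dempster–Shafer combination in Python; the target names, the mass values, and the way the two cues are converted into mass functions are assumptions for demonstration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set (conflicting evidence)
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources are incompatible")
    norm = 1.0 - conflict        # renormalise by the non-conflicting mass
    return {s: m / norm for s, m in combined.items()}

# Hypothetical setup: three known target locations; head pose and hand
# orientation each yield a mass function over them, with the full frame
# (all targets) carrying the residual uncertainty of each cue.
targets = frozenset({"lamp", "door", "table"})
m_head = {frozenset({"lamp"}): 0.5, frozenset({"door"}): 0.2, targets: 0.3}
m_hand = {frozenset({"lamp"}): 0.4, frozenset({"table"}): 0.3, targets: 0.3}

fused = dempster_combine(m_head, m_hand)
# Pick the singleton hypothesis (single target) with the highest fused mass.
pointed = max((s for s in fused if len(s) == 1), key=fused.get)
print(fused, pointed)
```

In this toy example the two cues agree most strongly on "lamp", so it ends up with the largest fused mass after the conflicting combinations are discounted; in the paper's setting the candidate sets would come from prior knowledge about possible pointed objects in the scene.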
