Visual human-robot communication in social settings

Pateraki M., Sigalas M., Chliveros G., Trahanias P., 2013. Visual human-robot communication in social settings. In Proc. of the Workshop on Semantics, Identification and Control of Robot-Human-Environment Interaction, held within the IEEE International Conference on Robotics and Automation (ICRA), 10 May 2013, Karlsruhe, Germany. [pdf] [bib]

Abstract:

Supporting human-robot interaction (HRI) in dynamic, multi-party social settings relies on a number of input and output modalities for visual human tracking, language processing, high-level reasoning, robot control, etc. Visual human-centered information is a fundamental input source in HRI and a prerequisite for effective, successful interaction. This paper deals with visual processing in dynamic scenes and presents an integrated vision system that combines a number of different cues (such as color, depth, and motion) to track and recognize human actions in challenging environments. The overall system comprises a number of vision modules for human identification and tracking, extraction of pose-related information from the body and face, identification of a specific set of communicative gestures (e.g. “waving”, “pointing”), as well as tracking of objects towards the identification of manipulative gestures that act on objects in the environment (e.g. “grab glass”, “raise bottle”). Experimental results from a bartending scenario, as well as a comparative assessment of a subset of the modules, validate the effectiveness of the proposed system.
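As a rough illustration of the multi-cue idea mentioned in the abstract, the sketch below fuses per-pixel color, depth, and motion likelihood maps into a single probability map via a confidence-weighted geometric mean and reads off the most likely target location. This is a minimal sketch under assumed inputs (synthetic cue maps, illustrative weights and function names), not the paper's actual tracking formulation.

```python
# Minimal sketch of multi-cue fusion for visual tracking. Each cue is assumed
# to yield a per-pixel likelihood map over the image; the maps and weights
# below are illustrative stand-ins, not the paper's formulation.
import numpy as np

def fuse_cues(cue_maps, weights):
    """Combine per-cue likelihood maps into one normalized probability map.

    cue_maps: dict mapping cue name -> (H, W) array of likelihoods in [0, 1]
    weights:  dict mapping cue name -> relative confidence in that cue
    """
    h, w = next(iter(cue_maps.values())).shape
    fused = np.ones((h, w))
    total = sum(weights.values())
    for name, lik in cue_maps.items():
        # Weighted geometric combination: cues we trust less contribute less.
        fused *= np.power(np.clip(lik, 1e-6, 1.0), weights[name] / total)
    return fused / fused.sum()  # normalize to a probability map

# Toy example: three synthetic cue maps over a 120x160 image.
rng = np.random.default_rng(0)
h, w = 120, 160
cues = {
    "color": rng.random((h, w)),
    "depth": rng.random((h, w)),
    "motion": rng.random((h, w)),
}
fused = fuse_cues(cues, {"color": 1.0, "depth": 0.8, "motion": 1.2})
y, x = np.unravel_index(np.argmax(fused), fused.shape)
print(f"Most likely target location: ({x}, {y})")
```

A geometric rather than arithmetic mean is a common design choice for cue fusion: a cue that strongly vetoes a location (near-zero likelihood) suppresses it in the fused map instead of being averaged away.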
