Identifying the Addressee in Human-Human-Robot Interactions Based on Head Pose and Speech (2008), Michael Katzenmaier
Abstract
In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audio-visual recordings of two humans and a (simulated) robot, we discriminate the interaction between the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to find the addressee; low-level, feature-based cues as well as higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue. Our results show that visually estimated head pose is a more reliable cue for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which leads to significant improvements.
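To make the third approach concrete, the following is a minimal sketch (not the authors' implementation) of how a per-utterance acoustic confidence and a head-pose confidence could be fused into an addressee decision. The function name, scores, threshold, and the default weighting toward the visual cue are illustrative assumptions, the last reflecting the paper's finding that head pose is the more reliable cue.

```python
# Minimal sketch of late fusion of acoustic and visual cues for addressee
# identification. All names, weights, and thresholds are illustrative assumptions.

def classify_addressee(acoustic_score: float,
                       head_pose_score: float,
                       visual_weight: float = 0.7) -> str:
    """Return 'robot' or 'human' from two per-utterance confidences in [0, 1].

    acoustic_score  -- confidence from speech-based cues that the robot is addressed
    head_pose_score -- confidence from estimated head orientation toward the robot
    visual_weight   -- relative weight of the visual cue (set higher here because
                       head pose was reported as the more reliable cue)
    """
    combined = visual_weight * head_pose_score + (1.0 - visual_weight) * acoustic_score
    return "robot" if combined >= 0.5 else "human"


if __name__ == "__main__":
    # Speech sounds command-like, but the speaker faces the other person.
    print(classify_addressee(acoustic_score=0.8, head_pose_score=0.2))  # -> human
    # Both cues point toward the robot.
    print(classify_addressee(acoustic_score=0.6, head_pose_score=0.9))  # -> robot
```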
Publication details
Archive: CiteSeerX - Scientific Literature Digital Library and Search Engine (United States)
Keywords: attentive interfaces, focus of attention, head pose estimation
Type: text
Language: English