Date Published: June 21, 2019
Publisher: Public Library of Science
Author(s): Saki Kato, Natsuki Yamanobe, Gentiane Venture, Eiichi Yoshida, Gowrishankar Ganesh, Zhan Li.
Object handovers between humans are common in our daily life, but the mechanisms underlying handovers are still largely unclear. A good understanding of these mechanisms is important not only for a better understanding of human social behaviors, but also for the prospect of an automated society in which machines will need to perform similar object exchanges with humans. In this paper, we analyzed how humans determine the location of object transfer during handovers: whether they can predict the preferred handover location of a partner, how this prediction varies in 3D space, and how much of a role vision plays in the whole process. For this we developed a paradigm that allows us to compare handovers by humans with and without on-line visual feedback. Our results show that humans have the surprising ability to modulate their handover location according to partners they have just met, such that the resulting handover errors are on the order of a few centimeters, even in the absence of vision. The handover errors are smallest along the axis joining the two partners, suggesting a limited role for visual feedback in this direction. Finally, we show that the handover locations are explained very well by a linear model considering the heights, genders and social dominances of the two partners, and the distance between them. We developed separate models for the behavior of ‘givers’ and ‘receivers’ and discuss how the behavior of the same individual changes depending on their role in the handover.
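The linear model described above can be illustrated with a small sketch. The predictor set below (partner heights, giver gender, relative social dominance, and interpersonal distance) mirrors the factors named in the abstract, but the coefficients, units, and synthetic data are invented for demonstration only and are not taken from the paper:

```python
import numpy as np

# Hypothetical sketch of a linear model of handover position along the
# inter-partner axis. All numbers here are synthetic, for illustration.
rng = np.random.default_rng(0)
n = 200

giver_height = rng.normal(170, 8, n)      # cm
receiver_height = rng.normal(170, 8, n)   # cm
giver_gender = rng.integers(0, 2, n)      # 0 or 1, binary coding
dominance = rng.normal(0, 1, n)           # relative social-dominance score
distance = rng.uniform(80, 120, n)        # interpersonal distance, cm

# Generate a synthetic "true" handover position with invented coefficients.
true_coef = np.array([0.1, 0.05, 2.0, 1.5, 0.45])
X = np.column_stack([giver_height, receiver_height,
                     giver_gender, dominance, distance])
y = X @ true_coef + 10 + rng.normal(0, 1.0, n)  # noisy observations

# Ordinary least-squares fit, with an intercept column prepended.
Xi = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xi, y, rcond=None)

print("intercept:", round(coef[0], 2))
print("slopes:", np.round(coef[1:], 2))
```

With enough trials, the fitted slopes recover the generating coefficients; in the actual study, separate fits of this kind would be made for givers and for receivers.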
A handover is a complex interaction between two agents in which one agent passes an object to another in time and space. Handovers are fundamental to human society and occur many times in our daily life. They are common in most service tasks, from receiving money from a teller, to passing and receiving tools with an assistant, to being served food by a caregiver. However, object handovers between humans have been sparsely investigated, and the mechanisms underlying this fundamental human social task are still largely unclear. With the service industry being increasingly automated, handovers are also becoming an essential skill for robots. A key requirement of automated handovers is that they be perceived as comfortable and safe by the human partner, and an examination of handovers between humans can arguably help greatly in this regard.
In our study, we varied the partners and the handover distances for each participant and analyzed how this affected their handover position as a giver or as a receiver; in particular, we examined whether participants are able to estimate their partner’s preferred handover position a priori. To avoid contamination by visual feedback, which arguably contributes substantially to handovers, we adopted a strategy used in many motor control studies: evaluating the participants’ handovers in the absence of visual feedback.