Bidirectional Human-Robot action reading
Alessandra Sciutti1, Oskar Palinko1, Laura Patanè1, Francesco Rea1, Francesco Nori1, Nicoletta Noceti2,
Francesca Odone2, Alessandro Verri2 and Giulio Sandini1
1 Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia, Italy
2 Department of Computer Science, Bioengineering, Robotics and System Engineering, University of Genova, Italy
Robots are expected to become increasingly widespread in everyday activities. At present, however, human-robot interaction (HRI) is not as intuitive and efficient as human-human collaboration. To approach a comparable fluidity in the HRI domain, it is fundamental to understand which principles govern human interaction and to transfer them to the collaboration with a robotic partner. We suggest that using a humanoid robot both as a test-bed for computational models of human skills and as an experimental probe of the interaction is a promising way forward [1].
In this contribution we will describe the application of this
method in the context of action understanding. First, we will
present a series of studies in which the humanoid robot iCub
was used to investigate how humans develop the ability to
read the effort of another person lifting an object and how
they use this information to proactively plan their own action
on the same object [2], [3], [4].
Then, we will describe how we used this knowledge to make the robot purposely select the most legible lifting action, so as to facilitate collaboration in an object-offering task [5].
Lastly, we will show how the study of this action understanding skill in humans inspired the implementation of an action reading system on the robot, which enables it to estimate the weight of the object lifted by the human partner through action observation alone [6], [7].
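As a rough illustration of how such a system might operate, the sketch below estimates the weight class of a lifted object from velocity statistics of the observed motion, computed with dense optical flow. This is a minimal sketch only, assuming OpenCV's Farnebäck optical flow and a generic scikit-learn classifier; the feature set, the classifier, and names such as lifting_features are illustrative assumptions, not the pipeline actually used in [6], [7].

import cv2
import numpy as np
from sklearn.svm import SVC


def lifting_features(gray_frames):
    """Summarize a grayscale lifting sequence with simple optical-flow statistics."""
    per_frame = []
    for prev, curr in zip(gray_frames[:-1], gray_frames[1:]):
        # Dense optical flow between consecutive frames (Farnebäck)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        per_frame.append([mag.mean(), mag.max()])  # mean and peak image velocity
    per_frame = np.asarray(per_frame)
    # Heavier objects tend to be lifted with lower peak velocity, so a few
    # global velocity statistics can already carry weight information.
    return np.concatenate([per_frame.mean(axis=0), per_frame.max(axis=0)])


# Hypothetical usage, given labelled training sequences (e.g., light vs. heavy):
# X = np.stack([lifting_features(seq) for seq in train_sequences])
# clf = SVC().fit(X, train_weight_labels)
# predicted = clf.predict([lifting_features(test_sequence)])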
The results will be discussed with respect to the minimal requisites, in terms of perceptual and motor skills, that make a robot intuitively friendly, i.e., implicitly treated by the human partner as if it were human, with no need for manuals or training [8].

Fig. 1. The iCub robot selecting and executing the most legible lifting action [5] (A) and reading the weight lifted by the human partner [6], [7] (B).
ACKNOWLEDGMENT
The research presented here has been supported by the
European CODEFROR project (PIRSES-2013-612555).
REFERENCES
[1] A. Sciutti, "Using humanoid robots to measure social interaction," in IV Congresso del Gruppo Nazionale di Bioingegneria (GNB 2014), Pavia, Italy, 2014.
[2] A. Sciutti, L. Patanè, F. Nori, and G. Sandini, "Understanding object weight from human and humanoid lifting actions," IEEE Transactions on Autonomous Mental Development, vol. 6, pp. 80-92, 2014.
[3] A. Sciutti, L. Patanè, F. Nori, and G. Sandini, "Development of perception of weight from human or robot lifting observation," in 9th ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014, pp. 290-291.
[4] A. Sciutti, L. Patanè, O. Palinko, F. Nori, and G. Sandini, "Developmental changes in children understanding robotic actions: the case of lifting," in ICDL-EpiRob 2014, Genoa, Italy, 2014.
[5] O. Palinko, A. Sciutti, F. Rea, and G. Sandini, "Weight-Aware Robot Motion Planning for Lift-to-Pass Action," in 2nd International Conference on Human-Agent Interaction, Tsukuba, Japan, 2014.
[6] N. Noceti, A. Sciutti, F. Rea, F. Odone, A. Verri, and G. Sandini, "Biological motion understanding for human-robot interaction," in Vision for Language and Manipulation, BMVA Symposium, London, UK, 2014.
[7] A. Sciutti, N. Noceti, F. Rea, F. Odone, A. Verri, and G. Sandini, "The informative content of optical flow features of biological motion," in 37th European Conference on Visual Perception (ECVP 2014), Belgrade, Serbia, 2014.
[8] A. Sciutti and G. Sandini, "Investigating social intelligence with humanoid robots to define the 'minimal' human-likeness," in Workshop on "Philosophical Perspectives of HRI," RO-MAN 2014, Edinburgh, Scotland, UK, 2014.