KoBo34 - Intuitive HRI

KoBo34 („Intuitive Interaktion mit kooperativen Assistenzrobotern für das 3. und 4. Lebensalter”, i.e. “Intuitive Interaction with Cooperative Assistance Robots for the Third and Fourth Age”) is an R&D project funded by the German Federal Ministry of Education and Research (BMBF, grant no. 16SV7984, July 2018 – June 2021). It aims to improve the social participation and maintain the personal independence of elderly people by developing an assistive humanoid robot.

TU Darmstadt’s Centre for Cognitive Science participates with its research groups Psychology of Information Processing (Constantin Rothkopf) and Intelligent Autonomous Systems (Jan Peters). The project consortium comprises robotics company Franka Emika, Technical University of Munich (TU München), Rosenheim University of Applied Sciences (TH Rosenheim), and Technical University of Darmstadt (TU Darmstadt).

Intention Recognition. One crucial project objective is intuitive communication between elderly people and assistive robots. A robot assistant can only be truly helpful in everyday activities if communication between human and robot is effortless.

A central aspect of our work in KoBo34 is therefore research on automated intention recognition, which supports the robot’s “understanding” of human behaviour. Since humans express intentions through different modalities such as hand gestures, gaze direction, or speech, it is reasonable to exploit data from several modalities when recognizing human intentions automatically: combining their information can reduce the uncertainty over the predicted intention. Achieving this uncertainty reduction by multimodal fusion is one of the goals of the KoBo34 project [1].
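The effect of combining modalities can be illustrated with a minimal sketch, assuming independent per-modality classifiers fused by an independent opinion pool (the method in [1] is more elaborate; the example distributions below are made up):

```python
import numpy as np

def fuse_modalities(modality_probs, prior=None):
    """Fuse per-modality intention distributions by multiplying them
    (independent opinion pool) and renormalizing."""
    fused = (np.asarray(prior, dtype=float) if prior is not None
             else np.ones(len(modality_probs[0])))
    for p in modality_probs:
        fused = fused * np.asarray(p, dtype=float)
    return fused / fused.sum()

def entropy(p):
    """Shannon entropy in nats, a measure of the remaining uncertainty."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + 1e-12)))

# Three candidate intentions: each modality alone is ambiguous,
# but their combination is considerably more certain.
gesture = [0.5, 0.3, 0.2]    # distribution from a gesture classifier
gaze    = [0.45, 0.1, 0.45]  # distribution from a gaze-direction classifier
fused   = fuse_modalities([gesture, gaze])

print(np.round(fused, 3))    # most probability mass on the first intention
print(entropy(fused) < min(entropy(gesture), entropy(gaze)))  # True
```

Multiplying the distributions rewards intentions that all modalities agree on, so the fused distribution is more peaked (lower entropy) than either input alone.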

Human-robot interaction with intention recognition

Interactive Learning. Another core objective of the project is the continuous improvement of the interaction by enabling the robot to learn from human input. Here, we work on learning interactive skills from demonstration.

A skill library containing robot movements as building blocks of complex interactive tasks can be taught by kinesthetic teaching. However, many learning algorithms require the number of skills to be fixed beforehand, so adding a skill means retraining the whole library. In KoBo34 we developed a new approach that allows open-ended learning of a collaborative skill library: existing skills can be refined iteratively and new ones added without such retraining [2] [3].
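The open-ended idea can be sketched as follows: each new demonstration either refines the closest existing skill or founds a new one, so the library never needs a fixed size or a global retraining pass. The actual approach in [2] [3] builds on probabilistic movement primitives; the distance threshold and running-mean update below are simplifications for illustration only.

```python
import numpy as np

class SkillLibrary:
    """Sketch of an open-ended skill library learned from demonstrations."""

    def __init__(self, new_skill_threshold=1.0):
        self.skills = []                  # list of [mean_trajectory, n_demos]
        self.threshold = new_skill_threshold

    def add_demonstration(self, traj):
        """Refine the closest skill with traj, or add traj as a new skill."""
        traj = np.asarray(traj, dtype=float)
        if self.skills:
            # Mean per-point distance of the demo to each skill prototype.
            dists = [np.linalg.norm(mean - traj) / len(traj)
                     for mean, _ in self.skills]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:
                mean, n = self.skills[i]
                # Incremental (running-mean) refinement: the other
                # skills are left untouched, no retraining needed.
                self.skills[i] = [(n * mean + traj) / (n + 1), n + 1]
                return i
        self.skills.append([traj, 1])     # open-ended: a brand-new skill
        return len(self.skills) - 1

lib = SkillLibrary()
demo = np.linspace(0.0, 1.0, 10)
lib.add_demonstration(demo)               # founds skill 0
lib.add_demonstration(demo + 0.01)        # similar demo refines skill 0
lib.add_demonstration(demo + 5.0)         # too different: founds skill 1
print(len(lib.skills))                    # 2
```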

While the work discussed above focused on learning new skills, we also worked on methods that adapt the robot’s movements online to observed human movements, e.g. to avoid collisions. We developed two modes of adaptation: spatial deformation of the respective robot trajectories and temporal scaling. Both modes were evaluated and compared in a study with naïve human subjects [4].
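The two adaptation modes can be sketched in one dimension as follows (the method in [4] operates on learned movement primitives and predicted human motion; the distance thresholds and gains here are illustrative assumptions):

```python
def temporal_scaling(phase, dt, human_dist, slow_dist=0.3):
    """Advance the trajectory phase, slowing down (to a full stop)
    the closer the human gets to the robot."""
    speed = min(1.0, human_dist / slow_dist)   # 0 = stop, 1 = nominal speed
    return min(1.0, phase + speed * dt)

def spatial_deformation(y, human_y, margin=0.2, gain=1.0):
    """Push a trajectory point out of a safety margin around the human."""
    d = y - human_y
    push = max(0.0, margin - abs(d))           # penetration into the margin
    direction = 1.0 if d >= 0 else -1.0
    return y + gain * push * direction

# Far away, the phase advances at nominal speed; close by, it freezes.
print(temporal_scaling(0.5, 0.1, human_dist=1.0))   # 0.6
print(temporal_scaling(0.5, 0.1, human_dist=0.0))   # 0.5
# A point inside the safety margin is deformed away from the human.
print(spatial_deformation(0.1, human_y=0.0))        # 0.2
```

Temporal scaling keeps the robot on its original path but changes when it moves; spatial deformation keeps the timing but changes where it moves, which is why the two modes were compared separately in the user study.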

All these approaches, from imitation learning of skills to the online adaptation of robot trajectories to human movements and the refinement of the robot’s skills to the human’s needs, should be applicable to assistive tasks by the project’s end. One example task, preparing a tray of snacks to be served in a retirement home, is demonstrated here:

Assistant robot Kobo learns how to prepare a tray of snacks (Demo)


[1] Trick, S.; Koert, D.; Peters, J.; Rothkopf, C. (2019): Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[2] Koert, D.; Trick, S.; Ewerton, M.; Lutter, M.; Peters, J. (2018): Online Learning of an Open-Ended Skill Library for Collaborative Tasks. Proceedings of the IEEE-RAS 18th International Conference on Humanoid Robots (HUMANOIDS).

[3] Koert, D.; Trick, S.; Ewerton, M.; Lutter, M.; Peters, J. (2020): Incremental Learning of an Open-Ended Collaborative Skill Library. International Journal of Humanoid Robotics, Vol. 17, No. 1, 2050001.

[4] Koert, D.; Pajarinen, J.; Schotschneider, A.; Trick, S.; Rothkopf, C.; Peters, J. (2019): Learning Intention Aware Online Adaptation of Movement Primitives. IEEE Robotics and Automation Letters (RA-L).