RO-MAN 2022 Workshop on Machine Learning for HRI: Bridging the Gap between Action and Perception

A key factor for the acceptance of robots as partners in complex and dynamic human-centered environments is their ability to continuously adapt their behavior. This includes learning the most appropriate behavior for each encountered situation based on its specific characteristics as perceived through the robot's sensors. To determine the correct actions, the robot has to take into account prior experiences with the same agents, their current emotional and mental states, and their specific characteristics, e.g., personalities and preferences. Since every encountered situation is unique, the appropriate behavior cannot be hard-coded in advance but must be learned over time through interaction. Artificial agents therefore need to continuously learn which behaviors are most appropriate for particular situations and people, based on feedback and observations received from the environment, to enable more natural, enjoyable, and effective interactions between humans and robots.
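As a toy illustration of this idea (a minimal sketch under simplifying assumptions, not a method proposed by the workshop), the snippet below frames behavior adaptation as a contextual epsilon-greedy bandit: perceived situations are discretized contexts, candidate behaviors are arms, and user feedback is a scalar reward. All names (AdaptiveBehaviorSelector, the example contexts and behaviors) are hypothetical.

```python
# Hypothetical sketch: feedback-driven behavior selection as a
# contextual epsilon-greedy bandit with tabular reward estimates.
import random
from collections import defaultdict

class AdaptiveBehaviorSelector:
    def __init__(self, behaviors, epsilon=0.1):
        self.behaviors = behaviors           # candidate robot behaviors (arms)
        self.epsilon = epsilon               # exploration rate
        self.values = defaultdict(float)     # mean reward per (context, behavior)
        self.counts = defaultdict(int)       # observations per (context, behavior)

    def select(self, context):
        """Pick a behavior for the perceived situation."""
        if random.random() < self.epsilon:   # explore occasionally
            return random.choice(self.behaviors)
        # exploit: behavior with the highest estimated reward in this context
        return max(self.behaviors, key=lambda b: self.values[(context, b)])

    def update(self, context, behavior, reward):
        """Incorporate feedback, e.g., an explicit rating or a sensed reaction."""
        key = (context, behavior)
        self.counts[key] += 1
        # incremental update of the running mean reward estimate
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Example: the robot greets a user and learns from the observed reaction.
selector = AdaptiveBehaviorSelector(["wave", "verbal_greeting", "bow"])
context = ("user_42", "morning")             # discretized situation features
behavior = selector.select(context)
selector.update(context, behavior, reward=1.0)  # e.g., the user smiled
```

In practice, richer context representations and learning methods (e.g., the reinforcement, continual, or meta-learning approaches listed among the topics below) would replace this tabular estimate.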

This workshop aims to bring together the latest research and expertise at the intersection of rapidly growing communities, including social and cognitive robotics, machine learning, and artificial intelligence, to present novel approaches that integrate and evaluate machine learning in HRI. Furthermore, it will provide a venue to discuss the limitations of current approaches and future directions towards creating robots that use machine learning to improve their interactions with humans.

Submission

We invite regular papers (4-6 pages) and position papers (2 pages), using the official RO-MAN 2022 format, presenting or investigating novel approaches that utilize machine learning to improve HRI. Suggested topics include, but are not limited to:

  • Autonomous robot behavior adaptation
  • Interactive learning approaches for HRI
  • Continual learning
  • Meta-learning
  • Transfer learning
  • Learning for multi-agent systems
  • User adaptation of interactive learning approaches
  • Architectures, frameworks, and tools for learning in HRI
  • Metrics and evaluation criteria for learning systems in HRI
  • Legal and ethical considerations for real-world deployment of learning approaches

Submissions can be made through EasyChair.

Important Dates

Paper submission: July 15, 2022 (AoE) (extended from June 17, 2022)
Notification: August 7, 2022 (AoE) (extended from August 1, 2022)
Camera ready: August 14, 2022 (AoE)
Workshop: August 22, 2022

Program

Time (EDT)
07:20 - 07:30  Opening
07:30 - 08:10  Invited Talk: Mohamed Chetouani
08:10 - 08:50  Invited Talk: Alessandra Sciutti
08:50 - 09:30  Contributed Talks I
09:30 - 09:40  Break
09:40 - 10:20  Invited Talk: Oya Celiktutan
10:20 - 11:00  Contributed Talks II
11:00 - 12:00  Lunch Break
12:00 - 12:40  Invited Talk: Sean Andrist
12:40 - 14:10  Interactive Session
14:10 - 14:20  Break
14:20 - 15:00  Invited Talk: Angelica Lim
15:00 - 16:00  Panel Discussion
16:00 - 16:10  Closing

Invited Speakers (Panelists)

Mohamed Chetouani

Sorbonne University, France

Title: Human Centered Robot Learning

Abstract: Interactive learning opens new opportunities for humans to teach robots new tasks. However, the development of interactive learning algorithms and systems raises several challenges at the intersection of machine learning and human-robot interaction. Addressing these challenges requires human-centered AI approaches. In this talk, we will show how such approaches can be applied to develop better human-centered robot learning algorithms.
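As a concrete illustration (an assumption-laden sketch, not drawn from the talk), the snippet below shows a minimal interactive learner in the spirit of evaluative-feedback methods such as TAMER: the robot maintains an estimate of the human trainer's feedback for each state-action pair and greedily follows it. All names (InteractiveLearner, the example states, actions, and feedback values) are hypothetical.

```python
# Hypothetical sketch: learning a task from a human trainer's
# evaluative feedback (+1 approve, -1 disapprove, 0 neutral).
from collections import defaultdict

class InteractiveLearner:
    def __init__(self, actions, learning_rate=0.2):
        self.actions = actions
        self.lr = learning_rate
        self.H = defaultdict(float)   # estimated human feedback per (state, action)

    def act(self, state):
        # choose the action the trainer is predicted to approve of most
        return max(self.actions, key=lambda a: self.H[(state, a)])

    def learn(self, state, action, human_feedback):
        # move the estimate toward the trainer's evaluative signal
        key = (state, action)
        self.H[key] += self.lr * (human_feedback - self.H[key])

# Example teaching step: the trainer approves the robot's choice.
learner = InteractiveLearner(actions=["move_left", "move_right", "pick_up"])
state = "object_on_left"
action = learner.act(state)                        # arbitrary at first
learner.learn(state, action, human_feedback=1.0)   # trainer presses +1
```

Repeating this teach-act loop lets the robot converge on the behavior the trainer intends, without a hand-specified reward function.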

Alessandra Sciutti

Italian Institute of Technology, Italy

Title: Action, Perception and beyond for Cognitive Robotics

Abstract: Enabling robots to “see the world through our eyes” and act in an understandable way are two fundamental challenges for HRI. Machine learning has proven invaluable in supporting efforts towards these goals. However, the dynamic nature of human behavior makes it particularly difficult to identify solutions that enable effective mutual understanding and collaboration with robots over longer periods of time. An architectural vision combining action and perception with other important elements, such as memory, internal motivations, and different types of learning, could represent a pathway to more autonomous robots that are able to adapt to our needs and “grow” with us, while remaining predictable during the interaction.

Oya Celiktutan

King's College London, UK

Title: The Role of Human Behaviour in Building Social Robots

Abstract: Robots are progressively moving out from research laboratories into real-world human environments, envisioned to provide companion care for elderly people, teach children with special needs, assist people in their day-to-day tasks at home or work, and offer service in public spaces. All these practical applications require that humans and robots work together in human environments, where interaction is unavoidable. Therefore, there has been an increasing effort towards advancing the perception and interaction capabilities of robots, making them recognise, adapt to, and respond to human behaviours and actions, with increasing levels of autonomy. Given this context, in this talk, I will give an overview of our ongoing research activities on the understanding and generation of human behaviour from a robotics perspective. Particularly, I will present examples from our ongoing work on how to make robots navigate in crowded social spaces, learn new tasks by just watching humans, and learn how to adapt to their interaction partners. I will conclude by highlighting the challenges and open problems.

Sean Andrist

Microsoft, USA

Title: Situated Intelligence - New Challenges and Tools

Abstract: Current hardware trends coupled with advances in perception are bringing the world onto the cusp of a new computing paradigm, not so much in terms of where the computation happens (e.g., desktop, cloud, or mobile), but rather in terms of the subject matter of computation: reasoning about the physical everyday world, at human scale, and in real time. This emerging paradigm, sometimes referred to as spatial computing or situated computing, will generate within the next decade a whole new ecosystem of applications, such as mixed-reality systems for task assistance, education, or remote collaboration scenarios; mobile robots in homes or public spaces; cashierless shopping experiences; intelligent factory floors; and many more. In this talk, I will give a broad overview of our group's research on "situated intelligence," in which we aim to bridge the gaps between current perception technologies and the needs of these new applications. I will present some open research challenges and new open-source tools we are developing to make it easier to build systems that can reason about the 3D world (physical or virtual) and collaborate with people in physical space.

Angelica Lim

Simon Fraser University, Canada

Title: Social Signals in the Wild: Multimodal Machine Learning for Human-Robot Interaction

Abstract: Science fiction has long promised us interfaces and robots that interact with us as smoothly as humans do - Rosie the Robot from The Jetsons, C-3PO from Star Wars, and Samantha from Her. Today, interactive robots and voice user interfaces are moving us closer to effortless, human-like interactions in the real world. In this talk, I will discuss the opportunities and challenges in creating technologies that can analyze, detect, and generate non-verbal communication, including gestures, gaze, auditory signals, and facial expressions. Specifically, I will discuss how we might allow robots to understand human social signals (including emotions, mental states, and attitudes) across cultures, as well as recognize and generate expressions with diversity in mind.

Accepted Papers

  • Mincheul Kang, Minsung Yoon and Sung-Eui Yoon. User Command Correction for Safe Remote Manipulation in Dynamic Environments
  • Minsung Yoon, Mincheul Kang, Daehyung Park and Sung-Eui Yoon. Fast and Robust Trajectory Generation for Cartesian Path-following Problems of Redundant Manipulators
  • Inkyu An and Sung-Eui Yoon. DOA Estimation based on Learnable TDOA Feature
  • Di Fu, Fares Abawi, Erik Strahl and Stefan Wermter. Judging by the Look: The Impact of Robot Gaze Strategies on Human Cooperation
  • Maciej K Wozniak and Patric Jensfelt. Virtual Reality Framework for Better Human-Robot Collaboration and Mutual Understanding
  • Minqiu Zhou, Isobel Voysey and J. Michael Herrmann. Is Machine Learning Enough to Train Robotic Pets?

Organizers

Oliver Roesler

IVAI, Germany

Elahe Bagheri

IVAI, Germany

Amir Aly

University of Plymouth, UK