Adaptive Human Machine Interfaces for Autonomous Systems

General Information 

  • Offered by: UMTL (Chair of Prof. Dr. Antonio Krüger) 
  • Lecturers: Dr. Michael Feld, Amr Gomaa, Niko Kleer, Maurice Rekrut, Guillermo Reyes, Julian Wolter 
  • Location: HS003 (presence preferred, fallback to online possible) 
  • Time: Thursdays, 16:15-17:45 
  • Credit Points: 7 
  • Language: English 
  • Places: 20 
  • Mode: We plan to offer this as a hybrid seminar, allowing both on-site and remote participation (via MS Teams)
  • Grading criteria: Your grade will be based on an individual presentation of a selected scientific paper, a practical group assignment, and active participation in the discussion rounds.


Announcements

  • Since the assignment system will only finalize the assignment of students to seminars tomorrow evening, we are postponing the kick-off meeting to next Thursday, 28.10.2021. We will send the link for joining virtually to those students who were assigned to the seminar and cannot attend on-site. 
  • If you're interested in the seminar but didn't manage to get a spot, please email michael.feld@dfki.de to express your interest; if a spot becomes available after the first week, we will contact you.
  • The kick-off meeting will be on Thursday, 21.10.2021. We will provide a link for those joining virtually via MS Teams.


Seminar Overview

The development of autonomous systems in areas such as industrial production, autonomous vehicles, and conversational soft-bots has gained substantial momentum. Most of these systems are designed to work hand-in-hand with humans: cobots interact with workers, autonomous cars with their drivers, and retail store bots with staff. The field of AI that this seminar deals with is human-machine interaction. More specifically, we present concepts and techniques for creating interfaces to autonomous systems that adapt to the user and the situation, learn from user feedback and previous experience, and explain their behavior, for example when the system transfers control to the human. The seminar will therefore also cover aspects of applied machine learning in selected domains. 

 

Aside from learning about relevant concepts, participants will work in small groups on a practical project and implementation that demonstrate adaptive behavior based on learned data. Most of these projects also connect to a particular application domain. The following topics/projects are available:

  1. Bridging the Gap Between Reinforcement Learning and Inverse Reinforcement Learning for Autonomous Agents. 
    Did you ever want to teach an AI agent to play a game the way you do? In this project, you will use Reinforcement Learning and Inverse Reinforcement Learning concepts to teach your gaming skills to a Unity agent and let it play like a pro. (A minimal sketch of the inverse-RL idea follows this list.) 
  2. EEG-Based Grasp-Pose Prediction for Natural Human-Robot Interaction.
    Have you ever tried to teach a robot a grasp pose by showing it, and the system just did not get it? Here is your chance to fix this issue! Measure the electrical activity of the brain via EEG while making different grasp poses, and classify the poses based on the EEG data. (A sketch of a typical EEG classification pipeline follows this list.) 
  3. Effects of Cognitive Load on the Transfer of Control of Autonomous Vehicles.
    What happens if your autonomous car can’t drive anymore and you’re too overwhelmed to take over control? Would you be able to do so effectively, or would you make mistakes? What if your car could read your mind and knew how you were feeling at that moment? Could it then help you take over?
  4. Self-Adapting Smart Home Assistant for Recurring Actions.
    Autonomous systems often perform user-triggered actions that recur. In this project, these recurring actions and their triggers are to be detected and learned from a data stream of events. When a learned trigger is detected in the data stream, the corresponding action should be suggested to the user in an adequate way (e.g., on a display). (A sketch of a simple trigger-mining approach follows this list.) 
  5. Legal, Transparent and Explainable Human-Robot Interaction.
    Most robotic systems are static, strictly execute procedures, and don’t explain their actions. This leads to less natural human-robot interaction, creates trust problems, and makes robots non-compliant with newly introduced laws. In this project, you will investigate how robots can explain their actions, and implement an experimental agent that addresses all of the aforementioned challenges.
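
To give a feel for project 1, here is a minimal, self-contained sketch of the inverse-RL idea: recover a reward function from demonstrations, then plan with it. Everything here is illustrative rather than the actual Unity setup: we assume a toy 5-state chain world, one-hot state features, a linear reward r(s) = w · φ(s), and hard-coded "expert" and baseline trajectories, and we perform a single projection step in the spirit of apprenticeship learning before running value iteration.

```python
import numpy as np

# Toy illustration of inverse RL: infer a reward from expert behavior,
# then plan with it. The 5-state chain, one-hot features, and hard-coded
# trajectories are all illustrative assumptions, not the project setup.

N_STATES, GAMMA = 5, 0.9
phi = np.eye(N_STATES)                       # one-hot state features

def feature_expectations(trajs):
    """Discounted feature counts, averaged over trajectories."""
    mu = np.zeros(N_STATES)
    for traj in trajs:
        for t, s in enumerate(traj):
            mu += (GAMMA ** t) * phi[s]
    return mu / len(trajs)

# "Expert" demonstrations: the human always walks right, toward state 4.
expert_trajs = [[0, 1, 2, 3, 4], [1, 2, 3, 4, 4]]
# A baseline (e.g., random) policy that dithers near the start.
random_trajs = [[0, 1, 0, 1, 2], [2, 1, 2, 1, 0]]

# One projection step of apprenticeship learning: the reward weights point
# from the baseline's feature expectations toward the expert's.
w = feature_expectations(expert_trajs) - feature_expectations(random_trajs)
reward = phi @ w                             # inferred r(s) for every state

# Plug the inferred reward into ordinary value iteration (two actions:
# step left or step right along the chain, clipped at the ends).
left = np.maximum(np.arange(N_STATES) - 1, 0)
right = np.minimum(np.arange(N_STATES) + 1, N_STATES - 1)
V = np.zeros(N_STATES)
for _ in range(100):
    V = reward + GAMMA * np.maximum(V[left], V[right])

print("inferred reward:", np.round(reward, 2))
print("greedy actions :", ["right" if V[r] >= V[l] else "left"
                           for l, r in zip(left, right)])
```

An agent trained against the inferred reward reproduces the demonstrated preference for the right end of the chain; the full algorithm iterates this step against newly trained policies instead of a single baseline.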
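For project 2, a common EEG classification pipeline is band-pass filtering, band-power features, and a linear classifier. The sketch below assumes epochs of shape (trials, channels, samples) at a 250 Hz sampling rate and uses synthetic stand-in data; the filter bands, feature choice, and LDA classifier are one typical configuration, not necessarily the one used in the project.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250                                      # assumed sampling rate (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase band-pass filter along the sample axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def band_power_features(epochs):
    """Log band power per channel in the mu (8-13 Hz) and beta (13-30 Hz) bands."""
    feats = []
    for lo, hi in [(8, 13), (13, 30)]:
        filtered = bandpass(epochs, lo, hi)
        feats.append(np.log(np.var(filtered, axis=-1)))   # (trials, channels)
    return np.concatenate(feats, axis=1)

# Stand-in data only: 120 trials, 8 channels, 2-second epochs, 3 grasp poses.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 2 * FS))
labels = rng.integers(0, 3, size=120)

X = band_power_features(epochs)
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```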
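For project 4, the core learning task can be approximated by counting which events tend to follow which within a time window. The sketch below is a deliberately simplified take on this idea; event names such as "door_open" and "light_on", the window size, and the confidence threshold are all hypothetical, and real association-rule mining would handle overlapping occurrences more carefully.

```python
from collections import defaultdict

WINDOW = 60        # a trigger must precede the action by at most 60 s
MIN_CONF = 0.8     # only suggest well-supported rules

follows = defaultdict(int)        # (trigger, action) -> co-occurrence count
trigger_count = defaultdict(int)  # event name -> number of occurrences

def observe(stream):
    """Count which events follow each event within the time window.

    `stream` is a chronologically sorted list of (timestamp_s, name)
    tuples. This is a simplified co-occurrence count, not full
    association-rule mining.
    """
    for i, (t, name) in enumerate(stream):
        trigger_count[name] += 1
        for t2, name2 in stream[i + 1:]:
            if t2 - t > WINDOW:
                break
            follows[(name, name2)] += 1

def suggestions():
    """Yield (trigger, action, confidence) rules above the threshold."""
    for (trig, act), n in follows.items():
        conf = n / trigger_count[trig]
        if conf >= MIN_CONF:
            yield trig, act, conf

# Toy stream: the user switches the light on shortly after the door opens.
stream = [(0, "door_open"), (5, "light_on"),
          (100, "door_open"), (103, "light_on"),
          (200, "door_open"), (201, "light_on"),
          (300, "tv_on")]
observe(stream)
for trig, act, conf in suggestions():
    print(f"when '{trig}' occurs, suggest '{act}' (confidence {conf:.2f})")
```

In the project itself, a rule that passes the threshold would not be executed automatically but surfaced as a suggestion to the user, e.g. on a display, as described above.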