Automotive User Interfaces Seminar
General Information
An HMI (Human-Machine Interface) is the part of a system responsible for the communication between the human and the machine. Through it, the human can control the system (by means of input devices) to complete a certain task, and it also gives the human access to all the information relevant and necessary for that task (through output devices). In the automotive context, the HMI helps the driver fulfill not only the primary task (driving) but also secondary and tertiary tasks (such as using the infotainment system).
In this seminar, we will focus on the particular challenges of user-centered design for Human-Machine Interfaces in the automotive domain for drivers and passengers, taking into account recent progress in the sensor and presentation technologies available in a vehicle. You will read about these technologies and develop an in-car information or assistance system by applying one or more of the following Artificial Intelligence (AI) techniques:
- Multimodal Interaction Design
- Dialogue Systems
- User Adaptation
- Machine Learning
As a participant, you will give a 30-minute presentation on a paper related to these techniques and work in a team on one of the practical projects below.
- Offered by: UMTL (Chair of Prof. Dr. Antonio Krüger)
- Lecturers: Michael Feld, Guillermo Reyes, Amr Gomaa, Mohammad Mehdi Moniri
- Location: Building E1.3, Room 015
- Time: Mondays 10:00 - 12:00
- Credit Points: 7
- Language: English
- Places: 16
Announcements
As requested, the presentations for 06.01.2020 and 13.01.2020 have been swapped (see below)
You can now download STEP-DP
Small changes in the presentations; please check below whether you are affected
The OpenDS Introduction slides and Tutorial slides are up
Final Paper and Moderator assignments complete (see below)
All papers for the presentations can be found here
The introduction slides can be found here
The Kickoff meeting will be delayed one week due to a workshop. The new Kickoff will be on October 28th!
The Kickoff meeting will be on October 21st!
Requirements:
The course is intended for students of Computer Science and Media Informatics who enjoy the challenges that come with working on practical projects involving frameworks and devices they have never used before. Very good programming skills and proactivity are essential.
Projects
- Transfer of Control with Distracted Drivers
Autonomous vehicles are not perfect. There are situations in which the car will not be able to drive autonomously and will need to let the driver take control of the wheel. However, in many cases the driver will be distracted and not ready to assume control. In this project, you will work on the transfer of control from an autonomous vehicle to the driver and vice versa. This will involve some programming with a dialogue platform, the use of a driving simulator, and machine learning. Part of the challenge will be to detect distracted drivers, either with an eye tracker or with an RGB camera and computer vision (a rough sketch of such a distraction check follows the project list).
- In-Car Referencing and Control with Multi-modal Fusion
In this project, you will work on hands-free control of car features (e.g. infotainment, wipers, doors) using a combination of eye gaze and speech (a rough sketch of this kind of fusion follows the project list). You will also work on analyzing users' behaviour and detecting users' routines during the first-time use of a new car. The project requires user-centered design knowledge as well as adequate programming skills (preferably in Java).
- Multimodal Interactive Car Windows
The technologies in future cars are changing rapidly. One recent technology is Augmented Reality Side Windows, where the passengers interact directly with the side windows. Possible interactions include providing information about the environment outside the car, interactive applications such as games or a weather forecast, and many other use cases. In this project, you will work with the Microsoft HoloLens to simulate the holographic side windows of the car. Completing this project requires some prior knowledge of Augmented Reality and programming skills in Unity3D (C#) for the HoloLens. With the HoloLens SDK, you will be able to add speech or gesture recognition to the application.
- Multimodal-Multisensor Car Games
Today's cars have many integrated sensors, from traditional GPS positioning sensors to gesture and gaze recognition systems. They also offer several output channels (visual, audio, haptic), and they are on wheels, moving around the city. Since the information from these sensors can be acquired and used by various systems, this opens the opportunity to turn the car into a game machine on wheels.
With the introduction of various levels of autonomy, the driver no longer has to concentrate on the driving task at all times. This gives the driver the opportunity to use the car as an infotainment platform in various situations (in a parking space, while waiting at a traffic light, etc.).
This concept is not new: Tesla, for example, already lets drivers play video games while the car is standing still, using the large in-car display as output and the steering wheel and brake as input.
In this seminar, we want to go one step further and work on ideas that embed more sensors and more modalities into the car game, for example interaction with various spaces inside the car or with the outside environment. For the game, we would like to use a combination of all available modalities, such as speech, gaze, and gestures, and to take the user's environment into account. We will brainstorm together to set the framework for some novel ideas, and we may also implement some of them.
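For the transfer-of-control project, one very simple way to flag a distracted driver from eye-tracker data is to measure how much of a short sliding window the gaze spends away from the road ahead. The sketch below is purely illustrative: the GazeSample type, the angle thresholds, and the 2-second window are assumptions for this example, and a real project would replace them with calibrated gaze zones or a trained classifier.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal, illustrative sketch: estimates driver distraction from eye-tracker
 * samples using an "eyes off road" ratio over a sliding time window.
 * All types, thresholds, and window lengths here are example assumptions,
 * not part of the seminar project specification.
 */
public class DistractionMonitor {

    /** One gaze sample: horizontal/vertical gaze angle in degrees plus a timestamp. */
    public record GazeSample(double yawDeg, double pitchDeg, long timestampMs) {}

    private static final long WINDOW_MS = 2_000;          // sliding window length
    private static final double MAX_YAW_DEG = 15.0;       // |yaw| beyond this counts as off-road
    private static final double MAX_PITCH_DEG = 10.0;     // |pitch| beyond this counts as off-road
    private static final double DISTRACTION_RATIO = 0.6;  // >60% off-road samples => distracted

    private final Deque<GazeSample> window = new ArrayDeque<>();

    /** Adds a new sample and reports whether the driver currently looks distracted. */
    public boolean update(GazeSample sample) {
        window.addLast(sample);
        // Drop samples that have fallen out of the sliding window.
        while (!window.isEmpty()
                && sample.timestampMs() - window.peekFirst().timestampMs() > WINDOW_MS) {
            window.removeFirst();
        }
        long offRoad = window.stream()
                .filter(s -> Math.abs(s.yawDeg()) > MAX_YAW_DEG
                          || Math.abs(s.pitchDeg()) > MAX_PITCH_DEG)
                .count();
        return (double) offRoad / window.size() > DISTRACTION_RATIO;
    }
}
```

In the project itself, such a heuristic would only be a starting point before moving to a learned model trained on simulator data.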
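For the in-car referencing project (and similarly for multimodal game controls), one common pattern is late fusion: the speech recognizer yields an action without an explicit object, and the most recent gaze fixation supplies the referent if it is recent enough. The following sketch shows this idea under assumed, hypothetical types (GazeEvent, SpeechCommand, CarFeature); it is not an existing in-car or HoloLens API.

```java
import java.util.Optional;

/**
 * Minimal, illustrative sketch of gaze-speech late fusion: "open that" plus a
 * recent glance at the sunroof resolves to "open SUNROOF". All class names and
 * thresholds are hypothetical placeholders chosen for this example.
 */
public class GazeSpeechFusion {

    public enum CarFeature { LEFT_WINDOW, RIGHT_WINDOW, SUNROOF, WIPERS, INFOTAINMENT }

    /** A fixation on a car feature reported by the eye tracker. */
    public record GazeEvent(CarFeature target, long timestampMs) {}

    /** A recognized spoken command, e.g. action "open" with no explicit object. */
    public record SpeechCommand(String action, long timestampMs) {}

    private static final long MAX_GAZE_AGE_MS = 1_500; // how old a fixation may be and still count

    private GazeEvent lastGaze;

    /** Remembers the most recent fixation delivered by the eye tracker. */
    public void onGaze(GazeEvent gaze) {
        lastGaze = gaze;
    }

    /** Fuses the spoken command with the last fixation if it is recent enough. */
    public Optional<String> onSpeech(SpeechCommand command) {
        if (lastGaze == null
                || command.timestampMs() - lastGaze.timestampMs() > MAX_GAZE_AGE_MS) {
            return Optional.empty(); // no usable referent; the system should ask the user to clarify
        }
        return Optional.of(command.action() + " " + lastGaze.target());
    }
}
```

A glance at the sunroof followed within 1.5 seconds by "open that" would then resolve to "open SUNROOF"; an older fixation would instead trigger a clarification request.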
Presentations
Date | Name | Paper | Moderator | Supervisor |
---|---|---|---|---|
04.11.19 | Mehdi Moniri | Automotive HMI Topics from an Industry Perspective | ||
11.11.19 | Rafael Math | Introduction to OpenDS Driving Simulation | ||
18.11.19 | Garcia | Distracted Driver Detection: Deep Learning vs Handcrafted Features | Birtel | Guillermo Reyes |
25.11.19 | Fang | Autonomous Driving: Investigating the Feasibility of Car Driver Handover Assistance | Aly | Guillermo Reyes |
25.11.19 | Engel | SiAM-dp: A Platform for the Model-Based Development of Context-Aware Multimodal | Wocker | Guillermo Reyes |
02.12.19 | Khatagalli | Multisensory Haptic Interactions | Molz | Mehdi Moniri |
02.12.19 | Withanage Don | Understanding and Supporting Multimodal Choices | Kahn | Mehdi Moniri |
09.12.19 | Jamil | Using Cognitive Models to understand Multimodal Processes | Garcia | Mehdi Moniri |
16.12.19 | Birtel | I See Your Point: Integrating Gaze to Enhance Pointing Gesture Accuracy While Driving | Engel | Amr Gomaa |
16.12.19 | Paul | Combining Speech, Gaze, and Micro Gestures for the Multimodal Control of In-Car Functions | Fang | Amr Gomaa |
06.01.20 | Molz | Augmented Reality Displays for Communicating Uncertainty Information in Automated Driving | Paul | Michael Feld |
06.01.20 | Wocker | Augmented reality navigation systems | Withanage Don | Michael Feld |
13.01.20 | Aly | Gaze-based interaction on multiple displays in an automotive environment | Jamil | Amr Gomaa |
13.01.20 | Kahn | OpenDial: A Toolkit for Developing Spoken Dialogue Systems with Probabilistic Rules | Khatagalli | Amr Gomaa |
20.01.20 | Ali | InCarAR: A Design Space Towards 3D Augmented Reality Applications in Vehicles | Paul | Michael Feld |
27.01.20 | Presentation Practical Assignments Groups 1+2 | |||
03.02.20 | Presentation Practical Assignments Groups 3+4 |
Registration
To register for the seminar, please use the university's seminar assignment system.