Open Thesis Topics

This work focuses on context factors (e.g., stream type: casual stream vs. speedrun) that affect audience influence options in game live-streams. Building on previous work, your task is to conduct studies in which different situations in typical game live-streams are shown and different audience influence options are presented. The relevant influence options for this thesis are not bound to the actual content of the game; instead, they work independently of the streamed game (“beyond the game”). Typical examples include modifying the streamer’s room lighting, activating vibration feedback on the streamer’s arm, or exchanging keyboard bindings. The goal of these studies is to derive guidelines on which context factors moderate the perception of these modifications.

Focus

The focus is on the study. You will likely need either to create small mock-up prototypes and find a streamer who uses them (for recording sessions), or to have good skills in creating proper visualization material that clarifies what happens.
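
To give an impression of what such a mock-up prototype could look like, here is a minimal sketch under purely hypothetical assumptions: the chat interface, command names, and hardware integrations are invented, and all device calls are stubbed out. Audience chat commands are mapped to “beyond the game” actions such as changing the room lighting, buzzing a wristband, or swapping key bindings.

```python
# Hypothetical mock-up: maps audience chat commands to "beyond the game"
# influence actions. All hardware calls are stubbed with print statements so
# that a streamer (or a Wizard-of-Oz operator) can trigger them during a
# recording session without real device integrations.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class InfluenceAction:
    name: str
    run: Callable[[str], None]  # receives the raw command argument

def set_room_lighting(arg: str) -> None:
    # Stub: would talk to a smart-light API in the streamer's room.
    print(f"[light] switching room lighting to '{arg}'")

def trigger_vibration(arg: str) -> None:
    # Stub: would send a pulse to a vibration band on the streamer's arm.
    print(f"[vibration] pulsing wristband for {arg} ms")

def swap_key_bindings(arg: str) -> None:
    # Stub: would exchange two keyboard bindings, e.g. "W:S".
    print(f"[keys] swapping bindings {arg}")

ACTIONS: Dict[str, InfluenceAction] = {
    "!light": InfluenceAction("lighting", set_room_lighting),
    "!buzz": InfluenceAction("vibration", trigger_vibration),
    "!swap": InfluenceAction("keybinding", swap_key_bindings),
}

def handle_chat_message(message: str) -> None:
    """Dispatch a chat message such as '!light red' to the matching stub."""
    command, _, argument = message.partition(" ")
    action = ACTIONS.get(command)
    if action is None:
        return  # ordinary chat message, not an influence command
    action.run(argument)

if __name__ == "__main__":
    for msg in ["!light red", "!buzz 500", "!swap W:S", "hello streamer"]:
        handle_chat_message(msg)
```

Such a stub is only meant as a starting point for producing recording material; which influence commands exist and how they are shown to the audience is exactly what the studies should inform.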

Prerequisites

  • Read our previously published papers on live-streaming.
  • Background or interest in game live-streams.
  • Completed the HCI and/or Statistics lecture, and ideally already attended at least one seminar at our chair

How to apply

Please send me an email with the following pieces of information (if you do not answer every point, your application will not be considered):

  • When you plan to start the thesis
  • When you plan to finish the thesis
  • A short motivational statement why this topic is interesting for you
  • A summary why you would be a good fit for this topic
  • Your transcript of records
  • Information on whether you have contacts to streamers who would be willing to assist in the study
  • Assume that changing the lighting situation in the streamer’s room is the only possible audience influence option beyond the game. Under this assumption, list context factors that you think are relevant in live-streaming and explain how you would conduct an online study to investigate these contexts (max. 2 A4 pages)

 


See personal profile of Dr. Pascal Lessel

In this thesis, different incentivization strategies for microtasks are to be developed. These strategies must work within a browser extension that displays the microtasks. The goal of the extension is that as many microtasks as possible are solved by volunteers. One microtask could, for example, be rating sentences with regard to their comprehensibility; another could be “reporting” images from social media that the extension users consider harmful to minors. Within the scope of this thesis, at least one gamification strategy, one game-(with-a-purpose-)based strategy, and one nudging strategy are to be conceptualized and implemented, and their pros and cons subsequently evaluated in a comparative study.

Focus

This thesis focuses equally on the implementation and on the studies.

Prerequisites

  • Ability to read and understand German (the target group is German-speaking users, and the microtasks will primarily be generated from German source material)
  • Good web development skills
  • Interest in conducting studies
  • Ideally a “gamer” yourself

How to apply

Please send an email with the following pieces of information (if you do not answer every point, your application will not be considered):

  • When you can start
  • Whether this will be a Bachelor’s (BA) or Master’s (MA) thesis
  • A short motivational statement why this topic is interesting for you and why you are the right person for it
  • Your transcript of records (as a Master’s student, please also include your Bachelor’s transcript)
  • A one-pager: describe one incentivization strategy in this context that you believe would motivate people to solve microtasks

See personal profile of Dr. Maximilian Altmeyer

This work focuses on human-robot collaboration, more specifically on how a robotic arm and a human can work together at an assembly cell such that the robot proactively supports the worker in assembling a workpiece. A prototypical set-up including the robot and components of the workpiece is already available, as well as a mixed-reality duplicate of the set-up, which can be used to conduct Wizard-of-Oz (WoZ) style user studies using AR glasses. Building on the existing work, your task is to conceptualize, plan, conduct, and assess a user study to determine the most appropriate work dynamic (the optimal division of tasks) between the robot and the worker. This also includes determining the most suitable modalities for human-robot communication during the process. This thesis is a collaboration between ZeMA (Zentrum für Mechatronik und Automatisierungstechnik gGmbH) and DFKI. The practical work will be done at the Power4Production Hall at Eschbergerweg 46, Saarbrücken.

Focus

The focus is on the user study itself. You will need to create a storyline, think about relevant questions (including identifying which data must be recorded), find participants, plan the execution, and analyze the results.

Prerequisites

  • Background in planning and conducting user studies (e.g., from the HCI lecture)
  • Interest in Mixed Reality (e.g. Meta Quest AR headsets)
  • Interest in Robotics
  • Enrolled in a Bachelor’s programme in computer science, media informatics, or a related field

How to apply

Please send us an email with the following pieces of information (if you do not answer every point, your application will not be considered):

  • When you plan to start the thesis
  • When you plan to finish the thesis
  • A short motivational statement why this topic is interesting for you
  • A summary why you would be a good fit for this topic
  • Your transcript of records and CV

See personal profile of Dr. Tim Schwartz

Recent advances in machine learning (specifically deep learning) have allowed robots to understand objects and the surrounding environment on a perceptual, non-symbolic level (e.g., object detection, sensor fusion, and language understanding). However, a trending area of research is to understand objects on a conceptual, symbolic level, so that robots come closer to thinking like humans. Deep Reinforcement Learning (RL) has recently attempted to implicitly combine these symbolic and non-symbolic learning paradigms, but it has several drawbacks: (1) it needs very long training times compared to traditional deep learning approaches, (2) convergence to the optimal policy is not guaranteed and the agent can get stuck in a sub-optimal policy, and (3) an RL agent is trained in a simulated environment, so it cannot foresee actions that only exist in the real environment. The goal of this thesis is to train a robot that explicitly learns on both the perceptual and the conceptual level through direct feedback from a human expert, along with its existing view (i.e., sensors) of the world.

 

Focus

This work will focus on Reinforcement Learning, Imitation Learning, and the combination of both, and will involve the real-time implementation of a working system.
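
As a rough illustration of how the two learning signals could be combined, the following minimal sketch mixes a tabular Q-learning update with a shaping term derived from a human expert's preferred action. The states, actions, and hyperparameters are invented placeholders, not a prescribed design.

```python
# Minimal sketch (hypothetical set-up): tabular Q-learning whose update is
# biased by direct feedback from a human expert, i.e. the agent learns both
# from the environment reward and from the expert's preferred action.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration
IMITATION_BONUS = 1.0                   # extra reward for matching the expert

Q = defaultdict(float)                  # Q[(state, action)] -> value
ACTIONS = ["left", "right", "grasp", "release"]

def choose_action(state):
    """Epsilon-greedy policy over the current Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, expert_action=None):
    """One learning step; expert_action is optional human feedback."""
    if expert_action is not None:
        # Reward shaping: encourage the action the human expert demonstrated.
        reward += IMITATION_BONUS if action == expert_action else -IMITATION_BONUS
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

if __name__ == "__main__":
    # One illustrative step: the expert indicates that "grasp" was correct here.
    s = "cube_in_front"
    a = choose_action(s)
    update(s, a, reward=0.0, next_state="cube_grasped", expert_action="grasp")
```

In the actual thesis, the tabular Q-function would be replaced by a deep network and the expert feedback would arrive interactively; the sketch only shows where the reinforcement and imitation signals meet in the update.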

 

Prerequisites

  • Please read the following papers: [1] [2] [3] [4] [5]
  • Background or interest in RL, Computer Vision or AI Planning
  • Completed HCI, Statistics and/or Machine learning courses
  • Strong programming skills
  • Unity/simulation environments background is a plus

 

How to apply

Please send me an email with the following pieces of information:

  • When you plan to start the thesis
  • When you plan to finish the thesis
  • A short motivational statement why this topic is interesting for you
  • A summary why you would be a good fit for this topic
  • Your transcript of records and CV

See personal profile of Amr Gomaa

Referencing resolution is a trending topic that remains unsolved due to the high variance in users' behavior when performing a referencing task. Referencing resolution is simply identifying the object a user intends to select through speech, pointing, gaze, or a multimodal fusion of all of these modalities. Referencing is used in multiple domains in HCI, such as Human-Robot Interaction (HRI) [Nickel et al. 2003; Whitney et al. 2016; Kontogiorgos et al. 2018; Sibirtseva et al. 2019] and vehicle and drone interaction [Rümelin et al. 2013; Roider et al. 2017; Gomaa et al. 2020]. However, most of the current research focuses on a stationary, first-person view when interacting with the object. In this thesis, you will work on the task of multimodal, real-time reference resolution using speech, gaze, and/or pointing gestures from a moving source when interacting with a vehicle, industrial robot, or retail delivery drone.

 

Focus

This work will focus on gesture identification, gaze tracking, object detection, speech recognition, and/or modality fusion techniques, and will involve the real-time implementation of a working system.
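
As a toy illustration of one possible fusion technique, the sketch below performs a simple weighted late fusion, assuming each modality has already produced a per-object confidence score; the object names and weights are made up for illustration.

```python
# Hypothetical late-fusion sketch: each modality yields a score per candidate
# object, and the referenced object is the one with the highest weighted sum.

from typing import Dict

WEIGHTS = {"speech": 0.4, "gaze": 0.3, "pointing": 0.3}  # assumed, to be tuned

def fuse(scores: Dict[str, Dict[str, float]]) -> str:
    """scores[modality][object_id] -> confidence in [0, 1]; returns the winner."""
    candidates = {obj for per_obj in scores.values() for obj in per_obj}

    def fused(obj: str) -> float:
        return sum(WEIGHTS[m] * scores.get(m, {}).get(obj, 0.0) for m in WEIGHTS)

    return max(candidates, key=fused)

if __name__ == "__main__":
    example = {
        "speech":   {"red_box": 0.8, "blue_box": 0.1},
        "gaze":     {"red_box": 0.4, "blue_box": 0.5},
        "pointing": {"red_box": 0.6, "blue_box": 0.3},
    }
    print(fuse(example))  # -> "red_box"
```

In the thesis, the per-modality scores would come from actual recognizers (gesture, gaze, speech) operating in real time from a moving source, and the fusion itself could be learned rather than hand-weighted; the sketch only fixes the interface between the modalities.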

 

Prerequisites

  • Please read the following papers: [1] [2] [3] [4] [5] [6]
  • Background or interest in gesture recognition, NLP or gaze tracking
  • Completed HCI, Statistics and/or Machine learning courses
  • Strong programming skills

 

How to apply

Please send me an email with the following pieces of information:

  • When you plan to start the thesis
  • When you plan to finish the thesis
  • A short motivational statement why this topic is interesting for you
  • A summary why you would be a good fit for this topic
  • Your transcript of records and CV

See personal profile of Amr Gomaa

Vehicles are getting smarter every day; manufacturers are continuously adding more features to smart cars. While these functionalities are added to enhance the driver's experience and make rides smoother, they often come with extra complexity that causes more stress and might make the trip more dangerous. Several researchers have attempted to conceptualize and introduce situation-aware, personalized, adaptive interfaces that would ultimately reduce interface complexity [Garzon et al. 2010; Garzon et al. 2011; Garzon 2012; Siegmund et al. 2013; Walter et al. 2015; Hasenjäger et al. 2017; Knauss et al. 2018]. However, as far as we know, there is no actual implementation of such an interface, due to the lack of, and the difficulty of obtaining, suitable training data.

 

Focus

In this thesis, you will work on a two-stage project in which you (1) identify certain activities or scenarios based on driving behavior, for use in traditional or autonomous driving situations, using state diagrams or a specific schema, and (2) use a hybrid deep learning approach (e.g., Graph Neural Networks or Deep Reinforcement Learning) to find adaptive patterns in behavior using a small amount of data. You will focus on activity recognition, situation awareness, and hybrid learning approaches combining symbolic and sub-symbolic learning.
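
As a minimal illustration of the first stage, the sketch below labels driving scenarios from a stream of behavior events with a small finite-state machine; the states, events, and transitions are invented placeholders for whatever schema the thesis will define.

```python
# Hypothetical stage-1 sketch: a small finite-state machine that labels the
# current driving scenario from a stream of behavior events. Stage 2 would
# then learn adaptive interface patterns on top of these symbolic labels.

TRANSITIONS = {
    ("cruising", "speed_drop"): "approaching_junction",
    ("approaching_junction", "stop"): "waiting_at_junction",
    ("waiting_at_junction", "accelerate"): "cruising",
    ("cruising", "indicator_on"): "lane_change",
    ("lane_change", "indicator_off"): "cruising",
}

def identify_scenarios(events, start="cruising"):
    """Yield the scenario label after each observed behavior event."""
    state = start
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # unknown events keep the state
        yield state

if __name__ == "__main__":
    trace = ["speed_drop", "stop", "accelerate", "indicator_on", "indicator_off"]
    print(list(identify_scenarios(trace)))
```

Combining such symbolic labels with a learned, sub-symbolic model of the driver's behavior is where the hybrid part of the project would come in.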

 

Prerequisites

  • Please read the following papers: [1] [2] [3] [4] [5] [6]
  • Familiarity with deep learning concepts and/or state diagrams / graphical modeling
  • Completed AI Planning, Statistics, and/or Machine Learning courses
  • Strong programming skills

 

How to apply

Please send me an email with the following pieces of information:

  • When you plan to start the thesis
  • When you plan to finish the thesis
  • A short motivational statement why this topic is interesting for you
  • A summary why you would be a good fit for this topic
  • Your transcript of records and CV

See personal profile of Amr Gomaa