Immersive Notification Framework for Virtual Reality
Notifications in everyday virtual reality (VR) applications are currently realized as generic pop-ups displayed within the immersive virtual environment (IVE), containing the sender's message. However, this approach tends to break the user's immersion. To preserve immersion and the suspension of disbelief, we propose adapting the notification method to the user's current situation in the IVE and to the message's priority. We introduce the concept of adaptive, immersive notifications in VR and present an open-source framework that implements our approach. The framework aims to serve as an easy-to-extend code base for developers of everyday VR applications. As an example, we implemented a messaging application that lets a non-immersed person send text messages to an immersed user. We describe the concept and our open-source framework and discuss ideas for future work.
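The core idea of adapting the notification method to the user's situation and the message's priority can be illustrated with a minimal sketch. The styles and decision rules below are hypothetical examples, not the framework's actual API:

```python
# Minimal sketch (illustrative, not the framework's actual code): map the
# user's current situation and the message priority to a presentation style.
from enum import Enum

class Priority(Enum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def choose_notification(user_busy: bool, priority: Priority) -> str:
    """Pick a presentation style that fits the immersive environment."""
    if priority is Priority.HIGH:
        return "popup"       # urgent: break immersion deliberately
    if user_busy:
        return "deferred"    # queue the message until the user is idle
    if priority is Priority.MEDIUM:
        return "diegetic"    # an in-world object delivers the message
    return "ambient"         # subtle cue, e.g. a sound or lighting change
```

A developer extending the framework would replace these hard-coded rules with application-specific logic, e.g. detecting whether the user is in the middle of a game round.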
Selection-based Text Entry in Virtual Reality
In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. While the technology for both input and output devices is market-ready, only a few solutions for text input exist, and empirical knowledge about their performance and user preferences is lacking. In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods. Other methods, such as head pointing, can be viable alternatives depending on available resources. We summarize our findings by formulating guidelines for choosing optimal virtual keyboard text entry methods in VR.
Mixed Reality Pinball
This student project aims to use Mixed Reality (VR/AR) to create a new pinball experience, connecting modern technologies with original old-school pinball machines provided by Pinball Dreams from Saarbrücken.
MMPE: Multi-modal and Language Technology Based Post-Editing Support for Machine Translation
To ensure professional, human-quality translation results, the output of Machine Translation (MT) systems often has to be manually post-edited by human experts. The post-editing process is carried out within a post-editing (PE) environment, a user interface that supports the capture and correction of mistakes as well as the selection, manipulation, adaptation, and recombination of good segments. PE is a complex and challenging task involving considerable cognitive load. To date, PE environments mostly rely on traditional graphical user interfaces (GUIs), with a computer screen as display and keyboard and mouse as input devices. In this research project, we propose the design, development, implementation, and extensive road-testing and evaluation of a novel multi-modal post-editing support for machine translation aimed at translation professionals. It extends the traditional input techniques of a PE system, such as keyboard and mouse, with novel free-hand and screen gestures as well as speech and gaze input modalities (and their combinations). The objectives of the research are to increase the usability and user experience of post-editing machine translation and to reduce the overall cognitive load of the translation task, supporting (i) the core post-editing tasks and (ii) controlling the PE system and environment. To achieve these objectives, the multi-modal PE environments will be integrated with quality estimation (QE) to automatically guide the search for useful segments and mistakes, as well as with automatic PE via incremental adaptation of the MT system to post-edits, avoiding repeated mistakes.
The environments will be road-tested with professional human translators and trainees and, where possible, within the partner projects in the Paketantrag (Riezler, Frazer, Ney and Waibel); where possible, the captured post-edited data will feed into the dynamic and incremental MT retraining and update approaches pursued in the partner projects.
VRShop: A Mobile Interactive Virtual Reality Shopping Environment
In this work, we explored the main characteristics of on- and offline shops with regard to customer shopping behavior and frequency, and on this basis designed and implemented an immersive virtual reality (VR) online shopping environment. We aimed to maintain the benefits of online shops, such as search functionality and constant availability, while focusing on shopping experience and immersion. By adding the third dimension, VR provides a more advanced form of visualization, which can increase customer satisfaction and thus the shopping experience. We further introduce the Virtual Reality Shopping Experience (VRSE) model, based on customer satisfaction, task performance, and user preference. A case study of a first VR shop prototype was conducted and evaluated with respect to the VRSE model. The results showed that the usability and user experience of our system are above average overall. In summary, searching for a product in a WebVR online shop using speech input in combination with VR output proved best in terms of user performance (speed, error rate) and preference (usability, user experience, immersion, motion sickness).
FootStriker: An EMS-based Foot Strike Assistant for Running
In running, knee-related injuries are very common. A main cause is the high impact force that occurs when striking the ground heel-first. Mid- or forefoot running is generally known to reduce impact loads and to be a more efficient running style. In this paper, we introduce a wearable running assistant consisting of an electrical muscle stimulation (EMS) device and an insole with force-sensing resistors.
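The role of the force-sensing resistors (FSRs) can be illustrated with a minimal sketch: a heel-first strike shows up as the heel sensor loading before the forefoot sensor does. The function and threshold below are illustrative assumptions, not the actual FootStriker firmware:

```python
# Sketch (illustrative, not the actual device logic): classify a ground
# contact from normalized insole FSR readings in [0, 1].
def is_heel_strike(heel_force: float, forefoot_force: float,
                   threshold: float = 0.5) -> bool:
    """A heel-first strike: the heel sensor fires above the threshold
    while the forefoot sensor is still (nearly) unloaded."""
    return heel_force > threshold and forefoot_force <= threshold
```

In such a setup, a detected heel strike would be the cue for the assistant to trigger EMS actuation and nudge the runner toward a mid- or forefoot landing.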