MMPE: Multi-modal and Language Technology Based Post-Editing Support for Machine Translation
In order to ensure professional human-quality translation results, the output of Machine Translation (MT) systems in many cases has to be manually post-edited by human experts. This post-editing (PE) process is carried out within a PE environment, a user interface that supports the detection and correction of mistakes, as well as the selection, manipulation, adaptation, and recombination of good segments. PE is a complex and challenging task involving considerable cognitive load. To date, PE environments mostly rely on traditional graphical user interfaces (GUIs), with a computer screen as display and keyboard and mouse as input devices. In this research project, we propose the design, development, implementation, and extensive road-testing and evaluation of novel multi-modal post-editing support for machine translation aimed at translation professionals. It extends the traditional input techniques of a PE system, such as keyboard and mouse, with novel free-hand and screen gestures, as well as speech and gaze input modalities (and their combinations). The objectives of the research are to increase the usability and user experience of post-editing machine translation and to reduce the overall cognitive load of the translation task, supporting (i) the core post-editing tasks as well as (ii) control of the PE system and environment. To achieve these objectives, the multi-modal PE environments will be integrated with quality estimation (QE) to automatically guide the search for useful segments and mistakes, as well as with automatic PE via incremental adaptation of the MT system to post-edits, to avoid repeating mistakes.
The environments will be road-tested with human translation professionals and trainees and, where possible, within the partner projects of the Paketantrag (Riezler, Frazer, Ney and Waibel). Where possible, the captured post-editing data will feed into the dynamic and incremental MT retraining and update approaches pursued in the partner projects.
VRShop: A Mobile Interactive Virtual Reality Shopping Environment
In this work, we explored the main characteristics of on- and offline shops with regard to customer shopping behavior and frequency. Based on these findings, we designed and implemented an immersive virtual reality (VR) online shopping environment. We aimed to retain the benefits of online shops, such as search functionality and availability, while simultaneously focusing on shopping experience and immersion. By adding the third dimension, VR provides a more advanced form of visualization, which can increase customer satisfaction and thus improve the shopping experience. We further introduced the Virtual Reality Shopping Experience (VRSE) model, based on customer satisfaction, task performance, and user preference. A case study of a first VR shop prototype was conducted and evaluated with respect to the VRSE model. The results showed that the usability and user experience of our system are above average overall. In summary, searching for a product in a WebVR online shop using speech input in combination with VR output proved best regarding user performance (speed, error rate) and preference (usability, user experience, immersion, motion sickness).
FootStriker: An EMS-based Foot Strike Assistant for Running
In running, knee-related injuries are very common. A main cause is the high impact force that occurs when the heel strikes the ground first. Mid- or forefoot running is generally known to reduce impact loads and to be a more efficient running style. In this paper, we introduce a wearable running assistant consisting of an electrical muscle stimulation (EMS) device and an insole with force-sensing resistors.
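The sensing side of such an assistant can be illustrated with a short sketch: classify each step as heel-first or fore/midfoot-first from the insole's force readings, and trigger calf stimulation only on heel-first strikes. This is a minimal illustration under assumed names and thresholds, not the published system's actual signal processing.

```python
# Hypothetical heel-strike detector. The sensor layout (one heel and one
# forefoot force-sensing resistor) and the contact threshold are
# illustrative assumptions, not FootStriker's actual parameters.

CONTACT_THRESHOLD = 0.5  # normalized FSR reading treated as ground contact

def detect_heel_strike(samples):
    """Given time-ordered (heel, forefoot) FSR readings for one step,
    return True if the heel contacts the ground before the forefoot,
    i.e. the case where EMS actuation of the calf would be triggered."""
    for heel, forefoot in samples:
        heel_down = heel >= CONTACT_THRESHOLD
        fore_down = forefoot >= CONTACT_THRESHOLD
        if heel_down and not fore_down:
            return True   # heel-first strike: trigger EMS
        if fore_down:
            return False  # fore/midfoot landed first: no actuation
    return False  # no ground contact detected in this window

# Heel touches down before the forefoot -> EMS would fire
print(detect_heel_strike([(0.1, 0.0), (0.7, 0.1), (0.9, 0.8)]))  # True
```

In a real device this decision would run in a tight loop on a microcontroller, with the EMS pulse timed to actuate the calf muscles before the next ground contact.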
Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing Shifty, a weight-shifting physical DPHF proxy object. This concept combines actuators known from active haptics with physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We investigate how Shifty, by automatically changing its internal weight distribution, can enhance the user's perception of the virtual objects they interact with.
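The core control idea can be sketched as a mapping from a property of the virtual object to the position of a movable internal mass: the further the mass sits from the handle, the larger the felt moment of inertia. All dimensions and the linear mapping below are assumed for illustration; they are not Shifty's actual specifications.

```python
# Illustrative DPHF controller sketch: map the length of the rendered
# virtual object to how far a movable internal mass travels from the
# handle. Travel range and maximum virtual length are assumed values.

MASS_TRAVEL_CM = 40.0  # assumed travel range of the internal weight

def mass_position_for(virtual_length_cm, max_virtual_length_cm=100.0):
    """Return the internal mass offset (cm from the handle) for a virtual
    object of the given length. Longer objects shift the weight further
    away, increasing the proxy's felt moment of inertia."""
    fraction = virtual_length_cm / max_virtual_length_cm
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the physical range
    return fraction * MASS_TRAVEL_CM

print(mass_position_for(50.0))   # 20.0 (halfway along the travel range)
print(mass_position_for(150.0))  # 40.0 (clamped to full travel)
```

In practice the returned offset would be sent to an actuator (e.g. a motor driving the mass along a rail) whenever the virtual object changes, so the passive haptic sensation adapts without any active force rendering during the grasp itself.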