By Lucienne Wijgergangs and Sjaak Verwaaijen.

Students of Fontys ICT develop a VR/AR application for a client (a company or organisation) in roughly 10 weeks, working in project groups of 5 students. The applications are developed in Unity and usually run on an HTC Vive. Below are 5 examples of projects carried out over the past 5 years.

  1. Anesthesia

The Anesthesia project (see video) shows a VR application that trains healthcare students in preparing the operating room for surgery.

  2. Saasen

The Saasen project (see video) is about a company emergency response (BHV) training. The emergency responder has to take the right steps to fight a fire in a hospital. This is a mixed-reality application: the user holds a real fire hose, which is matched by a corresponding virtual fire hose in the VR world.

 

  3. Eyetracking with Noldus Information Technology

Problem Statement:

Design an interactable VR environment in Unity to enable Noldus’ testers to track user interaction and focus to improve their Eyetracking algorithm.

Context:

Noldus tests user interaction based on where the user focuses and how they interact with an interface. The client is currently developing an algorithm for eye tracking in a VR environment. By using a Pupil Labs device mounted in the HTC Vive, the product knows exactly what the user is looking at. The algorithm examines this data to determine which part of the 3D environment captures the user's attention. However, the client needs an environment/simulation in which to test this algorithm. The environment can be something realistic, such as traffic in a city centre or a natural landscape, and it should be a random variation of itself every time the simulation runs. The simulation needs to output a log of the objects the user has looked at, with timestamps, tracked using the eye tracker, and the log must be formatted so that the algorithm can read it.
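A minimal sketch of what such a log could look like in Unity is given below. It assumes a world-space gaze ray is already available (approximated here by the headset camera's forward direction, whereas the project derives it from the Pupil Labs tracker); the class name, file name, and CSV layout are illustrative rather than the project's actual format.

    using System.IO;
    using UnityEngine;

    // Minimal sketch of a timestamped gaze log. The gaze ray is approximated
    // by the camera's forward direction; the real project would derive it
    // from the Pupil Labs eye tracker. Names and format are illustrative.
    public class GazeLogger : MonoBehaviour
    {
        public Camera viewCamera;                 // HMD camera
        private StreamWriter log;

        void Start()
        {
            log = new StreamWriter(Path.Combine(Application.persistentDataPath, "gaze_log.csv"));
            log.WriteLine("timestamp;object");    // header the analysis algorithm can parse
        }

        void Update()
        {
            // Cast a ray along the gaze direction and record which object it hits.
            var ray = new Ray(viewCamera.transform.position, viewCamera.transform.forward);
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                log.WriteLine($"{Time.time:F3};{hit.collider.gameObject.name}");
            }
        }

        void OnDestroy() => log?.Close();
    }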

Research Questions:

  • How can we model the VR environment so we can achieve a serious game effect?
  • How can we build VR user interactions that are visually appealing and easy to use?
  • How can we use the data from Pupil Labs (the eye tracker for the HMD) to identify the object the user is looking at?

Outcome of the project:

Noldus Eye Tracking is an application developed for Noldus Information Technology. The client is working on an algorithm that determines the user's fixation. Noldus Eye Tracking will help the client develop, test, and refine this fixation algorithm.

The application contains two scenes with different purposes: Brownian Motion and Traffic Scene. The Brownian Motion scene is mainly for developing and testing the algorithm. It contains moving objects driven by controlled randomness, which is important for the client: the movement of the objects has to be predictable, yet appear random, for testing purposes.
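A minimal sketch of the idea behind controlled randomness: a seeded random generator drives the motion, so every run with the same seed produces exactly the same trajectory while still appearing random to the user. The component below is an illustration of the concept, not the project's actual implementation.

    using UnityEngine;

    // Sketch of "controlled randomness": a seeded System.Random drives the motion,
    // so a run looks random to the user but is exactly reproducible from the seed.
    public class ControlledRandomMover : MonoBehaviour
    {
        public int seed = 42;            // same seed => same trajectory every run
        public float speed = 1.5f;
        public float stepInterval = 0.5f;

        private System.Random rng;
        private Vector3 direction;
        private float nextStepTime;

        void Start()
        {
            rng = new System.Random(seed);
            PickNewDirection();
        }

        void Update()
        {
            if (Time.time >= nextStepTime)
                PickNewDirection();
            transform.position += direction * speed * Time.deltaTime;
        }

        void PickNewDirection()
        {
            // Random unit vector in the horizontal plane, drawn from the seeded generator.
            float angle = (float)(rng.NextDouble() * 2.0 * Mathf.PI);
            direction = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle));
            nextStepTime = Time.time + stepInterval;
        }
    }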

The Traffic Scene is for showcasing and demoing the algorithm. This scene is set in a more realistic environment, based on a busy London square. People and cars move around in this environment and draw the user's attention. The scene also contains smart agents, which help the user perform certain actions in the environment.

 

  4. Operation room

The goal of the project is to create a training/testing simulator for preparing an operating room. In this simulator, student nurses have to prepare the operating room for an upcoming operation. Several operations are implemented, one of which is randomly selected for each run. When the user has finished preparing the operating room, a surgeon enters the room and gives feedback on how well they prepared it.
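A small, hypothetical sketch of the per-run scenario selection: one of the implemented operations is picked at random when the simulator starts. The operation names and fields below are illustrative only.

    using UnityEngine;

    // Hypothetical sketch: pick one of the implemented operations at random
    // at the start of each run. Scenario names are placeholders.
    public class OperationSelector : MonoBehaviour
    {
        public string[] operations = { "Appendectomy", "KneeReplacement" };

        void Start()
        {
            // One operation is chosen at random for this run of the simulator.
            string selected = operations[Random.Range(0, operations.Length)];
            Debug.Log($"Preparing the operating room for: {selected}");
        }
    }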

  5. OnScene

Design an immersive simulation for first-responder trainees in the medical field to improve their performance in stressful situations.

Context

Saasen Groep provides a wide variety of medical training courses. To improve this training they want to invest in the use of VR, and they started developing an immersive simulation together with Fontys. The goal of the project is to improve the already existing application and make the experience more immersive. The primary focus is on adapting the interaction to the stress level of the user, alongside general improvements to the interactions in the simulation.

Results

The result consists of different areas of improvement which will be explained in more detail in the following sections.

 

The Usability improvements consist of:

  • Choose Option by Vision
  • Carry unconscious Person
  • Lip Sync
  • Speech Bubbles

 

The Stress Level Scene Interaction consists of:

  • Analyzing the skin response data
  • Dynamic Sound
  • Body Language
  • Facial Expression

 

Choice by Vision

In the initial state of the project, you had to choose between two options by pointing the controller at one of them and pressing the trigger button.

After some user tests we came up with the idea of letting the user make these choices by looking at the text. When you look at an option, a hover effect indicates that you are selecting it. A spinner then appears, and a microphone icon tells the user to speak. When the spinner finishes, the choice is made.
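A rough sketch of this dwell-based selection is shown below: once the gaze rests on an option, a spinner fills up and the choice is confirmed when it completes. The component assumes gaze hit detection happens elsewhere and calls SetGazed(); field names and the dwell time are illustrative assumptions.

    using System;
    using System.Collections;
    using UnityEngine;
    using UnityEngine.UI;

    // Sketch of "choice by vision": while the user keeps looking at an option,
    // a radial spinner fills; the choice is confirmed when it completes.
    public class GazeChoice : MonoBehaviour
    {
        public Image spinner;            // radial-fill Image used as the spinner
        public float dwellTime = 2f;     // seconds the user must keep looking
        public event Action OnChosen;    // raised when the choice is confirmed

        private Coroutine dwell;

        // Called by the gaze system when this option is looked at / looked away from.
        public void SetGazed(bool gazed)
        {
            if (gazed && dwell == null)
                dwell = StartCoroutine(Dwell());
            else if (!gazed && dwell != null)
            {
                StopCoroutine(dwell);
                dwell = null;
                spinner.fillAmount = 0f;
            }
        }

        private IEnumerator Dwell()
        {
            for (float t = 0f; t < dwellTime; t += Time.deltaTime)
            {
                spinner.fillAmount = t / dwellTime;   // visual hover/progress feedback
                yield return null;
            }
            OnChosen?.Invoke();                        // the choice is made
            dwell = null;
        }
    }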

 

Carry Person

To make the project more interactive, and not just a choice-based experience, we added a carry-person implementation. In the middle of the story there is a part where the victim needs to be placed on the floor. We created an implementation where the user has to grab the shoulders and put the victim on the floor together with the son (a character in the scene), who lifts the legs.

 

Lip Sync

Lip sync is a well-known technique for synchronising an avatar's lip movements with speech. After researching how to implement lip sync, we opted for Salsa. As it is a well-established solution, its TRL is 9. It makes the avatars in the scene look more real and therefore increases the immersiveness of the experience.

 

Speech Bubbles

At the beginning, floating white text appeared when a character spoke. Sometimes this text was not visible against the white walls of the room. We decided to use speech bubbles to counter this problem, which also makes the scene clearer.

Skin Response Analyzer

The goal of the Skin Response Analyzer is to get a reliable indicator of how stressed a person is over time. The analyzer is implemented as a separate, reusable microservice. How the algorithm itself was developed is explained in more detail under “Methodology”. The outcome makes the entire experience more immersive, as the different avatars can react to the behaviour of the user.

 

(Figure: stress level plotted over time)

To validate the algorithm we applied common best practices, which ensure that we can provide certainty about the output data. In particular, we used unit testing to test every step of the algorithm independently. To extend the meaningfulness of the tests, we also created an integration test set that can easily be extended with new test suites as the algorithm improves.

To reduce the risk of bugs we also used a thorough peer-review process: every piece of the algorithm was reviewed by at least one other person.

The added value can be assigned to TRL 6: the algorithm demonstrates that it is feasible to use real-time GSR data in the simulation, and it is also integrated into the scene.

 

Dynamic Sound

At first, OnScene only contained a sad piano as background sound. Based on feedback from a highly experienced first responder, we discovered that these scenarios are usually much more chaotic when it comes to sound.

We have now replaced the piano background music with TV audio: a number of randomly chosen sounds that serve to distract the user while playing.
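A minimal sketch of how such a random soundscape could be driven in Unity: an AudioSource plays randomly chosen TV fragments with random pauses in between. The clip pool and timing values below are illustrative assumptions, not the project's actual setup.

    using UnityEngine;

    // Sketch of the dynamic background sound: randomly chosen TV clips are
    // played with random pauses in between to keep the soundscape chaotic.
    public class RandomTvAudio : MonoBehaviour
    {
        public AudioSource source;       // AudioSource placed on the TV object
        public AudioClip[] clips;        // pool of TV sound fragments
        public Vector2 pauseRange = new Vector2(1f, 5f);

        private float nextPlayTime;

        void Update()
        {
            if (!source.isPlaying && Time.time >= nextPlayTime && clips.Length > 0)
            {
                source.clip = clips[Random.Range(0, clips.Length)];
                source.Play();
                // Schedule the next clip after the current one plus a random pause.
                nextPlayTime = Time.time + source.clip.length + Random.Range(pauseRange.x, pauseRange.y);
            }
        }
    }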

 

Body Language

An important objective of the project was to improve the realism of the human avatars. One way of doing this was to focus on improving their body language so they can better convey their emotional state, which can eventually change depending on the stress level of the user.

Right now, a number of custom animations have been made for the son of the victim inside OnScene’s scenario.

 

Facial Expressions

Facial expressions are very important in this project because they show the stress level and feelings of the avatars more realistically, allowing the user (the trainee) to immerse themselves in the VR app and sense its realism. After researching how to implement facial expressions, we decided to use the same technology as for lip sync, Salsa, by applying its EmoteR component. To implement this feature, the complete Salsa package must be purchased.

 

UX Improvements

UX in virtual reality varies in a lot of ways. We started by researching best, good, and bad practices, and created many different prototypes. When a prototype was finished we performed a user test; the results would lead either to creating a new prototype or to expanding the existing one to fit the needs. This methodology also included a lot of A/B testing.

 

Carry person

It was not initially planned for something like “carry person” to be implemented in the project.

We performed an expert interview with Pieter van Gorkom, and he told us he really missed interaction in the scene, because it felt like a movie with choices.

There are many ways this problem can be tackled. The project had been delivered using the VRTK toolkit, so implementing “carry person” with VRTK would have been the obvious route. After experimenting and researching ways to implement it with VRTK, we decided it was better to make our own custom implementation, as there were many problems that kept us from implementing it with VRTK.
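A rough sketch of the idea behind such a custom grab, without VRTK: while the grip is held near the victim's shoulders, the shoulder rig simply follows the controller. The input wiring and rig setup below are illustrative assumptions, not the project's actual code.

    using UnityEngine;

    // Rough sketch of a custom "carry person" grab: while the grip is held
    // near the victim's shoulders, the shoulder transform follows the controller.
    public class ShoulderGrab : MonoBehaviour
    {
        public Transform controller;        // tracked VR controller transform
        public Transform shoulders;         // transform driving the victim's upper body
        public float grabRadius = 0.25f;

        private bool grabbing;
        private Vector3 offset;

        // Called from the input system when the grip button changes state.
        public void SetGrip(bool pressed)
        {
            if (pressed && Vector3.Distance(controller.position, shoulders.position) < grabRadius)
            {
                grabbing = true;
                offset = shoulders.position - controller.position;
            }
            else if (!pressed)
            {
                grabbing = false;
            }
        }

        void LateUpdate()
        {
            if (grabbing)
                shoulders.position = controller.position + offset;  // shoulders follow the hand
        }
    }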

 

Body Language

While there are many publicly available human avatar animations on the internet, we deduced from a Design Pattern Search that none of them fitted the emotions we wanted to use in the project. Therefore, we decided to make custom animations. Because the OptiTrack room was taken down and we had to work from home most of the time, we ended up making the custom animations with a Rokoko SmartSuit.

 

Lip Sync and Facial Expressions

After initial research on different solutions for lip sync and facial expressions, a Comparison of the viable solutions was made, which led us to the conclusion that Salsa is the best candidate for implementing these features.

Because these solutions charge licence fees, we did Provocative Prototyping to check whether our own implementation would be feasible. Based on the effort estimation and discussions, we concluded that we would propose the project use Salsa.

 

Skin Response Analyzer

The first step in tackling such a problem is a thorough Literature Study. Skin response has been used to analyse arousal for decades. The most common way to work with the galvanic skin response is to analyse it manually, which is commonly done for biofeedback training in a therapeutic context. This research confirmed that the galvanic skin response is interpretable, available in real time, and already widely used.

Further research defined how such an algorithm could work. There are multiple algorithms for interpreting skin response, and most of them split the signal into two components: phasic and tonic. The tonic component is the slowly changing part of the signal, which rises slowly and gradually decreases back to the base level. The phasic component is an impulse indicating an event that aroused the person.
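A simplified sketch of this decomposition: a moving average approximates the slowly changing tonic level, and whatever rises above that baseline is treated as phasic activity. The window size and the exact method below are illustrative; the project's algorithm differs in detail.

    using System;
    using System.Linq;

    // Simplified sketch of splitting a galvanic skin response signal into a
    // tonic component (moving-average baseline) and a phasic component
    // (residual above the baseline).
    public static class GsrDecomposition
    {
        public static (double[] tonic, double[] phasic) Split(double[] signal, int window = 50)
        {
            var tonic = new double[signal.Length];
            var phasic = new double[signal.Length];

            for (int i = 0; i < signal.Length; i++)
            {
                int start = Math.Max(0, i - window + 1);
                // Moving average over the last `window` samples = tonic estimate.
                tonic[i] = signal.Skip(start).Take(i - start + 1).Average();
                // Whatever rises above the baseline is counted as phasic activity.
                phasic[i] = Math.Max(0.0, signal[i] - tonic[i]);
            }
            return (tonic, phasic);
        }
    }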

The research also showed that analysing the data in real time, as opposed to analysing it after a session, is less widespread, which is why a custom algorithm had to be implemented for the needs of the simulation.

To test the effectiveness of such an algorithm, two entirely independent Proofs of Concept were developed first. These prototypes of the algorithm made it possible to verify that a meaningful analysis of the signal is feasible.

Based on these prototypes, the end result (described in “Results”) was achieved and is now in use.