
Less anxiety in VR: Pioneering work to reduce anxiety triggers in immersive media


by Daniel Pohl

In the age of social media, scrolling through a stream of random images is an everyday experience in the 2D world. It’s not uncommon for a seemingly endless scroll through the timeline to include images you did not want to see: a macro shot of a spider for arachnophobes, a graphic war photo of wounded soldiers, or a close-up of fresh sushi next to Kobe beef for vegans.

On a phone, tablet, or monitor, you can look away relatively quickly and move on to the next image. So while the experience of a 2D fear trigger may not have been pleasant, it is more quickly forgotten. Highly immersive 3D media, on the other hand, reproduces real scenes in virtual reality in such a way that viewers feel as present as if they were actually there. With 180° and 360° images, it becomes much more difficult to look away.

One solution would be to close one’s eyes and hope that the next image will not cause anxiety. However, this reaction is rarely instinctive and offers little help, especially with a series of images of similar content. Given the high level of realism in VR, it makes sense to think about minimizing anxiety triggers in immersive media, allowing a broader and more comfortable use for different user groups.

As developers of the VR image and slideshow viewer immerGallery, we have done just that. In collaboration with the University of Würzburg, the student Amelie Hetterich investigated the topic in her bachelor’s thesis. An abstract of the thesis was presented at the ICAT-EGVE scientific conference in Dublin in December 2023.

Since anxiety triggers can be very intense, especially in highly immersive, realistic VR media, this article presents research that reduces their influence and thus allows for a more comfortable, serendipitous viewing of immersive media. In addition to different strategies for different types of anxiety, we also look at possible automated measures and a current product implementation that already uses them.

Previous work on anxiety in virtual reality

Virtual reality has been used for some time in the therapeutic field, where patients are exposed to graded stress with anxiety triggers under professional supervision. Scientific studies have dealt with topics such as fear of flying or fear of heights (Wiederhold et al. 2002, Rothbaum et al. 1997). As early as 1995, Hodges et al. used a virtual floor in immersive VR applications to treat fear of heights.

In 2D computer games, mods such as “Spiders Begone” in Skyrim have replaced the virtual giant spiders with bears. In the VR game Walkabout Mini Golf VR, the large spiders in the attic of the creepy golf course “Widow’s Walkabout” can be disabled in the options if you suffer from arachnophobia.

Unlike these interactively rendered experiences, the images for immersive media, such as 3D photos, 180-degree videos, etc., are already finalized. It is therefore not possible to simply omit certain 3D objects in a rendering engine or swap them for different geometry and re-render the scene. In this area of immersive media, there is a patent application from immerVR that summarizes 13 mitigation strategies and describes some specific algorithms in detail.

The general approaches for reducing anxiety triggers in immersive images and videos described there form the basis for the measures discussed in the following sections.

The basics

First, let’s look at the types of images we are dealing with in virtual reality. In general, much of what we describe here using individual images also applies to video. The most common image formats for virtual reality are:

- flat 2D photos, shown on a virtual screen inside VR
- stereoscopic 3D photos with a separate view for the left and right eye (typically stored side by side)
- hemispherical 180° × 180° images (VR180), monoscopic or stereoscopic
- fully spherical 360° × 180° images, usually stored in an equirectangular projection

Some of these formats make limited sense on traditional 2D displays. Therefore, as we will see later in the AI object recognition example, some common algorithms are not yet optimized for them.

The Application

A current version of the immersive VR image viewer immerGallery was used as the basis for the research, and some parts of the research have since been incorporated back into the product. To be able to respond appropriately to a user’s individual fears, the application lets users specify them in a profile. In a broader sense, such a profile could also be shared across applications, e.g. stored confidentially with the headset account.
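
As an illustration, a user profile of this kind could be as simple as a small data structure that the viewer consults before displaying media. The following is a minimal sketch with hypothetical field names; it does not reflect immerGallery’s actual internal format.

```python
from dataclasses import dataclass

# Hypothetical per-user anxiety profile; field names are illustrative and do
# not reflect immerGallery's actual internal format.
@dataclass
class AnxietyProfile:
    arachnophobia: bool = False    # spiders
    entomophobia: bool = False     # insects in general
    cynophobia: bool = False       # dogs (also affects audio tracks)
    haphephobia: bool = False      # fear of touch (may disable haptics)
    acrophobia: str = "off"        # "off", "moderate", or "high"

    def active_triggers(self) -> list[str]:
        """Names of all triggers the viewer should mitigate for this user."""
        return sorted(k for k, v in self.__dict__.items() if v not in (False, "off"))


profile = AnxietyProfile(entomophobia=True, acrophobia="moderate")
print(profile.active_triggers())   # ['acrophobia', 'entomophobia']
```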

Fears and solutions

While we have already mentioned some theoretical approaches to the basic solution of reducing anxiety triggers, we will now take a closer look at the practical implementation and the insights gained from it.

If you know where potential anxiety triggers are in the image, you can try to remove them. In the course of the bachelor’s thesis, we manually marked the relevant elements as rectangles in an auxiliary file for each image. If the affected area is too large, or the image would have little value without it anyway, the fear trigger can also be defined for the entire image instead of a rectangle.
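
Such an auxiliary file could look like the following sketch: a JSON sidecar per image that lists trigger regions as pixel rectangles plus a category. The file name and field names are hypothetical; the exact format used in the thesis is not reproduced here.

```python
import json

# Hypothetical sidecar annotation: one JSON file per image. A trigger without
# a "rect" entry would mark the entire image as a trigger.
annotation = {
    "image": "attic_360.jpg",
    "triggers": [
        {"type": "spider", "rect": {"x": 2310, "y": 940, "w": 220, "h": 180}},
    ],
}

with open("attic_360.triggers.json", "w") as f:
    json.dump(annotation, f, indent=2)
```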

As a simple option, we then filled the marked pixels of the rectangle with the surrounding color, censoring the trigger. Depending on the background, this can work very well, or it can be distracting if the borders of the rectangle remain visible against a background of very different colors.
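
A minimal sketch of this simple fill, assuming an RGB image as a NumPy array: the rectangle is replaced by the median color of a thin ring of pixels around it, which works well on uniform backgrounds and poorly on textured ones.

```python
import numpy as np

def censor_rect(img: np.ndarray, x: int, y: int, w: int, h: int,
                margin: int = 8) -> np.ndarray:
    """Fill a rectangle with the median color of the pixels just outside it."""
    out = img.copy()
    # Bounding box of the rectangle plus a thin surrounding ring.
    y0, y1 = max(0, y - margin), min(img.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(img.shape[1], x + w + margin)
    patch = out[y0:y1, x0:x1]
    # Mask out the inner rectangle so only the surrounding ring contributes.
    ring_mask = np.ones(patch.shape[:2], dtype=bool)
    ring_mask[(y - y0):(y - y0 + h), (x - x0):(x - x0 + w)] = False
    fill_color = np.median(patch[ring_mask], axis=0)
    out[y:y + h, x:x + w] = fill_color.astype(img.dtype)
    return out
```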

The best solution here is to use AI techniques that can redraw image areas, so-called AI inpainting. Ideally, the viewer does not notice at all that the image has been altered. However, executing such AI algorithms can be performance intensive on current standalone headsets. As part of the research, we therefore only implemented the additional option of offering a pre-processed, AI-manipulated image as an alternative for the respective anxiety trigger.

Here we use the example of an image of a garden table with a beetle sitting on it. The user has previously indicated that he or she is afraid of beetles or insects in general.
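
As a rough illustration of the offline pre-processing step, the sketch below uses OpenCV’s classical inpainting as a lightweight stand-in for a full AI inpainting model; a real pipeline would use a generative model for higher quality. The result is saved as an alternative image that the viewer can load when the matching trigger is active. File names and rectangle values are hypothetical.

```python
import cv2
import numpy as np

# Load the original photo and build a mask from the marked beetle rectangle.
img = cv2.imread("garden_table.jpg")
mask = np.zeros(img.shape[:2], dtype=np.uint8)
x, y, w, h = 1420, 860, 180, 140        # example rectangle from the sidecar file
mask[y:y + h, x:x + w] = 255

# Classical (non-AI) inpainting as a stand-in; a generative model would be
# swapped in here for better results.
clean = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("garden_table.no_insects.jpg", clean)
```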

The special thing about immersive images is that the anxiety triggers have to be removed in both the left and right views of a 3D photo, and care must be taken that the perceived depth of the image does not suffer. As long as the rectangle is kept relatively tight around the removed area, this works quite well in practice.

With AI inpainting, where pixels are randomly “invented” without the AI algorithms taking stereoscopy into account, false impressions of depth can occur more quickly. With VR180 and 360-degree images in equirectangular format, it should also be noted that the distortion of the format may need to be compensated for first, which will be discussed in more detail later.
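
For side-by-side stereo photos, one way to keep the two views consistent is to apply the exact same rectangle to both halves, shifted horizontally by the object’s disparity. A minimal sketch, reusing the censor_rect() helper from above; the disparity value is assumed to be known, e.g. from a depth map.

```python
import numpy as np

def censor_stereo_sbs(img: np.ndarray, x: int, y: int, w: int, h: int,
                      disparity_px: int = 0) -> np.ndarray:
    """Censor a trigger in both halves of a side-by-side stereo image.

    (x, y, w, h) describe the rectangle in the left-eye half; disparity_px is
    the horizontal offset of the same object in the right-eye half.
    """
    half = img.shape[1] // 2
    left = censor_rect(img[:, :half], x, y, w, h)
    right = censor_rect(img[:, half:], x + disparity_px, y, w, h)
    return np.concatenate([left, right], axis=1)
```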

Space-related anxiety triggers

In addition to image-based triggers, there are many triggers related to the perceived space. For example, heights can trigger acrophobia, rooms that are too small can trigger claustrophobia, and being too close to other people or animals can trigger a fear of touch (haphephobia).

For many of these areas, it is very helpful to generate a depth map to better assess the situation. There are many tools that generate depth maps from both monoscopic and stereoscopic images, but the scaling of the depth values to the real world is often unclear. If the exact model of a 3D camera is known, including the properties of its lenses, the values can be converted fairly well into metric world units, so that e.g. the distance to the ground or to other people can be estimated reasonably well.
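
Given a calibrated stereo camera, the conversion from disparity to metric depth follows the standard pinhole relation depth = f · B / d, with f the focal length in pixels and B the lens baseline in meters. A minimal sketch with illustrative numbers (a 65 mm baseline and a focal length of 1400 px); real values come from the specific camera’s calibration data.

```python
import numpy as np

def disparity_to_metric_depth(disparity_px: np.ndarray,
                              focal_length_px: float,
                              baseline_m: float) -> np.ndarray:
    """Convert a stereo disparity map (in pixels) to metric depth (in meters)."""
    d = np.maximum(disparity_px, 1e-6)        # avoid division by zero
    return focal_length_px * baseline_m / d

# Example: 65 mm lens spacing, ~1400 px focal length.
depth_m = disparity_to_metric_depth(np.array([70.0, 14.0]), 1400.0, 0.065)
print(depth_m)   # ~[1.3, 6.5] meters
```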

As part of the research, we looked more closely at fear of heights. Studies have shown that 3-6% of the population suffer from severe acrophobia, and it is estimated that around 30% of the population experience at least some discomfort at heights. With such a high prevalence, this is definitely one of the most important issues in the area of anxiety in virtual reality, and fortunately there are good solutions.

As mentioned at the beginning, there has been previous research suggesting a virtual floor to combat anxiety in virtual reality. In the meantime, this mitigation strategy has been incorporated with enhancements into the immerGallery product with a user profile for acrophobia.

For a moderate fear profile, an artificial floor with railings under the user’s feet is rendered as a 3D environment for all monoscopic images (e.g., drone photos or 360° action cam photos) and for all images displayed on a virtual screen. The high setting does this for all images.

Why is there a difference between moderate and high? Because the artificially rendered floor in virtual reality, as a real 3D object, naturally has real depth. If the media content displayed around it is only monoscopic, there are no relevant depth conflicts. With stereoscopic media, however, this is not the case: because the user is shown slightly offset images for the left and right eye, human perception interprets the depth of these images, which can then collide with the depth of the artificial floor and result in a less comfortable sense of depth. For users with a strong fear of heights, this trade-off is probably still preferable.
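
Expressed as code, the profile logic described above boils down to a small decision function. This is a paraphrase of the described behavior, not immerGallery source code.

```python
def should_render_safety_floor(acrophobia_level: str,
                               image_is_stereoscopic: bool,
                               shown_on_virtual_screen: bool) -> bool:
    """Decide whether to render the artificial floor with railings.

    "moderate" adds the floor only where no depth conflict can occur
    (monoscopic content or content on a virtual screen); "high" always adds
    it, accepting the possible depth conflict with stereoscopic media.
    """
    if acrophobia_level == "high":
        return True
    if acrophobia_level == "moderate":
        return (not image_is_stereoscopic) or shown_on_virtual_screen
    return False
```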

As the example in the figure shows, the artificial 3D floor against vertigo can also be rendered without perceptual problems when the elements are further apart, as in the left image. In the right (stereoscopic) image, there would be a depth conflict between the perception of the depth of the floor and the depth of the railing. Depending on the severity of the vertigo, the floor may be the better alternative despite the depth conflict.

Of course, an artificial object such as the 3D floor can also somewhat reduce the illusion of presence in virtual media when viewing highly immersive images. To avoid this as much as possible, we offer several other contextual floors to combat vertigo.

As the two examples of the 3D hot air balloon and the 3D underwater environment show, it is certainly possible to provide contextualized floors to combat vertigo that have less impact on immersion.

In an immersive VR slideshow, not only images are shown, but also sound and possibly haptics. For example, if a user is afraid of dogs, a barking soundtrack may trigger anxiety even when the photo shows a natural landscape without any animals. As part of the bachelor’s thesis, we therefore proposed either muting marked sequences of an audio track that contain anxiety triggers or turning such a track off completely.

In order to avoid unnatural gaps, alternative audio tracks could also be offered here, e.g. one where the dog sound is replaced by a cat sound and then loaded depending on the user’s anxiety profile. Haptics is often just the vibration of the controller when navigating through menus or during interactions such as switching images. But these haptic perceptions also increase the overall immersion.
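
A minimal sketch of the muting option, assuming a mono audio track as a NumPy array of samples and trigger segments given in seconds (e.g. from the same kind of sidecar annotation as the image rectangles). Short fades at the segment borders avoid audible clicks.

```python
import numpy as np

def mute_segments(samples: np.ndarray, sample_rate: int,
                  segments: list[tuple[float, float]],
                  fade_ms: float = 50.0) -> np.ndarray:
    """Silence marked time ranges (in seconds) of a mono audio track."""
    out = samples.astype(np.float32).copy()
    fade = int(sample_rate * fade_ms / 1000.0)
    for start_s, end_s in segments:
        a, b = int(start_s * sample_rate), int(end_s * sample_rate)
        out[a:b] = 0.0
        # Linear fade-out before and fade-in after the muted range.
        if fade and a - fade >= 0:
            out[a - fade:a] *= np.linspace(1.0, 0.0, fade)
        if fade and b + fade <= len(out):
            out[b:b + fade] *= np.linspace(0.0, 1.0, fade)
    return out
```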

If touch in human-centered media makes you anxious, you may want to consider turning off all controller haptics. With haptic VR gloves or full-body suits, much finer nuances can be explored in the future.

In addition to image, sound and haptics, immerGallery also offers various virtual environments for viewing media, such as the aforementioned hot-air balloons or the underwater environment, to increase the dynamics and thus the immersion. For nature shots, there is also a meadow environment with pollen and butterflies.

One user told us that he would never use this environment because he is afraid of butterflies. As a result, we now offer “Meadow without Butterflies”, where the pollen still makes a meadow environment more dynamic, but without butterflies to avoid the fear trigger.

User feedback has also made us aware that moving to the next image without knowing what will be displayed can cause some anxiety. It would be possible to open the menu and scroll to find the thumbnail of the next image. In practice, however, this is too time-consuming and reduces immersion. The simple solution is to display a thumbnail of the next image on the controller.

Automating the detection and replacement of fear triggers

By manually tagging and replacing various anxiety triggers, it is already possible to curate immersive content that can be consumed comfortably by a wide audience. Of course, we would like to automate this on a larger scale. We discussed this in more detail in the research paper and will touch on it briefly here.

It is important to note that the hemispherical 180° × 180° and the fully spherical 360° × 180° media are typically stored in an equirectangular format. You can think of this as converting a round globe into a rectangular format. This is only possible with distortions, which can be clearly seen in the following image.

Current AI algorithms are largely trained on normal, perspective images, which of course make up the majority of all images. Unfortunately, many AI detectors do a poor job on equirectangular images.

Therefore, the spherical image format must first be converted into other representations in order to achieve good results with today’s AI detectors.
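
One common way to do this is to render several undistorted perspective views out of the equirectangular image (a gnomonic projection for a grid of yaw/pitch angles) and run the detector on those views. Below is a rough sketch with NumPy and OpenCV; angle conventions and the exact mapping vary between tools, so treat it as illustrative.

```python
import cv2
import numpy as np

def equirect_to_perspective(equi: np.ndarray, yaw_deg: float, pitch_deg: float,
                            fov_deg: float = 90.0, size: int = 640) -> np.ndarray:
    """Render a normal perspective view out of an equirectangular image."""
    h_equi, w_equi = equi.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)

    # Pixel grid of a virtual pinhole camera, centered on the optical axis.
    xs, ys = np.meshgrid(np.arange(size) - size / 2.0,
                         np.arange(size) - size / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the viewing rays by the requested yaw (around y) and pitch (around x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rot_yaw = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                        [0, 1, 0],
                        [-np.sin(yaw), 0, np.cos(yaw)]])
    rot_pitch = np.array([[1, 0, 0],
                          [0, np.cos(pitch), -np.sin(pitch)],
                          [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (rot_yaw @ rot_pitch).T

    # Convert ray directions to longitude/latitude, then to equirect pixel coords.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    map_x = ((lon / np.pi + 1.0) * 0.5 * w_equi).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * h_equi).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)
```

A detector trained on ordinary photos can then be run on, for example, views at yaw angles of 0°, 90°, 180°, and 270°, and the detections mapped back to the spherical image.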

As mentioned above, AI-based inpainting for replacing image content is usually not trained on stereoscopic images and therefore does not automatically make depth-consistent adjustments in the images for the left and right eyes. Unfortunately, the situation is currently similar with AI-based photo upscaling.

Since most photos in the world are 2D, current tools are not trained to “invent” new details for the left- and right-eye images that are consistent between the views and in terms of depth perception. There is certainly much more research to be done here, and we can look forward to new products in this area.
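
One simple way to quantify the problem is to warp one edited view onto the other using a disparity map and measure how much the colors disagree; independently inpainted or upscaled views tend to score noticeably worse than the original pair. This is an illustrative heuristic, not part of any shipping product.

```python
import numpy as np

def stereo_consistency_error(left: np.ndarray, right: np.ndarray,
                             disparity_px: np.ndarray) -> float:
    """Mean absolute color difference between the left view and the right view
    warped onto it using a per-pixel (left-referenced) disparity map."""
    h, w = left.shape[:2]
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    src_x = np.clip((xs - disparity_px).astype(int), 0, w - 1)
    warped = right[np.arange(h)[:, None], src_x]
    return float(np.mean(np.abs(left.astype(np.float32) - warped.astype(np.float32))))
```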

Summary

The issue of anxiety triggers in virtual reality will sooner or later confront many users as the medium becomes more widespread. While interactively rendered games can swap out 3D models that act as anxiety triggers before rendering, the situation is different for already captured 3D photos and 3D videos. In this article, we have outlined a number of research approaches for reducing anxiety triggers in immersive media.

Some of these have already found their way into consumer products. However, much remains to be done in this area, both on the research side and on the part of app developers and providers of AI tools, to make VR even more accessible. It remains an important field with many exciting developments to come. We are pleased to have been able to report on this first phase and look forward to seeing where the journey takes us.

The author Daniel Pohl is CEO and founder of immerVR GmbH. There, Daniel works daily on innovations in the field of immersive media, mostly in the area of VR180 stereo photography. With his app immerGallery, you can experience highly immersive photo galleries with voice-overs and background music in various VR formats on Meta Quest devices – even together with friends in multiplayer.

Source: Mixed News
