Safety Systems in Social VR Games

Research • Design Implications • Ethics • Accessibility

Content Warning: This project includes discussion of sexual harassment and identity-based harassment.

My Role

  • Conducted field studies and usability testing
  • Participated in qualitative coding and thematic analysis
  • Proposed design implications for change in platform design and in the tech industry

Resources

  • Duration: Oct 2022 - Dec 2022
  • Team: Simona Liao, Monika Kwapisz, Ramya Bhagirathi Subramanian

You're playing a game on the latest VR headset. You're having fun when, suddenly, another player makes your virtual safety feel threatened.

Are the current safety systems in social VR games actually helping users?

How can these systems be redesigned to address their inadequacies?

Problem

With the growing popularity of VR headsets, virtual socializing has surged, especially during the pandemic. However, platforms like Horizon Worlds have revealed serious safety challenges, with reports of harassment, including groping and misogynistic behavior, exposing the limitations of existing moderation tools.

Despite efforts by companies and users, harassment on social VR platforms continues to rise, revealing gaps in existing safety features. This research examines the design of safety systems and platform policies in four popular social VR games, VRChat, Horizon Worlds, Rec Room, and AltspaceVR, through the lens of nuanced cases of verbal and physical (embodied in VR) sexual harassment. We aim to provide in-depth design and policy insights that help these platforms protect users in complex and challenging situations.

Why is this relevant?

This project contributes to building a safer and more accessible virtual world. By improving safety norms and providing better support for individuals, this research helps develop a more inclusive and self-aware environment for users from diverse backgrounds and minoritized identities.

How are users affected? 

In 2021, Meta’s push toward the metaverse brought VR platforms into the spotlight. However, the launch of Horizon Worlds quickly revealed critical safety issues, with beta testers reporting harassment, including misogynistic comments and groping. Harassment in online spaces disproportionately impacts women, leading to reduced engagement and even withdrawal. As cases of harassment on VR platforms rise, there is an urgent need to examine platform affordances and address the complexities of VR-specific harassment.

Current Safety Designs & Platform Policies

The most popular VR social games have developed and adopted safety systems with similar functions such as:

Personal Bubble
Creates an invisible space around a player's avatar that other players cannot reach or enter.
Blocking
An imitation of blocking on social media platforms. Some games remove the blocked player's avatar entirely, while others turn it invisible.
Reporting
Users report offenders by filling out forms, often hosted on external platforms, which shifts the responsibility onto them to collect a large amount of evidence that meets rigorous standards.

Implementations of the Personal Bubble vary across platforms: some let players customize the bubble's size or toggle it for different groups of other players. The reporting systems on VR platforms are hard to navigate and use, which heavily shifts the responsibility onto users to collect a large amount of evidence under rigorous standards. These systems often lack transparency about the review process and its consequences, if there are any, leaving players who experience sexual harassment in the dark.
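As a concrete illustration, here is a minimal sketch of how a customizable personal bubble might be represented and enforced. This is an assumption for illustration only, not any platform's actual implementation; the names and default radius are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Audience(Enum):
    """Who the bubble applies to (hypothetical groupings)."""
    EVERYONE = "everyone"
    NON_FRIENDS = "non_friends"
    NOBODY = "nobody"

@dataclass
class PersonalBubble:
    enabled: bool = True                          # the rigid on/off switch
    radius_m: float = 1.2                         # hypothetical default, meters
    applies_to: Audience = Audience.NON_FRIENDS

    def repels(self, other_is_friend: bool, distance_m: float) -> bool:
        """True if another avatar at this distance should be kept out."""
        if not self.enabled or self.applies_to is Audience.NOBODY:
            return False
        if self.applies_to is Audience.NON_FRIENDS and other_is_friend:
            return False
        return distance_m < self.radius_m
```

Even with size and audience customization, the core control remains a single boolean, mirroring the rigid on/off nature criticized below.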

Thus, these common safety designs are inadequate: they fail in the complex situations reported above because of their reactive and rigidly binary (on/off) nature.

Our Analysis

To understand the safety systems employed in VR, the team entered our four target social VR games, VRChat, Rec Room, AltspaceVR, and Horizon Worlds, to test their functionality. In groups and individually, we tested the safety tools common to all platforms, such as the personal bubble/space, block, mute, and report, as well as platform-specific safety strategies.

We gathered user experiences from social media platforms and forums. The online discussion data was collected via keyword searches on Twitter, the Oculus VR forum, and the Reddit forums for VRchat, OculusQuest, AltspaceVR, and RecRoom. The keywords included harass, harassment, sex, sexual, and women. Posts and comments were added to our collection if they discussed any kind of sexual harassment in a social VR game, along with the most prominent comments under those posts that discussed the topic. The result was about 110 online posts. We iteratively created a codebook and categorized the gathered data by experience and by type of user (a sketch of this pipeline follows below). Our users fell into four categories:

01
Players Experiencing Harassment
Those who experience harassment, or share others' experiences
02
Bystanders
Those who observe experiences without reacting, taking a neutral stance online
03
People Who React
Those who react to others' experiences, whether positively or negatively
04
Offenders
Those who harass or offend other players

Often the original posts were made by players experiencing harassment, but other players would be inspired to share their own stories in the comments. For the most part, the reactions category captured commenters on the original posts who did not experience the harassment themselves. There were a few instances of posts and comments from people who were bystanders or offenders in the incident.
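The sketch below shows the kind of keyword filtering and categorization described above. It is an illustrative assumption, not our actual tooling: the post structure, function names, and example data are hypothetical; only the keywords and user categories come from the study.

```python
# Keywords from the study; everything else here is a hypothetical sketch.
KEYWORDS = {"harass", "harassment", "sex", "sexual", "women"}

USER_CATEGORIES = {"experiencing", "bystander", "reactor", "offender"}

def matches_keywords(text: str) -> bool:
    """Keep a post if any search keyword appears in its text."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

def code_post(post: dict, category: str, codes: list[str]) -> dict:
    """Attach a user category and codebook codes to a collected post."""
    assert category in USER_CATEGORIES
    return {**post, "user_category": category, "codes": codes}

# Hypothetical example of collecting and coding one post.
posts = [{"source": "reddit", "text": "I was harassed in a VR lobby..."}]
collected = [p for p in posts if matches_keywords(p["text"])]
coded = [code_post(p, "experiencing", ["embodied_harassment"]) for p in collected]
```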

OUR INFERENCES

All four VR games studied use Personal Bubble and Block features with varying effects and customization. Horizon Worlds and Rec Room allow customization but differ in methods. Blocking, a common safety tool, lacks consistency in VR. For instance, Horizon Worlds uses unidirectional blocking, where only the blocker avoids the blocked user, unlike other platforms with bidirectional blocking, which hides both users from each other. The Personal Bubble addresses VR's physicality challenges, but inconsistent naming across platforms complicates user adaptation.
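To make the distinction concrete, here is a minimal sketch of the two blocking models under assumed semantics; this is an illustration, not any platform's actual logic, and all names are hypothetical.

```python
# blocks holds (blocker, blocked) pairs.
Blocks = set[tuple[str, str]]

def can_see(viewer: str, target: str, blocks: Blocks, bidirectional: bool) -> bool:
    """Whether `viewer` still sees `target` under each blocking model."""
    if (viewer, target) in blocks:       # viewer blocked target: always hidden
        return False
    if bidirectional and (target, viewer) in blocks:
        return False                     # bidirectional also hides the blocker
    return True

blocks: Blocks = {("alice", "bob")}      # alice has blocked bob
# Unidirectional (Horizon Worlds-style): bob can still see alice.
assert can_see("bob", "alice", blocks, bidirectional=False)
# Bidirectional (other platforms): bob no longer sees alice either.
assert not can_see("bob", "alice", blocks, bidirectional=True)
```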

Inconsistencies in settings and feature choices across games pose a major problem for novice users.

Users posting about their first-person experiences of being harassed most frequently reported harassment in the Hate category: 31.9% of posts described misogyny, racism, homophobia, and/or transphobia. (These subcodes were combined into the Hate category because this trend emerged in the data.)
22.4% of harassment accounts included unwanted sexual attention, on par with embodied harassment at 21.6%. This underscores the prevalence and seriousness of both verbal and physical harassment.

The Providing Suggestions code was initially categorized as sympathetic, but through the coding it emerged that suggestions could carry either a supportive or a dismissive tone. Apathetic responses were most common, at 36.4%. 23.7% of responses offered a suggestion (whether with positive or negative intent), 22.9% were sympathetic to the experience of the player being harassed, and 16.9% were negative.

Avatar gender-bending (presenting a different gender through one's avatar) and voice modulators (masking one's real voice) were never offered as suggestions, yet were employed by potentially vulnerable players. The most dramatic differences between reported strategies and responders' suggestions came in the use of the personal bubble and in the suggestion of the block feature.

The differences between reported strategies and responders' suggestions were +33.5 percentage points for personal bubble use and -21.9 for blocking: players experiencing harassment opt for methods like the personal bubble over blocking, contrary to what commenters on their experiences suggest. Responders also encouraged leaving the world at an 8% higher rate than players reported actually doing so.

We coded the 110 instances found on Twitter, Reddit, and the Oculus forums in pairs. The pairs reached inter-rater reliability with Cohen's kappa values of 0.78 (97.12% agreement) and 0.84 (96.25% agreement), respectively. Inconsistent instances were discussed within the pairs to reach a consensus on how the text should be coded. The themes that emerged from coding were mostly in line with the originally observed topics that make up the codebook. However, the misogyny, racism, homophobia, and transphobia codes were collapsed into a category called "hate" in parts of the analysis to better represent the intersectionality and prevalence of these topics.
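For reference, here is a minimal sketch of how each pair's agreement could be computed with scikit-learn; the label arrays are placeholders, not our data.

```python
from sklearn.metrics import cohen_kappa_score

# Placeholder labels: the code each coder in a pair assigned to each instance.
coder_a = ["hate", "embodied", "hate", "unwanted_attention", "hate"]
coder_b = ["hate", "embodied", "hate", "hate", "hate"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)
raw_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

print(f"kappa={kappa:.2f}, raw agreement={raw_agreement:.2%}")
```

Raw percent agreement can be high even when kappa is modest, which is why both numbers are reported above.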

Design Implications and Recommendations

We found that harassment in online VR spaces is a major concern, especially in the categories of embodied sexual harassment, verbal unwanted sexual attention, and hate speech (including misogyny, homophobia/biphobia, racism, and transphobia).

01
Consistency & Customization
Consistent naming, customization options, and similar user journeys across platforms would let players use safety features freely and confidently.
02
Usability of Safety Tools
Should a user feel unsafe at any point, the safety features must be readily at their disposal. Users should not have to navigate multiple dialogs and screens before they can activate them.
03
Increased Accountability
Current harassment prevention relies on strict policies and content moderation. Introducing mutual friend endorsements could add accountability and promote ethical behavior.
04
Norm of Accepting User Preferences
Badges enable users to self-report preferences for intimacy and social interaction (verbal, non-verbal, or text) while also showing support for marginalized communities.
05
Increased Transparency
Users hesitate to block offenders due to unclear consequences and fear of retaliation, viewing it as a negative last resort. Clearly communicating what blocking and reporting actually do would reduce this hesitation.
06
Building a Positive Community
First-time users often avoid returning to social VR games after negative experiences. Building support systems and relatable communities can foster positivity, boosting participation and reducing dropouts.

FUTURE OF VR GAMING

Participatory Design Sessions
PD sessions with marginalized communities help understand user safety practices and identify system gaps.
Inclusive Stakeholder Involvement
Input from users, moderators, and industry representatives provides insights into safety improvements.
Building Gender-Positive Communities
Creating inclusive spaces and updating policies can address issues like misogyny, racism, and homophobia.
Demographic Representation and Data Limitations
The data mainly reflects experiences of women, people of color, and LGBTQ individuals but excludes less visible groups.
Coding Consensus & Data Analysis
Improving coding consistency and expanding data collection will capture a wider range of experiences.
Future Research Opportunities
Future studies should involve more diverse demographics to broaden the understanding of safety challenges.

I won the runner-up prize at the Society of Women Engineers Barcelona Conference for presenting this research at their Poster Competition.