You're playing a fun game on the latest VR headset and having a great time, when suddenly another player in the game makes your virtual safety feel threatened.
How can the current safety systems in social VR games be changed to better address their inadequacies?
Are the current safety systems in social VR games helping users?
With the growing popularity of VR headsets, virtual socializing has surged, especially during the pandemic. However, platforms like Horizon Worlds have revealed serious safety challenges, with reports of harassment, including groping and misogynistic behavior, exposing the limitations of existing moderation tools.
Despite efforts by companies and users, harassment on social VR platforms continues to rise, revealing gaps in existing safety features. This research evaluates the design and policies of safety systems across VRChat, Horizon Worlds, Rec Room, and AltspaceVR. By analyzing nuanced cases of verbal and physical harassment in VR, we aim to offer actionable insights for creating safer and more inclusive virtual environments.
Our project examines the design of safety systems and platform policies in four popular social VR games through the lens of nuanced cases of verbal and physical (embodied in VR) sexual harassment. We aim to provide in-depth design and policy insights that help social VR games protect users in complex and challenging situations.
This project contributes toward building a safer and more accessible virtual world. By improving safety norms and providing better support for individuals in virtual spaces, this research helps develop a more inclusive and self-aware environment for users from diverse backgrounds and minoritized identities.
In 2021, Meta’s push toward the metaverse brought VR platforms into the spotlight. However, the launch of Horizon Worlds quickly revealed critical safety issues, with beta testers reporting harassment, including misogynistic comments and groping. Harassment in online spaces disproportionately impacts women, leading to reduced engagement and even withdrawal. As cases of harassment on VR platforms rise, there is an urgent need to examine platform affordances and address the complexities of VR-specific harassment.
The most popular social VR games have developed and adopted safety systems with similar functions, such as the personal bubble, block, mute, and report tools.
Platforms vary in their implementation of the Personal Bubble: some let users customize the bubble's size or toggle it on and off for different groups of other players. The reporting systems on VR platforms are hard to navigate and use, which heavily shifts the responsibility onto users to collect a large amount of evidence meeting rigorous standards. These systems often lack transparency about the review process and its consequences, if there are any, leaving players who experience sexual harassment in the dark.
Thus, these most common safety designs are inadequate and fail in the complex situations reported above because of their reactive nature and rigid binary (on/off) design.
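As a rough illustration of that binary design, here is a minimal sketch of how a personal-bubble check might work. The `Avatar` class, its field names, and the default radius are hypothetical stand-ins, not drawn from any platform's actual code:

```python
# Minimal sketch of a personal-bubble check, assuming a simple 3D
# position model. Names (Avatar, bubble_radius, should_hide) are
# hypothetical, not taken from any platform's actual implementation.
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    y: float
    z: float
    bubble_on: bool = True        # the rigid on/off toggle criticized above
    bubble_radius: float = 1.2    # some platforms let users customize this

def should_hide(me: Avatar, other: Avatar) -> bool:
    """Hide `other` from `me` when they intrude into my personal bubble."""
    if not me.bubble_on:
        return False              # bubble off: no protection at all
    dist = math.dist((me.x, me.y, me.z), (other.x, other.y, other.z))
    return dist < me.bubble_radius
```

Note the all-or-nothing logic: once the toggle is off, the check offers no protection whatsoever, which is exactly the rigidity that fails users in nuanced situations.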
To understand the safety systems employed in VR, the team entered our four target social VR games, VRChat, Rec Room, AltspaceVR, and Horizon Worlds, to test their functionality. Both in groups and individually, we tested the common safety tools shared across all platforms, such as the personal bubble/space, block, mute, and report, as well as platform-specific safety strategies.
We gathered user experiences from social media platforms and forums. The online discussion data was collected via keyword searches on Twitter, the Oculus VR forum, and the Reddit forums for VRchat, OculusQuest, AltspaceVR, and RecRoom. The keywords included harass, harassment, sex, sexual, and women. Posts and comments were added to our collection if they discussed any kind of sexual harassment in a social VR game, as were the most prominent comments under posts that discussed the topic. The result was about 110 online posts. Iteratively, we created a codebook and categorized the gathered data by experience and by type of user. Our users fell into four categories: players experiencing harassment firsthand, players reacting to others' accounts, bystanders, and offenders.
Often the original posts were made by players experiencing harassment, and other players would be inspired to share their own stories in the comments. The reactions category mostly captured commenters on the original posts who did not experience the harassment themselves. There were a few posts and comments from people who were bystanders or offenders in the incidents.
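To illustrate the keyword-based collection step described above, here is a minimal sketch in Python. The `posts` list and its field names are hypothetical stand-ins for the data gathered through each site's own search interface:

```python
# Minimal sketch of the keyword filter used to decide whether a post
# is a candidate for the collection. The post structure and example
# texts are hypothetical illustrations.

KEYWORDS = {"harass", "harassment", "sex", "sexual", "women"}

def matches_keywords(text: str) -> bool:
    """Keep a post if any search keyword appears in its text."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

posts = [
    {"platform": "Reddit", "text": "Someone kept harassing me in VRChat last night"},
    {"platform": "Twitter", "text": "Loving the new Rec Room update!"},
]
collected = [p for p in posts if matches_keywords(p["text"])]
print(len(collected))  # 1: only the first post mentions harassment
```

In practice, of course, keyword matches were only a first pass; each post was then read to confirm it actually discussed sexual harassment in a social VR game before being added.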
All four VR games studied offer Personal Bubble and Block features with varying effects and customization. Horizon Worlds and Rec Room both allow customization but differ in their methods. Blocking, a common safety tool, lacks consistency in VR. For instance, Horizon Worlds uses unidirectional blocking, where only the blocker stops seeing the blocked user, unlike other platforms' bidirectional blocking, which hides both users from each other. The Personal Bubble addresses the physicality challenges unique to VR, but inconsistent naming across platforms complicates user adaptation.
These inconsistencies across game settings and options pose a major problem for novice users.
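To make the blocking inconsistency concrete, here is a minimal sketch contrasting the two semantics. The function names and data structures are hypothetical illustrations, not any platform's actual API:

```python
# Minimal sketch contrasting unidirectional and bidirectional blocking,
# assuming each player keeps a set of blocked user IDs.

def visible_unidirectional(viewer_blocks: set, other: str) -> bool:
    """Horizon Worlds-style: only the blocker stops seeing the blocked user."""
    return other not in viewer_blocks

def visible_bidirectional(blocks: dict, viewer: str, other: str) -> bool:
    """Other platforms: a block in either direction hides both users."""
    return (other not in blocks.get(viewer, set())
            and viewer not in blocks.get(other, set()))

# Example: Alice blocks Bob.
blocks = {"alice": {"bob"}, "bob": set()}
print(visible_unidirectional(blocks["alice"], "bob"))    # False: Alice no longer sees Bob
print(visible_unidirectional(blocks["bob"], "alice"))    # True: Bob can still see Alice
print(visible_bidirectional(blocks, "bob", "alice"))     # False: block hides both ways
```

Under the unidirectional model, a harasser can keep watching and following the person who blocked them, a consequence that is far from obvious to a user who assumes blocking works the same way everywhere.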
Users posting about their first-person experiences of being harassed most frequently reported harassment in the Hate category, with 31.9% of posts describing misogyny, racism, homophobia, and/or transphobia. (These subcodes were combined into the Hate category as this trend emerged in the data.)
22.4% of harassment accounts included unwanted sexual attention, which was on par with embodied harassment at 21.6%. This underscores the prevalence and seriousness of both verbal and physical harassment.
The Providing Suggestions code was initially categorized as sympathetic, but during coding it emerged that a suggestion could carry either a supportive or a dismissive tone. Apathetic responses were the most common, with 36.4% of responses falling under this category. 23.7% of responses gave a suggestion, regardless of positive or negative intent; 22.9% were sympathetic to the experience of the player being harassed; and 16.9% were negative.
Avatar gender-bending (presenting a different gender through the player's avatar) and voice modulators (presenting a different voice online) were never offered as suggestions but were employed by potentially vulnerable players. The most dramatic differences between reported and suggested strategies came in the use of the personal bubble and the suggestion of the block feature.
The differences between reported strategies and responders' suggestions were +33.5 percentage points for personal bubble use and -21.9 points for blocking. In other words, players experiencing harassment opt for methods like the personal bubble over blocking, contrary to what commenters on their experiences suggest they do. Response posters also encouraged leaving the world at an 8% higher rate than harassed players reported doing so.
We coded the 110 instances found on Twitter, Reddit, and the Oculus forums in pairs. The pairs reached inter-rater reliabilities of Cohen's kappa 0.78 (97.12% agreement) and 0.84 (96.25% agreement), respectively. Instances coded inconsistently were discussed within the pairs to reach a consensus on how the text should be coded. The themes that emerged from coding were mostly in line with the originally observed topics that make up the codebook. However, the misogyny, racism, homophobia, and transphobia codes were collapsed into a single "hate" category in parts of the analysis to better represent the intersectionality and prevalence of these topics.
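For readers unfamiliar with the metric, here is a minimal sketch of how such an inter-rater reliability check can be computed with scikit-learn's `cohen_kappa_score`. The example labels are hypothetical, not our actual coded data:

```python
# Minimal sketch of an inter-rater reliability check, assuming each
# coder's labels are stored as parallel lists over the same instances.
from sklearn.metrics import cohen_kappa_score

coder_a = ["hate", "embodied", "unwanted", "hate", "embodied"]
coder_b = ["hate", "embodied", "unwanted", "hate", "unwanted"]

kappa = cohen_kappa_score(coder_a, coder_b)
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"kappa={kappa:.2f}, raw agreement={agreement:.1%}")
```

Cohen's kappa is preferred over raw percent agreement because it discounts agreement expected by chance, which is why both numbers are reported above.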
We found that harassment in social VR spaces is a major concern, especially in the categories of embodied sexual harassment, verbal unwanted sexual attention, and hate speech (including misogyny, homophobia/biphobia, racism, and transphobia).