Sexual assault in the metaverse isn’t a glitch that can be fixed


A growing body of research has documented multiple challenges with the commercial content moderation conducted by social media platforms today, from appalling working conditions for moderators, who are overwhelmingly located in the Global South, to biased algorithms and a lack of transparency and accountability in moderation decisions.

Moderation is no doubt an incredibly difficult task. A recent high-profile case of sexual assault on Meta’s platform Horizon Worlds raises further questions about how virtual reality environments, such as the metaverse, should be moderated.

Despite Facebook’s recent rebranding as a “metaverse” company, the metaverse is still a speculative platform.

As Julie Inman Grant, Australia’s eSafety Commissioner, speculates, the metaverse could refer to “an extended 3D world”, or “a multiverse with a range of ‘walled garden’ offerings”.

Even Meta admitted earlier this year that the implementation of its vision for the metaverse is at least five to 10 years away.

However, this didn’t stop CEO Mark Zuckerberg from painting a speculative vision of the metaverse in a 2021 keynote – a set of virtual spaces where people from different physical spaces can congregate and seamlessly interact in real time with a sense of presence and total immersion.

For many critics, this real-time multisensory social interaction is what distinguishes the metaverse from traditional “two-dimensional” social media platforms, resulting in a corresponding shift from moderating “content” to moderating “behaviour”.

Moderating bodies and movement

The metaverse adds complexity to content moderation – not only do text and images need to be checked for unsavoury content, but so do actions, movements, and voices. This amounts to hundreds of thousands of minute movements that would need to be assessed in the course of content moderation.

The gargantuan volume of these materials creates a problem of scale to which, once again, artificial intelligence (AI) seems to be the perfect solution.

Nick Clegg, Meta’s head of global affairs, muses that moderation in the metaverse might adapt existing AI tools currently being trialled in online gaming, such as Good Game, Well Played (GGWP). GGWP is AI software that produces a social score for players, based on a number of anti-social behaviours, such as quitting an online match before it’s finished, writing racial slurs to other players in a game’s chat feature, or not being a “team player”.

GGWP creator Dennis Fong says the chat analysis in particular pays attention to the context in which potentially hateful speech is made. If a report is made against a player with a low social score, or by a highly ranked player in the game, the report is placed at the top of the queue for a human moderator to assess.
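To make this prioritisation logic concrete, here is a minimal sketch of how a report queue weighted by social score and reporter rank might work. It is purely illustrative, written in Python under our own assumptions: the `Report` fields, the `report_priority` weighting, and the 0–1 score ranges are invented for the example and do not reflect GGWP’s actual implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    """A player report; only `priority` is used for ordering in the heap."""
    priority: float
    reporter_id: str = field(compare=False)
    target_id: str = field(compare=False)
    reason: str = field(compare=False)

def report_priority(target_social_score: float, reporter_rank: float) -> float:
    """Toy weighting: a low social score for the reported player and a high
    rank for the reporter both move the report towards the front of the queue.
    heapq is a min-heap, so smaller values are reviewed first."""
    return target_social_score - reporter_rank

queue: list[Report] = []

def file_report(reporter_id: str, reporter_rank: float,
                target_id: str, target_social_score: float, reason: str) -> None:
    heapq.heappush(queue, Report(
        priority=report_priority(target_social_score, reporter_rank),
        reporter_id=reporter_id,
        target_id=target_id,
        reason=reason,
    ))

# A report against a player with a poor history, filed by a highly ranked
# player, surfaces ahead of a report against a well-behaved player.
file_report("veteran", 0.9, "repeat_offender", 0.2, "racial slur in chat")
file_report("newcomer", 0.1, "ordinary_player", 0.8, "left match early")

next_case = heapq.heappop(queue)
print(next_case.target_id, "-", next_case.reason)  # repeat_offender - racial slur in chat
```

The point of the sketch is simply that any such prioritisation encodes judgements about whose reports matter more and whose behaviour counts as anti-social; the real system would also need the contextual chat analysis Fong describes.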

There are familiar challenges with adapting this approach to moderation in the metaverse. AI software is only as sensitive as the data being fed into it, which has historically led to serious problems – as demonstrated by Google’s autocomplete feature suggesting searches that are racist or sexist, or that promote misinformation.


An approach that leans so heavily on AI, with little human involvement, has also led to policies that disproportionately affect minority groups (such as YouTube’s demonetisation policy affecting LGBTQIA+ content), and tacitly condones behaviour that excludes many minority groups from these spaces.

Further, in order to tackle behaviour moderation in relation to sexual assault, AI software will need to address bodily movements, which raises the question: how do we determine what bodily movements are sexual, given that sexualised violence is highly complex, fluid, context-dependent, and cannot be neatly defined?

And, how might these rules need to be modified in different spaces within the metaverse?

Patrolling the metaverse?

Traditional moderation software won’t be able to cope with the demands of the metaverse.

Matthew Friedman, CEO of The Mekong Club, a non-profit organisation addressing human trafficking and forced labour, takes his cue from how abuses are dealt with in real life. In an SCMP op-ed, Friedman proposes that virtual police might be required to patrol the metaverse to keep everyone safe, particularly vulnerable groups such as women and children.

This proposal isn’t surprising, as people have always imagined cyberspace to look like a version of real urban spaces. So, if we expect the police to patrol our cities, we’ll similarly expect them to patrol the metaverse.

Clegg also draws real-world parallels for behaviour moderation in the metaverse, comparing it with how certain behaviours are enforced in public spaces, such as bars and cafes.

But Clegg seems to have already accepted that this approach won’t be enforced:

“We wouldn’t hold a bar manager responsible for real-time speech moderation in their bar, as if they should stand over your table, listen intently to your conversation, and silence you if they hear things they don’t like.”

This implies two things: that moderation will be left up to individuals or smaller companies that create virtual spaces within the metaverse, and that Meta assumes behaviour moderation on the scale of the metaverse will ultimately not be possible – both of which absolve Meta from the bulk of responsibility when it comes to moderation of behaviour in the metaverse.

Yet, even if Meta is willing to commit to moderation in a way that no other tech company has previously, the issue of sexualised violence and abuse won’t be resolved by simply employing more people to act as virtual police or bar managers.

As with the human moderators employed by commercial social media platforms, this raises the question of who will be employed as the police, under what working conditions, and to standards set by whom.

Further, police forces have historically been ineffective in addressing sexual assault.

More importantly, as one of us has previously argued, this discourse of “police as protectors” and “women as vulnerable” is highly problematic, as it pushes women into the position of victims even before sexual assault occurs, and risks legitimising surveillance as the inevitable solution to gender-based violence.

Not a glitch that can be fixed or tweaked

While the metaverse remains difficult to define, people are relying on historical solutions to sexualised violence – either AI, as used on current social media platforms, or police-centred models. Incidents of sexual assault in the metaverse, while troubling, are also unsurprising. While the technology is new, the threats of sexual violence are a continuation of harms we’re familiar with in both the physical and online worlds.


As Meta has historically failed its users on issues of moderation, it’s important to demand clear solutions, and greater responsibility and accountability from the company, before the metaverse becomes embedded in our everyday lives.

But we’ll have to come to terms with the fact there’s no magical technological fix to issues of sexual assault in any medium. We’ll have to acknowledge that sexualised violence in the metaverse isn’t simply a “glitch” that can be fixed or tweaked.



This article was first published on Monash Lens. Read the original article.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.