Sexual assault in the metaverse isn’t a glitch that can be fixed
A growing body of research has documented the challenges of commercial content moderation as conducted by social media platforms today, from the appalling working conditions of moderators, who are overwhelmingly located in the Global South, to biased algorithms and the lack of transparency and accountability in moderation decisions.
Moderation is no doubt an incredibly difficult task. A recent high-profile case of sexual assault on Meta’s platform Horizon Worlds raises further questions about how virtual reality environments, such as the metaverse, should be moderated.
Despite Facebook’s recent rebranding as a “metaverse” company, the metaverse is still a speculative platform.
As Julie Inman Grant, Australia’s eSafety Commissioner, speculates, the metaverse could refer to “an extended 3D world”, or “a multiverse with a range of ‘walled garden’ offerings”.
Even Meta admitted earlier this year that the implementation of its vision for the metaverse is at least five to 10 years away.
However, this didn’t stop CEO Mark Zuckerberg from painting a speculative vision of the metaverse in a 2021 keynote: a set of virtual spaces where people from different physical locations can congregate and seamlessly interact in real time, with a sense of presence and total immersion.
For many critics, this real-time, multisensory social interaction is what distinguishes the metaverse from traditional “two-dimensional” social media platforms, resulting in a corresponding shift from moderating “content” to moderating “behaviour”.
Moderating bodies and movement
The metaverse adds complexity to content moderation: not only do texts and images need to be checked for unsavoury content, but so do actions, movements, and voices. This amounts to hundreds of thousands of minute movements that would need to be assessed in the course of moderation.
The gargantuan volume of these materials creates a problem of scale to which, once again, artificial intelligence (AI) seems to be the perfect solution.
Nick Clegg, Meta’s head of global affairs, muses that moderation in the metaverse might adapt existing AI tools currently being trialled in online gaming, such as Good Game, Well Played (GGWP). GGWP is AI software that produces a social score for players, based on a number of anti-social behaviours, such as quitting an online match before it’s finished, writing racial slurs to other players in a game’s chat feature, or not being a “team player”.
GGWP creator Dennis Fong says the chat function, in particular, pays attention to the context in which potentially hateful speech is made. If a report is made against a player with a poor social score, or is filed by a highly ranked player in the game, that report is placed at the top of the queue for a human moderator to assess.
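To make that prioritisation idea concrete, here is a minimal sketch of how a report queue of this kind might rank reports for human review. It is a hypothetical illustration only, not GGWP’s actual code, API, or scoring model; every name, field, and weighting below is invented.

```python
# Hypothetical sketch of report prioritisation for human moderators.
# Not GGWP's real implementation -- all names, fields, and scores are invented.
from dataclasses import dataclass
from heapq import heappush, heappop
from itertools import count

@dataclass
class Report:
    reported_player_score: float  # lower = worse behaviour history for the reported player
    reporter_rank: float          # higher = more trusted / highly ranked reporter
    description: str

def priority(report: Report) -> float:
    # Smaller value = reviewed sooner: reports against low-score players,
    # or filed by highly ranked players, rise to the top of the queue.
    return report.reported_player_score - report.reporter_rank

class ModerationQueue:
    def __init__(self) -> None:
        self._heap = []
        self._order = count()  # stable tie-breaking for equal priorities

    def submit(self, report: Report) -> None:
        heappush(self._heap, (priority(report), next(self._order), report))

    def next_for_human_review(self) -> Report:
        return heappop(self._heap)[-1]

# Usage: the report against the player with the poor behaviour history
# is surfaced to the human moderator first.
queue = ModerationQueue()
queue.submit(Report(reported_player_score=0.9, reporter_rank=0.2, description="rage-quit"))
queue.submit(Report(reported_player_score=0.1, reporter_rank=0.8, description="slur in chat"))
print(queue.next_for_human_review().description)  # "slur in chat"
```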
There are familiar challenges in adapting this approach to moderation in the metaverse. AI software is only as sensitive as the data being fed into it, which has historically led to serious problems, as demonstrated by Google’s autocomplete feature suggesting searches that are racist or sexist, or that promote misinformation.
An approach that leans so heavily on AI, with little human involvement, has also led to policies that disproportionately affect minority groups (such as YouTube’s demonetisation policy affecting LGBTQIA+ content), and tacitly condones behaviour that excludes many minority groups from these spaces.
Further, to tackle behaviour moderation in relation to sexual assault, AI software will need to address bodily movements, which raises the question: how do we determine which bodily movements are sexual, given that sexualised violence is highly complex, fluid, context-dependent, and cannot be neatly defined?
And, how might these rules need to be modified in different spaces within the metaverse?
Patrolling the metaverse?
Traditional moderation software won’t be able to cope with moderation in the metaverse.
Matthew Friedman, CEO of The Mekong Club, a non-profit organisation addressing human trafficking and forced labour, takes his cue from how abuses are dealt with in real life. In an SCMP op-ed, Friedman proposes that virtual police might be required to patrol the metaverse to keep everyone safe, particularly vulnerable groups such as women and children.
This proposal isn’t surprising, as people have always imagined cyberspace to look like a version of real urban spaces. So, if we expect the police to patrol our cities, we’ll similarly expect them to patrol the metaverse.
Clegg also draws real-world parallels for behaviour moderation in the metaverse, comparing it with how certain behaviours are enforced in public spaces such as bars and cafes.
But Clegg seems to have accepted that this approach won’t be enforced:

“We wouldn’t hold a bar manager responsible for real-time speech moderation in their bar, as if they should stand over your table, listen intently to your conversation, and silence you if they hear things they don’t like.”

This implies two things: that moderation will be left up to the individuals or smaller companies that create virtual spaces within the metaverse, and that Meta assumes behaviour moderation on the scale of the metaverse will ultimately not be possible. Both absolve Meta of the bulk of the responsibility for moderating behaviour in the metaverse.
Yet, even if Meta is willing to commit to moderation in a way that no other tech company has previously, the issue of sexualised violence and abuse won’t be resolved by simply employing more people to act as virtual police or bar managers.
As with the issues surrounding human moderators employed by commercial social media platforms, this raises the questions of who would be employed as the police, under what working conditions, and to standards set by whom.
Further, police forces have historically been ineffective in addressing sexual assault.
More importantly, as one of us has previously argued, “this discourse of ‘police as protectors’ and ‘women as vulnerable’ is highly problematic, as this pushes women into the position of victims even before sexual assault occurs, and risks legitimising surveillance as the inevitable solution to address gender-based violence”.
Not a glitch that can be fixed or tweaked
While the metaverse remains difficult to define, people are relying on historical solutions to sexualised violence, whether through AI on current social media platforms or through police-centred models. Incidents of sexual assault in the metaverse, while troubling, are also unsurprising. While the technology is new, the threats of sexual violence are the continuation of harms we’re familiar with in both the physical and online worlds.
As Meta has historically failed its users on issues of moderation, it’s important to demand clear solutions, as well as more responsibility and accountability from Meta, before the metaverse becomes embedded in our everyday lives.
But we’ll have to come to terms with the fact that there’s no magical technological fix for sexual assault in any medium. We’ll have to acknowledge that sexualised violence in the metaverse isn’t simply a “glitch” that can be fixed or tweaked.
This article was first published on Monash Lens. Read the original article.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.