In-Game Player Toxicity: Using AI to Make Gaming Safer

Last Updated May 18, 2022


In-game player toxicity is a widespread issue, but artificial intelligence is helping create safer and more inclusive spaces in gaming.

The gaming industry is on track to exceed $200 billion in revenue in 2022. Millions of people around the world spend hours a day in these environments. Unfortunately, these spaces aren’t especially safe because of player toxicity.

One recent study revealed that 83% of gamers aged 18 to 45 experienced harassment in online multiplayer games, and 71% of that harassment qualified as “severe abuse,” including physical threats, stalking, and sustained harassment.

And that’s just scratching the surface when it comes to the nature of the harassment. Misogyny, homophobia, racism and generalized bullying are quite common, and in-game player toxicity is a tricky problem with few meaningful and tangible solutions.

AI is changing the game, though. Here’s how.

Defining the Problem: How rampant is in-game player toxicity?

As the gaming industry grows, so does the need to create safe spaces for all users.

“People turn to gaming to escape, assuming the virtual worlds they enter will be safe and fun,” said PickFu co-founder John Li. “The reality, in gamers’ own words, is that these negative interactions significantly impact their mental health and enjoyment of games. This should be a wake-up call for the industry to make gaming a safe space for everyone.”

What exactly do we mean when we talk about in-game toxicity? Examples include doxxing (publicizing others’ private information), sexual harassment, violent threats, and hate speech.

It’s most common in role-playing, action, and adventure genres, but it also seeps into the world of eSports. Games most associated with toxicity include Call of Duty, Fortnite, League of Legends, Overwatch, and Rainbow Six.

These are competitive games, and that tends to bring out emotional and extreme behavior. Add to that the fact that many players use anonymous usernames, and people feel empowered to say things they might not say in everyday situations.

Key Stats

A 2021 survey, “Playing games online in 2021: Toxicity, misogyny and missing moderation,” revealed more precise and troubling information about in-game toxicity.

70% of respondents experienced toxic behavior directly or witnessed it firsthand in an online video game.

Of the thousand gamers polled, 38% were the direct target of abusive remarks, while another 32% had witnessed abuse. Broken down by gender:

  • 46% of men were direct targets
  • 30% of women were direct targets

Nearly half (49%) of the toxic behavior experienced by respondents centered around personal identity, including ethnicity, gender, and sexual orientation.

  • 18% said they had been verbally harassed in an online game specifically because of their gender
  • 16% had been harassed because of their ethnicity
  • 15% had received harassment due to their sexual orientation

The Anti-Defamation League adds that these trends are especially troubling with respect to young people.

“Teenage gamers are harassed almost as often as adult gamers,” ADL CEO Jonathan Greenblatt said in a statement. “By allowing this harassment of young people to continue, we risk children feeling that they should be ashamed of who they are.”

The ADL compares the situation to harassment on social media and suggests that laws must be put in place to protect gamers from toxicity of this nature.

The tricky part is that much of the harassment is verbal, and the question becomes how to recognize, flag, and remove this hate speech without constant human monitoring.

How AI Curbs In-Game Player Toxicity

AI is already present in voice-controlled video games. Machine learning models go further, monitoring for and identifying bullying, profanity, hate speech, sexual harassment, and graphic abusive language.

Further inroads come from facial recognition and emotional AI.

The AI flags and warns offenders. It can also ban users after a series of offenses, or after the first offense if there’s a zero-tolerance policy.
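As a rough illustration of that escalation flow, here’s a minimal sketch in Python. The threshold, function name, and ban logic are hypothetical placeholders, not a description of any particular game’s system:

```python
from collections import defaultdict

# Hypothetical warn-then-ban policy. WARN_LIMIT = 0 would model a
# zero-tolerance policy where the first offense triggers a ban.
WARN_LIMIT = 2
offense_counts = defaultdict(int)

def handle_flagged_message(player_id: str, message: str) -> str:
    """Apply the escalation policy to a message the model has flagged."""
    offense_counts[player_id] += 1
    if offense_counts[player_id] > WARN_LIMIT:
        return f"BAN {player_id} after {offense_counts[player_id]} offenses"
    return f"WARN {player_id}: message removed -> {message!r}"

print(handle_flagged_message("player_42", "<abusive text>"))
```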

That all begins by teaching the AI to recognize harmful language.

There are two primary types of in-game AI moderation, as well as a hybrid of the two: rule-based moderation and advanced, context-aware moderation.

Rule-Based Moderation

This method relies extensively on databases of acceptable and disallowed words and phrases. It also relies on a great deal of human input and maintenance.

A rule-based system provides some protection for gamers, but it struggles to keep pace with natural language, which is continuously evolving.

It also lacks context: slang and creative misspellings, for example, can easily slip through the filter.
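Here’s a minimal sketch of what rule-based filtering looks like in practice. The blocklist below is a placeholder, and the weakness is easy to see: anything not already on the list sails straight through.

```python
# Rule-based moderation: a hand-maintained blocklist of disallowed terms.
# The entries here are placeholders standing in for real slurs and threats.
BLOCKLIST = {"slur_a", "slur_b", "threat_phrase"}

def is_allowed(message: str) -> bool:
    """Reject any message containing a blocklisted word."""
    return not any(word in BLOCKLIST for word in message.lower().split())

print(is_allowed("you are a slur_a"))   # caught by the list
print(is_allowed("you are a s1ur_a"))   # a trivial misspelling slips through
```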

Context-Aware AI Moderation

Context-aware moderation works differently: the model is trained on a vast number of samples of acceptable and objectionable content, and it uses these examples to “learn” what’s acceptable.

This more advanced AI is very reliable and hard to cheat. It analyzes entire sentences instead of picking out single keywords and phrases, so it’s much better at understanding semantic meaning, context, and inference.

This solution is cheaper, faster, and more scalable than a large team of human moderators.
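To make the contrast with the rule-based approach concrete, here’s a toy version of the learn-from-examples idea using a simple scikit-learn text classifier. The training sentences and labels are invented for illustration; a production system would train on a far larger annotated dataset and use a transformer-style model that captures sentence-level context much more reliably.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = toxic, 0 = acceptable). A real dataset would
# contain many thousands of annotated in-game messages.
texts = [
    "great match, well played everyone",
    "nice clutch, that was impressive",
    "you are worthless, uninstall the game",
    "go back to where you came from",
]
labels = [0, 0, 1, 1]

# The model learns from examples instead of matching a fixed word list.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new, unseen message (probability that it is toxic).
print(model.predict_proba(["you played terribly, quit forever"])[0][1])
```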

How do we get there?

Developers need high-quality speech data to train and test their machine learning models for an increasingly global customer base.

Some of the biggest tech companies in the world outsource their data collection to third-party providers who have spent years developing efficient workflows and technology.

Feeding the AI with examples of the in-game toxicity people experience requires speech data of a sensitive nature. You literally need to find people willing to record the very things you’re trying to remove from the space.

That means an airtight process, the right people, a secure platform, and the highest degree of protection and privacy.

Creating a more inclusive space also requires human transcription. To create voice technology – or in this case, moderation – that understands everyone, speech algorithms need to be trained with speech data from people of all demographic backgrounds.

In these cases, human transcriptionists are still needed to capture the edge cases where automatic speech recognition still struggles.

And this is very much an edge case, given that there’s very limited data available on toxicity of this kind.

It begins with collecting and labeling the data, training the models to recognize in-game toxicity, and committing to making these spaces safe for everyone.
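To give a rough picture of what “collecting and labeling the data” can look like, here is one hypothetical shape for an annotated speech sample. The field names and categories are purely illustrative, not a description of our production schema:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedUtterance:
    """One labeled sample in a hypothetical in-game toxicity speech dataset."""
    audio_path: str                  # clip recorded by a consenting contributor
    transcript: str                  # human-verified transcription
    is_toxic: bool                   # overall label assigned by an annotator
    categories: list = field(default_factory=list)            # e.g. ["hate_speech"]
    speaker_demographics: dict = field(default_factory=dict)  # for balanced coverage
    annotator_id: str = ""           # supports quality control and adjudication

sample = AnnotatedUtterance(
    audio_path="clips/session_017.wav",
    transcript="<redacted abusive phrase>",
    is_toxic=True,
    categories=["sexual_harassment"],
    speaker_demographics={"age_band": "25-34", "accent": "en-GB"},
    annotator_id="ann_03",
)
print(sample.is_toxic, sample.categories)
```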

Challenges in Achieving Gaming Environment Utopia with Annotation

For more insight, we turned to Aziz Khan, a Solutions Architect Manager here at Summa Linguae Technologies. Here’s what he had to say:

“Human emotions and nuances are very complex, and that creates an enormous challenge for data scientists.

Gaming companies want to create a gaming environment that is exciting and fun for everyone. However, banning all profanity also makes the players feel like they are in grade school.

So, how do we achieve this utopia?

Data annotation is part of a design, and like any design, it takes time to perfect. Using sentiment analysis, we can control certain aspects of the gaming environment. It enables us to make marginal adjustments over time to make gaming pleasant for everyone.

The initial model is only the tip of the iceberg when seeking an ideal result. It can take months to years to develop the ideal detection model.

Data scientists also face challenges when communicating with the average annotator and the project manager. It can be difficult for them to explain their end goals. It’s normal to see gaps in communication between what the project managers need to hear and what the scientist would like to convey.

As the world of AI annotation expands, it’s safe to assume that we’ll need to get better at cross-departmental communication.”
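To put one concrete example behind the sentiment analysis Aziz mentions, here’s a minimal sketch using NLTK’s off-the-shelf VADER sentiment scorer. The threshold below is a hypothetical, tunable knob; tightening or loosening it over time is one way to make the kind of marginal adjustments he describes:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

def needs_review(message: str, threshold: float = -0.6) -> bool:
    """Flag messages whose compound sentiment score falls below the threshold."""
    return sia.polarity_scores(message)["compound"] < threshold

print(needs_review("gg everyone, great game"))
print(needs_review("you are pathetic and worthless"))
```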

Text-to-image AI is a great example of the kind of challenge Aziz is referring to.

We’re Helping Change the Game

Our clients recognize our data solutions team as extremely versatile, with outside-the-box thinking.

As we’ve developed our crowd and our platform, we’ve gained the ability to offer custom speech data collection at scale.

We also annotate and label the data to make sure it’s usable and effective.

To learn how we can create a speech collection program for your organization, book a consultation now.
