AI and Online Gaming Safety: Power-Up with a “Hybrid Approach” to Moderation

Last Updated September 5, 2023


Online gaming safety is a prevalent issue, and AI is helping build safer and more inclusive spaces in gaming. But human touchpoints still offer the best chance at protecting your gaming community.

Chances are you’re taking a break from playing a game on your phone or computer to read this article. Millions of people all over the world spend hours a day in these gaming environments that, unfortunately, aren’t very safe.

A 2022 survey found adult exposure to white supremacy in online games has more than doubled to 20% of gamers, up from 8% in 2021.

Among young gamers ages 10-17, 15% reported exposure to white supremacist ideologies and themes in online games.

The gaming world isn’t decreasing in size anytime soon. Statista estimates the North American gaming market will amount to 80.9 billion U.S. dollars annually in 2025, up from 56.8 billion U.S. dollars in 2021. North America will likely remain the top-grossing gaming market worldwide despite strong growth in the Asian region.

We’ve written previously about how AI is making gaming safer. Now it’s time to discuss a hybrid method with human touchpoints that will help the gaming community level up!

AI Gaming Moderation

There are two primary types of in-game AI moderation: rule-based moderation and advanced, context-aware moderation.

Rule-Based Moderation

This method relies extensively on databases of acceptable and disallowed words and phrases, both spoken and in text form. It also relies on a great deal of human input and maintenance.

A rule-based system provides some protection for gamers, but it can't keep pace with natural languages that are continuously evolving.

It also lacks context. Slang, for example, can easily get through these filters.
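
To make that concrete, here's a minimal sketch of what a rule-based filter boils down to. The blocklist entries and function name are ours for illustration, not a production implementation:

```python
# A minimal sketch of rule-based filtering against a human-maintained blocklist.
# The entries below are placeholders; a real list holds thousands of words and
# phrases per language and needs constant human upkeep.
import re

BLOCKED_TERMS = {"badword", "another banned phrase"}  # placeholder entries

def is_allowed(message: str) -> bool:
    # Lowercase and strip punctuation so simple variants still match.
    text = " ".join(re.findall(r"[a-z']+", message.lower()))
    return not any(term in text for term in BLOCKED_TERMS)

print(is_allowed("gg, well played"))                 # True
print(is_allowed("that was another banned phrase"))  # False
print(is_allowed("b@dword gets straight through"))   # True -- no context, easy to evade
```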

Context-Aware AI Moderation

To overcome that inflexibility, you need to stay current with data, and lots of it.

This model is trained on a vast number of samples of both acceptable and objectionable content. The AI uses these examples to “learn” what’s acceptable and to flag offensive language.

This more advanced AI is considerably more reliable and harder to cheat. It analyzes entire sentences instead of picking out single keywords or phrases, so it’s much better at understanding semantic meaning, context, and inference.
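
As a rough illustration, a context-aware setup scores whole messages with a trained classifier rather than matching keywords. The sketch below assumes the Hugging Face transformers library and a publicly available toxicity model (unitary/toxic-bert is one example); swap in whichever model your moderation stack actually uses:

```python
# Sketch: scoring whole sentences with a pretrained transformer classifier.
# The model name is an assumption -- substitute your own moderation model.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "nice shot, you absolutely carried that round",
    "people like you don't belong on this server",
]

for msg in messages:
    result = classifier(msg)[0]  # the full sentence is scored, not isolated keywords
    print(f"{msg!r} -> {result['label']} ({result['score']:.2f})")
```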

This solution is cheaper, faster, and more scalable than a large team of human moderators.

But, as Aziz Khan, a Solutions Architect Manager here at Summa Linguae Technologies, notes, “Human emotions and nuances are very complex, and that creates an enormous challenge for data scientists.”

What’s more, have you heard about “algospeak”? These code words or turns of phrase are making things even more challenging and prompting human touchpoints in the moderation process.

“Algospeak” and the Hybrid Approach to Online Gaming Safety

In a recent TELUS International survey, more than half of respondents (51%) said they have seen “algospeak” on social media and in gaming communities.

“Internet users and gamers have become increasingly creative in their efforts to circumvent AI-powered content moderation of offensive or banned expressions online,” according to the accompanying study.

“A combination of ‘algorithm’ and ‘speak,’ algospeak is the collection of codewords, slang, deliberate typos, emojis and the use of different words that sound or have a meaning that is similar to the intended one.

“For example, ‘unalive’ is a regularly used algospeak term for ‘dead,’ or ‘The Vid’ for COVID-19. While algospeak can help individuals – including those in marginalized communities – discuss topics perceived by some to be controversial without having their content automatically flagged for removal, it also can be used by those wanting to intimidate, harass and cyberbully others.”

If common slang is a challenge for gaming moderation, then keeping up with algospeak isn’t something that can be done by AI automation alone.
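
One way teams handle this is a human-curated lookup that rewrites known algospeak before the AI sees the message. The sketch below is illustrative only; the point is that the map stays useful only if people keep feeding it new codewords:

```python
# Sketch: normalizing known algospeak before classification.
# The mapping is tiny and illustrative; humans have to keep it current.
ALGOSPEAK_MAP = {
    "unalive": "dead",
    "the vid": "covid-19",
}

def normalize(message: str) -> str:
    text = message.lower()
    for codeword, standard in ALGOSPEAK_MAP.items():
        text = text.replace(codeword, standard)
    return text

print(normalize("Half the lobby caught The Vid last week"))
# -> "half the lobby caught covid-19 last week"
```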

Keeping Humans in the Loop

The “human-in-the-loop” approach is a powerful way to leverage the strengths of both human intelligence and machine learning algorithms. This helps achieve better results in tasks that require complex understanding, context, and decision-making.

Here’s how the process generally works in the context of gaming moderation:

Initial Training

Machine learning models are trained on a labeled dataset of content that needs to be moderated. This could include examples of offensive language, hate speech, explicit content, etc. Human moderators are heavily involved in curating and annotating this dataset. 
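
As a simple picture of what that labeled dataset feeds into, here's a hedged sketch using scikit-learn. The handful of records and the two labels are invented for illustration; a real dataset would hold many thousands of human-annotated examples:

```python
# Sketch: training a baseline moderation classifier on human-labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record was reviewed and labeled by a human moderator (labels are illustrative).
labeled_data = [
    ("great teamwork, everyone", "ok"),
    ("you are trash, uninstall the game", "abusive"),
    ("gg, rematch later?", "ok"),
    ("nobody wants your kind on this server", "abusive"),
]

texts, labels = zip(*labeled_data)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
```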

Algorithmic Automation 

After training, the machine learning model can start automatically flagging or categorizing content based on what it has learned from the training dataset. However, these automated systems are not perfect and can sometimes make mistakes or struggle with nuanced context. 
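
In practice, “not perfect” is usually handled with confidence thresholds: high-confidence calls are actioned automatically, and anything uncertain is routed to a person. A rough sketch, reusing the illustrative model trained above:

```python
# Sketch: route low-confidence predictions to human review instead of auto-actioning.
AUTO_ACTION_THRESHOLD = 0.90  # illustrative cutoff

def triage(message: str) -> str:
    probabilities = model.predict_proba([message])[0]
    label = model.classes_[probabilities.argmax()]
    confidence = probabilities.max()
    if label == "abusive" and confidence >= AUTO_ACTION_THRESHOLD:
        return "auto-remove"
    if label == "abusive":
        return "queue for human review"  # the model isn't sure -- a person decides
    return "allow"

print(triage("you are trash, uninstall the game"))
```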

Human Oversight

Human moderators review the content flagged by the automated system. They can make corrections, provide additional labels, and essentially “teach” the model where it went wrong. This feedback loop is essential for refining the model’s performance over time.
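
A minimal sketch of that feedback loop, with invented names for the correction store, might look like this:

```python
# Sketch: capturing moderator decisions so they can be fed back into retraining.
corrections = []  # (message, human_label) pairs gathered from review

def record_review(message: str, model_label: str, human_label: str) -> None:
    if human_label != model_label:
        print(f"model said {model_label!r}, moderator corrected to {human_label!r}")
    corrections.append((message, human_label))

# A moderator overturns a false positive on gaming slang.
record_review("that clutch was filthy, well played",
              model_label="abusive", human_label="ok")
```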

Continuous Improvement

Human feedback is used to retrain the model, incorporating new examples and corrections. This iterative process helps the model improve its accuracy and understanding of complex contexts.
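
Continuing the illustrative sketch from the earlier steps, retraining simply folds the corrected examples back into the training set on a regular schedule:

```python
# Sketch: periodic retraining with human corrections folded in.
updated_data = list(labeled_data) + corrections
texts, labels = zip(*updated_data)
model.fit(texts, labels)  # in production this runs on a schedule, with evaluation gates
```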

Better Decision-Making

Humans are better at understanding sarcasm, cultural references, and subtle nuances that can be challenging for algorithms. By involving humans in the decision-making process, the system becomes more effective at maintaining a positive user experience.

Adapting to Changing Contexts

Community guidelines, cultural norms, and legal regulations can change over time. Human moderators are essential for staying up-to-date and adjusting the system’s criteria accordingly.

Ethical and Legal Considerations

Human moderators are crucial for ensuring that the content moderation process aligns with ethical standards, respects freedom of speech within legal boundaries, and considers cultural sensitivities.

This delicate balance between human and machine involvement ensures that the moderation process is not only accurate but also sensitive to the complexities of human communication.

It’s a collaborative approach that leverages the strengths of both AI and human moderators to create a safer and more welcoming online environment.

Team Up with our Data Annotation Services for Better Online Gaming Safety Options

In trying to keep online gaming safe and inclusive, human transcriptionists capture the edge cases where automatic speech recognition still struggles.

And this is very much an edge case, since there’s limited data covering these levels of toxicity and the corresponding algospeak.

It begins with collecting and labeling the data, training the models to recognize in-game toxicity, and committing to making these spaces safe for everyone.

As innovators in the data collection space, we offer flexible, customizable data services that evolve with your needs.

Render your data meaningful and train your algorithm free from biases with our labeling and classification services for text, speech, image, and video data.

So, contact us today to learn more.
