AI Automation in Gaming – The Good, the Bad, and the Uncertain

Activision, the publisher behind the popular ‘Call of Duty’ franchise, has announced plans to utilise an AI tool known as ToxMod to monitor and report on hate speech in multiplayer voice chats. This move comes amid increasing concerns over online toxicity.

The gaming industry is turning to AI technology in new and untested ways, bringing both promise and uncertainty.

On the one hand, AI promises to make online interactions safer by detecting toxic behaviour; on the other, questions remain about the copyright status of AI-generated content.

Gaming companies are still working out how best to apply these technologies.

Making Gaming Safer with AI Moderation

A key application of AI is voice chat moderation, aimed at curbing hate speech and harassment in popular online multiplayer games.

Activision Blizzard recently announced a partnership with AI firm Modulate to integrate its “ToxMod” software into the voice chat of upcoming Call of Duty titles.

ToxMod uses machine learning to analyse the tone and context of conversations, distinguishing playful banter from truly harmful behaviour.

While ToxMod doesn’t directly punish players, it detects and reports violations to Activision’s moderators.
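
To make this detect-and-flag pattern concrete, here is a minimal Python sketch. ToxMod’s actual models, thresholds, and interfaces are proprietary, so every name and number below is a hypothetical stand-in: a classifier scores a transcribed clip in context, and suspect clips are routed to a human review queue rather than punished automatically.

```python
# Minimal sketch of the detect-and-flag moderation pattern described
# above. All names, scores, and thresholds here are hypothetical;
# ToxMod's real pipeline and models are proprietary.

from dataclasses import dataclass


@dataclass
class VoiceClip:
    player_id: str
    transcript: str      # assumed output of a speech-to-text stage
    context: list[str]   # preceding lines from the same conversation


def toxicity_score(clip: VoiceClip) -> float:
    """Stand-in for a learned classifier that weighs tone and
    conversational context rather than isolated keywords."""
    hostile_terms = {"idiot", "trash"}  # toy lexicon for illustration
    hits = sum(word in hostile_terms
               for word in clip.transcript.lower().split())
    # A real model would use clip.context to separate friendly banter
    # from targeted harassment; this toy version just counts terms.
    return min(1.0, hits / 2)


REVIEW_THRESHOLD = 0.5  # hypothetical cut-off


def moderate(clip: VoiceClip, review_queue: list) -> None:
    # The system only reports: flagged clips go to human moderators,
    # and no automatic punishment is applied here.
    if toxicity_score(clip) >= REVIEW_THRESHOLD:
        review_queue.append((clip.player_id, clip.transcript))


queue: list = []
moderate(VoiceClip("p42", "you absolute idiot", ["gg", "nice shot"]), queue)
print(queue)  # [('p42', 'you absolute idiot')] -> awaiting human review
```

The design point worth noting is that the model’s output is only a routing signal; enforcement stays with human moderators, matching how Activision describes the system.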

Game engine maker Unity has also unveiled a rival AI-powered “Safe Voice” product for analysing in-game voice chat.

Safe Voice similarly promises to detect disruptive behaviours and prioritise moderation for game developers.
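
The “prioritised moderation” idea both vendors describe can be pictured as a severity-ordered review queue, so the worst incidents surface to human moderators first. The sketch below is an assumption about the general technique, using Python’s standard heapq module, not Unity’s actual Safe Voice API.

```python
# Hedged sketch of severity-based moderation triage. The scoring and
# queue design are assumptions; Unity's Safe Voice internals are not
# public.

import heapq

review_heap: list[tuple[float, str, str]] = []


def enqueue(severity: float, player_id: str, transcript: str) -> None:
    # heapq implements a min-heap, so severity is negated to ensure
    # the most severe clip is popped first.
    heapq.heappush(review_heap, (-severity, player_id, transcript))


enqueue(0.9, "p7", "sustained targeted harassment")
enqueue(0.3, "p2", "borderline trash talk")

neg_severity, player, text = heapq.heappop(review_heap)
print(f"review first: {player} (severity {-neg_severity}) -> {text}")
# -> review first: p7 (severity 0.9) -> sustained targeted harassment
```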

Both companies tout these AI moderation systems as solutions to gaming’s ongoing struggle with toxic behaviour, while emphasising human oversight in enforcement actions. However, the accuracy and fairness of such systems have yet to be tested at scale.

Copyright Concerns Around AI Content

At the same time, uncertainties around copyright law are leading some gaming firms to shun AI-generated content altogether.

Valve Corporation, the owner of the popular Steam online game store, confirmed reports that it has rejected games using AI-created assets over legal concerns.

Valve has told affected developers directly that the rejections stem from the ambiguous legal status of AI training data.

Much of that data is drawn from copyrighted works without the owners’ explicit permission. While no definitive rulings exist, Valve is playing it safe until clearer guidelines emerge.

This conservative approach has drawn mixed reactions from game developers. Some, eager to experiment with AI tools, see it as stifling innovation, while others worry that looser rules would invite a flood of low-quality AI content.

Legal experts predict AI systems will eventually shift towards training exclusively on data free of licensing restrictions.

Navigating the Unknown

Companies face tricky decisions around responsible implementation as AI assumes a growing role in game development.

While the technology shows promise in areas like content moderation, questions linger about biases hidden in “black box” algorithms. Game makers need to strike a careful balance between safety and openness.

For now, firms like Activision and Valve find themselves navigating relatively uncharted territory. Their choices could set influential precedents across the industry.

Ryan Anderson

Ryan specialises in AI media subjects, covering innovations in AI art, music, and more. His academic background, with an MSc in Product Design Engineering and a Master of Design from Glasgow School of Art, provides a rich foundation for his writing.
