Xbox’s latest Transparency Report details AI usage in player safety

Xbox has released its fourth Transparency Report, detailing its efforts to protect players and the countermeasures it uses to stave off toxicity and other harmful behavior. According to Xbox, it has invested in "the responsible application of AI" to improve detection. The report also reveals how the company's recently launched safety tools, including the new voice reporting feature, are working for the community.

According to the report, one of Xbox's early AI investments is Auto Labeling, which identifies words and phrases that meet certain criteria and could potentially be harmful; the company says this helps moderators sort through false reports more quickly. The other is Image Pattern Matching, which uses databases and image-matching techniques to identify potentially harmful imagery so moderators can remove it more swiftly.
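To make the auto-labeling idea concrete, here is a minimal, hypothetical sketch of how a keyword-based labeling pass might work in general. The watchlist, label names, and `auto_label` function are invented for illustration and are not Xbox's actual system.

```python
# Illustrative sketch only: a generic keyword-based auto-labeling pass.
# The phrase list and label names are placeholders, not Xbox's data.
from dataclasses import dataclass, field

# Hypothetical watchlist mapping flagged phrases to moderation labels.
WATCHLIST = {
    "example slur placeholder": "hate_speech",
    "example scam phrase": "scam",
    "example threat phrase": "harassment",
}

@dataclass
class LabeledReport:
    text: str
    labels: list = field(default_factory=list)  # empty means no automatic flags

def auto_label(report_text: str) -> LabeledReport:
    """Attach a label for each watchlist phrase found in the reported text."""
    lowered = report_text.lower()
    labels = [label for phrase, label in WATCHLIST.items() if phrase in lowered]
    return LabeledReport(text=report_text, labels=labels)

if __name__ == "__main__":
    report = auto_label("totally legit example scam phrase, click here")
    # Labeled reports can be routed to moderators first; unlabeled reports
    # (more likely to be false or low-priority) can be deprioritized.
    print(report.labels)  # ['scam']
```

In practice, such a filter would only prioritize items for human review rather than take action on its own, which is consistent with the report's framing of AI as an aid to moderators.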


Rachel Kaser

Rachel Kaser is a gaming and technology writer from Dallas, Texas. She's been in the games industry since 2013, writing for various publications, and currently covers news for GamesBeat. Her favorite game is Bayonetta.