Three-quarters of teens and pre-teens, and 76% of adults, experience harassment in online games, while antisemitism is on a concerning rise. Today’s gaming audiences are unsurprisingly demanding more: better safety tools, clear communication, and smarter responses to the issues that plague so many players.
During a fireside chat at GamesBeat Summit, Meta’s Blake Harper and Modulate’s Mark Nolan explored real-world strategies for building resilient, socially comfortable game communities — from setting clear standards to using AI tools that support moderators and boost player retention.
“When I think about social comfort, I think about a feeling that you have when you’re with people that you trust, that you’re interested in, that you feel like are not a threat — that’s the kind of feeling that you want to go for or enable in a multiplayer experience,” Harper said. “That goes beyond just the absence of bad experiences. Are you in a dynamic where you can show up in the way that you want to show up, and others around you are holding you up as well? This is the community concept that many of us want to create when we build games, or build online communities.”
How trust and safety goals differentiate your game
Adding sophisticated moderation that works can actually be fun, Harper said, both as a design challenge and as a lever to differentiate your game. Players know what it feels like when community standards and moderation are slapped onto an experience in a check-box kind of way. When moderation doesn’t feel continuous with the rest of the experience, part of the brand or the voice of the game, it can be off-putting and can undermine the whole effort.
Done right, in ways that go beyond reactive reporting, it can change the game. Harper pointed to Helldivers 2, which on the surface is a co-op shooter — the genre with the worst reputation for both player interaction and moderation. In practice, though, the game’s core mechanics emphasize teamwork and support. On top of that, the developers’ social vision for the game is manifest in the ways players can interact, like the hug emote. Players have taken to it enthusiastically, developing what they call “the hugging strategy”: using the power of friendship to defeat the enemy, because the hug emote lets players share their personal shield with a teammate.
Adding prosocial loops is also a critical part of combating negativity: building incentives into games that encourage players to interact positively, so that trust and safety becomes part of the game’s core identity. For example, one VR game offers “trusted player” ranks to users, and is transparent about which positive behaviors each rank requires. Players who earn the ranks take them seriously, and highly ranked players are trusted by other players, creating a feeling of collaboration and camaraderie.
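To make that concrete, here is a minimal sketch of what a transparent prosocial reputation loop might look like. The behavior names, point values, and rank thresholds below are illustrative assumptions, not details of the VR game described above.

```python
# Illustrative sketch of a transparent "trusted player" reputation loop.
# Behavior names, point values, and rank thresholds are hypothetical.

from dataclasses import dataclass, field

# Points awarded for prosocial actions, published to players for transparency.
PROSOCIAL_POINTS = {
    "commended_by_teammate": 5,
    "completed_match_without_report": 2,
    "helped_new_player": 10,
}

# Rank thresholds, also published so players know exactly what each rank takes.
RANK_THRESHOLDS = [
    ("Trusted Veteran", 500),
    ("Trusted Player", 150),
    ("Member", 0),
]

@dataclass
class PlayerReputation:
    player_id: str
    points: int = 0
    history: list = field(default_factory=list)

    def record(self, behavior: str) -> None:
        """Award points only for behaviors on the published list."""
        if behavior in PROSOCIAL_POINTS:
            self.points += PROSOCIAL_POINTS[behavior]
            self.history.append(behavior)

    @property
    def rank(self) -> str:
        """Return the highest published rank this player has earned."""
        for name, threshold in RANK_THRESHOLDS:
            if self.points >= threshold:
                return name
        return "Member"

# Example: a player earns "Trusted Player" through visible positive actions,
# not merely the absence of reports.
rep = PlayerReputation("player_123")
for _ in range(20):
    rep.record("commended_by_teammate")           # 100 points
rep.record("helped_new_player")                   # 10 points
for _ in range(20):
    rep.record("completed_match_without_report")  # 40 points
print(rep.points, rep.rank)  # 150 Trusted Player
```

The design choice that matters here is publishing the point table and thresholds, so earning a rank feels like a legible achievement rather than the output of a black box.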
“When this is done well, when you’re building these tools downstream of a vision of what your social strategy is, you can think through your trust and safety feature stack — how could this actually enable players to have an experience that feels unique to this game?” he explained. “A lot of that will show up in language, in visual design, but back to just the motivations question, I would say to a studio that is feeling inclined to do a check-box thing: no, this is a surface you’re putting in front of users. Don’t waste this opportunity.”
Designing a game with trust and safety at the table helps the bottom line
At some point in defining mechanics, loops, and user interactions, trust and safety needs to be part of the process — working alongside the development team to help shape gameplay, social experience, and expectations early on.
“That requires some level of top-down buy-in around why getting moderation right matters,” Harper said. “Not just because it’s the right thing to do to make sure that you’re building for social comfort, and that people aren’t experiencing unnecessary amounts of harm or harassment in your games, but also because it pays.”
Studios that use trust and safety tools see clear benefits in retention and engagement — keeping long-term players invested and increasing time spent in-game, which also creates more opportunities to monetize.
“Call of Duty saw about a 28% increase in retention after three weeks of new players, just by taking that action to enforce their already existing code of conduct,” said Nolan, referencing Modulate’s recent case study on Call of Duty. “That reinforces the idea that you set the expectation for the type of behavior you want, and you have a way to enforce that and make that happen in real time. Other players are much more likely to come back and engage.”
The way in for trust and safety teams when engaging with design is to frame their input as an opportunity: to elevate the social experience and increase player delight. That means focusing on the feelings and behaviors you want to encourage, and aligning them not just with trust and safety goals, but with broader social objectives as well.
For instance, one studio, while designing its game avatars, partnered with trust and safety to nail down a look and feel. Together, they opted for less human-like avatars — because psychologically, players are more likely to anthropomorphize and respond positively to characters that aren’t hyper-realistically human. That choice directly shaped how players interacted with one another in-game.
“If you have to have some of these features anyway, you might as well make them work for you,” Harper said. “Fundamentally, if you agree at the top level that your social comfort experience, your trust and safety experience is a foundational part of your social strategy, then what you’re doing there, building in some social features, you’re going to be part of that conversation, because you can contribute that voice.”
Riot Games established a Player Dynamics Engineer role with the express purpose of embedding that trust and safety lens in every feature design conversation, across engineering and product teams. It’s a top-down approach — identifying potential win-win opportunities, and then codifying that strategy in roles and teams.
Improving trust and safety with automation tools
As a live service game thrives and grows, its moderation policies must adapt to the dynamics of the audience, so that the approach to player safety stays resilient and continues to meet players’ expectations.
That’s why the automation piece is huge, especially if your game becomes wildly successful. You can’t expect your players to catch and report every instance, and neither can a dedicated human team. AI moderation uses classifiers to detect, with high confidence, content that violates policies. Today’s AI-powered moderation tools are sophisticated enough that they can even monitor voice exchanges.
These tools, like Modulate’s ToxMod, are a huge boon to online communities, but they need to be carefully configured. And understanding the way these tools work might also influence the policy decisions you make.
“They have to be enforceable. They have to be things you can reliably, repeatably, transparently, defensibly uphold,” Harper said. “They have to be things that you could detect with high confidence. You absolutely have to think about, what are the limitations of classifiers? What’s enforceable with our policies? They must support the policies. They shouldn’t be wagging the policy dog.”
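As a rough illustration of that principle, the sketch below shows how a studio might gate enforcement on classifier confidence, escalating borderline cases to human moderators instead of acting automatically. The policy names, thresholds, and routing actions are assumptions made for the example; they do not reflect ToxMod’s actual configuration or API.

```python
# Illustrative sketch: gating enforcement on classifier confidence.
# Policy names, thresholds, and actions are hypothetical, not ToxMod's API.

from dataclasses import dataclass

# Per-policy confidence thresholds: only automate what you can reliably,
# repeatably, and defensibly uphold.
POLICY_THRESHOLDS = {
    "hate_speech": 0.95,   # severe harm: act automatically only at very high confidence
    "harassment": 0.90,
    "spam": 0.80,
}
HUMAN_REVIEW_MARGIN = 0.15  # near-threshold detections go to a human moderator

@dataclass
class Detection:
    policy: str        # which code-of-conduct rule the classifier flagged
    confidence: float  # classifier score in [0, 1]
    player_id: str

def route_detection(d: Detection) -> str:
    """Decide what happens to a classifier hit: enforce, review, or log only."""
    threshold = POLICY_THRESHOLDS.get(d.policy)
    if threshold is None:
        # The classifier flagged something the written policy doesn't cover;
        # the tool shouldn't wag the policy dog, so take no automatic action.
        return "log_only"
    if d.confidence >= threshold:
        return "enforce"       # e.g. warn, mute, or suspend per the policy
    if d.confidence >= threshold - HUMAN_REVIEW_MARGIN:
        return "human_review"  # borderline: let a moderator decide
    return "log_only"

# Example: a borderline harassment detection is escalated to a human
# rather than auto-enforced, keeping decisions defensible and repeatable.
print(route_detection(Detection("harassment", 0.82, "player_456")))  # human_review
```

Keeping the threshold table keyed to specific, written policies is one way to make sure the tooling supports the policies rather than the other way around.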