Mitu Khandaker of NYU

How Spirit AI uses artificial intelligence to level up game communities

Detroit: Become Human starts with Connor the police negotiator.

GamesBeat: Something like Detroit: Become Human, did you have any reaction to that, or the general trend in stories about AI and the fear reaction to it?

Khandaker: I haven’t yet played Detroit. I need to get to that this weekend. But I know a lot about it thematically and so on. There’s obviously some sort of problematic stuff in there about equating AI to existing racial struggles in the world. There’s a lot there to unpack in general.

I worry less about what happens when machines gain sentience and so on, because I think we’re pretty far out from that. There are more pressing issues to do with AI that we need to think about now, like algorithmic bias. Are the machine learning data sets we use to recognize language trained to recognize all types of speech styles, from people of all kinds of backgrounds? Things like that are way more interesting.

GamesBeat: Like that Microsoft chat bot.

Khandaker: Exactly. There are way more pressing issues to do with the ethics of AI than what happens when they gain sentience, whenever that happens.

Spirit AI’s character engine

GamesBeat: The harassment detection element, how valuable do you think that’s going to become?

Khandaker: We’re working with a couple of partners that we can’t name just yet, but hopefully soon we can name one of them. Predominantly in the game space, because I think games are an environment where people do come to them with this playful, “oh, I can say what I want” attitude. There’s really nuanced language at play there. That’s why games have been a good place for us to start out developing this tech. It recognizes, is something a positive interaction or a negative interaction? But we’re also applying it to other industries as well.

GamesBeat: It seems like such a massive scale on the internet. You need something automated to respond to it all.

Khandaker: The first step is obviously recognizing when language is negative, or in a certain category of harassment. I think it’s up to the platform holder who’s implemented our tool to decide how to respond. They can do things like immediately muting the user if it’s sexual harassment, for example, or whatever it is that makes sense for your system. We’re not prescriptive about what moderators do as a result. We just provide lots of options.
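To make that “we detect, the platform decides” split concrete, here is a minimal hypothetical sketch in Python. The category names, the classify() stub, and the mute/flag actions are all invented for illustration; Spirit AI has not published Ally’s actual API or taxonomy.

```python
from dataclasses import dataclass

# Hypothetical sketch: categories, classify(), and the actions below are invented
# for illustration; Spirit AI's real taxonomy and API aren't public.
@dataclass
class Detection:
    category: str      # e.g. "sexual_harassment", "hate_speech", "positive"
    confidence: float  # 0.0 to 1.0

def classify(message: str) -> Detection:
    """Stand-in for the detection model (the real system analyzes nuanced language)."""
    if "hey, sexy" in message.lower():
        return Detection("sexual_harassment", 0.9)
    return Detection("neutral", 0.5)

# The platform holder, not the tool, configures the response for each category.
ACTIONS = {
    "sexual_harassment": lambda user: print(f"muting {user}"),
    "hate_speech":       lambda user: print(f"flagging {user} for a moderator"),
}

def handle(user: str, message: str, threshold: float = 0.8) -> None:
    detection = classify(message)
    action = ACTIONS.get(detection.category)
    if action and detection.confidence >= threshold:
        action(user)

handle("player123", "Hey, sexy, what's going on?")  # prints: muting player123
```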

GamesBeat: The Gamergate situation, where certain people get hit with so much harassment, it’s not particular insults so much as there are thousands of them at once. It gets overwhelming.

Khandaker: There’s a lot of talk, obviously, about how at the time, and with any of these ongoing harassment campaigns, Twitter didn’t provide enough tools to mute accounts, and people had to block things one by one. If there was some level of automation, where a system is able to automatically recognize that this is unsolicited, non-consensual harassment, and stop the user from being able to see it, or flag the account up to some kind of moderator, then that’s something we need.
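As a rough sketch of that kind of automation, a chat pipeline could hide flagged messages from the targeted player and route them to a moderator queue instead. The function names and the keyword check below are placeholders for illustration, not Spirit AI’s implementation.

```python
from collections import deque

# Toy sketch of the automation described above; the keyword check is a placeholder,
# not how Spirit AI actually recognizes harassment.
moderator_queue: deque = deque()  # (sender, message) pairs awaiting human review

def is_harassment(message: str) -> bool:
    """Placeholder for a real language model."""
    return "loser" in message.lower()

def deliver(sender: str, recipient_inbox: list, message: str) -> None:
    # Hide flagged messages from the targeted player and escalate them instead.
    if is_harassment(message):
        moderator_queue.append((sender, message))
    else:
        recipient_inbox.append(message)

inbox: list = []
deliver("troll42", inbox, "You're such a loser, quit the game")
deliver("friend99", inbox, "Good game! Same time tomorrow?")
print(inbox)                 # ['Good game! Same time tomorrow?']
print(len(moderator_queue))  # 1
```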

Like you say, when it happens at that volume, it doesn’t get dealt with otherwise. If you report someone, it takes ages before a human moderator gets around to doing anything. That’s frustrating for both the user and the moderator. It’s about how you can help the user, but also help the moderator. In our interface, there’s a dashboard we provide for moderators where they get to see different categories.

This is an example of the Ally dashboard. There’s no data in this right now, but what we’re showing at the top here are user happiness levels. This is a nice little visualization for the moderator to see how happy and healthy the community happens to be. Are there lots of instances of harassment going on, negative things going on? Some themes we could look for: we can look for bots. We can look for flirt greetings, when people are saying something like, “Hey, sexy, what’s going on?” and you don’t necessarily respond to that. Negative game sentiment, if people are badmouthing your game in some way. Hate speech, scams. But we also track positive things as well. Are people saying good things about a particular feature? That helps you look for that.
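The dashboard themes she lists map naturally onto simple aggregate counts. The sketch below shows one plausible way to roll per-message labels up into a single community-health number; the theme names and the formula are assumptions for illustration, not how Ally actually computes its happiness view.

```python
from collections import Counter

# Theme labels mirroring the dashboard categories mentioned above; the labels and
# the happiness formula are assumptions, not Ally's actual metric.
NEGATIVE_THEMES = {"flirt_greeting", "negative_game_sentiment", "hate_speech", "scam", "bot"}
POSITIVE_THEMES = {"positive_feature_feedback"}

def happiness_score(labels: list) -> float:
    """Toy community-health metric: share of positive messages among classified ones."""
    counts = Counter(labels)
    positive = sum(counts[t] for t in POSITIVE_THEMES)
    negative = sum(counts[t] for t in NEGATIVE_THEMES)
    total = positive + negative
    return 1.0 if total == 0 else positive / total

labels = ["positive_feature_feedback", "hate_speech", "flirt_greeting", "neutral"]
print(f"community happiness: {happiness_score(labels):.2f}")  # 0.33
```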

Here, this is reporting. A lot of people fake-report each other, as a tool of harassment. Reporting somebody just to get their account banned. We look for threats like that – “Oh, I’ll report you” – and that kind of thing. People are clever, right? They try to be terrible and also sometimes great to each other in really interesting, intelligent ways. It’s about a system that can keep up with that.

GamesBeat: If AI eliminates a bunch of jobs, there’s this question of what we do afterward, what jobs get created. What interests me is the growing number of people who get paid to play games – streamers, influencers, esports athletes, cosplayers.

Khandaker: It’s that very Iain M. Banks type of thing, getting toward this playful society, enabling people to just be their playful, creative selves, and all the work is outsourced to machines.

GamesBeat: It’s a happier scenario as far as what AI helps us achieve.

Khandaker: The thing is, in many ways that’s good, but this is where it goes beyond tech. It gets into policy. It’s no good AI automating people’s jobs if the social and political structures don’t exist to support people doing other things.

GamesBeat: That’s pretty far out. It seems like you’re targeting more of the near term.

Khandaker: Personally I have a broad interest in how AI helps us with both work and play, in the near term and the longer term. All of these questions are super interesting. It’s important to think about that stuff now — what are the repercussions of the decisions we make in AI now to these longer-term things?

Mitu Khandaker of NYU at Casual Connect Europe in London.

GamesBeat: Is there any kind of AI character you’ve seen that you admire in some way?

Khandaker: When you talk about “AI characters,” that can be categorized in all kinds of ways. Obviously our focus is on conversational AI characters, really helping developers create those.

GamesBeat: You have these grind conversations in games, where it’s not going anywhere. You’re just chatting in circles. But is there a story that the conversation can tell?

Khandaker: You’ve just got to the crux of the issue. Your problem isn’t whether the character is real or not. It’s whether you’re having an interesting conversation, whether there’s a point to the conversation. If we’re able to design conversation systems that keep conversations compelling, that answers that. It doesn’t matter whether a player thinks a character is real or not. In fact, it’s better if they don’t. It’s still fiction, just like we get people to care about fictional characters in books and films. But the reason books and films have compelling characters is because they’re written in a compelling way. We need to bring that same ability to naturalistic automated conversation in games.

You don’t need to believe that they’re real. You just need to believe that talking to them has a purpose. That can apply to talking to real people. [laughs] “Is there really a point to this conversation?”

Dean Takahashi

Dean Takahashi is editorial director for GamesBeat at VentureBeat. He has been a tech journalist since 1988, and he has covered games as a beat since 1996. He was lead writer for GamesBeat at VentureBeat from 2008 to April 2025. Prior to that, he wrote for the San Jose Mercury News, the Red Herring, the Wall Street Journal, the Los Angeles Times, and the Dallas Times-Herald. He is the author of two books, "Opening the Xbox" and "The Xbox 360 Uncloaked." He organizes the annual GamesBeat Next, GamesBeat Summit and GamesBeat Insider Series: Hollywood and Games conferences and is a frequent speaker at gaming and tech events. He lives in the San Francisco Bay Area.