Artificial Agency, an AI startup pioneering generative behavior for gaming, announced the alpha launch of its flagship product: an AI-powered behavior engine meant to revolutionize how games are developed and played.
The technology is now available to select studios as part of Artificial Agency’s pilot program.
The gaming industry remains a bright spot and a huge growth opportunity amid a declining media and entertainment market, but the gold rush to AI for games hasn’t necessarily panned out. Artificial Agency skipped a lot of that early hype and went to work on something far more complex, the founders told me in an interview.
CEO Brian Tanner said in our interview, “We don’t have a lot of direct competitors. This ambitious vision that we’re pushing, where we have this alpha version that we’ve just released, seems to be a totally unique offering.”
First announced last summer, Artificial Agency’s behavior engine will empower an entirely new generation of games that are highly adaptive and intelligent. The engine enables developers to embed runtime decision-making seamlessly into any aspect of a game, creating a universe of living agents that are responsive to the world, players, and unfolding story.
“The promise of generative AI for video games has been well-documented, but the technology has often been pigeonholed as a solution for 3D asset or speech generation. As longtime AI researchers and avid gamers, we believe the true potential lies in generative behavior to create brand new types of interactive, dynamic experiences,” said Tanner. “We’ve spent the last two years developing our engine in collaboration with some of the best triple-A designers and developers in the industry, and are honored to introduce it to the world.”

In its first iteration, the behavior engine will support multiple agent archetypes, including:
- Characters: Fully autonomous, embodied beings that express emotion and react dynamically to complex scenarios. They don’t just react to what a player does at any given moment — they remember it, learn from it, and use that rich history to personalize every future encounter.
- Game Directors: Non-embodied beings that observe the player’s progress and manage the overall story. Game Directors can respond to each player’s gaming history and construct real-time, in-game situations that can guide, aid, or handicap the player as they progress throughout the game.
Over time, the company will unveil new archetypes and create additional example content and tutorials that demonstrate various types of agents.
“This new generation of agents will be transformative, bringing new levels of fun and playfulness not only to the gameplay itself but also to the process of designing and building new games,” said Alex Kearney, cofounder of Artificial Agency.
Kearney added, “For players, our engine will lead to games that adapt to your style of play and curate the experience to uniquely entertain you — whether it’s creating new obstacles to up the ante, introducing new characters to change the narrative, or helping out when you get stuck. For developers, they’ll now have the tools to pursue their wildest creative ambitions and create scenarios that have never before been possible.”

The engine is designed to provide generative behavior out of the box while remaining completely customizable to align with the developer and studio vision. The flexible, layered architecture allows developers to get started and iterate quickly, with the ability to rewrite or replace any code to meet their needs and integrate seamlessly within existing pipelines and methodologies. Building with the engine is meant to feel extremely efficient, allowing developers to explore complex, long-term plans and behaviors using only simple directives.
To explore how the engine could support real-world game development and player experiences, Artificial Agency gave studios early access to the technology and encouraged them to experiment. The company used their insights to co-develop “Cubico,” a tech demo that illustrates the capabilities of the engine and its archetypes in a gaming environment.
Origins

The company was founded in 2023 by Brian Tanner and Alex Kearney. Tanner has 25 years of experience in reinforcement learning and AI, including research with Rich Sutton and leadership at DeepMind. He now leads Artificial Agency to bring agent-driven intelligence to the world of video games.
Kearney is an AI researcher and entrepreneur with experience at DeepMind, Twitter, and the University of Alberta, specializing in generative behavior and reinforcement learning. She was previously a research scientist at DeepMind and earned her PhD under Rich Sutton.
Tanner said he has wanted to make AI games since he was 10 years old.
“I remember like doing HyperCard animations in the 90s and wishing I could find a way to actually make the game intelligent. And so I spent so many years doing AI research and reinforcement learning, always wishing there was a way to bring it over to games,” Tanner said. “But it was so hard previously because you needed so much data and training. When we were at DeepMind, we saw all of these amazing things.”
Those amazing things included beating world-class human players at StarCraft and Atari games, but the models needed enormous amounts of computing time to train.
“Then, when generative AI finally hit the mainstream, we saw what ChatGPT could do, and we saw we actually had a system now that had common sense knowledge built into it, which was always the big gap,” Tanner said. “And ever since we started playing with ChatGPT, we’ve basically been harnessing this core idea and bringing it into video games.”
I asked him about the pace of change.
“When we started, people were using generative AI in games to make textures and images and maybe some dialogue. And we charted a course that was much further down the road. Our thought was if you could actually harness this technology and build it in from the base level of a game, you could make not just really smart characters and companions, but you could really bring AI into any game system,” Tanner said.
Last year, Artificial Agency raised $16 million in funding. The money came from Radical Ventures, Toyota Ventures, Flying Fish, Kaya, BDC Deep Tech, TIRTA Ventures, and others.
The company has 22 people on its team in Edmonton, Canada, many of them from the University of Alberta. It is not disclosing its total funding yet.
How it works

Artificial Agency’s AI-powered behavior engine makes it easy for game developers to integrate generative AI into all kinds of game mechanics, driving engaging behavior in both moment-to-moment interactions and the overarching game narrative.
Developers can add minor improvisation to scripted interactions, enable full improvisation for emergent gameplay, create full artificial players, and even build high-level gamekeeper systems that control pacing, spawn encounters, and steer players toward overlooked game elements. The result is a more immersive and entertaining experience that keeps players engaged and reduces churn.
Asked to clarify what it does, Artificial Agency said the behavior engine goes beyond non-player character (NPC) conversations; developers can embed it into any decision-making process in the game. For example, in a tech demonstration in Minecraft, the player can tell the agent they are hungry, and the agent recognizes they need food and drops the player the necessary resources. If the agent drops the player bread and they say, “I’m gluten-free,” the agent will take the material back and drop them a gluten-free item like chicken.
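As a rough illustration of that pattern (Artificial Agency has not published its API, so every name below is hypothetical), embedding decision-making this way amounts to exposing a handful of game actions to an agent and letting a model pick among them based on what the player says and what the agent remembers:

```python
# Hypothetical sketch: an agent chooses among developer-exposed behaviors
# based on what the player says and what it remembers about them.
from dataclasses import dataclass, field

@dataclass
class FoodAgent:
    memory: list = field(default_factory=list)  # facts learned about the player

    def decide(self, player_says: str) -> str:
        """Stand-in for a foundation-model call that picks one behavior."""
        self.memory.append(player_says)
        gluten_free = any("gluten-free" in fact for fact in self.memory)
        if "hungry" in player_says or gluten_free:
            # The remembered constraint changes which item gets dropped.
            return "drop_item:chicken" if gluten_free else "drop_item:bread"
        return "idle"

agent = FoodAgent()
print(agent.decide("I'm hungry"))       # -> drop_item:bread
print(agent.decide("I'm gluten-free"))  # -> drop_item:chicken
```

In the real engine, a game-tuned model would make that choice; the rule-based `decide` above is only a placeholder for it.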
Based in Edmonton, Canada, a global hub of AI and gaming innovation, Artificial Agency brings together world-class AI researchers from Google DeepMind with engineers and game developers from elite triple-A studios, all committed to building the future of gaming experiences.
One of the unique things the tech can do is have game directors who can actually control the action in the game.
“Anywhere where a game designer wishes, they could bring human-level intelligence into decision making, like, ‘Oh man, if only I were there in the game, I could do this or that to make this game more fun or make the experience better for the player.’ We wanted to bring a product that would allow them to really bring that into their game,” said Tanner. “It’s not really so much about realism as it is about just finding ways to allow the creative people in games to make experiences and game mechanics that they couldn’t do before.”
Tanner said the team is bringing agents into game systems. Agents have a few defining features: a role, a personality, and goals and objectives.
“The behaviors are any action that the game designer wants to allow them to take,” said Tanner. “So on two ends of the spectrum, you can imagine you’ve got a companion that’s playing along with you. Its behaviors might be talking to you, following along, engaging in combat, crafting and looking around.”
“If we make a game director, it’s totally the opposite end of the spectrum,” Tanner said. “The behaviors that the agent might have could be spawning a dynamic encounter, or changing the difficulty of the game, or reminding you of some information that you’ve forgotten that happened earlier in your play session, sort of anything they can program. They can expose it to our system as a behavior, and then they can make an agent that’s allowed to choose that behavior.”
The game studios that Artificial Agency has been talking to have expressed interest in a wide variety of uses for the technology. Some are creating companions or virtual friends you can play with when your friends aren’t around. Some want to make non-player characters (NPCs) that might make a village in a role-playing game more interesting. Some characters might be more like coaches who can give you new skills or advice.
They could have ways of remembering previous interactions, even your reputation with them.
For instance, in Red Dead Redemption 2, your reputation can get worse if you go around shooting people randomly as a gunslinger. The characters around town might recognize you and remember what you did, said Kearney.
Beyond that, Kearney said, “One of the things that we can do is allow people to express much more complex relationships and much more complex consequences for actions in an environment.”
The game director

But one of the most interesting new kinds of characters that AI can create in a game is the “game director.” This is kind of like the gamemaker who works for Donald Sutherland’s President Snow in The Hunger Games. The game director’s job is to make the game more exciting or difficult for the players trying to survive in it.
“They’re observing you play the game and they are adapting the action dynamically, either making it more challenging or interesting, or scary. They may direct you towards content that you might be missing, or even just allow the game to have open-ended complexity in response to what you’re choosing as a player.”
In a Dungeons & Dragons game, this is more like the dungeon master role.
“You could give them their own personality of what they’re going to try to do to you,” Tanner said.
To get this kind of game director right, Artificial Agency has built a system with several parts. There’s a foundation model, like a language model or a visual language model, that has that common sense knowledge and has been trained specifically for game-like scenarios. Between the game and the foundation model, there is a whole tier of AI components that help the agents remember and forget, summarize, and query into data stores, so that the agent players interact with is appropriate and true to life for the game they’re playing.
And then the place where game developers interact the most is the game plugin. In Unreal Engine, for example, the firm is developing very developer-friendly tooling that allows them to integrate the system into their existing workflows however they want.
“So once the initial integration has been done by game engineers, almost any designer or technical designer can then hook up to the system and create characters and systems that can leverage that intelligence very, very easily,” Tanner said.
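Based purely on that description, the layering might look something like the sketch below; the component names and interfaces are my guesses for illustration, not the actual product:

```python
# Hypothetical layering: game plugin -> memory/summarization tier -> foundation model.

class MemoryTier:
    """Middle layer: remembers, forgets, summarizes, and queries event history."""
    def __init__(self):
        self.events = []

    def record(self, event: str):
        self.events.append(event)
        self.events = self.events[-50:]  # crude "forgetting": keep recent events

    def summarize(self) -> str:
        return " | ".join(self.events[-5:])  # stand-in for real summarization

def foundation_model(context: str, allowed: list[str]) -> str:
    """Stand-in for a game-tuned model that picks one allowed behavior."""
    return allowed[0]  # a real model would reason over the context

class GamePlugin:
    """The layer designers touch: register behaviors, forward events, apply choices."""
    def __init__(self):
        self.memory = MemoryTier()
        self.behaviors = {}

    def expose(self, name: str, fn):
        self.behaviors[name] = fn

    def on_event(self, event: str):
        self.memory.record(event)
        choice = foundation_model(self.memory.summarize(), list(self.behaviors))
        self.behaviors[choice]()

plugin = GamePlugin()
plugin.expose("greet", lambda: print("NPC waves at the player."))
plugin.on_event("player enters the tavern")
```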
Kearney said the team has been working with game narrative designers. The AI gives them more affordances; typically, their work has to be interpreted by designers and programmers before it is rendered out in the game, she said.
“When they edit text, when they describe things differently or recharacterize particular characters in the environment, they get to see in their editor those changes rendered out so they’re directly making changes to how the experience rolls out based on what they’re writing,” Kearney said.
She added, “So instead of them having to write a character in a Word doc and pass it off to the technical people to read that into the game, they can just do it themselves, press play, interact with the character and then iterate on it.”
Kearney said the team thinks of it as giving agency to characters and environments so they can make decisions and handle complexity. It also gives agency to designers and game programmers to render out situations they wouldn’t have been able to make before. And it lets game characters react to their environments.
How smart?

It reminded me of how Will Wright talked about The Sims. He said what he really created were smart objects and dumb people: the objects would be smart enough to advertise themselves to the people and say, “Hey, there’s food here.” If that matched up with a character’s need, like being hungry, the character would go for the food. So it was more like smart objects in a smart environment, with dumb people.
“I think that’s brilliant, actually. And it’s a notable inversion of how we’re doing things,” Tanner said. “The character has these different drives. They’re hungry, they go to the bathroom, they’re sleepy. They can just find things in the environment that satisfy those needs. The way we create our characters is sort of the opposite. The intelligence is actually in the characters, and we have to do very little work in the environment to mark it up or advertise what it’s supposed to do. And then the characters themselves can find innovative ways to combine things in the environment.”
This reminded Kearney of a story about working with a game developer who had an AI degree.
“They worked in a game studio, and they were first learning how to use our tools, and they asked, ‘How do you let the agents know that fire is bad, like, if something’s on fire?’ The answer is you don’t need to. By simply saying the printer is on fire, that’s enough for the agent to know: ‘Hey, I should do something about this. Let me go find a fire extinguisher,’” Kearney said. “These agents just naturally respond to that complexity.”
Behaviors can be embedded deeper into a character’s personality as well, with traits like ego, meanness, suspicion, or a quick temper. Those characteristics come out when the character interacts inside the game, and the character will trigger animations that express those feelings, like making a gesture when frustrated.
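A toy version of that idea, again my own illustration rather than the company’s implementation, might tie trait values to both the chosen line and a matching expressive animation:

```python
# Hypothetical sketch: personality traits shape a character's reaction
# and trigger a matching expressive animation.

def react(traits: dict, event: str) -> tuple[str, str]:
    """Return (spoken line, animation) based on the character's traits."""
    if event == "insulted":
        if traits.get("easily_angered", 0) > 0.7:
            return "Watch your mouth!", "clench_fists"
        if traits.get("suspicious", 0) > 0.7:
            return "Why would you say that...?", "narrow_eyes"
    return "Hm.", "shrug"

grump = {"easily_angered": 0.9, "suspicious": 0.4}
line, anim = react(grump, "insulted")
print(f"{line} -> plays animation '{anim}'")
```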
What happens next

There’s a version that is in the hands of a small group of developers now. Later this year, the company will open up the beta to a wider pool of developers. There will be a development fee for those building things on top of the tech.
“We have to make sure that what we’re offering is really a transformational experience that people are willing to pay for,” Tanner said.
Kearney said, “What we’re offering to game developers is decision making within the game and handling open-ended complexity.”
The goal for now is to inspire game developers to think about things they’ve never thought about before when it comes to using AI in games.
“As creative as everyone is in games, what we’ve learned from working with the people that have been in the industry for a long time is they have been locked into ways of thinking about what a character could be and what a game should be,” Tanner said.
One good concept is to make the game director like the person back at headquarters, or hidden in the van, in the Mission: Impossible movies. Instead of being the agent in the field, the game director runs the action from behind the scenes.
“So one of the most interesting conversations I had with a designer lately, when we told him about the behavior engine, was that he wanted to make what he called an inversion-of-control game. What you could never do in the past is, as the player, be the guy in the van, and then have the AI be the actual ones that are going in,” said Tanner.
The Cubico demo

The Artificial Agency folks showed me a demo of how the tech can be used. They showed me an agent character moving around in a space with three cubes; by touching a cube, the agent can change its color. Kearney told the agent to tap the first cube and change its color. Then she said, “Do the last one.” The agent went to the last cube and tapped it to change its color, even though it got no explicit instructions on what to do. The agent reasoned out what to do.
Then the founders showed me a more complex demo in partnership with the Multiplayer Group. They had a fairly simple isometric game with a fixed camera looking down on some characters inside a building with some cubicles. In this case, the agent was not a physical agent inside the environment. Rather, it was a game director who could modify the environment for the characters inside the building. Using generative AI, the game director can instantly create objects that are needed as the scenario changes.
If you tell the game director that you are hungry, the game director can place food in the environment, like a bunch of donuts. There were four characters in the room and they each had their own personalities and backstories and relationships. There was also a game director looking down on them with a hidden camera.
Richard is the boss. Clyde is a tech associate who always starts his day by going to Richard to express a serious concern. Today, he is concerned about saboteurs. Richard usually blows him off. Bea is an office custodian with a mysterious past. And Dev is a nervous sales associate. They live out their day, but through the game director, we have the ability to interact with objects and create unscripted changes in the environment.
Kearney showed me what would happen if the water cooler broke down. The characters weren’t programmed specifically to react to this event, but we watched how they dealt with the situation. The water cooler breaks. Bea grabs a mop to deal with it. Dev gets very upset. Then Kearney, as the human in the loop, asks the game director to do something. The game director uses its affordances, like the ability to spawn objects, to fulfill Kearney’s request. She can say everyone has a craving for donuts.
All the characters congregate around the donut box. To spice things up, Kearney says the donuts are poisoned. The characters start to realize they are poisoned and start gagging. Again, this is not preprogrammed. Uninstructed, the game director sends in a new character, a paramedic, who starts treating the victims. Kearney tells the game director to cure anyone who is near the paramedic. The scene plays out, but Clyde dies. A couple of the characters get upset, and the boss says they will honor his memory by continuing his work.
Kearney restarted the demo and then put me, a reporter, into the scene to talk to the game characters. Clyde happens to be quite concerned about my presence. Then Kearney turns me into an evil robot from the future. My job is to cause chaos. I start electrifying objects nearby. The computers in the office show the blue screen of death, and I start messing with the server in the room. Clyde goes to fix the server. None of this is preprogrammed, and the game director generates much of the necessary art.
Of course, a bad actor could put the game characters into inappropriate circumstances and make a disturbing scene play out. Artificial Agency lets developers decide what guardrails they want to put in to prevent players from doing such things. Developers can create a child-friendly model that steers the player back to more appropriate scenarios, Kearney said.
The whole point is to create experiences with open-ended complexity, Kearney said.