After eight years of work, Motor Ai has raised $20 million for autonomous driving software that prioritizes compliance.
In a world racing toward autonomous vehicles, Europe has taken a different path, demanding not just performance but explainability, safety, and full legal compliance.
Berlin, Germany-based Motor Ai is meeting that challenge head-on. The company announced its seed funding round to bring its certified, neuroscience-driven technology into full deployment, starting with German public roads.
Segenia Capital and eCAPITAL led the round, with participation from mobility-focused angels, reflecting the national significance of the technology. The new capital will support hiring, commercial rollouts, and expansion.
“We are building this company totally on the idea of following a specific AI approach, which is called active inference. And this is going beyond what you [have heard] about AI, which is just training on a huge amount of data,” said Motor Ai CEO and co-founder Roy Uhlmann, in an interview with GamesBeat. “It’s working differently, more human-like.”
We’ll explain more of that below.

Motor Ai has built an intelligence for Level 4 autonomous driving that reasons through data, rather than just reacting. At the heart of the system is a cognitive architecture rooted in active inference, a model from neuroscience that allows vehicles to make structured, transparent decisions. That’s how Motor Ai makes autonomous technology transparent and aligned with human and regulatory expectations.
“This type of AI enables the highest safety standard in autonomous driving — as is already legally standardized in Europe,” said Uhlmann.
While other providers pursue autonomy through brute-force data collection, end-to-end solutions and black-box prediction models, Motor Ai has taken a different approach: one that is deeply explainable and certifiable at the world’s highest safety levels. Its full-stack system already meets the most stringent European and international safety and compliance requirements, including UNECE approval standards, ISO 26262 (ASIL-D), Regulation (EU) 2022/1426, the Autonomous Vehicles Approval and Operation Ordinance (AFGBV), GDPR, the EU AI Act, and upcoming Cyber Resilience Act provisions.
This regulatory-first design is already moving from the lab to the street.
This year, vehicles equipped with Motor Ai’s Level 4 system for autonomous driving will start operations in several German districts. The vehicles are supervised on board by a safety driver, who is expected to be removed during 2026. These deployments include both the full onboard autonomy stack and the technical supervision required by law, giving local transit authorities a fully operable path to autonomous transport without compromising control or safety.
For the team behind Motor Ai, these milestones are the product of years of deep technical development and regulatory groundwork. Since 2017, the company has built its entire autonomy stack in-house from Berlin, working in close dialogue with certification authorities and federal certifiers.
“Our solution meets key requirements for transparency and traceability of autonomous driving decisions, as required by authorities,” said Uhlmann. “That clearly distinguishes us from US providers and at the same time optimally complies with European regulatory requirements.”

That trust is becoming increasingly important. As autonomous systems move closer to everyday use, European governments and the public are asking tougher questions: How are these systems making decisions? Can those decisions be explained, or are they pure black-box systems? Motor Ai’s architecture is designed to answer those questions clearly, legally, and reliably.
“This ‘Made in Germany’ in-house development reduces interdependencies while strengthening Europe’s ability to operate in critical innovative technology,” said Lucas Merle, principal at eCAPITAL.
The company’s early traction is a signal of what may come next. With road testing already in place, Motor Ai is well-positioned to define the next chapter of autonomy, based on intelligence built for scaling in real-world conditions.
“In a regulated environment like Europe, trust and compliance are non-negotiable,” said Michael Janßen, general partner at Segenia Capital, in a statement. “Motor Ai has built a solution that is not only technologically differentiated, but fundamentally aligned with how Europe thinks about infrastructure and public safety. This is how autonomy will scale in the future.”
Looking ahead, Motor Ai plans to grow its engineering, safety and type approval teams, expand deployment partnerships with municipalities, and begin cross-border regulatory expansion into other European markets.

Motor Ai’s long-term vision: a certified, explainable driver system that can serve as infrastructure for safe, transparent autonomy – one that Europe can both build on, and believe in.
“We don’t think the future of autonomy in Europe should be a mystery,” added Uhlmann, explaining the fundamentally different approach Germany and the EU take in comparison to other markets. “It should be measurable, inspectable, and designed to earn public trust. That’s what we’ve been building, and now we’re ready to scale it.”
Origins
Motor Ai was founded in Berlin in 2017 by Roy Uhlmann (CEO) and Adam Bahlke (CTO). The company has developed Level 4 intelligence for autonomous driving, taking an approach based more on reasoning than on ingesting a huge amount of data and then trying to act upon that knowledge. It’s a more human way to do reasoning, Uhlmann said.
He added, “We were building our business on this new AI approach, and we found out it’s going to help in autonomous driving to solve all these problems with the edge cases. With these scenarios, there is so much data, you need to make it more efficient and safe. That’s basically how we started.”
The company built a full stack of AI and the sensors needed for its Level 4 autonomous driving system, in which there is no human in the driver’s seat.
The company notes there are three main markets for its products: the European Union, the U.S. and China. The EU and China have stringent regulations, while the U.S. does not.
“What we have to meet is this really strict safety standard of the European Union,” he said.
How it works

I noted the CES 2025 speech that Nvidia CEO Jensen Huang gave about the rise of physical AI in the form of autonomous driving and humanoid robots. He foresaw a boom happening because you can create digital twins of products in a virtual world and test them rigorously in the digital twin. As they become ready for the physical world, the products are then much more robust when it comes to proper design. Instead of testing a self-driving car for a limited time in the real world, you can now test it for billions of miles in the virtual world. The result is safer robots and cars that come out faster.
Uhlmann said this approach was borne out by the onset of generative AI, which makes it possible to simulate different variations of driving for a self-driving car.
“We now go beyond that because active inference is based on understanding a scene that’s happening in real time in our world,” Uhlmann said. “We have the scene in front of us. The system creates different variations, and then decides which of these variations it actually chooses.”
This takes a lot less computing power than a full digital twin of the entire world, and it also can fit inside a car, making it more affordable and less likely to inundate a network or datacenter with too much data.
“You have this generative approach of having simulated different variations of your environment. We do that in real time, in the moment we see a traffic situation, [so that we can] create a traffic maneuver. It’s like the next step. It’s not about building a huge server farm to generate everything, which is inside a black box and subject to hallucinations,” Uhlmann said.
“We can do it in real time and we can also explain why it does that,” Uhlmann added. “This explainability is a huge thing for us. In the autonomous driving and automotive industries, we always have to ensure it’s safe enough.”
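To make the idea concrete, here is a minimal, hypothetical sketch of what selecting among generated scene variations might look like: candidate maneuvers are scored, the lowest-cost one is chosen, and a human-readable trace is kept so the decision can be explained afterward. The class names, scoring terms, and weights are illustrative assumptions, not Motor Ai’s actual code or its active-inference mathematics.

```python
# Hypothetical sketch of explainable maneuver selection among
# generated scene variations. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_risk: float   # predicted probability of conflict, 0..1
    goal_progress: float    # how far the maneuver advances the route, 0..1

def expected_cost(m: Maneuver, risk_weight: float = 10.0) -> float:
    # Lower is better: penalize predicted risk heavily, reward progress.
    return risk_weight * m.collision_risk - m.goal_progress

def choose_maneuver(candidates: list[Maneuver]) -> tuple[Maneuver, str]:
    # Score every generated variation and keep a readable trace,
    # so the chosen decision can be explained after the fact.
    scored = sorted(candidates, key=expected_cost)
    trace = "; ".join(
        f"{m.name}: cost={expected_cost(m):.2f}" for m in scored
    )
    return scored[0], trace

candidates = [
    Maneuver("yield_to_pedestrian", collision_risk=0.01, goal_progress=0.2),
    Maneuver("proceed_through", collision_risk=0.30, goal_progress=0.9),
    Maneuver("stop_and_wait", collision_risk=0.00, goal_progress=0.0),
]
best, trace = choose_maneuver(candidates)
# "best" is the lowest-cost maneuver; "trace" records why it was picked.
```

The trace is what makes such a scheme auditable: every rejected alternative and its score survive the decision, rather than disappearing inside a black box.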
The company has been at work for eight years, starting with an era when it was bootstrapping. It tested cars with safety drivers under the supervision of German traffic authorities, and also drove cars without a safety driver. In Berlin, a closed airport gave the company a place to test its cars for about 18 months on the tarmac.
“We have about 30 different layers to the system, and we have to prove every one of them is safe,” Uhlmann said. “We have to show we can mitigate any threat and risk in the system. It’s safety by design.”
The company was able to do that, while others offer a black box. Now it’s starting commercialization. The first cars could hit the market in the first quarter of 2026, starting with the Mercedes-Benz V-Class, a five-seat van. The company will then use it for shuttle services in public transportation networks, starting in Europe.
The team now has more than 100 people. Overall, the company has raised less than $30 million. The target is to have a failure rate of one in 100 million operating hours or less.
The tech isn’t based on Nvidia technology; rather, it’s hardware-agnostic. There are plenty of competitors out there, like Mobileye, but not everyone is shooting for Level 4 autonomy as Motor Ai is.
Rather than capturing every single intersection in the world and testing driving in each of those conditions, which isn’t feasible, the company tests for all of the driving maneuvers that can happen at a given intersection.
“What you can do is go back to all the attributes and features you have, based on what you’re driving, and make sure that this works, and bring that up in the variations. You can then ensure that through all these layers, the overall system is safe, because you can’t test every intersection,” Uhlmann said.
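The combinatorial idea behind this can be sketched briefly: instead of enumerating intersections, enumerate variations built from the attributes of the intersection at hand, and check the planned maneuver against each one. The attribute names below are hypothetical examples, not Motor Ai’s actual feature set.

```python
# Illustrative sketch: enumerate scenario variations from intersection
# attributes rather than from intersections themselves.
# Attribute names are hypothetical, not Motor Ai's actual features.
from itertools import product

signal_states = ["green", "yellow", "red"]
oncoming_traffic = ["none", "car", "cyclist"]
crosswalk = ["clear", "pedestrian_waiting", "pedestrian_crossing"]

# 3 * 3 * 3 = 27 variations for this one intersection, each of which
# can be checked against the planned maneuver in the safety layers.
variations = list(product(signal_states, oncoming_traffic, crosswalk))
```

Because the number of attributes is finite, the variation set stays tractable even though the number of real-world intersections does not.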
Uhlmann added, “The question we have to answer is, ‘What is a twin?’ It’s not about the twin of an object. Rather, when we’re talking about traffic situations, we’re talking about an action. And the question is, ‘How can you create a twin of an action?’ This is a huge problem. What we can do is capture how traffic actually works, how people behave in the physical world, and that’s actually what we do. Based on that specific moment, we are solving for that specific intersection.”
He added, “We call it the action twin. This is a specific traffic situation, and for that, we don’t need the supercomputer in the car.”