NTT Research press event in San Francisco.

How NTT Research approaches its basic science explorations | Kazu Gomi interview

It’s heartening to see that big companies can still invest in basic research, with much of the work done in Silicon Valley.

Japanese telecommunications firm NTT announced a series of research projects last week that could pave the way for better AI and more energy-efficient data centers.

At a press conference in San Francisco, NTT researchers including Kazu Gomi, CEO of NTT Research, said the company has created a new large language model (LLM) integration that can see and process graphical elements of documents. I caught up with Gomi for an interview during the event.

Gomi also said NTT has initiated a new field of science, “the physics of intelligence,” to study sustainable and trustworthy AI. It teamed up with Harvard University to study brain science, and it is working on making a “digital twin” of the human heart. It’s also exploring quantum computing.

NTT also said it has successfully demonstrated its new all-photonic network (APN) for distributed data centers, which can lessen the need to build a giant data center in a metropolitan area with high real estate and electricity costs.

NTT has more than 330,000 employees and $97 billion in annual revenue. It invests more than $3.6 billion in annual research and development, and five years ago it created an R&D division in Silicon Valley. At its Upgrade 2024 event, the firm talked about its progress in R&D in San Francisco.

“Our mission, our task is that we upgrade what you think as normal to the next level,” said Gomi.

NTT Research has a big office in Sunnyvale, California.

The biggest challenges for data centers in urban areas are the high cost and scarcity of downtown land, along with high electricity prices. So NTT’s researchers are experimenting with spreading data centers out into the suburbs and connecting them with fiber-optic cables that deliver data at 100 or 400 gigabits per second.

In the United Kingdom, NTT showed that data centers 100 kilometers apart had less than one millisecond of network delay over APN connections. The APN-connected data centers in the U.K. and the U.S. reduced delay variation to a tiny fraction of what prevails in conventional networks.

Measurements in tests conducted over 100 Gbps and 400 Gbps links showed the two APN-connected data centers in the U.K. operated with less than a millisecond (approximately 0.9 milliseconds) of latency, and with a delay variation (sometimes called jitter) of less than 0.1 microseconds.
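Those numbers are consistent with simple fiber physics: light in standard single-mode fiber travels at roughly two-thirds of its vacuum speed, so propagation alone over 100 kilometers accounts for most of the measured delay. Here is a minimal back-of-the-envelope sketch; the refractive index and the 100-kilometer distance are illustrative assumptions, not NTT’s published test parameters.

```python
# Back-of-the-envelope fiber propagation delay over an assumed 100 km APN link.
# Assumptions: refractive index ~1.468 for standard single-mode fiber; the
# measured ~0.9 ms figure also includes equipment and protocol overhead.

C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468               # typical refractive index of single-mode fiber
DISTANCE_KM = 100                 # assumed separation between the two data centers

speed_in_fiber = C_VACUUM_KM_PER_S / FIBER_INDEX      # ~204,000 km/s
one_way_ms = DISTANCE_KM / speed_in_fiber * 1000      # one-way delay in ms
round_trip_ms = 2 * one_way_ms                        # round-trip delay in ms

print(f"One-way propagation delay:    {one_way_ms:.2f} ms")    # ~0.49 ms
print(f"Round-trip propagation delay: {round_trip_ms:.2f} ms")  # ~0.98 ms
```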

NTT has 50 researchers in its Silicon Valley office. In five years, NTT Research has published over 450 academic papers and won seven best-paper awards at conferences, for work spanning optics, physics and cryptography. And it is collaborating with 15 different research partners.

I started out asking about the intersection of science fiction and real technology, and Gomi responded by talking about the “digital twins” of the heart and other body systems. By creating simulated hearts, he said, it might one day be possible to simulate your reaction to drugs in order to figure out how much of a dose to give you in real life.

Here’s an edited transcript of our interview.

Kazu Gomi is CEO of NTT Research.

VentureBeat: Some of these ideas are so far out there. Do you sometimes start from science fiction and try to work back, almost?

Kazu Gomi: Did you hear about the cardiovascular bio-digital twin? That fits your description. To me, it’s a project like that. The notion of the bio-digital twin–there’s a lot going on around digital twin technology, typically for cars and factories and so on. But instead of machines, why not apply the same concept to the human body? I thought that was pretty cool. We worked back from there.

VentureBeat: That seems very sci-fi, but did it come from something you’d read?

Gomi: No, I think I just made it up. Probably some other people have thought about it before, but I didn’t learn anything from anyone else about that.

VentureBeat: Do we get there through something like cloning, or through virtual simulations?

Gomi: The term bio-digital twin can be interpreted in different ways. Our approach is to create an exact simulation of, say, your heart and cardiovascular system. The purpose, for now, is to use it to support clinicians. If you have an exact simulation of your heart–an assumption here is that you’re already sick. Maybe you’ve just had a heart attack and you’re in the hospital. The story starts from there. Obviously, the doctors need to do something for your heart. If they have an exact simulation of your heart, they can test out different treatments and scenarios instead of using them on you directly. That’s a pretty narrow scope for the digital twin, but let’s start from there. That’s our approach right now.
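To give a sense of what even the simplest cardiovascular simulation looks like in code, here is a toy two-element Windkessel model, a textbook lumped-parameter description of arterial pressure. It is purely illustrative and is not NTT’s bio-digital twin; the resistance, compliance, and flow values are invented for the sketch.

```python
# Toy two-element Windkessel model: arterial pressure driven by pulsatile flow
# from the heart. A textbook illustration only, NOT NTT's bio-digital twin.
import math

R = 1.0          # peripheral resistance (mmHg*s/mL), illustrative value
C = 1.5          # arterial compliance (mL/mmHg), illustrative value
HR = 75          # heart rate, beats per minute
PERIOD = 60 / HR

def inflow(t):
    """Pulsatile flow out of the heart: a half-sine during systole, zero in diastole."""
    phase = t % PERIOD
    systole = 0.35 * PERIOD
    return 300 * math.sin(math.pi * phase / systole) if phase < systole else 0.0

# Integrate the Windkessel equation dP/dt = (Q_in - P/R) / C with Euler steps.
dt, P, t = 1e-3, 80.0, 0.0
pressures = []
while t < 10.0:
    P += (inflow(t) - P / R) / C * dt
    pressures.append(P)
    t += dt

# After the initial transient, pressure settles into a periodic beat-to-beat waveform.
last_beat = pressures[-int(PERIOD / dt):]
print(f"Pressure range over the last beat: {min(last_beat):.0f}-{max(last_beat):.0f} mmHg")
```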

VentureBeat: Do you have to end up simulating the whole body to simulate the heart?

Gomi: No, but it’s an interesting problem. We talked about the cardiovascular bio-digital twin, which obviously simulates the heart and the rest of the cardiovascular system. The initial scope was just that much, the heart and the circulatory system. But to simulate accurately, that’s not good enough. We have to add some other organs, especially the liver. The nervous system is also important, because it controls the heart. We’re adding a few more important elements to the simulation. That’s the kind of evolution the researchers are finding. The initial model provides a start, but it’s not good enough.

VentureBeat: It seems like it has to be good enough so that if you simulate the drug treatment, you know how your body is going to react.

Gomi: Right. And what reactions are meaningful? There are some other projects that have tried to simulate every part of the body. Obviously, it’s a very complex system. There’s no way to get there with the current state of technology.

VentureBeat: Have you figured out just how hard that problem is, and how soon you might accomplish it?

NTT Research booth

Gomi: Our findings so far–to simulate the whole body is way too complex. You have to narrow the scope in a way that’s meaningful. From that perspective, heart-related diseases are the number one killer in both the United States and the rest of the world. It’s a good subject to study. But again, just the heart is not enough. We add a few more meaningful organs and systems to the model.

To answer the question, we’ve done model development, and we’re doing a lot of animal experiments. There are some iterations going on. We create a model, do experiments, and iterate on that. For animals, like dogs, we can do fairly well. To commercialize a model of a human, of course, we have to go through FDA approvals and so on. We’re definitely not there yet. Before we get there, we have to prove ourselves in different ways, and one of them is using animals. We’re getting closer in that area.

VentureBeat: I saw the announcement about the Harvard Center for Brain Science. Are you looking at the brain in similar ways?

Gomi: That’s a bit of a different project. The release we did yesterday has nothing to do with the bio-digital twin, actually. The Center for Brain Science–that project is about trying to understand how learning is done. It’s more related to AI and how AI learns new knowledge. We want to understand the mechanisms around that. We’ve been collaborating with this team at Harvard over the last two to three years. In the meantime, the AI boom came in. That drew a lot more attention. The discoveries the team has made attracted a lot more interest.

Yesterday’s announcement–given everything that’s going on in the background, we want to accelerate this collaboration with Harvard by giving a grant to the Center, so that the Center can hire more post-docs and accelerate their research. We sent two researchers from NTT to the Center already. They’re working with the professors and researchers and students.

VentureBeat: How many people do you have at NTT Research in the U.S. now?

Gomi: About 50 researchers. They’re mostly in California. But we also have about 10 people on the East Coast at different campuses – Harvard, MIT, Cornell.

VentureBeat: Do you feel like you’re assembling different kinds of expertise than in Japan?

Gomi: Yeah, yeah. I think this is the beauty of America and American universities. The top talents from all over the world are here. We definitely feel the research is more dynamic, how it’s done. We’re very pleased with what we’ve done so far, and the way things are going.

VentureBeat: Which piece of the quantum computing problem are you working on?

NTT’s tsuzumi

Gomi: In layman’s terms, there’s a mainline quantum computer effort. If you read a textbook about a quantum computer, it typically starts with qubits and entanglements and the interesting features of physics around it. There are a couple of different ways to create a qubit. IBM is using superconductivity, where you have to reach very low temperatures. But a qubit-based computer is the mainline effort as far as quantum computers.

Our effort is a little bit different. I talked about it at the press conference yesterday. The machine we’re trying to create is called a coherent Ising machine, using optical technology. We’re not really dealing with qubits. There are drawbacks to using a qubit-based quantum computer. You can’t really create a general-purpose computer. Our approach won’t get to a general-purpose computer, but it will create some special-purpose computers. By that I mean the computer we’re looking at is only good for some particular problems.

The question you probably then have is, what are those problems? What we’re trying to solve is called the combinatorial optimization problem. You have many parameters and you’re trying to find the best combination of those parameters. Whatever the criteria you give under the circumstances, what is the best combination of your parameters? It sounds kind of boring, and maybe it is very boring, but it’s a very difficult problem for today’s digital computers. If you want to find the best answers, you have to test every single combination. If the number of combinations is small, maybe 10 or so, you don’t even need a computer. You could probably do it with paper and pencil. But as the number of parameters grows, it gets very complicated, because there are so many possible combinations. That’s the problem we want to solve, and the coherent Ising machine is very effective at it. We’ve proved that it works.
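To make that scaling concrete, here is a small sketch of the brute-force route a conventional digital computer has to take on an Ising-style combinatorial optimization problem: enumerate every assignment of the binary variables. The problem size and random couplings are invented for illustration; the point is that the search space doubles with every variable added.

```python
# Brute-force search over an Ising-style combinatorial optimization problem.
# Each of n binary "spins" is +1 or -1, and we want the assignment that
# minimizes the energy E(s) = -sum_{i<j} J[i][j] * s[i] * s[j].
# A coherent Ising machine targets this class of problems physically;
# here we simply enumerate, which blows up exponentially.
import itertools
import random

n = 16                                         # illustrative problem size
random.seed(0)
J = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]  # couplings

def energy(spins):
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

best = min(itertools.product((-1, 1), repeat=n), key=energy)
print("best energy:", round(energy(best), 3))
print("configurations examined:", 2 ** n)      # 65,536 at n=16; doubles with each spin
```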

VentureBeat: Is there an end goal you see yourselves getting to?

Gomi: We have a couple of interesting things going on. BMW, the car company, put out a contest about two years ago. They called it the quantum challenge, something like that. They have their own business problems. Using quantum computers, can you solve this? Or some other new form of computation machine. We actually won that contest back in 2021 or 2022. The problem in this case was that when BMW designs a new car, it has thousands of parts. You need to find the best way to combine those. There are combinatorial optimization problems there. What’s the best arrangement of parts to fit the car together given all the restrictions around the process? Also, they have to come up with many different test scenarios. What’s the most effective way to do that testing? That was BMW’s actual business problem.

We didn’t entirely solve those problems together with their businesspeople, but we gave them the core of the solution. We gave them the math. Translate this business problem through this math and you can resolve it quickly with the right quantum system.
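As a hedged illustration of what “giving them the math” can look like, here is how a toy part-selection rule might be written as a QUBO (quadratic unconstrained binary optimization) objective, the kind of formulation an Ising machine or annealer consumes. The part costs, the pick-exactly-three constraint, and the penalty weight are all invented for the example and have nothing to do with BMW’s actual problem.

```python
# Toy example: encode "pick exactly k of n parts, minimizing total cost" as a
# QUBO objective. Costs, k, and the penalty weight are invented for illustration.
import itertools

costs = [4.0, 2.5, 7.0, 3.0, 5.5, 1.0]   # hypothetical per-part costs
n, k = len(costs), 3                     # choose exactly 3 parts
PENALTY = 20.0                           # makes violating the constraint expensive

def qubo_objective(x):
    """x is a tuple of 0/1 choices. Objective = total cost + penalty * (sum(x) - k)^2."""
    return sum(c * xi for c, xi in zip(costs, x)) + PENALTY * (sum(x) - k) ** 2

# A dedicated solver would minimize this quadratic form; here we enumerate to
# verify the penalty steers the minimum to a feasible, low-cost selection.
best = min(itertools.product((0, 1), repeat=n), key=qubo_objective)
print("selection:", best)                  # expect the three cheapest parts
print("objective:", qubo_objective(best))  # 2.5 + 3.0 + 1.0 = 6.5
```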

VentureBeat: Jensen Huang has said that he believes this age of sovereign AI is here. Meaning that countries, for example, want all their own data in their own data center and other countries can’t have access to it. The same for enterprises. He said that this might remake how we build data centers. I was thinking about your idea of the distributed data center and how it relates to this problem. How concerned should we be that a data center is in a particular physical place? You could have five or 10 data centers replacing the one in a central location. I don’t know how many data centers you envision with this distributed idea, but if you assume that we have great security, maybe it doesn’t matter where data centers are?

NTT uses photonic links to build distributed data centers.

Gomi: I agree with you from a security point of view. If you have the best cryptography, if you have all the backups you need, it doesn’t matter if a data center is in this country or that country. In my opinion, the argument is a bit sentimental, if you know what I mean. It’s not 100% technologically explainable. But still, if the data center is in a country where something happens, if they confiscate your important data–I imagine you would encrypt it, so it would still be okay. But you wouldn’t feel very good about it.

Another fear is that someone might–recently we had Google Chrome introducing privacy mode, where in theory Google isn’t supposed to be recording or using your private information, but in reality, people have found Google is still using that data. There are doubts around the hyperscalers. They say they’re encrypting data. They say they’re not using data for their own purposes, their own analytics. But what’s really happening? That fear is always there.

VentureBeat: There’s a tug of war around efficiency. It’s more efficient to put these things in the suburbs instead of downtown, but–it’s sort of like blockchain. If, in order to be more efficient around your data centers, you have to build 20 more data centers, that’s not doing any good.

Gomi: Again, the reasoning around this isn’t 100% technical, the sovereignty issue. But it’s true that if everything is in your backyard, the chances of a leak or confiscation are much lower. It feels better that way. That’s my interpretation. From a networking point of view, obviously you want to put the workloads–if we can distribute the workloads around this data in whatever the optimum way might be, that’s good. If energy is cheaper at this time and in this place, let’s use that. That flexibility in a technology platform will help everyone. If we can make connections fast enough, we can do these things more flexibly. That’s the goal. How people use that is the next question. But we can provide the capability.

VentureBeat: I like how this also works on the micro level. People are talking about making personalized LLMs for everyone. Everything inside my house is going to be on my own personal LLM, so I can search through 20 years of my email and come to interesting analyses. But the walls of my house then define this sovereign area. I don’t want that data going beyond them.

Kazu Gomi, President & CEO, NTT Research.

Gomi: It’s almost aligned to what we’ve talked about around the AI expert. It’s a pretty good model for when AI becomes more pervasive. You want to keep your own private AI within your territory. This AI may not know everything, though, so it might need to ask other AIs for other knowledge. With the combination of all these different AIs, you can get the answer you need.

VentureBeat: It brings me back to what Jensen Huang has been saying, that now we need to re-architect all these data centers.

Gomi: I think so, yeah. From the user’s point of view, and also the energy consumption point of view. You can’t build data centers at certain locations anymore because there’s just not enough power.

VentureBeat: People had this worry about blockchain, but do you think AI might cause us to use too much energy?

Gomi: There is a fear, a legitimate worry about the way this could develop. I think there are two things we can do. One is that we have to come up with better algorithms. Today’s AI uses very big models, and to support a big model you need more computers. There have to be better algorithms that don’t require so much hardware. There’s already effort going on in that area. The other, of course, is working to make GPUs more energy-efficient. That’s where our work comes in, using optical devices more extensively between chips. By doing so we should be able to lower energy consumption. Beyond that, for the actual computations required by AI, we can use optical devices.