Hitachi unveils NVIDIA AI Factory to speed “physical AI” across core businesses

The new global platform, built on NVIDIA’s reference design, will pool Blackwell-class compute and software to develop AI that perceives the real world and acts on it.

By Emily Carter, Senior Financial Editor

Hitachi said on September 26 it has established a global NVIDIA AI Factory to accelerate what it calls physical AI, or systems that ingest data from the real world and take actions based on it.

The company described the initiative as a centrally managed but globally distributed infrastructure that will support engineers across the United States, Europe, the Middle East and Africa, and Japan.

The buildout follows NVIDIA’s AI Factory reference architecture and is powered by Hitachi’s iQ platforms.

According to the announcement, the footprint includes NVIDIA HGX B200 systems based on Blackwell GPUs, Hitachi iQ M Series systems using RTX PRO 6000 Server Edition GPUs, and NVIDIA Spectrum-X Ethernet networking.

Hitachi plans to run production workflows on NVIDIA AI Enterprise and to use NVIDIA Omniverse libraries for simulation and digital twins.

The company framed the effort as a way to push physical AI from pilot projects into day-to-day operations in its four pillars of Mobility, Energy, Industrial and Technology.

In practical terms, Hitachi said the factory will host models that interpret camera and sensor streams, decide on next steps, and execute actions, with an eye to boosting efficiency, productivity and safety across industries where Hitachi already sells equipment and software.
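For readers unfamiliar with the pattern, the loop Hitachi describes reduces to a perceive-decide-act cycle: read sensor data, run a model, then issue a command. The sketch below is illustrative only; the class and function names are hypothetical placeholders, not Hitachi or NVIDIA APIs.

```python
# A minimal, hypothetical sketch of the perceive-decide-act loop described above.
# read_frame(), choose_action() and actuate() stand in for whatever perception
# model, policy and control interface a real deployment would use.

from dataclasses import dataclass


@dataclass
class Action:
    """Placeholder for a command sent to physical equipment."""
    name: str
    confidence: float


def read_frame() -> list[float]:
    """Stand-in for a camera/sensor stream; returns a dummy feature vector."""
    return [0.0, 0.0, 0.0]


def choose_action(features: list[float]) -> Action:
    """Stand-in for an inference step mapping sensor features to an action."""
    # A real system would call a trained model here, typically served on GPUs.
    return Action(name="no_op", confidence=1.0)


def actuate(action: Action) -> None:
    """Stand-in for dispatching the chosen action to equipment or an operator."""
    print(f"executing {action.name} (confidence {action.confidence:.2f})")


if __name__ == "__main__":
    # One pass of the loop: perceive -> decide -> act.
    features = read_frame()
    action = choose_action(features)
    if action.confidence > 0.5:  # act only when the model is reasonably sure
        actuate(action)
```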

What Hitachi is building

Hitachi positioned the factory as infrastructure, not a single product. It will anchor new software and services under its Lumada 3.0 operating model, which stitches together the company’s IT, operational technology and hardware know-how.

Hitachi said the platform will also expand HMAX, its family of AI-enabled solutions that already targets complex problems in rail systems and other industrial settings.

Executives from both companies cast the move as part of a broader industrial shift toward putting AI into day-to-day operations.

Jun Abe, general manager in Hitachi’s Digital Systems and Services division, said in a press release that the collaboration with NVIDIA has become “a key engine for solving complex real-world problems,” and that the AI Factory will help the group operate more cohesively across regions.

Justin Boitano, vice president of enterprise AI products at NVIDIA, said the infrastructure gives Hitachi “a transformative platform for building and deploying enterprise and physical AI.”

The launch builds on an NVIDIA announcement in August that named Hitachi among early adopters of RTX PRO Servers, a category meant to help enterprises re-architect data centers for AI workloads such as digital twin simulations and physical AI.

At the time, Hitachi president and CEO Toshiaki Tokunaga said using RTX PRO Servers would speed innovation and improve productivity by accelerating AI reasoning and digital twin development for physical assets, including social infrastructure.

The timing also dovetails with Hitachi’s recent deal to acquire German data and AI services firm synvert through subsidiary GlobalLogic, a move aimed at scaling agentic AI and physical AI expertise and broadening HMAX deployments.

The company announced that agreement on September 24.

For investors, the thrust is consistent with how large industrials are trying to convert AI from an IT experiment into an operational advantage.

The hardware stack points to near-term capacity for training and, more importantly, for inference close to assets.

The software choices reflect a push to standardize on NVIDIA’s enterprise stack and to industrialize digital twin work through Omniverse libraries.

If Hitachi keeps execution tight, the AI Factory could reduce cycle time from simulation to field deployment for customers in transport, energy and manufacturing, which is where physical AI will be judged.

The company did not disclose spending levels, capacity figures or a start date for customer-facing services.

It emphasized instead that the factory is designed to let teams share compute, data and models across regions while meeting latency and data-locality requirements.

As with any AI platform, the long-term impact will depend on how quickly Hitachi can translate a common toolchain into measurable gains in uptime, throughput and safety for its installed base.


I am Emily Carter, a finance journalist based in Toronto. I began my career in corporate finance in Alberta, building models and tracking Canadian markets. I moved east when I realized I cared more about explaining what the numbers mean than producing them. Toronto put me closer to Bay Street and to the people who feel those market moves. I write about investing, stocks, market moves, company earnings, personal finance, crypto, and any topic that helps readers make sense of money.

Alberta is still home in my voice and my work. I sketch portraits in the evenings and read a steady stream of fiction, which keeps me focused on people and detail. Those habits help me translate complex data into clear stories. I aim for reporting that is curious, accurate, and useful, the kind you can read at a kitchen table and use the next day.