Nvidia is ramping up the artificial intelligence (AI) arms race, unveiling more than 70 research papers showing how the fast-evolving technology can perform in real-world settings that go far beyond text and images.
The chip company is advancing what it calls “embodied intelligence,” or AI that can perceive, reason and act in industries including manufacturing, biotechnology and transportation. As Fast Company reported on Monday (May 5), Nvidia sees these capabilities as essential to future breakthroughs in robotics, drug development and autonomous navigation.
“For AI to be truly useful, it must engage meaningfully with real-world use cases,” the magazine quoted Bryan Catanzaro, vice president of applied deep learning, as saying.
The chip giant presented the company-authored papers, which cover healthcare, robotics, autonomous vehicles and large language models, at a major technology conference held in Singapore at the end of last month. Nvidia is continuing a collaborative push into AI that has included partnerships with Google, GE Healthcare and GM.
One paper describes a key development called Skill Reuse via Skill Adaptation (SRSA), a system that enables robotic agents to perform unfamiliar tasks by adapting previously learned skills. Nvidia said the system improved task success by 19% and reduced training sample needs by more than half, helping speed deployment across logistics and industrial robotics.
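The core idea, reusing a previously learned skill as a warm start for an unfamiliar task, can be illustrated with a minimal sketch. This is not Nvidia's implementation; the task embeddings, skill library and adaptation step below are all invented for illustration:

```python
# Illustrative sketch (not Nvidia's SRSA code): retrieve the stored skill
# whose task embedding is closest to the new task, then "adapt" it.
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical library of learned skills: name -> (task embedding, policy params)
skill_library = {
    "pick_gear":  ([0.9, 0.1, 0.0], {"grip": 0.8}),
    "insert_peg": ([0.1, 0.9, 0.2], {"grip": 0.5}),
}

def retrieve_skill(new_task_embedding):
    # Pick the stored skill most similar to the unfamiliar task.
    return max(skill_library,
               key=lambda name: cosine(skill_library[name][0], new_task_embedding))

def adapt(policy, feedback_scale=0.1):
    # Placeholder for fine-tuning the retrieved policy on the new task;
    # a real system would continue training from this warm start rather
    # than learning from scratch, which is where the sample savings come from.
    return {k: v * (1 + feedback_scale) for k, v in policy.items()}

new_task = [0.85, 0.15, 0.05]            # embedding of an unseen task
base = retrieve_skill(new_task)           # nearest prior skill
adapted_policy = adapt(skill_library[base][1])
print(base, adapted_policy)
```

Starting from the nearest existing skill rather than a blank policy is what lets such a system cut the number of training samples needed for a new task.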
In the biotech sector, the company’s Proteína model trains on 21 million synthetic protein structures to generate long-chain backbones of up to 800 amino acids. Nvidia says the model outperforms Google DeepMind’s Genie 2, a cutting-edge AI model, in accuracy and diversity, and that its structure-labeled outputs could accelerate vaccine development and enzyme design.
STORM, short for Spatio-Temporal Occupancy Reconstruction Machine, builds 3D maps in under 200 milliseconds — fast enough for use in drones, AR systems and autonomous vehicles navigating complex environments.
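To make the 200-millisecond budget concrete, here is a toy occupancy-grid build, not STORM itself: it simply buckets 3D points into voxels and times the pass. The grid resolution and point cloud are made up:

```python
# Toy occupancy-grid sketch (not Nvidia's STORM): bucket 3D points into
# voxels and time the build against a real-time budget.
import random
import time

def build_occupancy(points, voxel=0.5):
    # A voxel is "occupied" if any point falls inside it; each (x, y, z)
    # point maps to an integer voxel index.
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel), int(y // voxel), int(z // voxel)))
    return occupied

random.seed(0)
points = [(random.uniform(0, 50), random.uniform(0, 50), random.uniform(0, 5))
          for _ in range(100_000)]

start = time.perf_counter()
grid = build_occupancy(points)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{len(grid)} occupied voxels in {elapsed_ms:.1f} ms")
```

A drone or vehicle planner would query a map like this every frame, which is why reconstruction latency, not just accuracy, is the headline metric.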
Aimed at improving reasoning, the company’s Nemotron-MIND teaches large language models how to solve math problems using synthetic dialogue. According to the company, models trained this way outperform larger systems on key benchmarks while using far fewer tokens.
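The general idea behind synthetic math dialogues can be sketched in a few lines: turn a templated arithmetic problem into a step-by-step conversation that could serve as training text. The student/tutor format below is invented for illustration and is not Nemotron-MIND's actual data format:

```python
# Hedged sketch of synthetic dialogue generation for math training data.
# The two-turn student/tutor format here is hypothetical.
import random

def make_dialogue(a, b):
    # Walk through the solution step by step rather than stating the answer,
    # so the training text demonstrates the reasoning itself.
    return [
        ("student", f"What is {a} * {b} + {a}?"),
        ("tutor", f"First compute {a} * {b} = {a * b}; "
                  f"then add {a} to get {a * b + a}."),
    ]

random.seed(1)
corpus = [make_dialogue(random.randint(2, 9), random.randint(2, 9))
          for _ in range(3)]
for role, text in corpus[0]:
    print(f"{role}: {text}")
```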
Nvidia delivered these announcements at the 2025 International Conference on Learning Representations (ICLR). At the event, the chipmaker also introduced Nvidia Inference Microservices (NIM) — a deployment platform designed to help firms run advanced AI models without large-scale infrastructure.