Nvidia Unveils Advanced Generative Physical AI for Revolutionizing Automation
Brief news summary
At the CES event in Las Vegas, Nvidia showcased advancements in AI, focusing in particular on "generative physical AI" aimed at transforming industries such as manufacturing and logistics. Rev Lebaredian, Nvidia's VP of Omniverse and simulation technology, highlighted the importance of robust computing systems, the equivalent of data centers, to support future "AI factories." Nvidia is bringing physical AI into industrial environments through humanoid robots and sophisticated traffic systems, with plans to expand into healthcare and urban development. Lebaredian drew parallels between agentic AI and GPT models, explaining Nvidia's approach of categorizing robots into knowledge, generalist, and transportation robots. This physical AI is designed to automate a wide array of industrial tasks, using a combination of real and synthetic data to improve efficiency. Nvidia presented a "three-computer solution" to drive AI progress: Nvidia DGX for AI training, Nvidia Omniverse on OVX for simulation and data generation, and Nvidia AGX for processing sensor data. The company also introduced Cosmos, featuring generative world foundation models (WFMs) for environment simulation and outcome prediction, crucial for training robots. These models come with open licenses, allowing customized scenario analysis and realistic simulation videos. Additionally, Nvidia's new Nemotron model family prioritizes high performance and cost-effectiveness, laying the groundwork for a digital workforce that collaborates with humans and strengthens Nvidia's AI ecosystem.

At CES in Las Vegas, Nvidia highlighted numerous AI advancements, particularly generative physical AI, which promises to revolutionize factory and warehouse automation. Rev Lebaredian, Nvidia's VP of Omniverse and simulation technology, explained that the development of AI requires a new computing framework for data centers, which he likened to building AI factories.
Unlike large language models that predict the next word or pixel, physical AI models must interpret the three-dimensional world, impacting fields from healthcare to smart cities. Lebaredian foresees an era where billions of physical and virtual robots will transform industries. Nvidia categorizes robots into three types: knowledge robots (agentic AI), generalist and humanoid robots, and autonomous vehicles.
These robots, guided by physical AI models, can understand their environments, unlike traditional language models that produce only text or images. Nvidia's initiatives aim to bring AI into 10 million factories and 200,000 warehouses, simplifying the current approach of collecting thousands of human demonstrations or extensive data from autonomous vehicles. Synthetic data, generated through Nvidia tools such as the DGX, Omniverse, and AGX systems, is pivotal in refining AI models. Nvidia Cosmos enhances this pipeline: it uses generative world foundation models (WFMs) to simulate real-world scenarios and streamline the creation of photorealistic, physics-based synthetic data. This accelerates training for robots and self-driving cars, making development more efficient. Nvidia also presented the Nemotron models for agentic AI, optimized for high performance and reduced cost, supporting a digital workforce that operates alongside humans. These models leverage Nvidia's resources to enhance performance and teach robots complex tasks more efficiently.