2025: The Year of Physical Intelligence in AI and Robotics
Brief news summary
Artificial intelligence (AI) has made notable progress in generating text, audio, and video, but its understanding of the physical world remains limited, creating challenges for practical applications. This is particularly true for technologies like self-driving cars, which may encounter unexpected errors. To address these issues, the concept of "physical intelligence" has emerged: an approach that combines AI's computational capabilities with robotics, enabling systems to interact with their environment through an understanding of cause and effect. MIT researchers are pioneering this field with "liquid networks," a form of physical intelligence that continuously learns and adapts beyond its initial training. This method allows systems to perform complex tasks from simple digital instructions; for instance, MIT's lab has created a 3D-printed robot that can move forward from basic commands. Other advances include Covariant's robotic arms operated by chatbots and Carnegie Mellon's robots, which use neural networks for dynamic movements. These developments suggest that by 2025, intelligent systems able to perform physical tasks on demand may become widespread, extending AI's reach beyond digital environments into fields such as smart home technologies.

Recent AI models exhibit humanlike capabilities in generating text, audio, and video when prompted. However, these algorithms have primarily been confined to the digital realm rather than interacting with our physical, three-dimensional world. Even the most advanced models face significant challenges when applied to real-world scenarios, such as the ongoing struggle to create safe and reliable self-driving cars. While these models are artificially intelligent, they tend to lack an understanding of physics and often hallucinate, leading to inexplicable errors.
This is the year AI transitions from the digital sphere into the real world. Extending AI's reach beyond the digital boundary requires reimagining machine thinking by combining AI's digital intelligence with robotics' mechanical skill. This fusion, which I refer to as "physical intelligence," enables machines to comprehend dynamic environments, manage unpredictability, and make real-time decisions. Unlike conventional AI models, physical intelligence is grounded in physics, incorporating fundamental real-world principles such as cause and effect. These attributes allow physical intelligence models to engage with and adapt to varied environments. At my MIT research group, we are developing physical intelligence models known as liquid networks.
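The defining idea behind liquid networks is the liquid time-constant (LTC) neuron: instead of a fixed-weight update, each neuron's state evolves by an ordinary differential equation whose effective time constant depends on the input, so the dynamics keep adapting after training ends. A minimal Euler-integration sketch of the published LTC update is shown below; the weights, time constants, and dimensions here are toy illustrative values, not parameters from MIT's actual models.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) neuron layer.

    Implements dx/dt = -(1/tau + f) * x + f * A, where the gate
    f = tanh(W_in @ inputs + W_rec @ x) depends on the current input,
    making the effective time constant "liquid" rather than fixed.
    """
    f = np.tanh(W_in @ inputs + W_rec @ x)   # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A      # state derivative
    return x + dt * dxdt                     # explicit Euler update

# Toy usage: 4 neurons driven by 3 inputs, with random illustrative weights.
rng = np.random.default_rng(0)
x = np.zeros(4)
W_in = rng.normal(size=(4, 3))
W_rec = rng.normal(size=(4, 4))
tau, A = np.ones(4), np.ones(4)
for _ in range(100):
    x = ltc_step(x, rng.normal(size=3), W_in, W_rec, tau, A)
```

Because the gate `f` is recomputed from live inputs at every step, the same trained weights yield different temporal behavior in different environments, which is one intuition for why such models generalize beyond their training conditions.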
In one experiment, we trained two drones—one using a standard AI model and another using a liquid network—to locate objects in a forest during summer, utilizing data from human pilots. Both drones performed well on the tasks they were trained for, but only the liquid network drone adapted to find objects in new conditions, such as winter or urban settings. The experiment demonstrated that, unlike conventional AI systems that stop learning after initial training, liquid networks continue to learn and adapt from experience, much as humans do.

Physical intelligence also interprets and executes complex commands derived from text or images, bridging digital instructions with real-world action. For example, we've created a system in our lab capable of designing and 3D-printing small robots in less than a minute based on prompts like "robot that can walk forward" or "robot that can grip objects."

Significant breakthroughs are also occurring in other labs. Robotics startup Covariant, led by UC Berkeley researcher Pieter Abbeel, is creating chatbots, similar to ChatGPT, that can operate robotic arms; it has raised over $222 million to develop and deploy sorting robots in warehouses worldwide. A team at Carnegie Mellon University demonstrated that a robot with a single camera and imprecise actuation could perform dynamic parkour movements—such as jumping onto obstacles twice its height and crossing gaps twice its length—using a neural network trained through reinforcement learning.

If 2023 was the year of text-to-image and 2024 the year of text-to-video, then 2025 is poised to usher in the era of physical intelligence. This new generation of devices, which includes not only robots but also systems like power grids and smart homes, will be capable of interpreting instructions and executing tasks in the real world.