Introducing AIRIS: The Future of AI in 3D Game Environments
Brief news summary
AIRIS, developed by SingularityNET and the ASI Alliance, is transforming AI by enabling self-directed learning in the intricate 3D world of Minecraft, in contrast with traditional AI training in simpler 2D environments. Minecraft's detailed, open-world setting sharpens the system's problem-solving abilities. Initially trained on basic 2D puzzles, AIRIS now operates in Minecraft, where it processes a 5x5x5 grid view and performs 16 actions such as moving and jumping, setting a benchmark for Reinforcement Learning. In "Free Roam" mode, AIRIS shows remarkable adaptability, exploring new areas and overcoming obstacles where typical Reinforcement Learning methods struggle. It can also navigate accurately to specific coordinates, reflecting its advanced learning skills. Potential applications include stress testing and bug detection in complex video games such as Fallout 4, improving quality assurance for developers. In essence, AIRIS marks a major leap in autonomous AI learning within complex virtual environments.

A new artificial intelligence named AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism) is being developed by SingularityNET and the ASI Alliance to learn and play Minecraft from scratch using the game's feedback loop. Unlike previous AI testing environments, which were often 2D and linear, Minecraft offers a complex 3D world where AIRIS can explore and navigate, testing its ability to grasp game-design goals without specific instructions. SingularityNET and the ASI Alliance chose Minecraft for its complexity, popularity, technical fit for AI integration, and status as a Reinforcement Learning benchmark, which allows comparisons with existing algorithms. AIRIS receives as input the 5x5x5 block grid surrounding it and its current coordinates.
Initially, the AI can perform simple actions such as moving or jumping in one of eight directions; more complex actions such as mining and crafting will be added later. In "Free Roam" mode, AIRIS builds an internal map as it explores, adapting to obstacles such as trees or caves. Given specific coordinates, it navigates toward them, exploring new areas along the way. This ability to explore and adapt in 3D environments distinguishes AIRIS from traditional Reinforcement Learning, which struggles with such open-ended tasks. A practical application could be automated bug and stress testing in game development, where AIRIS would surface issues by interacting with game elements, streamlining the quality assurance process. This represents a significant step toward self-directed learning in complex virtual worlds, and exciting news for AI enthusiasts.
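The interface described above (a 5x5x5 block observation centered on the agent, plus 16 actions formed by moving or jumping in eight directions) can be sketched in Python. This is a minimal illustration of the action space and a naive step toward target coordinates, not AIRIS's actual code: the names `Observation` and `greedy_action`, the block labels, and the grid indexing convention are all hypothetical assumptions.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

# 8 compass directions as (dx, dz) offsets in the horizontal plane.
DIRECTIONS = {
    "N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
    "S": (0, 1), "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1),
}

# 16 actions: move or jump in each of the 8 directions.
ACTIONS = [f"{verb}_{d}" for verb in ("move", "jump") for d in DIRECTIONS]

@dataclass
class Observation:
    grid: List[List[List[str]]]          # 5x5x5 block names, agent at index (2, 2, 2)
    position: Tuple[int, int, int]       # agent's current (x, y, z) coordinates

def greedy_action(obs: Observation, target: Tuple[int, int, int]) -> str:
    """Pick the action whose direction best reduces distance to the target.

    A real agent would consult its learned internal map; this sketch just
    scores the 8 directions against the straight-line offset and jumps
    when the block one step ahead is not air.
    """
    x, _, z = obs.position
    dx, dz = target[0] - x, target[2] - z
    # Normalized dot product so diagonal directions are not over-counted.
    best = max(
        DIRECTIONS,
        key=lambda d: (DIRECTIONS[d][0] * dx + DIRECTIONS[d][1] * dz)
        / math.hypot(*DIRECTIONS[d]),
    )
    ddx, ddz = DIRECTIONS[best]
    ahead = obs.grid[2 + ddx][2][2 + ddz]  # block directly ahead of the agent
    verb = "jump" if ahead != "air" else "move"
    return f"{verb}_{best}"
```

For example, an agent at (0, 64, 0) asked to reach (10, 64, 0) with clear terrain would pick `move_E`, and switch to `jump_E` if a solid block sat one step east.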