Liquid AI Inc. Launches Revolutionary Liquid Foundation Models
Brief news summary
Liquid AI Inc., a startup founded by MIT researchers, has launched its first generative AI models, known as Liquid Foundation Models (LFMs). These models use an architecture distinct from traditional Generative Pre-trained Transformers, with the goal of matching or exceeding the performance of existing large language models. The company's mission is to develop efficient and adaptable AI systems suitable for organizations of all sizes. The LFMs prioritize memory optimization and computational efficiency, drawing on principles from dynamical systems and signal processing to improve the handling of sequential data. The lineup includes LFM-1B, designed for low-resource settings; LFM-3B, tailored for edge computing; and LFM-40B, which targets complex tasks in cloud environments. Liquid AI asserts that the LFMs set new performance standards, outperforming models such as ChatGPT. The company aims to ensure compatibility across a range of hardware configurations and is inviting the AI community to red-team the models ahead of their official release. Early access to the models is currently available on multiple platforms, allowing organizations to evaluate their capabilities.

Liquid AI Inc., an artificial intelligence startup and MIT spinoff, has launched its first generative AI models, called "Liquid Foundation Models" (LFMs), which are distinguished by a novel architecture focused on performance and efficiency. Founded by MIT researchers Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus, Liquid AI aims to develop versatile AI models suitable for organizations of all sizes. LFMs draw on principles from dynamical systems, numerical linear algebra, and signal processing, allowing them to efficiently process various types of sequential data, such as text and audio. Unlike traditional deep learning models that require thousands of neurons, LFMs, built on the Liquid Neural Network (LNN) architecture, achieve comparable outcomes with significantly fewer neurons.
This efficiency allows them to handle up to 1 million tokens with minimal memory usage. The initial offerings include three models: LFM-1B (1.3 billion parameters, for resource-limited environments), LFM-3B (3.1 billion parameters, for edge applications), and LFM-40B (40.3 billion parameters, for complex cloud-based scenarios). These models have already demonstrated superior performance in AI benchmarks, competing effectively against established models such as ChatGPT. Liquid AI has made the LFMs available for early access through platforms such as Liquid Playground, Lambda, and Perplexity Labs, enabling organizations to integrate and test the models in various deployment scenarios. The company is also optimizing the models for specific hardware from companies such as Nvidia and AMD, and plans to release detailed technical information and invite the AI community to stress-test the LFMs.
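The article does not disclose the LFM internals, but the founders' earlier published research on liquid time-constant (LTC) networks illustrates the dynamical-systems idea behind Liquid Neural Networks: each neuron's state follows an ordinary differential equation whose effective time constant is modulated by the input. The sketch below is a minimal illustration of that published LTC update, not the LFM implementation; the class name `LTCCell`, the weight shapes, and the explicit Euler solver are assumptions chosen for brevity.

```python
import numpy as np

# Illustrative sketch only: one liquid time-constant (LTC) cell update, in the
# spirit of Hasani et al.'s LTC networks. The actual LFM architecture is not
# public; names, shapes, and the Euler solver here are assumptions.

class LTCCell:
    def __init__(self, input_dim, hidden_dim, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (hidden_dim, input_dim))    # input weights
        self.W_rec = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # recurrent weights
        self.b = np.zeros(hidden_dim)                              # bias
        self.A = rng.normal(0, 1.0, hidden_dim)                    # per-neuron target state
        self.tau = tau                                             # base time constant

    def step(self, x, u, dt=0.1):
        # Bounded gate f(x, u) that modulates each neuron's time constant.
        f = np.tanh(self.W_rec @ x + self.W_in @ u + self.b)
        # LTC dynamics: dx/dt = -(1/tau + f) * x + f * A
        dxdt = -(1.0 / self.tau + f) * x + f * self.A
        return x + dt * dxdt  # one explicit Euler integration step


# Usage: run a short random input sequence through a 16-neuron cell.
cell = LTCCell(input_dim=4, hidden_dim=16)
x = np.zeros(16)
for u in np.random.default_rng(1).normal(size=(10, 4)):
    x = cell.step(x, u)
print(x.shape)  # (16,)
```

Because the state is a small set of coupled ODEs rather than a wide stack of discrete layers, a cell like this can process a long sequence step by step with a fixed, compact memory footprint, which is the property the article attributes to the LFMs.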