Liquid AI's LFM2 series, launched in 2025, targets the fastest on-device foundation models using the company's "liquid" architecture. LFM2 models are built to run efficiently on devices such as phones and laptops, offering an alternative to cloud-only LLMs. Liquid AI has expanded LFM2 into specialized models, including a video analysis model and an edge deployment stack, and has released a detailed technical report covering the architecture search, training data, and post-training process, so that other organizations can replicate and adapt these models to their own needs.

The architecture is optimized for real-world constraints such as memory and latency on target hardware like mobile CPUs, and the training pipeline emphasizes structured approaches that make the models behave like practical agents, unlike many other "tiny LLMs". The series also offers multimodal capabilities designed for efficiency on resource-constrained devices, such as document and audio understanding, and extends to retrieval models suitable for agent systems in enterprise deployments.

LFM2 promotes a hybrid architecture in which small on-device models handle latency-critical tasks while cloud models provide heavy reasoning as needed. In essence, LFM2 makes on-device AI a viable design choice for a range of applications, especially in the enterprise.
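The hybrid on-device/cloud pattern described above can be sketched as a simple router. Everything here is a hypothetical illustration, not Liquid AI's implementation: `local_model`, `cloud_model`, and the escalation heuristic are placeholder names standing in for real inference backends and a real complexity classifier.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class HybridRouter:
    """Route requests between a small on-device model and a cloud model.

    Both model fields are placeholder callables (prompt -> response);
    the routing heuristic below is purely illustrative.
    """
    local_model: Callable[[str], str]
    cloud_model: Callable[[str], str]
    max_local_words: int = 512  # assumed input budget for the on-device model

    def needs_heavy_reasoning(self, prompt: str) -> bool:
        # Crude stand-in heuristic: long or explicitly analytical prompts
        # escalate to the cloud; a production system would likely use a
        # learned classifier or confidence signal from the local model.
        return (len(prompt.split()) > self.max_local_words
                or "analyze" in prompt.lower())

    def generate(self, prompt: str) -> str:
        # Latency-critical, simple requests stay on-device; heavy
        # reasoning falls back to the cloud model.
        if self.needs_heavy_reasoning(prompt):
            return self.cloud_model(prompt)
        return self.local_model(prompt)
```

For example, `HybridRouter(local_model=..., cloud_model=...).generate("What time is it?")` would be served on-device, while a long analytical request would escalate to the cloud backend.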
AI and ML News on Bluesky @ai-news.at.thenote.app
venturebeat.com
