IBM has released four new open-source Granite 4.0 Nano language models, ranging from 350 million to 1.5 billion parameters. The models prioritize efficiency and accessibility: they are designed to run on laptops and edge devices rather than requiring extensive cloud computing resources, and the smallest can even operate inside a web browser, making them highly versatile for developers. They are licensed under Apache 2.0, enabling commercial use and modification, are natively compatible with tools such as llama.cpp and vLLM, and are certified under ISO 42001 for responsible AI development. Benchmarks show they rival or outperform larger models in comparable categories, particularly in instruction following and function calling. The Granite models address needs for deployment flexibility, inference privacy, and open auditability, and IBM has engaged with the open-source community, hinting at larger models and fine-tuning recipes in the future. This release signals a shift toward strategically scaled AI rather than reliance on sheer model size, positioning Granite as an enterprise-ready family that emphasizes transparency and performance.
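To illustrate the kind of local, laptop-scale workflow this release targets, here is a minimal inference sketch using Hugging Face transformers. The checkpoint name `ibm-granite/granite-4.0-h-350m` is an assumption based on IBM's published naming scheme; check the official model card for the exact identifier before running.

```python
# Minimal local-inference sketch for a small Granite 4.0 Nano checkpoint.
# The model id below is an assumption; substitute the actual name from IBM's model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-350m"  # assumed Hugging Face id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "In one sentence, what is an edge-deployable language model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# A sub-billion-parameter model like this generates comfortably on a laptop CPU.
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint could also be served through vLLM or converted to GGUF for llama.cpp, which is where the release's native compatibility with those tools comes in.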
Source: venturebeat.com (via AI and ML News on Bluesky, @ai-news.at.thenote.app)
