VentureBeat

Arcee aims to reboot U.S. open source AI with new Trinity models released under Apache 2.0

Chinese research labs have led recent development of open-weight Mixture-of-Experts (MoE) language models, often with superior benchmark performance. In response, U.S. company Arcee AI has launched its new "Trinity" family, the first fully U.S.-trained open-weight MoE models. The initial releases, Trinity Mini and Trinity Nano Preview, are available for free download and modification under the Apache 2.0 license, the product of a U.S. startup building end-to-end open-weight models from scratch on American infrastructure with curated datasets.

Trinity Mini is a 26B-parameter model designed for high throughput, while Trinity Nano Preview is a smaller, experimental 6B-parameter model. Both use Arcee's novel Attention-First Mixture-of-Experts (AFMoE) architecture, which combines sparse expert routing with enhanced attention mechanisms for improved reasoning and efficiency. Arcee partnered with DatologyAI for data curation and Prime Intellect for compute infrastructure to execute the project entirely within the U.S., a strategic choice that emphasizes model sovereignty and control over the training process, which the company frames as critical for future enterprise AI.

Arcee is also training Trinity Large, a 420B-parameter model slated to launch in January 2026 as a frontier-scale, U.S.-trained open-weight model. The Trinity launch signals a renewed push for domestic, controlled development in the open-source LLM landscape.
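The article does not spell out AFMoE's internals, so the sketch below is only a generic illustration of the two ingredients named above: sparse top-k expert routing paired with standard multi-head attention inside a transformer block. It is written in PyTorch, and every class name, layer choice, and hyperparameter here is an assumption for illustration, not Arcee's actual design.

```python
# Generic top-k sparse MoE transformer block (illustrative only, not Arcee's AFMoE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Feed-forward layer that routes each token to its top-k experts."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (batch, seq, d_model)
        logits = self.router(x)                        # (batch, seq, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)     # pick k experts per token
        weights = F.softmax(weights, dim=-1)           # normalize the selected logits
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

class MoETransformerBlock(nn.Module):
    """Standard attention followed by a sparse MoE feed-forward sublayer."""
    def __init__(self, d_model=256, n_heads=4, d_ff=512, n_experts=8, k=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = TopKMoE(d_model, d_ff, n_experts, k)
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out                               # residual around attention
        return x + self.moe(self.ln2(x))               # residual around MoE FFN

x = torch.randn(2, 16, 256)
print(MoETransformerBlock()(x).shape)                  # torch.Size([2, 16, 256])
```

The point of this pattern is that only the k selected experts run for each token, which is how MoE models can hold a large total parameter count while keeping per-token compute, and hence throughput, closer to that of a much smaller dense model.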
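Because the weights ship under Apache 2.0, they can be pulled and run with standard open-source tooling. A minimal sketch using Hugging Face transformers follows; the repository id is a guess based on Arcee's naming and should be checked against the company's actual model hub listing.

```python
# Minimal sketch: loading an Apache-2.0 open-weight model with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Trinity-Mini"  # hypothetical repo id; verify on Arcee's hub page

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; a 26B-parameter model
# needs substantial GPU memory (or quantization) to run locally.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Summarize what a mixture-of-experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```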