BitEnergy AI, Inc. has developed Linear-Complexity Multiplication (L-Mul), a technique the researchers say can cut AI model energy consumption by up to 95% without compromising accuracy. L-Mul replaces costly floating-point multiplications with simpler integer additions, yielding faster calculations and lower energy use. The method holds particular promise for reducing the energy footprint of large language models such as GPT.
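The general principle of trading a floating-point multiply for an integer addition can be sketched with a Mitchell-style logarithmic multiplication on raw float32 bit patterns. The Python sketch below illustrates that idea under those assumptions; it is not the paper's exact L-Mul algorithm (which is defined at the mantissa level), and approx_mul is a hypothetical helper name.

```python
import struct

def approx_mul(x: float, y: float) -> float:
    """Approximate x * y with one integer addition on float32 bit patterns
    (a Mitchell-style logarithmic multiply; illustrative only)."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = -1.0 if (x < 0) != (y < 0) else 1.0
    # Reinterpret the magnitudes' IEEE-754 float32 bits as unsigned integers.
    xi = struct.unpack("<I", struct.pack("<f", abs(x)))[0]
    yi = struct.unpack("<I", struct.pack("<f", abs(y)))[0]
    # A float's bit pattern is roughly a scaled, offset log2 of its value, so
    # adding the patterns adds logarithms; subtracting 0x3F800000 (the bits of
    # 1.0f) removes the exponent bias that would otherwise be counted twice.
    zi = (xi + yi - 0x3F800000) & 0xFFFFFFFF
    return sign * struct.unpack("<f", struct.pack("<I", zi))[0]
```

For instance, approx_mul(7.0, 9.0) evaluates to 60.0 against an exact product of 63.0, showing the small, bounded relative error this family of approximations accepts in exchange for replacing the multiply with a single addition.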
L-Mul has been shown to be effective across a range of AI tasks, including natural language processing, vision, and symbolic reasoning, reportedly achieving higher precision than existing 8-bit floating-point formats while requiring less computation. The algorithm can also be integrated directly into the attention mechanism of transformer-based models, which further broadens its appeal (sketched below).
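To make the attention-mechanism point concrete, the toy sketch below computes scaled dot-product attention scores with every scalar multiplication in Q·K^T routed through an addition-based approximate multiply such as the approx_mul helper above; attention_scores and the example vectors are hypothetical illustrations, not code from the paper.

```python
import math

def attention_scores(Q, K, mul):
    """Toy scaled dot-product attention scores (softmax omitted) in which
    every scalar multiplication inside Q @ K^T goes through `mul`."""
    scale = 1.0 / math.sqrt(len(Q[0]))
    return [
        [scale * sum(mul(qi, ki) for qi, ki in zip(q, k)) for k in K]
        for q in Q
    ]

# One query attending over two keys, using the approx_mul sketch above.
print(attention_scores([[0.2, -0.5]], [[1.0, 0.3], [-0.7, 0.9]], mul=approx_mul))
```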
However, L-Mul requires specialized hardware to fully realize its potential, as current chips are not optimized for this kind of addition-based arithmetic. The researchers are actively developing hardware that natively supports L-Mul calculations, aiming to unlock those efficiency gains and substantially improve AI energy efficiency.
science.slashdot.org
