FunctionGemma, released in 2025, sparked interest beyond its initial demos, and one promising application is as a router for multi-agent systems. The article describes an experiment fine-tuning FunctionGemma to route customer queries across seven specialized e-commerce support agents, testing whether a 270M-parameter model can learn routing behavior sophisticated enough to replace traditional rule-based methods. E-commerce support, with its diverse query types, made a suitable test bed for such a multi-agent system.

The experiment began by designing specialized agents, each with defined capabilities and triggers, to handle different categories of customer issues. The central challenge was training FunctionGemma to interpret natural-language queries and dispatch each one to the right agent. LoRA was used for parameter-efficient fine-tuning, with adapters applied to the attention layers. Training data was generated programmatically to produce realistic customer queries, including phrasing variations and edge cases. Training ran on Google Colab with a T4 GPU, using standard hyperparameters such as batch size and learning rate.

The fine-tuned router achieved 89.40% overall routing accuracy, significantly outperforming keyword-based approaches.
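The article's actual generation script isn't reproduced in this summary, but programmatic query generation of the kind described can be sketched as follows. The agent names, templates, and item list below are hypothetical placeholders, not the article's seven real agents:

```python
import random

# Hypothetical agent names and query templates -- the article's actual
# seven agents and their phrasings are not specified in the summary.
TEMPLATES = {
    "order_tracker": [
        "Where is my order {oid}?",
        "My package {oid} hasn't arrived yet",
    ],
    "refund_agent": [
        "I want a refund for order {oid}",
        "How do I return the {item} I bought?",
    ],
    "product_expert": [
        "Is the {item} waterproof?",
        "Does the {item} come in other sizes?",
    ],
}

ITEMS = ["backpack", "headphones", "jacket"]

def generate_examples(n, seed=0):
    """Generate (query, target_agent) training pairs with random variation."""
    rng = random.Random(seed)  # seeded for reproducible datasets
    examples = []
    for _ in range(n):
        agent = rng.choice(sorted(TEMPLATES))
        template = rng.choice(TEMPLATES[agent])
        query = template.format(
            oid=f"#{rng.randint(10000, 99999)}",
            item=rng.choice(ITEMS),
        )
        examples.append({"query": query, "agent": agent})
    return examples

if __name__ == "__main__":
    for ex in generate_examples(3):
        print(ex)
```

In practice each pair would then be serialized into FunctionGemma's function-calling chat format before tokenization, with the target agent expressed as the function to call.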
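The summary says LoRA targeted the attention layers but gives no hyperparameters. A plausible PEFT configuration might look like the sketch below; the rank, alpha, dropout, module names, and model id are all assumptions, not values from the article:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# All hyperparameters here are assumed -- the summary does not state
# the rank, alpha, or exact target modules used in the experiment.
lora_config = LoraConfig(
    r=16,                      # low-rank adapter dimension (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("google/functiongemma-270m")  # model id assumed
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train; the base model stays frozen
```

Restricting adapters to attention projections keeps the trainable parameter count tiny relative to even a 270M base model, which is what makes fine-tuning feasible on a free Colab T4.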
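The 89.40% figure is an overall accuracy across all test queries. A minimal evaluation helper that computes overall and per-agent routing accuracy from parallel lists of predicted and expected agents (agent names below are illustrative):

```python
from collections import defaultdict

def routing_accuracy(predictions, labels):
    """Return (overall accuracy, per-agent accuracy) for parallel lists."""
    correct = 0
    per_agent = defaultdict(lambda: [0, 0])  # agent -> [correct, total]
    for pred, gold in zip(predictions, labels):
        per_agent[gold][1] += 1
        if pred == gold:
            correct += 1
            per_agent[gold][0] += 1
    overall = correct / len(labels)
    breakdown = {agent: c / t for agent, (c, t) in per_agent.items()}
    return overall, breakdown

if __name__ == "__main__":
    # Illustrative agent labels, not the article's test set.
    preds = ["refund", "orders", "refund", "orders"]
    golds = ["refund", "orders", "orders", "orders"]
    overall, by_agent = routing_accuracy(preds, golds)
    print(f"overall={overall:.2%}")  # overall=75.00%
```

A per-agent breakdown like this is also how one would check whether the headline number hides a few agents the router systematically confuses.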
dev.to
