GRAIL: A Deep-Granularity Hybrid Resonance Framework for Real-Time Agent Discovery via SLM-Enhanced Indexing
TLDR
GRAIL is a framework for real-time agent discovery that pairs a fine-tuned Small Language Model (SLM) with fine-grained MaxSim matching, achieving sub-400ms latency without sacrificing retrieval accuracy.
Key contributions
- Replaces heavy LLM parsers with a specialized SLM for millisecond-level capability tag prediction.
- Augments agent descriptions with synthetic queries, enhancing semantic density for robust retrieval.
- Introduces MaxSim Resonance for fine-grained matching, mitigating semantic dilution in agent discovery.
- Achieves sub-400ms discovery latency, a >79x speedup over LLM-parsing baselines, on the new AgentTaxo-9K benchmark of 9,240 agents.
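The MaxSim Resonance idea from the contributions above can be sketched simply: instead of embedding an agent's whole profile into one vector (where many capabilities dilute each other), each discrete usage example keeps its own embedding, and an agent's score is the *maximum* similarity between the query and any of its examples. A minimal sketch with cosine similarity (function names and the dict-based agent store are illustrative, not the paper's API):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def maxsim_score(query_vec: np.ndarray, example_vecs: list) -> float:
    # Score an agent by the single best-matching usage example,
    # so one strong match is not averaged away by unrelated examples.
    return max(cosine(query_vec, e) for e in example_vecs)

def rank_agents(query_vec: np.ndarray, agents: dict) -> list:
    # agents: maps agent name -> list of usage-example embeddings.
    scores = {name: maxsim_score(query_vec, exs) for name, exs in agents.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Because the max is taken per agent, an agent with many diverse usage examples is not penalized for the examples irrelevant to the current query, which is the "semantic dilution" the paper targets.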
Why it matters
The rapid expansion of LLM-based agents makes discovery a bottleneck: LLM-based intent parsing can take upwards of 30 seconds per query, while flat vector retrieval trades away precision for speed. GRAIL offers a scalable, industrial-grade middle path that drastically reduces latency while maintaining high accuracy, a prerequisite for large-scale multi-agent collaboration and the envisioned "Internet of Agents."
Original Abstract
As the ecosystem of Large Language Model (LLM)-based agents expands rapidly, efficient and accurate Agent Discovery becomes a critical bottleneck for large-scale multi-agent collaboration. Existing approaches typically face a dichotomy: either relying on heavy-weight LLMs for intent parsing, leading to prohibitive latency (often exceeding 30 seconds), or using monolithic vector retrieval that sacrifices semantic precision for speed. To bridge this gap, we propose \textbf{GRAIL} (Granular Resonance-based Agent/AI Link), a novel framework achieving sub-400ms discovery latency without compromising accuracy. GRAIL introduces three key innovations: (1) \textbf{SLM-Enhanced Prediction}, replacing the generalized LLM parser with a specialized, fine-tuned Small Language Model (SLM) for millisecond-level capability tag prediction; (2) \textbf{Pseudo-Document Expansion}, augmenting agent descriptions with synthetic queries to enhance semantic density for robust dense retrieval; and (3) \textbf{MaxSim Resonance}, a fine-grained matching mechanism computing maximum similarity between user queries and discrete agent usage examples, effectively mitigating semantic dilution. Validated on \textbf{AgentTaxo-9K}, our new large-scale dataset of 9,240 agents, GRAIL reduces end-to-end discovery latency by over \textbf{79$\times$} compared to LLM-parsing baselines, while significantly outperforming traditional vector search in Recall@10. This framework offers a scalable, industrial-grade solution for the real-time ``Internet of Agents."
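The Pseudo-Document Expansion step described in the abstract can be illustrated with a toy sketch: synthetic queries are appended to the agent's description before indexing, so the indexed text covers the phrasings real users actually issue. The helper names and the lexical-overlap scorer below are illustrative stand-ins (the paper uses dense retrieval), not the authors' implementation:

```python
def expand_agent_description(description: str, synthetic_queries: list) -> str:
    """Build a pseudo-document by appending synthetic queries to the
    agent description, densifying its semantics before embedding."""
    return "\n".join([description, *synthetic_queries])

def token_overlap(query: str, document: str) -> float:
    # Toy lexical scorer standing in for a dense encoder:
    # fraction of query tokens that appear in the document.
    q, d = set(query.lower().split()), set(document.lower().split())
    return len(q & d) / max(len(q), 1)
```

Even with this crude scorer, a paraphrased user query that shares little vocabulary with the raw description can match the expanded pseudo-document, which is the effect dense retrieval benefits from at scale.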