Unified Tool Integration for LLMs: A Protocol-Agnostic Approach to Function Calling
Problem Statement
The growing ecosystem of tool-augmented LLMs has become highly fragmented, forcing developers to manually manage multiple protocols (e.g., OpenAI function calling and Anthropic tool use), write repetitive schema definitions, and orchestrate complex execution workflows. This fragmentation increases development cost and slows iteration. Existing solutions are protocol-specific and lack a unified abstraction layer for managing tools from heterogeneous sources.
Key Novelty
- Protocol-agnostic design that abstracts differences between competing tool integration standards into a single unified interface
- Automated schema generation that eliminates manual JSON schema authoring for function signatures, reducing boilerplate code by 60-80%
- Dual-mode concurrent execution engine that parallelizes tool calls, achieving up to a 3.1x speedup over sequential baselines
Evaluation Highlights
- 60-80% reduction in code required for tool integration across multiple real-world integration scenarios
- Up to 3.1x speedup in execution performance via optimized concurrency compared to sequential tool execution baselines
Methodology
- Design a protocol-agnostic abstraction layer that maps tool definitions and invocations to a common internal representation, decoupling application logic from specific LLM provider protocols (see the sketch after this list)
- Implement automated schema generation by introspecting function signatures, type hints, and docstrings to produce protocol-specific schemas without manual authoring
- Introduce a dual-mode concurrent execution engine that identifies independent tool calls from LLM outputs and dispatches them in parallel, then aggregates results for the model
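To make the abstraction concrete, here is a minimal sketch of what a common internal representation with per-provider adapters could look like. The `ToolSpec` dataclass and the `to_openai`/`to_anthropic` helpers are hypothetical names, not the paper's actual API; only the output shapes follow the providers' published tool formats.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolSpec:
    """Provider-neutral description of one tool (hypothetical internal representation)."""
    name: str
    description: str
    parameters: dict[str, Any]  # JSON Schema for the tool's arguments

def to_openai(spec: ToolSpec) -> dict[str, Any]:
    # OpenAI function-calling format: schema nested under a "function" key
    return {
        "type": "function",
        "function": {
            "name": spec.name,
            "description": spec.description,
            "parameters": spec.parameters,
        },
    }

def to_anthropic(spec: ToolSpec) -> dict[str, Any]:
    # Anthropic tool-use format: flat object with an "input_schema" key
    return {
        "name": spec.name,
        "description": spec.description,
        "input_schema": spec.parameters,
    }

# One definition, two backends:
spec = ToolSpec(
    name="get_weather",
    description="Current weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
openai_tool = to_openai(spec)        # pass in the OpenAI `tools` parameter
anthropic_tool = to_anthropic(spec)  # pass in the Anthropic `tools` parameter
```

Translating in the other direction (provider-specific tool-call messages into a common call record) would follow the same pattern in reverse.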
System Components
- Protocol abstraction layer: Translates between provider-specific tool-calling formats (e.g., OpenAI, Anthropic) and a unified internal representation, enabling a single codebase to work with multiple LLM backends
- Automated schema generator: Inspects Python function signatures, type annotations, and docstrings to automatically produce the JSON schemas required for tool registration, eliminating manual schema writing (see the sketch after this list)
- Concurrent execution engine: Analyzes LLM-requested tool calls to detect parallelizable operations and executes them concurrently (async or threaded), reducing latency compared to sequential execution
- Multi-source tool registry: Provides a unified registry for tools sourced from different origins (local functions, external APIs, MCP servers) with consistent lifecycle management
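As an illustration of how the schema generator and registry could fit together, the sketch below derives a JSON schema from a function's signature, type hints, and docstring, then records it in a toy registry. `register_tool`, `_PY_TO_JSON`, and `REGISTRY` are hypothetical names; the paper's actual type coverage and registry lifecycle are not specified in this summary.

```python
import inspect
from typing import Any, get_type_hints

# Minimal Python-type -> JSON-Schema-type mapping (a real system would cover more)
_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

REGISTRY: dict[str, dict[str, Any]] = {}  # tool name -> generated schema

def register_tool(fn):
    """Decorator: derive a JSON schema from fn's signature and store it."""
    hints = get_type_hints(fn)
    props: dict[str, Any] = {}
    required: list[str] = []
    for pname, param in inspect.signature(fn).parameters.items():
        props[pname] = {"type": _PY_TO_JSON.get(hints.get(pname, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(pname)  # no default value -> required argument
    REGISTRY[fn.__name__] = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": props, "required": required},
    }
    return fn

@register_tool
def get_weather(city: str, units: str = "metric") -> str:
    """Return the current weather for a city."""
    return f"Weather in {city} ({units}): sunny"

# REGISTRY["get_weather"]["parameters"] now requires "city" but not "units"
```

The generated dictionary can then be handed to a protocol adapter like the one sketched earlier, so each tool is defined exactly once, in plain Python.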
Results
| Metric/Benchmark | Baseline | This Paper | Delta |
|---|---|---|---|
| Lines of integration code (typical scenario) | ~100% (manual) | ~20-40% (automated) | -60% to -80% |
| Tool execution latency (parallel-eligible calls) | 1.0x (sequential) | up to 3.1x faster | up to 3.1x speedup |
| Protocol compatibility | Single protocol per codebase | Full multi-protocol support | Qualitative improvement |
| Schema authoring effort | Manual JSON schema per tool | Zero manual schema writing | Fully automated |
Key Takeaways
- ML practitioners building multi-tool LLM agents can adopt a protocol-agnostic wrapper to avoid rewriting integration code when switching between OpenAI, Anthropic, or other providers
- Automated schema generation from Python type hints and docstrings is a practical way to maintain tool definitions as living code rather than brittle, separately maintained JSON blobs
- Significant latency gains are achievable simply by parallelizing independent tool calls; developers should audit their tool execution pipelines for unnecessary sequential dependencies before pursuing more complex optimizations (a minimal sketch follows this list)
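A minimal sketch of that parallelization, assuming independent, I/O-bound tool calls and an asyncio runtime; the dual-mode (async plus threaded) dispatch described above is not reproduced here, and all function names are hypothetical.

```python
import asyncio
from typing import Any, Awaitable, Callable

async def run_tool_calls(
    calls: list[tuple[Callable[..., Awaitable[Any]], dict[str, Any]]],
) -> list[Any]:
    """Dispatch independent tool calls concurrently; results keep input order."""
    return await asyncio.gather(*(fn(**kwargs) for fn, kwargs in calls))

# Two I/O-bound lookups a model might request in a single turn:
async def fetch_price(symbol: str) -> float:
    await asyncio.sleep(1)  # stand-in for a network round trip
    return 42.0

async def fetch_news(symbol: str) -> str:
    await asyncio.sleep(1)
    return f"No news for {symbol}."

results = asyncio.run(run_tool_calls([
    (fetch_price, {"symbol": "ACME"}),
    (fetch_news, {"symbol": "ACME"}),
]))  # completes in ~1s rather than ~2s sequentially
```

Calls that depend on each other's outputs must still run in order, which is why detecting independence (as the execution engine above does) is the key step.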
Abstract
The proliferation of tool-augmented Large Language Models (LLMs) has created a fragmented ecosystem where developers must navigate multiple protocols, manual schema definitions, and complex execution workflows. We address this challenge by proposing a unified approach to tool integration that abstracts protocol differences while optimizing execution performance. Our solution demonstrates how protocol-agnostic design principles can significantly reduce development overhead through automated schema generation, dual-mode concurrent execution, and seamless multi-source tool management. Experimental results show 60-80% code reduction across integration scenarios, performance improvements of up to 3.1x through optimized concurrency, and full compatibility with existing function calling standards. This work contributes both theoretical insights into tool integration architecture and practical solutions for real-world LLM application development.