ag_kit_py.providers
The Model Provider system is a unified interface for interacting with different Large Language Model (LLM) services. It abstracts away provider-specific implementations and offers a consistent API for creating chat completions, streaming responses, and managing tools across multiple LLM providers.
Installation
Key Features
- Unified Interface: Consistent API across multiple LLM providers
- Multiple Providers: Support for OpenAI, Zhipu, Qwen, DeepSeek, Anthropic, and custom providers
- Streaming Support: Real-time streaming responses for better user experience
- Tool Calling: Function/tool calling capabilities with standardized format
- LangChain Integration: Seamless integration with LangChain and LangGraph
- Error Handling: Comprehensive error handling with retry mechanisms
- Configuration Management: Flexible configuration with environment variable support
Supported Providers
| Provider | Type | Status | Description |
|---|---|---|---|
| OpenAI | openai | ✅ Available | OpenAI API and compatible endpoints |
| Zhipu | zhipu | 🚧 Coming Soon | Zhipu AI (智谱AI) |
| Qwen | qwen | 🚧 Coming Soon | Alibaba Qwen (通义千问) |
| DeepSeek | deepseek | 🚧 Coming Soon | DeepSeek AI |
| Anthropic | anthropic | 🚧 Coming Soon | Anthropic Claude |
| Custom | custom | ✅ Available | Custom provider implementation |
Quick Start
Using Factory Function (Recommended)
The simplest way to create a provider is the factory function, which reads its configuration from environment variables.
Direct Provider Creation
For more control, create providers directly.
Using with LangGraph Agents
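A self-contained sketch of the factory and direct creation flows above (the LangGraph wiring is omitted). Every name here — `create_provider`, `OpenAIProvider`, their parameters — is an assumption for illustration, stubbed locally; it is not the library's confirmed API:

```python
import os

# Minimal stand-in for a provider class; the real class and its
# constructor arguments are assumptions.
class OpenAIProvider:
    def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
        self.api_key = api_key
        self.model = model

_PROVIDERS = {"openai": OpenAIProvider}

def create_provider(provider_type: str, **kwargs):
    """Look up a provider class by type string and construct it."""
    cls = _PROVIDERS.get(provider_type)
    if cls is None:
        raise ValueError(f"Unknown provider type: {provider_type!r}")
    # Factory-style creation falls back to environment variables.
    kwargs.setdefault("api_key", os.environ.get("OPENAI_API_KEY", ""))
    return cls(**kwargs)

# Factory creation (credentials come from the environment):
provider = create_provider("openai")

# Direct creation, for explicit control:
provider = OpenAIProvider(api_key="sk-...", model="gpt-4o-mini")
```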
Core Concepts
Provider Types
Providers are identified by their type string (the type column in the table above).
Configuration
All providers are configured through the ModelProviderConfig dataclass.
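A plausible shape for this dataclass; the field names and defaults below are assumptions, not the library's confirmed definition:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of ModelProviderConfig; the real fields may differ.
@dataclass
class ModelProviderConfig:
    api_key: str
    model: Optional[str] = None      # falls back to the provider default
    base_url: Optional[str] = None   # for OpenAI-compatible endpoints
    timeout: float = 60.0            # request timeout in seconds
    max_retries: int = 3

config = ModelProviderConfig(api_key="sk-...", model="gpt-4o-mini")
```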
Chat Completion
Providers support both synchronous and streaming chat completions.
Tool/Function Calling
Providers support tool calling in a standardized format.
Environment Variables
Providers can be configured through environment variables.
OpenAI Provider
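The OpenAI provider targets the OpenAI chat-completions API and compatible endpoints, so requests ultimately take the shape below. The payload format is the public OpenAI wire format (including the standardized tool format and the streaming flag); how the provider assembles it internally is not shown here:

```python
# Standard OpenAI chat-completions request body.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather in Paris?"},
    ],
    # Tools use the OpenAI function-calling schema.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "stream": True,  # request incremental chunks instead of one response
}
```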
Using Environment Variables
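A sketch of environment-driven configuration. The variable names below (OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL) follow common convention but are assumptions; check the provider reference for the names the library actually reads:

```python
import os

# Assumed variable names, with fallbacks for the optional ones.
api_key = os.environ.get("OPENAI_API_KEY", "")
base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
model = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
```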
Error Handling
All providers raise the ModelProviderError exception class.
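A hypothetical sketch of the exception's shape; the real constructor and attribute names may differ:

```python
# Assumed shape: a message plus a machine-readable error_type string.
class ModelProviderError(Exception):
    def __init__(self, message: str, error_type: str = "unknown"):
        super().__init__(message)
        self.error_type = error_type

try:
    raise ModelProviderError("Rate limit exceeded", error_type="rate_limit")
except ModelProviderError as err:
    print(err.error_type)  # rate_limit
```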
Error Types
- authentication: Invalid API key or credentials
- rate_limit: Rate limit exceeded
- quota_exceeded: API quota exceeded
- invalid_request: Invalid request parameters
- server_error: Provider server error
- timeout: Request timeout
- unknown: Unknown error
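These categories suggest which failures are transient and worth retrying. A self-contained sketch of that split (the exception stub and all names are assumptions):

```python
import time

# Local stub so this sketch runs on its own; mirrors the assumed shape
# of ModelProviderError.
class ModelProviderError(Exception):
    def __init__(self, message, error_type="unknown"):
        super().__init__(message)
        self.error_type = error_type

# Transient categories from the list above; the rest are permanent.
RETRYABLE = {"rate_limit", "server_error", "timeout"}

def call_with_retries(fn, max_retries=3, base_delay=1.0):
    """Call fn, retrying transient provider errors with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ModelProviderError as err:
            if err.error_type not in RETRYABLE or attempt == max_retries:
                raise  # permanent error, or out of attempts
            time.sleep(base_delay * (2 ** attempt))
```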
Best Practices
1. Use Environment Variables
Store sensitive credentials in environment variables instead of hard-coding them.
2. Reuse Provider Instances
Create provider instances once and reuse them across requests.
3. Handle Errors Gracefully
Implement proper error handling with retries for transient failures.
4. Configure Timeouts
Set timeouts appropriate to your use case.
5. Use Streaming for Better UX
Use streaming to deliver responses in real time.
Architecture
Class Hierarchy
Key Methods
All providers implement these abstract methods:
- create_completion(): Create a chat completion
- create_stream(): Create a streaming chat completion
- get_provider_name(): Get the provider name
- get_default_model(): Get the default model name
- format_tools(): Format tools for the provider
- parse_tool_calls(): Parse tool calls from a response
- get_langchain_model(): Get a LangChain-compatible model
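The interface implied by this method list can be sketched as an abstract base class; the parameter lists and return types below are assumptions inferred from the descriptions, not the library's actual signatures:

```python
from abc import ABC, abstractmethod
from typing import Any, Iterator, Optional

# Hypothetical sketch of the provider interface.
class BaseModelProvider(ABC):
    @abstractmethod
    def create_completion(self, messages: list, tools: Optional[list] = None) -> Any: ...

    @abstractmethod
    def create_stream(self, messages: list, tools: Optional[list] = None) -> Iterator[Any]: ...

    @abstractmethod
    def get_provider_name(self) -> str: ...

    @abstractmethod
    def get_default_model(self) -> str: ...

    @abstractmethod
    def format_tools(self, tools: list) -> list: ...

    @abstractmethod
    def parse_tool_calls(self, response: Any) -> list: ...

    @abstractmethod
    def get_langchain_model(self) -> Any: ...
```

Concrete providers such as the OpenAI implementation subclass this base and fill in each method.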
Related Documentation
- Base Provider API - Abstract base class and interfaces
- OpenAI Provider - OpenAI provider implementation
- Factory Functions - Provider creation utilities
- Agents Overview - Using providers with agents
- LangGraph Agent - LangGraph integration
Next Steps
- Learn about Base Provider API for custom implementations
- Explore OpenAI Provider for detailed usage
- Check Factory Functions for simplified creation
- See LangGraph Agent for agent integration