tavily-agent-toolkit

What Is the Agent Toolkit?
The Tavily Agent Toolkit is a Python library that gives your agents optimized research primitives on top of the Tavily API. Instead of wiring up raw API calls, managing token limits, deduplicating sources, and formatting results for LLMs yourself, the toolkit handles all of that so your agent can focus on reasoning. It provides three layers:

| Layer | What It Does |
|---|---|
| Agents | Pre-built research strategies that combine internal knowledge with web research. Fast or deep multi-agent modes. |
| Tools | Optimized retrieval patterns: search, crawl, extract, social media. Each tool handles context engineering (formatting, dedup, token management) automatically. |
| Bring Your Own Model | Every tool that needs an LLM accepts a ModelConfig. Supports 20+ providers via LangChain with automatic fallback chains. |
Installation
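Assuming the package is published on PyPI under the same name as the toolkit's title (not confirmed here), installation would look like:

```shell
pip install tavily-agent-toolkit
```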
Available Tools
| Tool | When to Use |
|---|---|
| search_and_answer | Answer questions with web research + LLM synthesis |
| search_dedup | Run multiple queries in parallel, deduplicate results |
| crawl_and_summarize | Extract and summarize entire websites |
| extract_and_summarize | Get focused summaries from specific URLs |
| social_media_search | Search Reddit, X, LinkedIn, TikTok, and more |
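The deduplication that tools like search_dedup handle for you can be illustrated with a small sketch. This is not the toolkit's internal code, and the result shape (dicts with "url" and "content" keys) is an assumption for illustration only:

```python
def dedup_results(results):
    """Drop results whose URL was already seen, keeping the first
    occurrence. The list-of-dicts shape is illustrative, not the
    toolkit's actual output schema."""
    seen = set()
    unique = []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            unique.append(r)
    return unique


# Two parallel queries returning an overlapping source:
merged = dedup_results([
    {"url": "https://example.com/a", "content": "first"},
    {"url": "https://example.com/a", "content": "duplicate"},
    {"url": "https://example.com/b", "content": "second"},
])
```

Order-preserving, first-wins dedup keeps the highest-ranked copy of each source, which is typically what you want before handing results to an LLM.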
Tools Reference
Full documentation for every tool: parameters, output shapes, and usage examples.
Pre-Built Agents
hybrid_research
Combines your internal knowledge base with real-time web research. You provide a RAG function that queries your internal data — the agent identifies gaps and fills them with web research.
Two modes:
| Mode | Best For | How It Works |
|---|---|---|
| Fast | Quick answers, lower latency | Internal RAG → generate subqueries → parallel web search → synthesize |
| Multi-Agent | Comprehensive research, complex topics | Internal RAG → identify gaps → Tavily deep research endpoint → synthesize |
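Both modes start from the RAG function you supply over your internal data. A minimal sketch of what such a function might look like; the signature, in-memory corpus, and keyword-overlap retrieval are illustrative assumptions, not the toolkit's required interface:

```python
def internal_rag(query: str) -> list[str]:
    """Hypothetical internal-knowledge retriever: return passages
    from an in-memory corpus that share at least one word with the
    query. A real implementation would query your vector store."""
    corpus = [
        "Our Q3 revenue grew 12% year over year.",
        "The platform team migrated to Kubernetes in 2023.",
        "Customer churn is concentrated in the SMB segment.",
    ]
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]


hits = internal_rag("revenue growth")
```

Whatever the agent cannot answer from these internal hits becomes the gap it fills with web research.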
Hybrid Research
Deep dive into hybrid_research: modes, structured output, custom synthesis, and data enrichment patterns.

Model Configuration

All tools accept a ModelConfig for LLM operations. Use the "provider:model" format. Models are initialized via LangChain's init_chat_model, which supports OpenAI, Anthropic, Google, Groq, Mistral, Cohere, Together, Fireworks, AWS Bedrock, Azure, and more.
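The "provider:model" convention can be illustrated with a small parser. The helper name and the example model ID below are illustrative, not part of the toolkit's API:

```python
def split_model_spec(spec: str) -> tuple[str, str]:
    """Split a "provider:model" string into its two parts.
    Splits on the first colon only, in case a model ID itself
    contains colons."""
    provider, _, model = spec.partition(":")
    return provider, model


pair = split_model_spec("anthropic:claude-sonnet-4-5")
```

Splitting on the first colon is the safe choice here, since provider names never contain colons but some model identifiers do.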
Use-Case Recipes
Production-ready agent implementations. Each is available in both Anthropic SDK and LangGraph flavors.

Chatbot
Routes between quick search and deep research based on query complexity.
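The routing idea can be sketched as a simple heuristic classifier. The signal words and length threshold below are hypothetical stand-ins, not the recipe's actual logic:

```python
def route(query: str) -> str:
    """Pick "deep" research for long or open-ended queries and
    "fast" search otherwise. Heuristics are illustrative only;
    the real recipe may use an LLM to judge complexity."""
    open_ended = {"compare", "analyze", "comprehensive", "report", "why"}
    words = query.lower().split()
    if len(words) > 15 or open_ended & set(words):
        return "deep"
    return "fast"


mode_a = route("What is the capital of France?")
mode_b = route("Compare the GTM strategies of the top cloud vendors")
```

Routing cheap queries to quick search keeps latency and cost down, while genuinely complex questions still get the multi-step research path.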
Company Intelligence
Crawls websites and searches the web for comprehensive company research.
Social Media Research
Searches across TikTok, Reddit, X, LinkedIn, and more for any topic.
Hybrid Research
Combines internal data with web research for comprehensive reports.