tavily-agent-toolkit

What Is the Agent Toolkit?

The Tavily Agent Toolkit is a Python library that gives your agents optimized research primitives on top of the Tavily API. Instead of wiring up raw API calls, managing token limits, deduplicating sources, and formatting results for LLMs yourself, the toolkit handles all of that so your agent can focus on reasoning. It provides three layers:
Agents: Pre-built research strategies that combine internal knowledge with web research. Fast or deep multi-agent modes.

Tools: Optimized retrieval patterns: search, crawl, extract, social media. Each tool handles context engineering (formatting, dedup, token management) automatically.

Bring Your Own Model: Every tool that needs an LLM accepts a ModelConfig. Supports 20+ providers via LangChain with automatic fallback chains.
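The context engineering the Tools layer automates can be pictured as source deduplication plus token budgeting. The sketch below illustrates the general pattern only; the function name and the chars-per-token heuristic are assumptions, not the toolkit's internal code:

```python
# Illustrative sketch of the context engineering the Tools layer automates:
# drop duplicate sources by URL, then trim to a token budget. This is NOT
# the toolkit's internal code; dedupe_and_trim and the 4-chars-per-token
# estimate are assumptions for illustration.

def dedupe_and_trim(results: list[dict], max_tokens: int = 1000) -> list[dict]:
    seen_urls = set()
    kept, used = [], 0
    for r in results:
        if r["url"] in seen_urls:
            continue  # skip duplicate sources
        cost = len(r["content"]) // 4  # rough chars-per-token estimate
        if used + cost > max_tokens:
            break  # stay inside the context budget
        seen_urls.add(r["url"])
        kept.append(r)
        used += cost
    return kept

results = [
    {"url": "https://a.example", "content": "x" * 400},
    {"url": "https://a.example", "content": "x" * 400},  # duplicate source
    {"url": "https://b.example", "content": "x" * 400},
]
print(len(dedupe_and_trim(results)))  # 2: the duplicate URL is dropped
```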

Installation

pip install tavily-agent-toolkit
For LLM features, install your preferred provider:
pip install langchain-openai       # OpenAI
pip install langchain-anthropic    # Anthropic
pip install langchain-google-genai # Google
pip install langchain-groq         # Groq

Available Tools

search_and_answer: Answer questions with web research + LLM synthesis
search_dedup: Run multiple queries in parallel, deduplicate results
crawl_and_summarize: Extract and summarize entire websites
extract_and_summarize: Get focused summaries from specific URLs
social_media_search: Search Reddit, X, LinkedIn, TikTok, and more
from tavily_agent_toolkit import search_and_answer, ModelConfig, ModelObject

result = await search_and_answer(
    query="What are the pros and cons of Rust vs Go?",
    api_key="tvly-xxx",
    model_config=ModelConfig(model=ModelObject(model="anthropic:claude-sonnet-4-5")),
    max_number_of_subqueries=3,
)
print(result["answer"])
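The max_number_of_subqueries parameter caps how many subqueries the tool fans out and searches concurrently before synthesizing one answer. A minimal sketch of that fan-out pattern with stubbed search and synthesis steps (an illustration of the pattern, not the toolkit's implementation):

```python
import asyncio

# Sketch of the subquery fan-out pattern behind search_and_answer: cap the
# subquery list, search each concurrently, then combine the results. The
# stub functions here are placeholders, not toolkit APIs.

async def stub_search(subquery: str) -> str:
    await asyncio.sleep(0)  # stands in for a real web search call
    return f"results for: {subquery}"

async def fan_out(subqueries: list[str], max_subqueries: int) -> str:
    capped = subqueries[:max_subqueries]  # honor the subquery cap
    results = await asyncio.gather(*(stub_search(q) for q in capped))
    return " | ".join(results)  # stands in for LLM synthesis

answer = asyncio.run(
    fan_out(["rust perf", "go perf", "rust safety", "go tooling"], 3)
)
print(answer)  # only the first 3 subqueries are searched
```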

Tools Reference

Full documentation for every tool: parameters, output shapes, and usage examples.

Pre-Built Agents

hybrid_research

Combines your internal knowledge base with real-time web research. You provide a RAG function that queries your internal data — the agent identifies gaps and fills them with web research. Two modes:
Fast: Best for quick answers, lower latency. Internal RAG → generate subqueries → parallel web search → synthesize.
Multi-Agent: Best for comprehensive research, complex topics. Internal RAG → identify gaps → Tavily deep research endpoint → synthesize.
from tavily_agent_toolkit import hybrid_research, ModelConfig, ModelObject

result = await hybrid_research(
    api_key="tvly-xxx",
    query="What's our competitor's current pricing strategy?",
    model_config=ModelConfig(model=ModelObject(model="openai:gpt-5.2")),
    internal_rag_function=my_rag,
    mode="fast",
)
print(result["report"])
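The internal_rag_function argument (my_rag above) is whatever callable queries your internal data. Its exact expected signature isn't shown here, so the sketch below assumes a simple contract of query string in, retrieved text out; check the hybrid_research reference for the real one:

```python
# Hypothetical internal RAG function for hybrid_research. The assumed
# contract (query string in, retrieved text out) and the toy keyword
# lookup are illustrations standing in for a real vector-store retrieval.

INTERNAL_DOCS = {
    "pricing": "Our competitor's last published price list is from Q3.",
    "roadmap": "Internal roadmap notes through next quarter.",
}

def my_rag(query: str) -> str:
    hits = [text for key, text in INTERNAL_DOCS.items() if key in query.lower()]
    return "\n".join(hits) or "No internal results."

print(my_rag("What's our competitor's current pricing strategy?"))
# Our competitor's last published price list is from Q3.
```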

Hybrid Research

Deep dive into hybrid_research: modes, structured output, custom synthesis, and data enrichment patterns.

Model Configuration

All tools accept a ModelConfig for LLM operations. Use the "provider:model" format:
from tavily_agent_toolkit import ModelConfig, ModelObject

config = ModelConfig(
    model=ModelObject(model="openai:gpt-5.2"),
    fallback_models=[
        ModelObject(model="anthropic:claude-sonnet-4-20250514"),
        ModelObject(model="groq:llama-3.3-70b-versatile"),
    ],
    temperature=0.7,
)
20+ providers are supported via LangChain’s init_chat_model: OpenAI, Anthropic, Google, Groq, Mistral, Cohere, Together, Fireworks, AWS Bedrock, Azure, and more.
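A fallback chain tries each model in order until one call succeeds. The toolkit handles this internally; the plain-Python sketch below only illustrates the pattern, with a stub standing in for real provider calls:

```python
# Illustration of a fallback chain: try the primary model, then each
# fallback in order, until a call succeeds. Plain-Python sketch of the
# pattern, not the toolkit's internal implementation.

def fake_llm_call(model: str, prompt: str) -> str:
    # Stub: pretend the primary provider is currently rate limited.
    if model.startswith("openai:"):
        raise RuntimeError("rate limited")
    return f"{model} answered: {prompt}"

def call_with_fallback(models: list[str], prompt: str) -> str:
    errors = []
    for model in models:
        try:
            return fake_llm_call(model, prompt)
        except RuntimeError as exc:
            errors.append(f"{model}: {exc}")  # record and try the next model
    raise RuntimeError("all models failed: " + "; ".join(errors))

print(call_with_fallback(
    ["openai:gpt-5.2", "anthropic:claude-sonnet-4-20250514"],
    "hello",
))
# anthropic:claude-sonnet-4-20250514 answered: hello
```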

Use-Case Recipes

Production-ready agent implementations. Each is available in both Anthropic SDK and LangGraph flavors.

Chatbot

Routes between quick search and deep research based on query complexity.
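A complexity router like this can be sketched with a simple heuristic. The recipe's actual routing logic may differ (it could use an LLM classifier); the keyword-and-length heuristic below is an assumption for illustration:

```python
# Toy complexity router in the spirit of the chatbot recipe: short factual
# queries go to quick search, open-ended ones to deep research. The
# heuristic (comparison keywords + query length) is an assumption.

DEEP_HINTS = ("compare", "analyze", "pros and cons", "strategy", "research")

def route(query: str) -> str:
    q = query.lower()
    if any(hint in q for hint in DEEP_HINTS) or len(q.split()) > 12:
        return "deep_research"
    return "quick_search"

print(route("capital of France?"))                    # quick_search
print(route("compare Rust and Go for systems work"))  # deep_research
```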

Company Intelligence

Crawls websites and searches the web for comprehensive company research.

Social Media Research

Searches across TikTok, Reddit, X, LinkedIn, and more for any topic.

Hybrid Research

Combines internal data with web research for comprehensive reports.