
Track API usage by project with the new X-Project-ID header

  • You can now attach a Project ID to your API requests to organize and track usage by project. This is useful when a single API key is used across multiple projects or applications.
  • HTTP Header: Add X-Project-ID: your-project-id to any API request
  • Python SDK: Pass project_id="your-project-id" when instantiating the client, or set the TAVILY_PROJECT environment variable
  • JavaScript SDK: Pass projectId: "your-project-id" when instantiating the client, or set the TAVILY_PROJECT environment variable
  • An API key can be associated with multiple projects
  • Filter requests by project in the /logs endpoint and the platform usage dashboard to track where requests originate (see the sketch below)
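For example, attaching the header to a raw request might look like this (a minimal sketch using the requests library; the project ID value is hypothetical):

```python
import requests

# Attach X-Project-ID so this request is attributed to the "acme-chatbot" project
response = requests.post(
    "https://api.tavily.com/search",
    headers={
        "Authorization": "Bearer tvly-YOUR-API-KEY",
        "X-Project-ID": "acme-chatbot",  # hypothetical project ID
    },
    json={"query": "latest AI news"},
)
print(response.json()["results"][0]["url"])
```

With the Python SDK, the equivalent is TavilyClient(api_key="...", project_id="acme-chatbot"), or exporting TAVILY_PROJECT=acme-chatbot in the environment.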

search_depth parameter - New options: fast and ultra-fast

  • fast (BETA)
    • Optimized for low latency while maintaining high relevance to the user query
    • Cost: 1 API Credit
  • ultra-fast (BETA)
    • Optimized strictly for latency
    • Cost: 1 API Credit
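A minimal sketch with the Python SDK (tavily-python), assuming search_depth accepts the new values directly:

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Latency-optimized search; still costs 1 API Credit
response = client.search("current weather in tokyo", search_depth="ultra-fast")
for result in response["results"]:
    print(result["title"], result["url"])
```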

query and chunks_per_source parameters for Extract and Crawl

  • query (Extract)
    • Type: string
    • User intent for reranking extracted content chunks. When provided, chunks are reranked based on relevance to this query.
  • chunks_per_source (Extract & Crawl)
    • Type: integer
    • Range: 1 to 5
    • Default: 3
    • Chunks are short content snippets (maximum 500 characters each) pulled directly from the source.
    • Use chunks_per_source to define the maximum number of relevant chunks returned per source and to control the raw_content length.
    • Chunks will appear in the raw_content field as: <chunk 1> […] <chunk 2> […] <chunk 3>.
    • Available only when query is provided (Extract) or instructions are provided (Crawl).
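For example, reranking extracted chunks against a query (a sketch with the Python SDK, assuming it forwards query and chunks_per_source to the Extract endpoint):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Rerank extracted chunks by relevance to the query; return at most 2 per source
response = client.extract(
    urls=["https://en.wikipedia.org/wiki/Artificial_intelligence"],
    query="history of neural networks",
    chunks_per_source=2,
)
for result in response["results"]:
    # raw_content holds the selected chunks, e.g. "<chunk 1> [...] <chunk 2>"
    print(result["url"], result["raw_content"][:200])
```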

include_usage parameter

  • You can now include credit usage information in the API response for the Search, Extract, Crawl, and Map endpoints.
  • Set the include_usage parameter to true to receive credit usage information in the API response.
  • Type: boolean
  • Default: false
  • When enabled, the response includes a usage object with credits information, making it easy to track API credit consumption for each request.
  • Note: The value may be 0 if the total successful calls have not yet reached the minimum threshold. See our Credits & Pricing documentation for details.
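A sketch of reading the usage object (assuming the Python SDK forwards include_usage and surfaces the response's usage field as-is):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

response = client.search("quantum computing breakthroughs", include_usage=True)

# The usage object reports the credits consumed by this request
# (may be 0 below the minimum threshold of successful calls)
print(response["usage"])
```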

Tavily is now integrated with Vercel AI SDK v5

  • We’ve released a new @tavily/ai-sdk package that provides pre-built AI SDK tools for Vercel’s AI SDK v5.
  • Easily add real-time web search, content extraction, intelligent crawling, and site mapping to your AI SDK project with ready-to-use tools.
  • Available Tools: tavilySearch, tavilyExtract, tavilyCrawl, and tavilyMap
  • Full TypeScript support with proper type definitions and seamless integration with Vercel AI SDK v5.
  • Check out our integration guide to get started.

timeout parameter for Crawl and Map

  • You can now specify a custom timeout for the Crawl and Map endpoints to control how long to wait for the operation before timing out.
  • Type: float
  • Range: Between 10 and 150 seconds
  • Default: 150 seconds
  • This gives you fine-grained control over crawl and map operation timeouts, allowing you to balance between reliability and speed based on your specific use case.
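For instance (a sketch with the Python SDK, assuming timeout is forwarded to the endpoint as documented):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Give up on the crawl after 60 seconds instead of the 150-second default
response = client.crawl("https://docs.tavily.com", timeout=60)
print(len(response["results"]), "pages crawled")
```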

Role options: Owner, Admin, Member

You can now assign roles to team members, giving you more control over access and permissions. Each team has one owner, while there can be multiple admins and multiple members. The key distinction between roles is in their permissions for Billing and Settings:

  • Owner
    • Full access to all Settings
    • Access and ownership of the Billing account
  • Admin
    • Full access to Settings except ownership transfer
    • No access to Billing
  • Member
    • Limited Settings access (view members only)
    • No access to Billing

timeout parameter

  • You can now specify a custom timeout for the Extract endpoint to control how long to wait for URL extraction before timing out.
  • Type: number (float)
  • Range: Between 1.0 and 60.0 seconds
  • Default behavior: If not specified, automatic timeouts are applied based on extract_depth: 10 seconds for basic extraction and 30 seconds for advanced extraction.
  • This gives you fine-grained control over extraction timeouts, allowing you to balance between reliability and speed based on your specific use case.
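For example, capping an advanced extraction at 20 seconds instead of the automatic 30-second default (same SDK assumptions as the sketches above):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

response = client.extract(
    urls=["https://example.com/very-slow-page"],
    extract_depth="advanced",
    timeout=20.0,  # fail fast instead of waiting the automatic 30 s
)
print(response["failed_results"])  # URLs that did not finish in time
```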

start_date and end_date parameters

  • You can now use the start_date and end_date parameters in the Search endpoint.
  • start_date returns only results published after the specified date, written in YYYY-MM-DD format.
  • end_date returns only results published before the specified date, written in YYYY-MM-DD format.
  • For example, set start_date to 2025-01-01 and end_date to 2025-04-01 to receive results strictly within that range.
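Putting that example into code (a sketch with the Python SDK):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Only return results dated between 2025-01-01 and 2025-04-01
response = client.search(
    "LLM evaluation benchmarks",
    start_date="2025-01-01",
    end_date="2025-04-01",
)
```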

Log in to your account to view the usage dashboard


The usage dashboard provides the following features to paid users/teams:
  • The Usage Graph offers a breakdown of daily usage across all Tavily endpoints, with historical data that enables month-over-month comparison of usage and spend.
  • The Logs Table offers granular insight into each API request to ensure visibility and traceability with every Tavily interaction.

include_favicon parameter

  • You can now include the favicon URL for each result in the Search, Extract, and Crawl endpoints.
  • Set the include_favicon parameter to true to receive the favicon URL (if available) for each result in the API response.
  • This makes it easy to display website icons alongside your search, extraction, or crawl results, improving the visual context and user experience in your application.
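A sketch of reading the icons (assuming the favicon URL is returned on each result under a favicon field):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

response = client.search("rust async runtimes", include_favicon=True)
for result in response["results"]:
    # favicon may be absent for some sources
    print(result.get("favicon"), result["title"])
```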
Tavily Search
auto_parameters

  • Type: boolean
  • Default: false
  • When auto_parameters is enabled, Tavily automatically configures search parameters based on your query’s content and intent. You can still set other parameters manually, and your explicit values will override the automatic ones.
  • The parameters include_answer, include_raw_content, and max_results must always be set manually, as they directly affect response size.
  • Note: search_depth may be automatically set to advanced when it’s likely to improve results. This uses 2 API credits per request. To avoid the extra cost, you can explicitly set search_depth to basic.
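For example, enabling auto_parameters while pinning search_depth to basic to avoid the 2-credit advanced upgrade (a sketch with the Python SDK):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

response = client.search(
    "compare EU and US data privacy law",
    auto_parameters=True,   # let Tavily infer the remaining parameters
    search_depth="basic",   # explicit value overrides auto, keeping cost at 1 credit
    max_results=5,          # must always be set manually
)
```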
/usage endpoint
Tavily Search
country parameter

  • Boost search results from a specific country. This prioritizes content from the selected country in the search results.
  • Available only when topic is general.
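A sketch (assuming country takes a lowercase country name, as in Tavily's API reference):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Boost results from Germany; country is only valid with topic="general"
response = client.search(
    "public transport strikes",
    topic="general",
    country="germany",
)
```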
Make & n8n Integrations
Tavily Extract
format parameter
  • Type: enum<string>
  • Default: markdown
  • The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency.
  • Available options: markdown, text
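For example (a sketch with the Python SDK, assuming format is forwarded to the Extract endpoint):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Plain text instead of the default markdown; may increase latency
response = client.extract(
    urls=["https://en.wikipedia.org/wiki/Web_scraping"],
    format="text",
)
print(response["results"][0]["raw_content"][:300])
```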
Tavily Search
search_depth and chunks_per_source parameters
  • search_depth
    • Type: enum<string>
    • Default: basic
    • The depth of the search. advanced search is tailored to retrieve the most relevant sources and content snippets for your query, while basic search provides generic content snippets from each source.
    • A basic search costs 1 API Credit, while an advanced search costs 2 API Credits.
    • Available options: basic, advanced
  • chunks_per_source
    • Chunks are short content snippets (maximum 500 characters each) pulled directly from the source.
    • Use chunks_per_source to define the maximum number of relevant chunks returned per source and to control the content length.
    • Chunks will appear in the content field as: <chunk 1> […] <chunk 2> […] <chunk 3>.
    • Available only when search_depth is advanced.
    • Required range: 1 ≤ x ≤ 3
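For example, an advanced search returning up to three chunks per source (a sketch with the Python SDK):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Advanced search costs 2 API Credits; chunks appear in each result's content field
response = client.search(
    "how does RLHF work",
    search_depth="advanced",
    chunks_per_source=3,
)
for result in response["results"]:
    print(result["content"][:200])
```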
Tavily Crawl
Tavily Crawl enables you to traverse a website like a graph, starting from a base URL and automatically discovering and extracting content from multiple linked pages. With Tavily Crawl, you can:

  • Specify the starting URL and let the crawler intelligently follow links to map out the site structure.
  • Control the depth and breadth of the crawl, allowing you to focus on specific sections or perform comprehensive site-wide analysis.
  • Apply filters and custom instructions to target only the most relevant pages or content types.
  • Aggregate extracted content for further analysis, reporting, or integration into your workflows.
  • Seamlessly integrate with your automation tools or use the API directly for flexible, programmatic access.

Tavily Crawl is ideal for use cases such as large-scale content aggregation, competitive research, knowledge base creation, and more. For full details and API usage examples, see the Tavily Crawl API reference.
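A sketch of a scoped crawl (the parameter names max_depth, limit, and instructions follow Tavily's Crawl API reference; treat the values as illustrative):

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-YOUR-API-KEY")

# Start from the docs root, follow links two levels deep, and keep
# only pages relevant to the natural-language instructions
response = client.crawl(
    "https://docs.tavily.com",
    max_depth=2,
    limit=30,
    instructions="Find all pages about the Python SDK",
)
for page in response["results"]:
    print(page["url"])
```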