Overview

When using the Tavily Research API, you can stream responses in real-time by setting stream: true in your request. This allows you to receive research progress updates, tool calls, and final results as they’re generated, providing a better user experience for long-running research tasks. Streaming is particularly useful for:
  • Displaying research progress to users in real-time
  • Monitoring tool calls and search queries as they execute
  • Receiving incremental updates during lengthy research operations
  • Building interactive research interfaces

Enabling Streaming

To enable streaming, set the stream parameter to true when making a request to the Research endpoint:
{
  "input": "What are the latest developments in AI?",
  "stream": true
}
The API will respond with a text/event-stream content type, sending Server-Sent Events (SSE) as the research progresses.
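The raw stream can be consumed line by line. As a minimal sketch of the client side (assuming standard SSE framing, i.e. JSON chunks carried on `data:` lines and a terminal `event: done` line as shown later in this guide):

```python
import json

def parse_sse_lines(lines):
    """Yield parsed events from an SSE line stream.

    Handles the two shapes described in this guide: JSON chunks
    (assumed to arrive as `data: {...}` lines per the SSE spec)
    and the terminal `event: done` line.
    """
    for raw in lines:
        line = raw.strip()
        if not line:
            continue  # blank lines separate SSE events
        if line == "event: done":
            yield {"event": "done"}
        elif line.startswith("data: "):
            yield json.loads(line[len("data: "):])
```

Feed it the decoded lines of the HTTP response body to get back plain dictionaries matching the event structures below.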

Event Structure

Each streaming event follows a consistent structure compatible with the OpenAI chat completions format:
{
  "id": "123e4567-e89b-12d3-a456-426614174111",
  "object": "chat.completion.chunk",
  "model": "mini",
  "created": 1705329000,
  "choices": [
    {
      "delta": {
        // Event-specific data here
      }
    }
  ]
}

Core Fields

Field | Type | Description
id | string | Unique identifier for the stream event
object | string | Always "chat.completion.chunk" for streaming events
model | string | The research model being used ("mini" or "pro")
created | integer | Unix timestamp when the event was created
choices | array | Array containing the delta with event details
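Since every event shares this envelope, a small helper can unwrap the delta before any event-specific handling. A sketch:

```python
def get_delta(chunk: dict) -> dict:
    """Return the delta object from a chat.completion.chunk event.

    Returns an empty dict for non-chunk objects (e.g. errors) or
    chunks with no choices, so callers can handle them uniformly.
    """
    if chunk.get("object") != "chat.completion.chunk":
        return {}
    choices = chunk.get("choices") or []
    return choices[0].get("delta", {}) if choices else {}
```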

Event Types

The streaming response includes different types of events in the delta object. Here are the main event types you’ll encounter:

1. Tool Call Events

When the research agent performs actions like web searches, you’ll receive tool call events:
{
  "id": "evt_002",
  "object": "chat.completion.chunk",
  "model": "mini",
  "created": 1705329005,
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "tool_calls": {
          "type": "tool_call",
          "tool_call": [
            {
              "name": "WebSearch",
              "id": "fc_633b5932-e66c-4523-931a-04a7b79f2578",
              "arguments": "Executing 5 search queries",
              "queries": ["latest AI developments 2024", "machine learning breakthroughs", "..."]
            }
          ]
        }
      }
    }
  ]
}
Tool Call Delta Fields:
Field | Type | Description
type | string | Either "tool_call" or "tool_response"
tool_call | array | Details about the tool being invoked
name | string | Name of the tool (see Tool Types below)
id | string | Unique identifier for the tool call
arguments | string | Description of the action being performed
queries | array | (WebSearch only) The search queries being executed
parent_tool_call_id | string | (Pro mode only) ID of the parent tool call for nested operations
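For progress display, a handler can turn a tool-call delta into human-readable status lines. A sketch against the fields above:

```python
def describe_tool_call(delta: dict) -> list:
    """Summarize a tool_call delta as progress lines for a UI.

    Returns an empty list for deltas that are not tool calls,
    so it can be applied to every event unconditionally.
    """
    tc = delta.get("tool_calls") or {}
    if tc.get("type") != "tool_call":
        return []
    lines = []
    for call in tc.get("tool_call", []):
        msg = f"{call['name']}: {call.get('arguments', '')}"
        if call.get("queries"):  # present for WebSearch calls
            msg += " -> " + ", ".join(call["queries"])
        lines.append(msg)
    return lines
```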

2. Tool Response Events

After a tool executes, you’ll receive response events with discovered sources:
{
  "id": "evt_003",
  "object": "chat.completion.chunk",
  "model": "mini",
  "created": 1705329010,
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "tool_calls": {
          "type": "tool_response",
          "tool_response": [
            {
              "name": "WebSearch",
              "id": "fc_633b5932-e66c-4523-931a-04a7b79f2578",
              "arguments": "Completed executing search tool call",
              "sources": [
                {
                  "url": "https://example.com/article",
                  "title": "Example Article",
                  "favicon": "https://example.com/favicon.ico"
                }
              ]
            }
          ]
        }
      }
    }
  ]
}
Tool Response Fields:
Field | Type | Description
name | string | Name of the tool that completed
id | string | Unique identifier matching the original tool call
arguments | string | Completion status message
sources | array | Sources discovered by the tool (with url, title, favicon)
parent_tool_call_id | string | (Pro mode only) ID of the parent tool call
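A client that shows sources as they are discovered can collect them from each tool-response delta. A sketch:

```python
def sources_from_response(delta: dict) -> list:
    """Collect source dicts from a tool_response delta.

    Returns [] for deltas that are not tool responses, so it is
    safe to call on every streamed event.
    """
    tc = delta.get("tool_calls") or {}
    if tc.get("type") != "tool_response":
        return []
    sources = []
    for resp in tc.get("tool_response", []):
        sources.extend(resp.get("sources", []))
    return sources
```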

3. Content Events

The final research report is streamed as content chunks:
{
  "id": "evt_004",
  "object": "chat.completion.chunk",
  "model": "mini",
  "created": 1705329015,
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "content": "# Research Report\n\nBased on the latest sources..."
      }
    }
  ]
}
Content Field:
  • Can be a string (markdown-formatted report chunks) when no output_schema is provided
  • Can be an object (structured data) when an output_schema is specified
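Because content may be either a string or an object, an accumulator should branch on the type. A sketch (it assumes structured chunks arrive as complete objects, with later chunks superseding earlier ones; verify this against the behavior of your output_schema):

```python
class ContentAccumulator:
    """Collect streamed content deltas.

    String chunks (markdown report) are concatenated; object chunks
    (structured output) replace the previously seen object.
    """

    def __init__(self):
        self.text = ""
        self.structured = None

    def feed(self, delta: dict) -> None:
        content = delta.get("content")
        if isinstance(content, str):
            self.text += content
        elif isinstance(content, dict):
            self.structured = content  # assumed: whole-object chunks
```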

4. Sources Event

After the content is streamed, a sources event is emitted containing all sources used in the research:
{
  "id": "evt_005",
  "object": "chat.completion.chunk",
  "model": "mini",
  "created": 1705329020,
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "sources": [
          {
            "url": "https://example.com/article",
            "title": "Example Article Title",
            "favicon": "https://example.com/favicon.ico"
          }
        ]
      }
    }
  ]
}
Source Object Fields:
Field | Type | Description
url | string | The URL of the source
title | string | The title of the source page
favicon | string | URL to the source's favicon

5. Done Event

Signals the completion of the streaming response:
event: done

Tool Types

During research, you’ll encounter the following tool types in streaming events:
Tool Name | Description | Model
Planning | Initializes the research plan based on the input query | Both
Generating | Generates the final research report from collected information | Both
WebSearch | Executes web searches to gather information | Both
ResearchSubtopic | Conducts deep research on specific subtopics | Pro only

Research Flow Example

A typical streaming session follows this sequence:
  1. Planning tool_call → Initializing research plan
  2. Planning tool_response → Research plan initialized
  3. WebSearch tool_call → Executing search queries (with queries array)
  4. WebSearch tool_response → Search completed (with sources array)
  5. (Pro mode) ResearchSubtopic tool_call/response cycles for deeper research
  6. Generating tool_call → Generating final report
  7. Generating tool_response → Report generated
  8. Content events → Streamed report chunks
  9. Sources event → Complete list of all sources used
  10. Done event → Stream complete
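The flow above suggests routing each delta by event type before handling it. A sketch of such a classifier:

```python
def classify_delta(delta: dict) -> str:
    """Map a streamed delta to one of the event types in this guide:
    'tool_call', 'tool_response', 'content', 'sources', or 'unknown'."""
    tc = delta.get("tool_calls")
    if tc:
        return tc.get("type", "unknown")  # "tool_call" or "tool_response"
    if "content" in delta:
        return "content"
    if "sources" in delta:
        return "sources"
    return "unknown"
```

A client loop would then dispatch on the returned label: update a progress view for tool events, append content chunks, and render the source list at the end.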

Handling Streaming Responses

Python Example

from tavily import TavilyClient

# Step 1. Instantiating your TavilyClient
tavily_client = TavilyClient(api_key="tvly-YOUR_API_KEY")

# Step 2. Creating a streaming research task
stream = tavily_client.research(
    input="Research the latest developments in AI",
    model="pro",
    stream=True
)

for chunk in stream:
    print(chunk.decode('utf-8'))

JavaScript Example

const { tavily } = require("@tavily/core");

const tvly = tavily({ apiKey: "tvly-YOUR_API_KEY" });

const stream = await tvly.research("Research the latest developments in AI", {
  model: "pro",
  stream: true,
});

for await (const chunk of stream) {
  console.log(chunk.toString("utf-8"));
}

Structured Output with Streaming

When using output_schema to request structured data, the content field will contain an object instead of a string:
{
  "delta": {
    "role": "assistant",
    "content": {
      "company": "Acme Corp",
      "key_metrics": ["Revenue: $1M", "Growth: 50%"],
      "summary": "Company showing strong growth..."
    }
  }
}

Error Handling

If an error occurs during streaming, you may receive an error event:
{
  "id": "1d77bdf5-38a4-46c1-87a6-663dbc4528ec",
  "object": "error",
  "error": "An error occurred while streaming the research task"
}
Always implement proper error handling in your streaming client to gracefully handle these cases.
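One way to do this is to check each parsed event for the error object shape before any other handling. A sketch:

```python
def check_error(event: dict) -> None:
    """Raise if a streamed event is an error object.

    Error events use object == "error" with an "error" message,
    rather than the usual chat.completion.chunk envelope.
    """
    if event.get("object") == "error":
        raise RuntimeError(event.get("error", "unknown streaming error"))
```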

Non-Streaming Alternative

If you don’t need real-time updates, set stream: false (or omit the parameter) to receive a single complete response:
{
  "request_id": "123e4567-e89b-12d3-a456-426614174111",
  "created_at": "2025-01-15T10:30:00Z",
  "status": "pending",
  "input": "What are the latest developments in AI?",
  "model": "mini",
  "response_time": 1.23
}
You can then poll the status endpoint to check when the research is complete.
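A polling loop might look like the sketch below. It is written against a caller-supplied `fetch_status` callable (a wrapper around the status endpoint you use) rather than any specific SDK method, and it assumes "pending" marks an in-progress task as in the response above, with any other status treated as terminal:

```python
import time

def wait_for_completion(fetch_status, request_id, interval=2.0, timeout=120.0):
    """Poll until a research task leaves the "pending" state.

    fetch_status: callable taking a request_id and returning the
    status payload (hypothetical; supply your own wrapper around
    the status endpoint).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status(request_id)
        # "pending" is shown in the doc; other statuses are assumed terminal
        if result.get("status") != "pending":
            return result
        time.sleep(interval)
    raise TimeoutError(f"research {request_id} did not finish in {timeout}s")
```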