Tasks API

Most MCP tool calls complete in under a second: query a database, call an API, read a file. But some operations take minutes or hours: training a model, processing a large dataset, running a batch export, triggering a CI/CD pipeline. For these, a synchronous request-response model breaks down. The MCP Tasks API provides an async task model: a client submits a task, the server accepts it immediately, the client polls for updates, and the server streams progress events until completion. This lesson covers the full Tasks API implementation.

Tasks API: submit a long-running operation, poll for progress via SSE, receive the result when done.

When to Use Tasks API vs Regular Tools

  • Use regular tools for operations that complete in under 30 seconds. Keep them synchronous – the LLM waits for the result before proceeding.
  • Use Tasks API for operations that take longer than 30 seconds, produce intermediate results the user or LLM can act on, or may fail partway through and need resumability.
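Before diving into the server code, it helps to pin down the shape of the task record the server keeps for each submitted job. A minimal sketch in TypeScript (the field names mirror the example code in this lesson; adapt them to your own store):

```typescript
// Sketch of the per-task record the server maintains (names are illustrative).
type TaskStatus = 'pending' | 'running' | 'completed' | 'failed' | 'cancelled';

interface Task {
  id: string;
  status: TaskStatus;
  progress: number;       // 0-100
  message?: string;       // human-readable progress line
  createdAt: string;      // ISO timestamp, used later for expiry
  result: unknown | null; // set on completion
  error: string | null;   // set on failure
}

// Helper that builds a fresh pending record.
function createTask(id: string): Task {
  return {
    id,
    status: 'pending',
    progress: 0,
    createdAt: new Date().toISOString(),
    result: null,
    error: null,
  };
}
```

Keeping the status values in a closed union like this makes the terminal-state checks (cancellation, expiry) harder to get wrong.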

Server-Side Task Implementation

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';
import crypto from 'node:crypto';

const server = new McpServer({ name: 'async-server', version: '1.0.0' });

// Task store (use Redis or PostgreSQL in production)
const tasks = new Map();

// Submit a long-running task - returns a task ID immediately
server.tool('start_data_export', {
  datasetId: z.string(),
  format: z.enum(['csv', 'json', 'parquet']),
  dateRange: z.object({
    from: z.string(),
    to: z.string(),
  }),
}, async ({ datasetId, format, dateRange }) => {
  const taskId = crypto.randomUUID();

  // Store task state
  tasks.set(taskId, {
    id: taskId,
    status: 'pending',
    progress: 0,
    createdAt: new Date().toISOString(),
    result: null,
    error: null,
  });

  // Start the long-running operation asynchronously
  runExportTask(taskId, datasetId, format, dateRange).catch(err => {
    const task = tasks.get(taskId);
    if (task) {
      task.status = 'failed';
      task.error = err.message;
    }
  });

  return {
    content: [{
      type: 'text',
      text: JSON.stringify({ taskId, status: 'pending', message: 'Export started. Use get_task_status to check progress.' }),
    }],
  };
});

// Poll task status
server.tool('get_task_status', {
  taskId: z.string().uuid(),
}, async ({ taskId }) => {
  const task = tasks.get(taskId);
  if (!task) {
    return {
      content: [{ type: 'text', text: JSON.stringify({ error: 'Task not found' }) }],
      isError: true,
    };
  }
  return { content: [{ type: 'text', text: JSON.stringify(task) }] };
});

// The actual long-running work
// The actual long-running work.
// `db` and `uploadToStorage` are placeholders for your own data-access and
// storage helpers.
async function runExportTask(taskId, datasetId, format, dateRange) {
  const task = tasks.get(taskId);
  if (!task) return;
  task.status = 'running';

  const totalRows = await db.countRows(datasetId, dateRange);
  const batchSize = 1000;
  const batches = Math.ceil(totalRows / batchSize);
  const results = [];

  for (let i = 0; i < batches; i++) {
    // Cooperative cancellation: stop if cancel_task flipped the status
    if (tasks.get(taskId)?.status === 'cancelled') return;

    const batch = await db.fetchBatch(datasetId, dateRange, i * batchSize, batchSize);
    results.push(...batch);
    task.progress = Math.round(((i + 1) / batches) * 100);
    task.message = `Processed ${Math.min((i + 1) * batchSize, totalRows)} / ${totalRows} rows`;
    // Small delay so we don't hammer the DB
    await new Promise(r => setTimeout(r, 10));
  }

  const exportUrl = await uploadToStorage(results, format);
  task.status = 'completed';
  task.progress = 100;
  task.result = { url: exportUrl, rowCount: results.length };
}
LLM polling pattern: call start_task, then poll get_task_status with increasing intervals until status is 'completed'.

Client-Side: LLM-Driven Task Polling

// System prompt that teaches the LLM how to handle async tasks
const ASYNC_SYSTEM_PROMPT = `When you call a tool that returns a taskId (like start_data_export), 
you must poll for the result using get_task_status. 
Poll every 5 seconds until status is 'completed' or 'failed'.
When completed, use the result URL to complete the user's request.
When failed, report the error message.`;

// Add this as part of the tool description to hint the LLM
server.tool('start_data_export', /* ... */);
// Tool description: "Starts a data export. Returns a taskId. Use get_task_status to check progress. 
//                   Poll until status is 'completed', then use the result.url."
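If the host application drives the polling itself instead of relying on the LLM, the "increasing intervals" pattern sketches out as a backoff loop. Here `fetchStatus` is a hypothetical stand-in for whatever transport call invokes `get_task_status`:

```typescript
// Poll with exponential backoff until the task reaches a terminal state.
// `fetchStatus` is an injected stand-in for the MCP call to get_task_status.
type PollStatus = { status: string; result?: unknown; error?: string };

async function pollUntilDone(
  fetchStatus: () => Promise<PollStatus>,
  { initialMs = 1000, maxMs = 30000, factor = 2 } = {},
): Promise<PollStatus> {
  let delay = initialMs;
  for (;;) {
    const task = await fetchStatus();
    if (['completed', 'failed', 'cancelled'].includes(task.status)) return task;
    await new Promise(r => setTimeout(r, delay));
    delay = Math.min(delay * factor, maxMs); // back off: 1s, 2s, 4s, ... capped
  }
}
```

In production you would pair this with an overall timeout or AbortSignal so a stuck task can't keep the client polling forever.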

Task Cancellation

server.tool('cancel_task', {
  taskId: z.string().uuid(),
}, async ({ taskId }) => {
  const task = tasks.get(taskId);
  if (!task) {
    return { content: [{ type: 'text', text: 'Task not found' }], isError: true };
  }
  if (task.status === 'completed' || task.status === 'failed') {
    return { content: [{ type: 'text', text: `Cannot cancel: task already ${task.status}` }], isError: true };
  }
  task.status = 'cancelled';
  task.cancelledAt = new Date().toISOString();
  // The running task checks for cancellation in its loop
  return { content: [{ type: 'text', text: `Task ${taskId} cancelled` }] };
});
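The cooperative status check works when the worker loops in small batches, but if the long-running work sits inside a single slow I/O call (a `fetch`, a stream), an AbortController per task gives harder cancellation. A sketch, assuming you keep a controller map alongside the task store (names here are illustrative, not part of the MCP SDK):

```typescript
// One AbortController per task lets cancellation interrupt in-flight I/O
// (e.g. fetch) instead of waiting for the next loop iteration.
const controllers = new Map<string, AbortController>();

function startTask(taskId: string): AbortSignal {
  const controller = new AbortController();
  controllers.set(taskId, controller);
  return controller.signal; // pass this signal into fetch/db calls
}

function cancelTask(taskId: string): boolean {
  const controller = controllers.get(taskId);
  if (!controller) return false;
  controller.abort();        // rejects any pending signal-aware calls
  controllers.delete(taskId);
  return true;
}
```

Your `cancel_task` tool would then call `cancelTask(taskId)` in addition to flipping the stored status.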

Task Expiry and Cleanup

// Clean up completed/failed tasks older than 24 hours
setInterval(() => {
  const cutoff = Date.now() - 24 * 60 * 60 * 1000;
  for (const [id, task] of tasks) {
    if (['completed', 'failed', 'cancelled'].includes(task.status)) {
      const age = new Date(task.createdAt).getTime();
      if (age < cutoff) tasks.delete(id);
    }
  }
}, 60 * 60 * 1000);  // Run every hour
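The cutoff check above can be factored into a small pure predicate, which keeps the retention rule easy to unit-test (a sketch; the field names match the task records used throughout this lesson):

```typescript
// True when a finished task's createdAt is older than maxAgeMs relative to now.
// Tasks still pending or running are never expired, whatever their age.
const TERMINAL = new Set(['completed', 'failed', 'cancelled']);

function isExpired(
  task: { status: string; createdAt: string },
  now: number,
  maxAgeMs = 24 * 60 * 60 * 1000,
): boolean {
  return TERMINAL.has(task.status) &&
    new Date(task.createdAt).getTime() < now - maxAgeMs;
}
```

The sweep then reduces to `if (isExpired(task, Date.now())) tasks.delete(id)`.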

What to Build Next

  • Identify one tool in your MCP server that regularly takes longer than 10 seconds. Refactor it using the async task pattern from this lesson.
  • Add a list_my_tasks tool that returns all pending and running tasks for the authenticated user.

nJoy 😉
