Vercel AI SDK

The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.

Quick Reference

Basic useChat Setup

import { useChat } from '@ai-sdk/react';

const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  }
});

// Send a message
sendMessage({ text: 'Hello', metadata: { createdAt: Date.now() } });

// Send with files
sendMessage({
  text: 'Analyze this',
  files: fileList // FileList or FileUIPart[]
});

ChatStatus States

The status field indicates the current state of the chat:

  • ready: Chat is idle and ready to accept new messages
  • submitted: Message sent to API, awaiting response stream start
  • streaming: Response actively streaming from the API
  • error: An error occurred during the request
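The states above map naturally onto a submit/stop button. A minimal sketch (the `ChatStatus` union is written out inline here, matching the values listed above):

```typescript
// Map each chat status to the label a submit button should show.
// The union mirrors the SDK's status values listed above.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error';

function submitLabel(status: ChatStatus): string {
  switch (status) {
    case 'ready':
      return 'Send';
    case 'submitted':
    case 'streaming':
      return 'Stop'; // offer cancellation while a response is in flight
    case 'error':
      return 'Retry';
  }
}
```

In a component, the same switch typically also disables the input while `status` is `submitted` or `streaming` and calls `stop()` instead of `sendMessage` in those states.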

Message Structure

Messages use the UIMessage type with a parts-based structure:

interface UIMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}

Part types include:

  • text: Text content with optional streaming state
  • file: File attachments (images, documents)
  • tool-{toolName}: Tool invocations with state machine
  • reasoning: AI reasoning traces
  • data-{typeName}: Custom data parts
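Because `tool-*` and `data-*` are template-literal types, rendering code typically combines exact matches with prefix checks. A simplified sketch (the part shapes here are trimmed stand-ins, not the SDK's full definitions):

```typescript
// Dispatch on UIMessage part types. Shapes are simplified for
// illustration; the SDK's real part types carry more fields.
type Part =
  | { type: 'text'; text: string }
  | { type: 'file'; url: string; mediaType: string }
  | { type: 'reasoning'; text: string }
  | { type: `tool-${string}`; state: string }
  | { type: `data-${string}`; data: unknown };

function describePart(part: Part): string {
  if (part.type === 'text') return `text: ${part.text}`;
  if (part.type === 'file') return `file (${part.mediaType})`;
  if (part.type === 'reasoning') return 'reasoning trace';
  if (part.type.startsWith('tool-')) return `tool call: ${part.type.slice(5)}`;
  return `custom data: ${part.type.slice(5)}`;
}
```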

Server-Side Streaming

import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages: uiMessages } = await req.json();

  const result = streamText({
    model: openai('gpt-4'),
    messages: convertToModelMessages(uiMessages),
    tools: {
      getWeather: tool({
        description: 'Get weather',
        inputSchema: z.object({ city: z.string() }),
        execute: async ({ city }) => {
          return { temperature: 72, weather: 'sunny' };
        }
      })
    }
  });

  return result.toUIMessageStreamResponse({
    originalMessages: uiMessages,
    onFinish: ({ messages }) => {
      // Save to database
    }
  });
}

Tool Handling Patterns

Client-Side Tool Execution:

const { addToolOutput } = useChat({
  onToolCall: async ({ toolCall }) => {
    if (toolCall.toolName === 'getLocation') {
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: 'San Francisco'
      });
    }
  }
});

Rendering Tool States:

{message.parts.map((part, i) => {
  if (part.type === 'tool-getWeather') {
    switch (part.state) {
      case 'input-streaming':
        return <pre key={i}>{JSON.stringify(part.input, null, 2)}</pre>;
      case 'input-available':
        return <div key={i}>Getting weather for {part.input.city}...</div>;
      case 'output-available':
        return <div key={i}>Weather: {part.output.weather}</div>;
      case 'output-error':
        return <div key={i}>Error: {part.errorText}</div>;
    }
  }
  return null;
})}

Common Patterns

Error Handling

const { error, clearError } = useChat({
  onError: (error) => {
    toast.error(error.message);
  }
});

// Clear error and reset to ready state
if (error) {
  clearError();
}

Message Regeneration

const { regenerate } = useChat();

// Regenerate last assistant message
await regenerate();

// Regenerate specific message
await regenerate({ messageId: 'msg-123' });

Custom Transport

import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ id, messages, trigger, messageId }) => ({
      body: {
        chatId: id,
        lastMessage: messages[messages.length - 1],
        trigger,
        messageId
      }
    })
  })
});

Performance Optimization

// Throttle UI updates to reduce re-renders
const chat = useChat({
  experimental_throttle: 100 // Update max once per 100ms
});

Automatic Message Sending

import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

const chat = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls
  // Automatically resend when all tool calls have outputs
});

Type Safety

The SDK provides full type inference for tools and messages:

import { tool, type InferUITools, type UIDataTypes, type UIMessage } from 'ai';
import { z } from 'zod';

const tools = {
  getWeather: tool({
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => ({ weather: 'sunny' })
  })
};

type MyMessage = UIMessage<
  { createdAt: number }, // Metadata type
  UIDataTypes,
  InferUITools<typeof tools> // Tool types
>;

const { messages } = useChat<MyMessage>();

Key Concepts

Parts-Based Architecture

Messages use a parts array instead of a single content field. This allows:

  • Streaming text while maintaining other parts
  • Tool calls with independent state machines
  • File attachments and custom data mixed with text
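For example, a single assistant message can interleave text with a completed tool call in one `parts` array (field names follow the simplified shapes shown in the Message Structure section above):

```typescript
// Illustrative assistant message mixing text parts with a
// finished tool-call part in a single parts array.
const message = {
  id: 'msg-1',
  role: 'assistant' as const,
  parts: [
    { type: 'text', text: 'Here is the forecast:' },
    {
      type: 'tool-getWeather',
      toolCallId: 'call-1',
      state: 'output-available',
      input: { city: 'Berlin' },
      output: { temperature: 72, weather: 'sunny' },
    },
    { type: 'text', text: 'Let me know if you need more detail.' },
  ],
};
```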

Tool State Machine

Tool parts progress through states:

  1. input-streaming: Tool input streaming (optional)
  2. input-available: Tool input complete
  3. approval-requested: Waiting for user approval (optional)
  4. approval-responded: User approved/denied (optional)
  5. output-available: Tool execution complete
  6. output-error: Tool execution failed
  7. output-denied: User denied approval

Streaming Protocol

The SDK uses Server-Sent Events (SSE) with UIMessageChunk types:

  • text-start, text-delta, text-end
  • tool-input-available, tool-output-available
  • reasoning-start, reasoning-delta, reasoning-end
  • start, finish, abort
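As an illustration, a streamed text response might arrive as SSE `data:` lines, each carrying one JSON-encoded chunk. The exact field names below are an assumption based on the chunk types listed above, not the full wire protocol:

```typescript
// Hypothetical SSE payloads for one streamed text response.
// Each `data:` line carries one JSON-encoded chunk.
const sse = [
  'data: {"type":"start"}',
  'data: {"type":"text-start","id":"t1"}',
  'data: {"type":"text-delta","id":"t1","delta":"Hello"}',
  'data: {"type":"text-delta","id":"t1","delta":" world"}',
  'data: {"type":"text-end","id":"t1"}',
  'data: {"type":"finish"}',
];

// Parse the payloads and reassemble the text from the deltas.
const chunks = sse.map((line) => JSON.parse(line.slice('data: '.length)));
const text = chunks
  .filter((c) => c.type === 'text-delta')
  .map((c) => c.delta)
  .join('');
```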

Client vs Server Tools

Server-side tools have an execute function and run on the API route.

Client-side tools omit execute and are handled via onToolCall and addToolOutput.
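The distinction can be sketched without the SDK at all: what makes a tool client-side is simply the absence of an `execute` function. The `ToolDef` type below is a dependency-free stand-in for the SDK's `tool()` definitions:

```typescript
// Simplified stand-in for SDK tool definitions: a tool with
// `execute` runs on the server; one without is handled client-side.
type ToolDef = {
  description: string;
  execute?: (input: unknown) => Promise<unknown>;
};

const tools: Record<string, ToolDef> = {
  getWeather: {
    description: 'Get weather for a city',
    execute: async () => ({ weather: 'sunny' }), // server-side
  },
  getLocation: {
    description: "Get the user's location", // no execute: client-side
  },
};

// Tools lacking `execute` must be answered in the browser.
const clientTools = Object.keys(tools).filter((name) => !tools[name].execute);
```

In the real SDK the same split drives behavior: the server streams the tool call for any tool without `execute`, and the browser supplies the result via `onToolCall` / `addToolOutput` as shown in Tool Handling Patterns.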

Best Practices

  1. Always handle the error state and provide user feedback
  2. Use experimental_throttle for high-frequency updates
  3. Implement proper loading states based on status
  4. Type your messages with custom metadata and tools
  5. Use sendAutomaticallyWhen for multi-turn tool workflows
  6. Handle all tool states in the UI for better UX
  7. Use stop() to allow users to cancel long-running requests
  8. Validate messages with validateUIMessages on the server

Source

https://github.com/existential-birds/beagle/blob/main/plugins/beagle-ai/skills/vercel-ai-sdk/SKILL.md

Overview

The Vercel AI SDK provides React hooks and server utilities to build streaming chat interfaces with tool calls, file attachments, and multi-step reasoning. It centers on useChat, UIMessage parts, and streaming responses to render live conversations. This enables inline tool execution and real-time UI updates in chat applications.

How This Skill Works

Developers use useChat to manage messages, status, and tool interactions. The server-side streaming pipeline uses streamText and toUIMessageStreamResponse to convert UIMessage parts into a live stream, while tool definitions and execution patterns support onToolCall and addToolOutput for dynamic tool usage.

When to Use It

  • Building a streaming chat UI with live message updates
  • Implementing the useChat hook in a React app
  • Handling tool calls and rendering tool states within messages
  • Supporting file attachments alongside streaming responses
  • Implementing multi-step reasoning and tool-driven workflows

Quick Start

  1. Install the SDK: npm install @ai-sdk/react and import { useChat } from '@ai-sdk/react'
  2. Initialize useChat with an id, initial messages, and optional onFinish/onError callbacks
  3. Send messages with sendMessage, attach files if needed, and respond to tool calls via onToolCall / addToolOutput

Best Practices

  • Model messages as UIMessage with a parts-based structure (text, file, tool-*, reasoning)
  • Leverage status states (ready, submitted, streaming, error) to manage UI flow
  • Use onToolCall and addToolOutput to implement client-side tool execution and feedback
  • Render tool state parts (input-streaming, input-available, output-available, output-error)
  • Use streamText and toUIMessageStreamResponse for server streaming and persisting results via onFinish

Example Use Cases

  • Live customer support chat that streams responses while invoking a weather or order-tracking tool
  • AI assistant that analyzes uploaded documents and returns structured insights
  • Weather bot that uses a getWeather tool and streams updates to the UI
  • Code assistant with reasoning traces and tool calls for compilation and testing
  • Data analysis chat that streams results from multiple tools and saves outcomes
