
building-ai-chat

npx machina-cli add skill ancoleman/ai-design-components/building-ai-chat --openclaw

AI Chat Interface Components

Purpose

Define the emerging standards for AI/human conversational interfaces in the 2024-2025 AI integration boom. This skill leverages meta-knowledge from building WITH Claude to establish definitive patterns for streaming UX, context management, and multi-modal interactions. As the industry lacks established patterns, this provides the reference implementation others will follow.

When to Use

Activate this skill when:

  • Building ChatGPT-style conversational interfaces
  • Creating AI assistants, copilots, or chatbots
  • Implementing streaming text responses with markdown
  • Managing conversation context and token limits
  • Handling multi-modal inputs (text, images, files, voice)
  • Dealing with AI-specific errors (hallucinations, refusals, limits)
  • Adding feedback mechanisms (thumbs, regeneration, editing)
  • Implementing conversation branching or threading
  • Visualizing tool/function calling

Quick Start

Minimal AI chat interface in under 50 lines:

import { useChat } from '@ai-sdk/react';

export function MinimalAIChat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } = useChat();

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map(m => (
          <div key={m.id} className={`message ${m.role}`}>
            <div className="content">{m.content}</div>
          </div>
        ))}
        {isLoading && <div className="thinking">AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit} className="input-form">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          disabled={isLoading}
        />
        {isLoading ? (
          <button type="button" onClick={stop}>Stop</button>
        ) : (
          <button type="submit">Send</button>
        )}
      </form>
    </div>
  );
}

For complete implementation with streaming markdown, see examples/basic-chat.tsx.

Core Components

Message Display

Build user, AI, and system message bubbles with streaming support:

// User message
<div className="message user">
  <div className="content">{message.content}</div>
  <time className="timestamp">{formatTime(message.timestamp)}</time>
</div>

// AI message with streaming
<div className="message ai">
  <Streamdown className="content">{message.content}</Streamdown>
  {message.isStreaming && <span className="cursor">▊</span>}
</div>

// System message
<div className="message system">
  <Icon type="info" />
  <span>{message.content}</span>
</div>

For markdown rendering, code blocks, and formatting details, see references/message-components.md.

Input Components

Create rich input experiences with attachments and voice:

<div className="input-container">
  <button onClick={attachFile} aria-label="Attach file">
    <PaperclipIcon />
  </button>

  <textarea
    value={input}
    onChange={handleChange}
    onKeyDown={handleKeyDown}
    placeholder="Type a message..."
    rows={1}
    style={{ height: textareaHeight }}
  />

  <button onClick={toggleVoice} aria-label="Voice input">
    <MicIcon />
  </button>

  <button type="submit" disabled={!input.trim() || isLoading}>
    <SendIcon />
  </button>
</div>
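The `textareaHeight` above is assumed to come from an auto-grow calculation as the user types. A minimal sketch of that calculation — the line height and row cap are illustrative choices, not values defined by this skill:

```typescript
// Compute a pixel height for an auto-growing textarea from its content.
// lineHeight and maxRows are illustrative defaults (assumptions).
function autoGrowHeight(
  text: string,
  lineHeight = 24, // px per rendered line
  maxRows = 6      // cap growth so the input never dominates the screen
): number {
  const rows = Math.max(1, text.split('\n').length);
  return Math.min(rows, maxRows) * lineHeight;
}
```

Wire it to the change handler, e.g. `setTextareaHeight(`${autoGrowHeight(input)}px`)`; a production version would also account for soft-wrapped lines (e.g. by reading `scrollHeight`).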

Response Controls

Essential controls for AI responses:

<div className="response-controls">
  {isStreaming && (
    <button onClick={stop} className="stop-btn">
      Stop generating
    </button>
  )}

  {!isStreaming && (
    <>
      <button onClick={regenerate} aria-label="Regenerate response">
        <RefreshIcon /> Regenerate
      </button>
      <button onClick={continueGeneration} aria-label="Continue">
        Continue
      </button>
      <button onClick={editMessage} aria-label="Edit message">
        <EditIcon /> Edit
      </button>
    </>
  )}
</div>

Feedback Mechanisms

Collect user feedback to improve AI responses:

<div className="feedback-controls">
  <button
    onClick={() => sendFeedback('positive')}
    aria-label="Good response"
    className={feedback === 'positive' ? 'selected' : ''}
  >
    <ThumbsUpIcon />
  </button>

  <button
    onClick={() => sendFeedback('negative')}
    aria-label="Bad response"
    className={feedback === 'negative' ? 'selected' : ''}
  >
    <ThumbsDownIcon />
  </button>

  <button onClick={copyToClipboard} aria-label="Copy">
    <CopyIcon />
  </button>

  <button onClick={share} aria-label="Share">
    <ShareIcon />
  </button>
</div>
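One detail the markup above leaves open is what happens when a user clicks an already-selected thumb. A common convention — an assumption here, not something this skill specifies — is to toggle it off:

```typescript
type Feedback = 'positive' | 'negative' | null;

// Toggle semantics: clicking the selected thumb clears the rating,
// clicking the other thumb switches the selection.
function nextFeedback(current: Feedback, clicked: 'positive' | 'negative'): Feedback {
  return current === clicked ? null : clicked;
}
```

In the component above, `sendFeedback` could then do `setFeedback(nextFeedback(feedback, kind))` before posting the rating to the backend.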

Streaming & Real-Time UX

Progressive rendering of AI responses requires special handling:

// Use Streamdown for AI streaming (handles incomplete markdown)
import { Streamdown } from 'streamdown';

// Auto-scroll management
useEffect(() => {
  if (shouldAutoScroll()) {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  }
}, [messages]);

// Smart auto-scroll heuristic (container, hasUserScrolledUp, and
// isTextSelected come from surrounding component refs/state)
function shouldAutoScroll() {
  const threshold = 100; // px from bottom
  const isNearBottom =
    container.scrollHeight - container.scrollTop - container.clientHeight < threshold;
  // Don't yank the view while the user is reading or selecting earlier text
  return isNearBottom && !hasUserScrolledUp && !isTextSelected;
}

For complete streaming patterns, auto-scroll behavior, and stop generation, see references/streaming-ux.md.

Context Management

Communicate token limits clearly to users:

// User-friendly token display
function TokenIndicator({ used, total }) {
  const percentage = (used / total) * 100;
  const remaining = total - used;

  return (
    <div className="token-indicator">
      <div className="progress-bar">
        <div className="progress-fill" style={{ width: `${percentage}%` }} />
      </div>
      <span className="token-text">
        {percentage > 80
          ? `āš ļø About ${Math.floor(remaining / 250)} messages left`
          : `${Math.floor(remaining / 250)} pages of conversation remaining`}
      </span>
    </div>
  );
}

For summarization strategies, conversation branching, and organization, see references/context-management.md.
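The "messages left" figure above implies some token estimator. A rough sketch using the common ~4 characters-per-token heuristic for English text — an approximation only (the bundled scripts/calculate_tokens.py presumably does this properly) — paired with a simple oldest-first trim as an alternative to summarization:

```typescript
interface ChatMessage {
  role: string;
  content: string;
}

// Rough token estimate: ~4 characters per token (English-text heuristic).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the most recent messages that fit within a token budget,
// dropping the oldest first.
function trimToBudget(messages: ChatMessage[], budget: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

Real tokenizers vary by model, so treat the divisor as a tuning knob, and prefer a proper tokenizer when the budget is tight.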

Multi-Modal Support

Handle images, files, and voice inputs:

// Image upload with preview
function ImageUpload({ onUpload }) {
  return (
    <div
      className="upload-zone"
      onDrop={handleDrop}
      onDragOver={preventDefault}
    >
      <input
        type="file"
        accept="image/*"
        onChange={handleFileSelect}
        multiple
        hidden
        ref={fileInputRef}
      />
      {previews.map(preview => (
        <img key={preview.id} src={preview.url} alt="Upload preview" />
      ))}
    </div>
  );
}

For complete multi-modal patterns including voice and screen sharing, see references/multi-modal.md.
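Before generating previews, uploads should be validated. A minimal sketch — the 10 MB cap and accepted MIME types are illustrative choices, not requirements from this skill:

```typescript
const ACCEPTED_TYPES = ['image/png', 'image/jpeg', 'image/webp']; // assumption
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB — illustrative cap

// Returns an error string for the UI, or null when the file is acceptable.
// Accepts any object with type/size so it works with the File API or tests.
function validateUpload(file: { type: string; size: number }): string | null {
  if (!ACCEPTED_TYPES.includes(file.type)) {
    return `Unsupported file type: ${file.type}`;
  }
  if (file.size > MAX_BYTES) {
    return `File too large (max ${MAX_BYTES / 1024 / 1024} MB)`;
  }
  return null;
}
```

Call it inside `handleFileSelect`/`handleDrop` and surface the returned message instead of silently dropping the file.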

Error Handling

Handle AI-specific errors gracefully:

// Refusal handling
if (response.type === 'refusal') {
  return (
    <div className="error refusal">
      <Icon type="info" />
      <p>I cannot help with that request.</p>
      <details>
        <summary>Why?</summary>
        <p>{response.reason}</p>
      </details>
      <p>Try asking: {response.suggestion}</p>
    </div>
  );
}

// Rate limit communication
if (error.code === 'RATE_LIMIT') {
  return (
    <div className="error rate-limit">
      <p>Please wait {error.retryAfter} seconds</p>
      <CountdownTimer seconds={error.retryAfter} onComplete={retry} />
    </div>
  );
}

For comprehensive error patterns, see references/error-handling.md.
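Rate-limit errors above carry an explicit `retryAfter`; for other transient failures a common approach is exponential backoff with jitter. A sketch with illustrative delays (not part of this skill's API):

```typescript
// Exponential backoff: 1s, 2s, 4s, ... capped at maxMs, with full jitter.
// The jitter function is injectable so tests can make it deterministic.
function backoffDelayMs(
  attempt: number, // 0-based retry attempt
  baseMs = 1000,
  maxMs = 30_000,
  jitter: () => number = Math.random
): number {
  const exp = Math.min(baseMs * 2 ** attempt, maxMs);
  // Full jitter: pick uniformly in [0, exp] to avoid thundering herds.
  return Math.floor(exp * jitter());
}
```

A retry wrapper would sleep `backoffDelayMs(n)` before attempt `n + 1` and give up after a fixed number of attempts.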

Tool Usage Visualization

Show when AI is using tools or functions:

function ToolUsage({ tool }) {
  return (
    <div className="tool-usage">
      <div className="tool-header">
        <Icon type={tool.type} />
        <span>{tool.name}</span>
        {tool.status === 'running' && <Spinner />}
      </div>
      {tool.status === 'complete' && (
        <details>
          <summary>View details</summary>
          <pre>{JSON.stringify(tool.result, null, 2)}</pre>
        </details>
      )}
    </div>
  );
}

For function calling, code execution, and web search patterns, see references/tool-usage.md.
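Streaming APIs typically report tool activity as incremental events (a start event, then a result). A hedged sketch of a reducer that folds such events into the per-tool state the component above renders — the event shape here is an assumption, not this skill's wire format:

```typescript
type ToolStatus = 'running' | 'complete';

interface ToolState {
  name: string;
  type: string;
  status: ToolStatus;
  result?: unknown;
}

// Hypothetical event shape; real streaming APIs differ.
type ToolEvent =
  | { kind: 'tool-start'; name: string; type: string }
  | { kind: 'tool-result'; name: string; result: unknown };

// Fold a stream of tool events into per-tool display state.
function reduceToolEvents(events: ToolEvent[]): ToolState[] {
  const tools = new Map<string, ToolState>();
  for (const e of events) {
    if (e.kind === 'tool-start') {
      tools.set(e.name, { name: e.name, type: e.type, status: 'running' });
    } else {
      const tool = tools.get(e.name);
      if (tool) {
        tool.status = 'complete';
        tool.result = e.result;
      }
    }
  }
  return [...tools.values()];
}
```

Each `ToolState` maps directly onto the `tool` prop of the `ToolUsage` component: `running` shows the spinner, `complete` reveals the result details.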

Implementation Guide

Recommended Stack

Primary libraries (validated November 2025):

# Core AI chat functionality
npm install ai @ai-sdk/react @ai-sdk/openai

# Streaming markdown rendering
npm install streamdown

# Syntax highlighting
npm install react-syntax-highlighter

# Security for LLM outputs
npm install dompurify

Performance Optimization

Critical for smooth streaming:

// Memoize message rendering
const MemoizedMessage = memo(Message, (prev, next) =>
  prev.content === next.content && prev.isStreaming === next.isStreaming
);

// Debounce streaming updates
const debouncedUpdate = useMemo(
  () => debounce(updateMessage, 50),
  []
);

// Virtual scrolling for long conversations
import { VariableSizeList } from 'react-window';

For detailed performance patterns, see references/streaming-ux.md.
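The `debounce` used above is assumed to come from a utility library; a minimal trailing-edge implementation for reference:

```typescript
// Trailing-edge debounce: fires once, waitMs after the last call,
// with the arguments of that last call.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Note that for streaming, a *throttle* (fire at most every N ms) can feel smoother than a debounce, which stays silent while chunks keep arriving.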

Security Considerations

Always sanitize AI outputs:

import DOMPurify from 'dompurify';

function SafeAIContent({ content }) {
  const sanitized = DOMPurify.sanitize(content, {
    ALLOWED_TAGS: ['p', 'br', 'strong', 'em', 'code', 'pre', 'blockquote', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['class']
  });

  return <Streamdown>{sanitized}</Streamdown>;
}

Accessibility

Ensure AI chat is usable by everyone:

// ARIA live regions for screen readers
<div role="log" aria-live="polite" aria-relevant="additions">
  {messages.map(msg => (
    <article key={msg.id} role="article" aria-label={`${msg.role} message`}>
      {msg.content}
    </article>
  ))}
</div>

// Loading announcements
<div role="status" aria-live="polite" className="sr-only">
  {isLoading ? 'AI is responding' : ''}
</div>

For complete accessibility patterns, see references/accessibility.md.

Bundled Resources

Scripts (Token-Free Execution)

  • Run scripts/parse_stream.js to parse incomplete markdown during streaming
  • Run scripts/calculate_tokens.py to estimate token usage and context limits
  • Run scripts/format_messages.js to format message history for export

References (Progressive Disclosure)

  • references/streaming-patterns.md - Complete streaming UX patterns
  • references/context-management.md - Token limits and conversation strategies
  • references/multimodal-input.md - Image, file, and voice handling
  • references/feedback-loops.md - User feedback and RLHF patterns
  • references/error-handling.md - AI-specific error scenarios
  • references/tool-usage.md - Visualizing function calls and tool use
  • references/accessibility-chat.md - Screen reader and keyboard support
  • references/library-guide.md - Detailed library documentation
  • references/performance-optimization.md - Streaming performance patterns

Examples

  • examples/basic-chat.tsx - Minimal ChatGPT-style interface
  • examples/streaming-chat.tsx - Advanced streaming with memoization
  • examples/multimodal-chat.tsx - Images and file uploads
  • examples/code-assistant.tsx - IDE-style code copilot
  • examples/tool-calling-chat.tsx - Function calling visualization

Assets

  • assets/system-prompts.json - Curated prompts for different use cases
  • assets/message-templates.json - Pre-built message components
  • assets/error-messages.json - User-friendly error messages
  • assets/themes.json - Light, dark, and high-contrast themes

Design Token Integration

All visual styling uses the design-tokens system:

/* Message bubbles use design tokens */
.message.user {
  background: var(--message-user-bg, var(--color-primary));
  color: var(--message-user-text, var(--color-white));
  padding: var(--message-padding, var(--spacing-md));
  border-radius: var(--message-border-radius, var(--radius-lg));
}

.message.ai {
  background: var(--message-ai-bg, var(--color-gray-100));
  color: var(--message-ai-text, var(--color-text-primary));
}

See skills/design-tokens/ for complete theming system.

Key Innovations

This skill provides industry-first solutions for:

  • Memoized streaming rendering - 10-50x performance improvement
  • Intelligent auto-scroll - User activity-aware scrolling
  • Token metaphors - User-friendly context communication
  • Incomplete markdown handling - Graceful partial rendering
  • RLHF patterns - Effective feedback collection
  • Conversation branching - Non-linear conversation trees
  • Multi-modal integration - Seamless file/image/voice handling
  • Accessibility-first - Built-in screen reader support

Strategic Importance

This is THE most critical skill because:

  1. Perfect timing - Every app adding AI (2024-2025 boom)
  2. No standards exist - Opportunity to define patterns
  3. Meta-advantage - Building WITH Claude = intimate UX knowledge
  4. Unique challenges - Streaming, context, hallucinations all new
  5. Reference implementation - Can become the standard others follow

Master this skill to lead the AI interface revolution.

Source

https://github.com/ancoleman/ai-design-components/blob/main/skills/building-ai-chat/SKILL.md

Overview

This skill provides battle-tested AI chat interface components for streaming responses, context management, and multi-modal inputs. It targets ChatGPT-style interfaces, AI assistants, copilots, and chatbots, emphasizing accessibility and performance while supporting tool usage visualization and robust error handling.

How This Skill Works

It offers reference implementations and modular UI components to render streaming AI messages, manage conversation context and token budgets, and support multi-modal inputs (text, images, files, voice). Through input controls, message displays with streaming support, and response controls, it delivers accessible, high-performance patterns for AI chats and tool usage visualization.

Quick Start

  1. Create a minimal AI chat using the useChat hook and render the MinimalAIChat UI as shown in the Quick Start sample
  2. Enable streaming markdown for AI messages and test against the streaming examples
  3. Explore the full implementation in examples/basic-chat.tsx for streaming, token handling, and tool usage visualization

Best Practices

  • Design the streaming UX with clear loading indicators and Markdown rendering for readability
  • Plan and enforce robust context management and token budgeting from the start
  • Prioritize accessibility, keyboard navigation, and focus management in all components
  • Support multi-modal inputs and visualize tool usage and function calls
  • Incorporate feedback loops (thumbs, regeneration, editing) and graceful AI error handling

Example Use Cases

  • ChatGPT-style customer support assistant
  • AI coding copilots and developer assistants
  • Multi-modal virtual assistant accepting text, images, and files
  • Product docs or knowledge-base conversational bot
  • Tool-usage driven chatbot with function-calling visualization
