
volcano-agent-sdk

πŸŒ‹ Build AI agents that seamlessly combine LLM reasoning with real-world actions via MCP tools β€” in just a few lines of TypeScript.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio kong-volcano-agent-sdk npx -y @volcano.dev/agent \
  --env VP_API_KEY="Your Vertex AI API key or credential (if using Vertex)" \
  --env LOGGING_LEVEL="Optional: set logging level (e.g., info, debug)" \
  --env OPENAI_API_KEY="Your OpenAI API key (if using OpenAI provider)" \
  --env MISTRAL_API_KEY="Your Mistral API key (if using Mistral provider)" \
  --env ANTHROPIC_API_KEY="Your Anthropic API key (if using Anthropic provider)"

How to use

The Volcano Agent SDK is a TypeScript-first toolkit for building multi-provider AI agents and coordinating MCP-based tools. It supports automatic tool selection, multi-agent coordination, streaming outputs, and observability through OpenTelemetry. With the SDK, you can compose agents that orchestrate calls to a variety of MCP tools and LLM providers (OpenAI, Anthropic, Mistral, Llama, Vertex, etc.), enabling patterns like parallel tool calls, branching logic, and sub-agent composition.

The MCP integration allows your agents to invoke tools exposed by MCP servers, have the LLM reason about the results, and produce structured or conversational outputs such as summaries or follow-up questions. Typical usage involves initializing an LLM, wiring up MCP endpoints, and then running an agent flow that automatically delegates tasks to one or more MCP tools based on the prompt.

To use it, install the package, configure an MCP server endpoint, and then create an agent that combines LLM providers with MCP tool access. Examples cover both single-agent workflows (one LLM + multiple tools) and multi-agent workflows (a coordinator delegating to specialized agents). The SDK exposes helper functions like agent, mcp, and llmOpenAI to streamline common patterns, including automatic tool selection, streaming results, and conversation-style summaries.
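To make the automatic-tool-selection pattern concrete, here is a minimal, self-contained sketch. It does not use the SDK's real `agent`/`mcp`/`llmOpenAI` helpers (their signatures are not documented here); the interfaces and the keyword-based selector are illustrative stand-ins for the LLM's tool-choice step.

```typescript
// A tool as an MCP server might expose it: a name, a description the
// LLM can reason over, and a handler that returns a result.
interface McpTool {
  name: string;
  description: string;
  run: (input: string) => Promise<string>;
}

const tools: McpTool[] = [
  {
    name: "get_weather",
    description: "Look up current weather for a city",
    run: async (input) => `Sunny (requested: ${input})`,
  },
  {
    name: "search_docs",
    description: "Search internal documentation",
    run: async (input) => `Top result for "${input}"`,
  },
];

// Stand-in for the LLM's tool-choice step: a real agent would send the
// tool descriptions to the model and parse its tool-call response.
// Here we just match significant description words against the prompt.
function selectTool(prompt: string): McpTool {
  const lower = prompt.toLowerCase();
  const match = tools.find((t) =>
    t.description
      .toLowerCase()
      .split(" ")
      .some((word) => word.length > 4 && lower.includes(word))
  );
  return match ?? tools[0];
}

async function runAgent(prompt: string): Promise<string> {
  const tool = selectTool(prompt);
  const result = await tool.run(prompt);
  // A real agent would hand `result` back to the LLM to produce a
  // summary or follow-up question; we just tag it with the tool name.
  return `[${tool.name}] ${result}`;
}
```

A multi-agent workflow follows the same shape, with a coordinator agent choosing among sub-agents instead of tools.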

How to install

Prerequisites:

  • Node.js (14+ recommended) and npm or pnpm
  • Access to a network to install dependencies from npm

Install the Volcano Agent SDK package:

npm install @volcano.dev/agent

If you plan to run a standalone MCP server via npx (as configured above), ensure you have npm and npx available. Then you can start a server wrapper that exposes MCP endpoints and uses the Volcano Agent SDK under the hood. For local development, you may also clone the repository and build from source if you need to modify the library.

Configure environment variables for your LLM providers as needed (example keys are shown in the install command above).
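For example, a local shell setup might look like the following. Key names are taken from the install command above; the values are placeholders you substitute with your real credentials.

```shell
# Placeholder values -- only set the keys for the providers you use.
export OPENAI_API_KEY="sk-..."        # if using the OpenAI provider
export ANTHROPIC_API_KEY="sk-ant-..." # if using the Anthropic provider
export MISTRAL_API_KEY="..."          # if using the Mistral provider
export VP_API_KEY="..."               # if using Vertex
export LOGGING_LEVEL="info"          # optional: e.g., info or debug
```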

Additional notes

Tips and common issues:

  • Ensure your LLM provider API keys are valid and have sufficient quotas for your workloads.
  • When using multiple MCP tools, consider enabling OpenTelemetry or your preferred observability backend to trace calls and latency.
  • If you encounter type errors in TypeScript, ensure your tsconfig includes appropriate lib/dom/settings for your environment.
  • The SDK supports streaming; enable streaming in your LLM config to receive tokens in real-time and update UIs accordingly.
  • For production deployments, monitor retries and timeouts to avoid cascading failures; tune them via your provider configuration and MCP tool options.
  • If you’re integrating custom MCP tools, ensure they expose a consistent command interface and return structured results that the SDK can interpret.
