MCP-Workspace
🚀 Beyond Filesystem - Complete AI Development Environment - One MCP server provides the full agent capability stack: web development, code execution, data processing, and image generation. No need for multiple tools; configure once. Full support for Dify, FastGPT, and Cherry Studio. File operations, Python/Node.js code execution, one-click web app deployment (with wildcard domain support), Excel processing, and image generation. Works out of the box.
claude mcp add --transport stdio answerlink-mcp-workspace-server uv --directory /path/to/mcp-filesystem run run_server.py /path/to/allowed/dir1 /path/to/allowed/dir2
How to use
MCP-Workspace is an all-in-one MCP server that provides a complete AI development workspace in a single service. It combines file system operations, code execution in safe sandboxes, web app deployment with domain handling, Excel/CSV data processing, and image generation capabilities (including Mermaid diagrams and visualizations) under one roof. The server is designed for multi-tenant use with session isolation.

It exposes tools such as fs_read, fs_write, fs_ops, fs_replace, and fs_search for file management, exec for Python/Node.js sandbox execution, preview_frontend for one-click web deployment with wildcard domain support, and generate_image for charts and visuals. It also supports optional knowledge-base search and web crawling through kb_search, kb_read, crawl_url, and web_search when configured.

Connections to AI platforms (like Dify, FastGPT, and Cherry Studio) can be established using SSE (server-sent events), enabling your agent to perform file operations, run code, deploy web apps, process data, and generate visuals seamlessly during conversations. The server enforces multi-tenancy and path safety to keep each user/session isolated within its virtual workspace. To use it, point your client or platform to the SSE endpoint (for example, http://your-server:8000/sse) and pass tenant headers such as X-User-ID and X-Chat-ID to ensure proper isolation.
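As a minimal client-side sketch, the helper below assembles the SSE URL and tenant headers described above. Only the header names (X-User-ID, X-Chat-ID) and the example endpoint come from this document; the function name and base URL are placeholders.

```python
def sse_request_params(base_url: str, user_id: str, chat_id: str):
    """Return the SSE endpoint URL and the tenant headers used for session isolation."""
    headers = {
        "Accept": "text/event-stream",  # standard SSE media type
        "X-User-ID": user_id,           # scopes the virtual workspace to a user
        "X-Chat-ID": chat_id,           # scopes it further to one conversation
    }
    return f"{base_url.rstrip('/')}/sse", headers

url, headers = sse_request_params("http://your-server:8000", "user123", "chat456")
print(url)  # http://your-server:8000/sse
```

Any HTTP client that supports streaming responses can then open the connection with these headers.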
How to install
Prerequisites: Docker (recommended), or a compatible Python 3.12 environment with the necessary system dependencies installed. If opting for Docker, you can use the provided docker-compose setup. If you prefer a native Python setup, ensure you have Python 3.12+ and install required Python packages from a requirements file.
Option A: Docker deployment (recommended)
- Ensure Docker and Docker Compose are installed on your machine.
- Clone the repository and navigate to the project directory:
git clone <repository-url>
cd <project-directory>
- Start the services in detached mode, building images if needed:
docker-compose up -d --build
- Monitor logs and verify the SSE endpoint is reachable:
docker-compose logs -f
- If dependencies change, rebuild and restart as needed:
git pull
docker-compose up -d --build
Option B: Native Python (uv) setup (if you must run without Docker)
Prerequisites: Python 3.12+, pip, and the system dependencies required by the server (e.g., for OCR or image processing).
- Create a virtual environment and install requirements:
python3.12 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
- Run the server directly with uv (install uv first if it is not already available), adjusting paths as needed:
uv --directory /path/to/mcp-filesystem run run_server.py /path/to/allowed/dir1 /path/to/allowed/dir2
- Ensure the server binds to the expected host/port and that the SSE endpoint is accessible.
Prerequisites recap: Docker is the easiest path and reduces environment mismatch. If you must run natively, ensure Python 3.12+ and all required system libraries are installed, and be prepared to manage dependencies and environment setup manually.
Additional notes
- The mcp-workspace server is designed for multi-tenant isolation. Always pass X-User-ID and X-Chat-ID headers from your client to ensure correct session scoping.
- For security, ensure allowed directories are explicitly configured and that path safety checks are enabled to prevent directory traversal attacks.
- The SSE-based integration is the recommended transport when connecting with AI platforms like Dify, FastGPT, or Cherry Studio. Use the provided SSE URL (e.g., http://your-server:8000/sse) and supply the tenant headers in requests.
- If you enable optional tools such as kb_search, kb_read, crawl_url, or web_search, configure their respective enabled flags (e.g., kb.enabled=true, web_crawl.enabled=true) in your environment or configuration to tighten security and control usage.
- The all-in-one design means you should monitor resource usage (CPU, memory, and I/O) per tenant to maintain performance and fairness across sessions.
- When deploying with wildcard domains, ensure your TLS termination and DNS are configured correctly to support per-session subdomains (e.g., user123_chat456.proxy.your-domain.com).
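The path-safety and per-session subdomain conventions in the notes above can be sketched as follows. This is an illustrative sketch only, not the server's actual implementation; the function names are hypothetical, and the subdomain scheme is inferred from the user123_chat456.proxy.your-domain.com example.

```python
from pathlib import Path

def is_within_allowed(candidate: str, allowed_root: str) -> bool:
    """Reject paths that escape the allowed root, defeating '../' traversal."""
    root = Path(allowed_root).resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents

def session_subdomain(user_id: str, chat_id: str,
                      base: str = "proxy.your-domain.com") -> str:
    """Build a per-session subdomain following the example scheme above."""
    return f"{user_id}_{chat_id}.{base}"

print(is_within_allowed("notes/app.py", "/srv/workspace"))      # True
print(is_within_allowed("../../etc/passwd", "/srv/workspace"))  # False
print(session_subdomain("user123", "chat456"))                  # user123_chat456.proxy.your-domain.com
```

Resolving the joined path before comparing it against the allowed root is what catches traversal attempts, since ".." segments are collapsed away before the containment check.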
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding coding agents through an end-to-end requirements-to-implementation-plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP