
sui

MCP server from ProbonoBonobo/sui-mcp-server

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio probonobonobo-sui-mcp-server \
  --env GITHUB_TOKEN="your_github_token_here" \
  --env OPENAI_API_KEY="your_openai_api_key_here" \
  -- python main.py

How to use

This MCP server provides a Retrieval-Augmented Generation (RAG) workflow focused on the Sui Move ecosystem. It exposes a FastAPI-based MCP API through which an AI agent can download Move files from GitHub, index them into a FAISS vector store alongside related documents, query that index, and run a complete RAG pipeline that feeds the retrieved context to an LLM to generate informed responses. You can interact with the MCP API at the /mcp/action endpoint to retrieve documents or index new ones, and you can use the included command-line utilities for local exploration of the download, index, query, and RAG workflows.
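
For example, an agent-side script can POST to the /mcp/action endpoint once the server is running. The sketch below assumes the default http://localhost:8000 address and a simple action/parameters payload with a hypothetical "retrieve" action; check mcp_api.py for the actual request schema and action names.

# Minimal client sketch for the MCP API. The "action"/"parameters" field names
# and the "retrieve" action are assumptions -- see mcp_api.py for the real schema.
import requests

BASE_URL = "http://localhost:8000"  # default address used by main.py

def mcp_action(action: str, **parameters):
    """POST an action to /mcp/action and return the parsed JSON response."""
    response = requests.post(
        f"{BASE_URL}/mcp/action",
        json={"action": action, "parameters": parameters},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Hypothetical retrieval query against the FAISS index of Move code.
    print(mcp_action("retrieve", query="How are Sui shared objects defined in Move?", top_k=5))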

How to install

Prerequisites:

  • Python 3.8+ (tested with modern Python environments)
  • Git
  • Optional: pipx for isolated Python app installation

Install the MCP server project dependencies and run the server:

  1. Prepare your environment
  • Install Python and Git on your system
  • (Optional) Install and configure pipx for isolated installs
  2. Install and run the MCP server locally. Using pipx (recommended):
# If you want to install via pipx from the project directory (editable mode)
pipx install -e .

Or install dependencies and run directly:

# Install dependencies
pip install -r requirements.txt

# Run the server (from the project root)
python main.py
  3. Configure environment variables. Create a .env file or export the variables in your shell. Example:
# GitHub access for Move file extraction (higher rate limits)
export GITHUB_TOKEN=your_github_token_here

# OpenAI or other LLM API key for RAG integration
export OPENAI_API_KEY=your_openai_api_key_here
  4. Start the server
python main.py
# Server starts by default at http://localhost:8000
  5. Validate installation
  • Visit http://localhost:8000/docs for the FastAPI interactive docs (if enabled)
  • Use the MCP API endpoint at /mcp/action to perform retrieve/index operations.
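
If you prefer a scripted smoke test to opening the docs page, a short check like the one below is enough; it only assumes the default http://localhost:8000 address used above.

# Quick reachability check for the freshly started server (default address assumed).
import urllib.request

with urllib.request.urlopen("http://localhost:8000/docs", timeout=5) as resp:
    print("FastAPI docs reachable:", resp.status)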

Additional notes

  • The server uses FAISS for vector storage; ensure you have enough disk space for the index and documents.
  • If API keys are not configured, the RAG pipeline can simulate LLM responses, which is useful for local testing.
  • Environment variables like GITHUB_TOKEN and OPENAI_API_KEY are optional but recommended for full functionality and higher rate limits.
  • If you modify or extend the server (e.g., add new endpoints or custom processors), keep the mcp_server package structure aligned with the existing modules (mcp_api.py, index_move_files.py, rag_integration.py).
  • Common issues may include FAISS installation problems on certain platforms; ensure FAISS is installed correctly via requirements.txt or system-specific installation steps.
  • For debugging, check logs printed to the console and use the MCP CLI tools described in the README to test indexing and querying flows locally.
