
velociraptor

Repository hosting the MCP server for Velociraptor.

Installation
Run the following command in your terminal to add the MCP server to Claude Code. The quoted --env values below are descriptions of each variable, not real values; replace them with your own settings.
claude mcp add --transport stdio socfortress-velociraptor-mcp-server python -m velociraptor_mcp_server \
  --env LOG_LEVEL="Logging level (e.g., INFO, DEBUG)" \
  --env MCP_SERVER_HOST="Server host to bind MCP server" \
  --env MCP_SERVER_PORT="Server port to bind MCP server" \
  --env VELOCIRAPTOR_API_KEY="Path to Velociraptor API config file (api.config.yaml)" \
  --env VELOCIRAPTOR_TIMEOUT="Request timeout in seconds" \
  --env VELOCIRAPTOR_SSL_VERIFY="SSL verification (true|false)" \
  --env VELOCIRAPTOR_DISABLED_TOOLS="Comma-separated list of disabled tools"
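For reference, a filled-in environment might look like the following. All values here are illustrative for a local, non-production setup; substitute your own paths, ports, and settings.

```shell
# Illustrative environment for a local test deployment.
export LOG_LEVEL="INFO"
export MCP_SERVER_HOST="127.0.0.1"
export MCP_SERVER_PORT="8080"
export VELOCIRAPTOR_API_KEY="$HOME/velociraptor/api.config.yaml"
export VELOCIRAPTOR_TIMEOUT="30"
export VELOCIRAPTOR_SSL_VERIFY="true"
export VELOCIRAPTOR_DISABLED_TOOLS=""   # leave empty to expose all tools
```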

How to use

Velociraptor MCP Server exposes a set of MCP tools that bridge Velociraptor DFIR data with Large Language Models. It authenticates to Velociraptor using a provided api.config.yaml, talks to Velociraptor over gRPC, and exposes tools such as AuthenticateTool to verify the connection, GetAgentInfo to retrieve detailed client information by hostname, and RunVQLQueryTool to execute Velociraptor Query Language queries across connected clients. You can integrate this MCP server with LangChain or other automation frameworks to dynamically discover available tools, invoke Velociraptor data collection, and retrieve artifact results through a unified MCP interface. Deployers can adjust tool filtering via environment variables or CLI options to tailor the set of capabilities exposed to LLM agents.
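As one possible integration path, the server can be launched over stdio from LangChain via the langchain-mcp-adapters package. This is a sketch, not the project's documented setup: the config-file path and env values are placeholders, and it assumes langchain-mcp-adapters is installed.

```python
# Sketch: wiring the Velociraptor MCP server into LangChain over stdio.
# The "velociraptor" key, env values, and file path are assumptions.
server_config = {
    "velociraptor": {
        "command": "python",
        "args": ["-m", "velociraptor_mcp_server"],
        "transport": "stdio",
        "env": {
            "VELOCIRAPTOR_API_KEY": "/etc/velociraptor/api.config.yaml",
            "VELOCIRAPTOR_SSL_VERIFY": "true",
        },
    }
}

async def load_velociraptor_tools():
    # Deferred import so this sketch parses even without the package.
    from langchain_mcp_adapters.client import MultiServerMCPClient

    client = MultiServerMCPClient(server_config)
    # Returns LangChain tool wrappers for AuthenticateTool,
    # GetAgentInfo, RunVQLQueryTool, etc., as exposed by the server.
    return await client.get_tools()
```

The returned tools can then be handed to any LangChain agent, which discovers their schemas dynamically through the MCP interface.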

How to install

Prerequisites:

  • Python 3.11 or higher
  • Git
  • Access to a Velociraptor server and a Velociraptor api.config.yaml configuration file

Installation steps:

  1. Clone the repository:

     git clone https://github.com/socfortress/velociraptor-mcp-server.git
     cd velociraptor-mcp-server

  2. Set up a Python virtual environment:

     python -m venv .venv
     source .venv/bin/activate   # On Windows: .venv\Scripts\activate

  3. Install in development mode (recommended for contributing and testing):

     pip install -e ".[dev]"

  4. Optional: install pre-commit hooks:

     pre-commit install

  5. Prepare environment variables (see the Configuration section), then verify the CLI entry point:

     velociraptor-mcp-server --help

  6. Alternatively, run the server directly as a module (example):

     python -m velociraptor_mcp_server --host 0.0.0.0 --port 8080 --log-level DEBUG

Additional notes

Tips and common considerations:

  • Ensure VELOCIRAPTOR_API_KEY points to a valid Velociraptor api.config.yaml file with appropriate permissions.
  • Keep VELOCIRAPTOR_SSL_VERIFY set to true in production; set it to false only when testing against servers with self-signed or untrusted certificates.
  • Use MCP_SERVER_HOST and MCP_SERVER_PORT to expose the MCP API on the desired interface and port.
  • You can filter out specific Velociraptor tools by setting VELOCIRAPTOR_DISABLED_TOOLS to a comma-separated list, e.g., CollectArtifactTool,RunVQLQueryTool.
  • When running behind a reverse proxy or load balancer, enable HTTP/2 support and terminate TLS properly in the proxy configuration.
  • If you encounter TLS or authentication errors, verify that the api.config.yaml being referenced is accessible by the process and that network reachability to the Velociraptor server is functioning.
  • For integration with LangChain, expose the MCP endpoints in a manner compatible with the MultiServerMCPClient configuration shown in the examples, and address CORS or similar restrictions where applicable.
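The VELOCIRAPTOR_DISABLED_TOOLS behavior described above can be sketched as a simple filter over the server's tool registry. The helper name and tool list here are illustrative assumptions, not the server's actual code.

```python
import os

# Hypothetical registry of tool names mentioned in this README.
ALL_TOOLS = ["AuthenticateTool", "GetAgentInfo", "RunVQLQueryTool"]

def enabled_tools(env=os.environ):
    """Return tool names not listed in VELOCIRAPTOR_DISABLED_TOOLS.

    The variable holds a comma-separated list; whitespace around
    names is ignored, and unknown names are silently skipped.
    """
    disabled = {
        name.strip()
        for name in env.get("VELOCIRAPTOR_DISABLED_TOOLS", "").split(",")
        if name.strip()
    }
    return [tool for tool in ALL_TOOLS if tool not in disabled]
```

For example, with VELOCIRAPTOR_DISABLED_TOOLS="RunVQLQueryTool", only AuthenticateTool and GetAgentInfo would remain exposed to LLM agents.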
