mcp-server-conceal
Privacy-focused MCP proxy that intelligently pseudo-anonymizes PII in real-time before data reaches external AI providers, maintaining semantic relationships for accurate analysis
claude mcp add --transport stdio gbrigandi-mcp-server-conceal mcp-server-conceal \
  --target-command python3 \
  --target-args database-server.py \
  --host localhost \
  --config /path/to/mcp-server-conceal.toml \
  --env DATABASE_URL="postgresql://localhost/mydb"
How to use
MCP Conceal is an MCP proxy that sits between your MCP client and your MCP server to perform pseudo-anonymization of PII before it reaches external AI providers. It detects sensitive data using a configurable mix of regex patterns and optional AI-based analysis (via an Ollama LLM) and replaces real identifiers with consistent fake data while preserving structure and relationships. The proxy keeps mappings in a local database, so the same real data maps to the same fake data across requests.

To use it, run the conceal proxy as a front for your MCP server by pointing the proxy at your target server process and configuring the target command and arguments. You can customize the detection approach (regex, LLM, or a hybrid), tailor the fake data generation (locale, seed, and consistency), and supply prompts to guide LLM-based detection.

For integration, you typically start mcp-server-conceal with a target command that runs your MCP server logic (for example a Python script) and pass a config file that defines how detection should work and how LLMs should be invoked.
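The consistency property can be illustrated with a minimal shell sketch. This is not the tool's actual algorithm (mcp-server-conceal generates realistic fake data and persists mappings in a database); it only shows the core idea that repeated inputs must map to the same pseudonym so relationships survive anonymization:

```shell
# Sketch only: derive a stable fake token from each real value.
# The real proxy generates realistic fake data and stores the
# real-to-fake mapping in a local database instead of hashing.
pseudo() {
  printf 'user-%s\n' "$(printf '%s' "$1" | sha256sum | cut -c1-8)"
}

pseudo "alice@example.com"   # some stable token
pseudo "alice@example.com"   # identical token: mapping is consistent
pseudo "bob@example.com"     # different identity, different token
```

Because the mapping is deterministic, an AI provider can still correlate "the same user appears in both records" without ever seeing the real identifier.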
How to install
Prerequisites:
- Rust toolchain installed (Rustup recommended): https://rustup.rs/
- Optional: Ollama if you plan to use LLM-based detection (for local LLM deployment): https://ollama.ai/
Install from source (recommended if you need to customize):
- git clone https://github.com/gbrigandi/mcp-server-conceal
- cd mcp-server-conceal
- cargo build --release
Binary location after build:
- target/release/mcp-server-conceal
Alternative: Download pre-built binaries from the Releases page on GitHub, make them executable, and move them to a directory in your PATH. For example:
- chmod +x mcp-server-conceal-linux-amd64
- sudo mv mcp-server-conceal-linux-amd64 /usr/local/bin/mcp-server-conceal
Usage example to run the proxy:
- mcp-server-conceal \
    --target-command python3 \
    --target-args "my-mcp-server.py" \
    --config mcp-server-conceal.toml
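The file passed via --config is TOML. The sketch below is illustrative only; apart from the regex_llm mode, database_path, and llm_cache options mentioned on this page, the key and section names are assumptions, so check the project README for the actual schema:

```toml
# Hypothetical mcp-server-conceal.toml sketch -- verify key names
# against the project documentation before use.

[detection]
mode = "regex_llm"   # regex, llm, or the hybrid regex_llm

[llm]
endpoint = "http://localhost:11434"            # Ollama's default port
model = "llama3.2"                             # any locally pulled model
llm_cache = "/var/lib/mcp-conceal/llm_cache"   # absolute path in production

[faker]
locale = "en_US"
seed = 42            # fixed seed for reproducible fake data

[mapping]
database_path = "/var/lib/mcp-conceal/mappings.db"  # absolute path in production
```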
Additional notes
Tips and common considerations:
- Keep the mappings database secure with restrictive file permissions, since it stores the real-to-fake data mappings.
- If using LLM-based detection, ensure Ollama (or your chosen local LLM runner) is reachable and properly configured in the endpoint settings.
- The detection mode and thresholds affect latency and accuracy; start with regex_llm for a balance, then tune as needed.
- Use absolute paths for database_path and llm_cache in production to avoid path issues.
- When integrating with your MCP server, you can adjust target-command/target-args to point to your server process and pass environment variables as needed (e.g., database URLs, API keys).
- Enable verbose/debug logs during troubleshooting by setting RUST_LOG=debug.
- Regularly back up the mappings database to prevent data loss during migrations or crashes.
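A simple way to follow the backup tip is a date-stamped copy of the mappings database before upgrades or migrations. The paths below are assumptions; substitute your configured database_path:

```shell
# Date-stamped backup of the mappings database (paths are examples).
DB="mappings.db"
BACKUP_DIR="backups"
touch "$DB"            # stand-in for the real database in this sketch
mkdir -p "$BACKUP_DIR"
cp "$DB" "$BACKUP_DIR/mappings.db.$(date +%F)"

# For troubleshooting, run the proxy with verbose logging, e.g.:
#   RUST_LOG=debug mcp-server-conceal --config mcp-server-conceal.toml ...
```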
Related MCP Servers
hyper
📦️ A fast, secure MCP server that extends its capabilities through WebAssembly plugins.
mcp-wazuh
MCP Server for Wazuh SIEM
ollama
An MCP Server for Ollama
mcp-probe
A Model Context Protocol (MCP) client library and debugging toolkit in Rust. This foundation provides both a production-ready SDK for building MCP integrations and the core architecture for an interactive debugger.
cursor-rust-tools
An MCP server that allows the LLM in Cursor to access Rust Analyzer, crate docs, and Cargo commands.
LLaMa-Streamlit
AI assistant built with Streamlit, NVIDIA NIM (LLaMa 3.3:70B) / Ollama, and the Model Context Protocol (MCP).