Get the FREE Ultimate OpenClaw Setup Guide →

AgentNull

AgentNull: AI System Security Threat Catalog + Proof-of-Concepts. Collection of PoCs for using Agents, MCP, and RAG in bad ways.

Installation
Run this command in your terminal to add the MCP server to Claude Code.
Run in terminal:
Command
claude mcp add --transport stdio thirdkeyai-agentnull node path/to/server.js \
  --env AGENTNULL_ENV="placeholder for future configuration"

How to use

AgentNull is a threat catalog and proof-of-concept collection focused on AI system security, attack vectors against MCP/agent ecosystems, RAG pipelines, vector stores, and related tooling. The repository organizes PoCs under the pocs/ directory, with each attack vector containing its own README, code, and sample inputs/outputs. To explore or demonstrate the MCP-related techniques, navigate into a specific PoC folder (for example pocs/AdvancedToolPoisoning) and follow its instructions to replicate the attack scenario. The documentation also provides guidance for running local LLMs with Ollama to power PoCs without incurring API costs, including commands to pull models, configure Ollama, and execute PoCs in simulation mode or with a real local model.

How to install

Prerequisites:

  • Git
  • Node.js (LTS) or Python 3.x if you plan to run Python-based PoCs or utilities
  • Optional: Docker for containerized runs

Installation steps:

  1. Clone the repository: git clone https://github.com/thirdkeyai-agentnull.git cd thirdkeyai-agentnull

  2. If you intend to run a Node.js server (as indicated by the mcp_config):

    • Ensure Node.js is installed
    • Install dependencies (if a package.json exists in the server path): npm install
    • Start the server (adjust the path to your actual server file): npm run start
  3. If you plan to run Python-based PoCs or utilities:

    • Ensure Python 3.8+ is installed
    • Create a virtual environment and install requirements (if provided by PoCs): python3 -m venv venv source venv/bin/activate (Linux/macOS) or venv\Scripts\activate (Windows) pip install -r requirements.txt
  4. If using Ollama for local LLMs (as recommended in the README):

    • Install Ollama (instructions in the README): curl -fsSL https://ollama.ai/install.sh | sh
    • Pull a model: ollama pull gemma3 ollama pull deepseek-r1 ollama pull qwen3
  5. Run PoCs following the specific PoC README instructions located under pocs/.

Note: This repository is a catalog of attack vectors intended for educational and security research purposes. Use in controlled environments and ensure you have proper authorization before attempting any PoCs against systems.

Additional notes

Tips and notes:

  • This repository focuses on red-team style attack vectors targeting AI systems, including MCPs, LangGraph, AutoGPT, RAG pipelines, vector databases, and embedding retrieval systems. The PoCs are educational and demonstrate potential misuse; do not deploy techniques against systems you do not own or are authorized to test.
  • For local testing with LLMs, Ollama is recommended to avoid API costs. Ensure your hardware meets the memory requirements for the models you pull (typical ranges are ~4GB RAM per model).
  • The MCP server entry in this repo is a placeholder guide for running a server and may require adjustments to paths and environment variables to fit your environment.
  • If you encounter configuration or environment variable issues, consult the individual PoC READMEs under pocs/ for per-PoC requirements and dependencies.

Related MCP Servers

Sponsor this space

Reach thousands of developers