NornicDB
NornicDB is a high-performance graph + vector database built for AI agents and knowledge systems. It speaks Neo4j's protocols (Bolt + Cypher) and Qdrant's gRPC API, so existing clients can connect with zero code changes, while adding intelligent features such as a GraphQL endpoint, air-gapped embeddings, and GPU-accelerated search.
To register NornicDB as an MCP server in Claude Code:
claude mcp add --transport stdio orneryd-nornicdb docker run -i timothyswt/nornicdb-arm64-metal-bge:latest
How to use
NornicDB is a Neo4j-compatible graph database with AI-native capabilities, including vector search and memory that decays and evolves over time. It speaks Bolt and Cypher, so existing Neo4j drivers and applications can connect with zero code changes while benefiting from GPU-accelerated execution paths and integrated vector functionality. In practice, you run NornicDB in Docker (or another supported deployment method), connect with your favorite Neo4j driver, and issue Cypher queries to traverse and manipulate your knowledge graph, with built-in memory tiers and canonical graph ledger features available for auditing and versioning.
To interact with the database, use Bolt-based drivers (Python, JavaScript, Java, Go, .NET, etc.) and connect to bolt://localhost:7687 (or the host/port you expose). Cypher remains the primary query language, with added capabilities for vector indexing, memory decay, and AI-aware reasoning embedded into the engine. If you are deploying via Docker, ensure ports 7474 (UI) and 7687 (Bolt) are exposed and that the data directory is persisted to a Docker volume to retain your graph and index data across restarts. For model and vector storage, you can run BYOM images or use the included models depending on the deployment profile you choose.
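As a sketch of the zero-code-change claim, the helper below connects through the official neo4j Python driver (install with pip install neo4j). The function name is made up for illustration, and the credentials are placeholders that vary by image:

```python
# Minimal sketch using the standard neo4j Python driver against NornicDB's
# Bolt endpoint. The URI matches the Docker example in this document; the
# auth tuple is a placeholder -- check your image's documentation.

def run_query(cypher, parameters=None,
              uri="bolt://localhost:7687",
              auth=("neo4j", "password")):
    """Open a Bolt session, execute one Cypher statement, return the records."""
    # Imported lazily so this module loads even before the driver is installed.
    from neo4j import GraphDatabase

    with GraphDatabase.driver(uri, auth=auth) as driver:
        with driver.session() as session:
            return list(session.run(cypher, parameters or {}))
```

Called as run_query("MATCH (n) RETURN count(n) AS total"), this behaves exactly as it would against Neo4j, since NornicDB speaks the same Bolt protocol.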
How to install
Prerequisites:
- Docker installed on your host (Docker Desktop on macOS/Windows or Docker Engine on Linux)
- Sufficient GPU drivers if you plan to use GPU-accelerated paths (as applicable to your host and image)
Install steps:
- Install Docker on your system.
- macOS: https://docs.docker.com/desktop/mac/
- Windows: https://docs.docker.com/desktop/windows/
- Linux: follow your distro's Docker installation guide
- Run NornicDB via Docker (example using the official ARM64 image for Apple Silicon):
# Apple Silicon (ARM64) example
docker run -d --name nornicdb \
-p 7474:7474 -p 7687:7687 \
-v nornicdb-data:/data \
timothyswt/nornicdb-arm64-metal-bge:latest
- Open the Neo4j UI at http://localhost:7474 and authenticate as configured by the image (default credentials may vary by image; consult the image docs).
- Connect a client and start issuing Cypher queries via Bolt (bolt://localhost:7687).
If you need a different deployment profile (BYOM, CPU-only, headless, or Vulkan path), refer to the Docker image quick reference and Docker images section in the docs for the appropriate image tags and run commands.
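Once connected, ordinary Cypher works as-is. A few illustrative statements (the labels, properties, and relationship type here are made-up examples, not a required schema):

```cypher
// Create two nodes and a relationship between them
CREATE (a:Agent {name: 'planner'})-[:DEPENDS_ON]->(t:Tool {name: 'search'});

// Traverse the graph and return matching pairs
MATCH (a:Agent)-[:DEPENDS_ON]->(t:Tool)
RETURN a.name, t.name;
```

Because the engine is Bolt/Cypher compatible, queries written for Neo4j should run unmodified.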
Additional notes
Tips and considerations:
- The repository supports multiple deployment profiles; choose the image that matches your hardware (e.g., Apple Silicon vs. NVIDIA GPUs) and needs (full image with models vs. headless API-only).
- For persistence, always mount a Docker volume to /data as shown in the example to preserve graph data and indexes across restarts.
- Neo4j-compatible features such as Bolt and Cypher allow seamless migration from Neo4j; for AI-driven workloads, also take advantage of NornicDB-specific capabilities like vector search and memory decay.
- If you encounter port conflicts, adjust host ports or container port mappings accordingly. For GPU-specific paths, ensure the host has the required GPU drivers and that the chosen image supports your GPU stack (Metal/CUDA/Vulkan).
- Environment variables for fine-grained control (e.g., memory budgets, decay rates, index configurations) can typically be supplied via -e VAR=value in the docker run command or via a composed YAML in production setups.
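For production setups, the same options can be expressed in a Compose file. A hypothetical sketch: the image, ports, and volume mount come from the run example above, while the commented-out environment variable is an illustrative placeholder, not a documented NornicDB setting.

```yaml
services:
  nornicdb:
    image: timothyswt/nornicdb-arm64-metal-bge:latest
    ports:
      - "7474:7474"   # browser UI
      - "7687:7687"   # Bolt
    volumes:
      - nornicdb-data:/data   # persist graph and index data across restarts
    # Placeholder only -- consult the image documentation for the
    # actual supported environment variables.
    # environment:
    #   EXAMPLE_MEMORY_BUDGET: "4g"

volumes:
  nornicdb-data:
```

Run with docker compose up -d to get the same setup as the docker run example, but in a reproducible, version-controllable form.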