aeron-cache
A KV store built with Aeron, SBE, and Agrona. RAFT-clustered and fast by default. HTTP, WS & SSE API with JSON payloads. Next.js UI and deployable to Kubernetes.
claude mcp add --transport stdio bhf-aeron-cache docker run -i bhf/aeron-cache
How to use
Aeron Cache is a clustered key-value store built on Aeron, Agrona and SBE with a RAFT-based consensus layer. It exposes REST over HTTP, as well as WebSocket and Server-Sent Events interfaces for real-time updates, and supports multi-cache subscriptions. The project ships with a UI (Next.js) and exposes near-cache capabilities to accelerate reads. To use it, run the MCP server (via Docker, Docker Compose, or Kubernetes) and then interact with the provided HTTP, WS, or SSE endpoints to store and retrieve values, subscribe to cache updates, or explore cluster status. The MCP layer (cache-mcp) provides Swagger-based AutoMCP interfaces for programmatic access to the cache-management features.
Key capabilities include:
- Clustered and single-node operation with RAFT consensus
- Near caching for faster reads and reduced latency
- Embedded clients in multiple languages and a Rust-based CLI
- HTTP, WebSocket, and SSE interfaces for real-time and REST access
- Observability via Prometheus, cAdvisor, Jaeger/OTEL, and tracing
- UI and APIs to manage and monitor caches, subscriptions, and cluster health
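The near-caching capability above follows a common pattern: recently read entries are kept in the client process and refreshed when update events arrive, so repeated reads avoid a network hop. A minimal sketch of that pattern (the class and method names here are illustrative, not the project's actual client API):

```python
class NearCache:
    """Client-side read-through cache, invalidated by update events (illustrative sketch)."""

    def __init__(self, fetch):
        self._fetch = fetch      # function that reads from the remote cluster
        self._local = {}         # near-cache storage
        self.remote_reads = 0    # counter to show the hops saved

    def get(self, key):
        if key not in self._local:          # miss: go to the cluster once
            self.remote_reads += 1
            self._local[key] = self._fetch(key)
        return self._local[key]             # hit: served locally

    def on_update(self, key, value):
        # Called from a WS/SSE subscription: keeps the near cache coherent.
        self._local[key] = value


backing = {"greeting": "hello"}   # stand-in for the remote cluster
cache = NearCache(backing.get)
cache.get("greeting")             # first read hits the "cluster"
cache.get("greeting")             # second read is served locally
cache.on_update("greeting", "hej")  # update event refreshes the local copy
```

The coherence of such a cache depends entirely on the update feed; if the WS/SSE connection drops, stale reads are possible until it reconnects.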
To use the tools, connect to the HTTP API endpoints for CRUD operations on cache entries, use the WS endpoint to receive real-time updates for subscribed caches, and leverage the SSE feed for scalable event streams. If you need multi-cache updates, subscribe to multiple cache IDs via the provided endpoints in the documentation or UI.
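As a concrete illustration, a thin client might assemble the HTTP and WS requests like this. The exact routes below (`/cache/{id}/{key}`, `/subscribe?caches=...`) are assumptions modeled on the base URLs in the install section, so check the repository's API docs or UI for the real paths:

```python
import json

BASE_HTTP = "http://localhost:7071/api"
BASE_WS = "ws://localhost:7071/api/ws"


def put_request(cache_id, key, value):
    # Hypothetical CRUD route: PUT a JSON payload for one cache entry.
    url = f"{BASE_HTTP}/cache/{cache_id}/{key}"
    body = json.dumps({"value": value})
    return url, body


def subscribe_url(cache_ids):
    # Hypothetical multi-cache subscription: one WS connection, many cache IDs.
    return f"{BASE_WS}/subscribe?caches={','.join(cache_ids)}"


url, body = put_request("prices", "EURUSD", 1.0842)
ws = subscribe_url(["prices", "orders"])
```

The same URL-building logic applies to the SSE feed; only the transport differs.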
How to install
Prerequisites:
- Docker and Docker Compose installed on your machine
- Git and Java build tool if you want to build from source (optional)
- Access to the project repository
Installation steps:
- Clone the repository:
  git clone https://github.com/bhf/aeron-cache
  cd aeron-cache/
- Build the project (optional if using prebuilt images):
  ./gradlew build
  Wait for the build to complete; this produces the cache-cluster and HTTP/WS/SSE components.
- Run with Docker (recommended for MCP):
  docker compose build
  docker compose up
  This starts the UI, HTTP, WS, and SSE interfaces and the cache cluster in a containerized environment.
- Alternative: run in Kubernetes (if you have manifests/Helm). Follow the repository's k8s/Helm instructions (make install-all) to deploy to your cluster.
- Access the UI and APIs:
  - UI: http://localhost:3000
  - HTTP API: http://localhost:7071/api/...
  - WS: ws://localhost:7071/api/ws/...
Prerequisites recap: ensure Docker is running, that ports 3000 and 7071/7072 (for HTTP/WS/SSE) are free on your host, and that sufficient CPU and memory are allocated for the containerized services.
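Before starting the stack, you can confirm the required ports are free with a quick check like the following (port numbers taken from the list above):

```python
import socket


def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0


for port in (3000, 7071, 7072):
    state = "in use" if port_in_use(port) else "free"
    print(f"port {port}: {state}")
```

A port reported "in use" before you run `docker compose up` is a sign another service will cause the binding errors mentioned in the notes below.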
Additional notes
Tips and common issues:
- If you see port binding errors, ensure no other services occupy 3000, 7071, or 7072 on your host.
- For multi-cache subscriptions, use the provided endpoints to subscribe to multiple cache IDs via WS or SSE; ensure your client handles reconnects gracefully.
- The UI is Next.js-based and may require a build step if you customize the frontend. Use the repository’s Makefile for Kubernetes deployment automation.
- If you iterate on the cache logic, you can leverage the embedded clients (Java, Rust, TypeScript, Python) to test against the running cluster.
- For observability, enable Jaeger/OTEL tracing in your environment to trace requests across services.
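For the reconnect handling mentioned above, a common approach for WS/SSE clients is capped exponential backoff between attempts. A minimal sketch of the schedule (the function is illustrative; jitter is omitted for clarity):

```python
def backoff_delays(attempts, base=1.0, cap=30.0):
    """Exponential backoff schedule for WS/SSE reconnects, capped at `cap` seconds."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]


# A client would sleep for each delay between reconnect attempts:
#   for delay in backoff_delays(6):
#       try to reconnect; on success, break; otherwise time.sleep(delay)
print(backoff_delays(6))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Adding random jitter to each delay avoids thundering-herd reconnects when many subscribers drop at once.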
Related MCP Servers
kubefwd
Bulk port forwarding Kubernetes services for local development.
k8m
A lightweight, cross-platform mini Kubernetes AI dashboard that combines large language models, agents, and MCP (with configurable operation permissions). It integrates multi-cluster management, intelligent analysis, and real-time anomaly detection, supports multiple architectures, deploys as a single binary, and streamlines cluster management and operations.
mcpcan
MCPCAN is a centralized management platform for MCP services. It deploys each MCP service in its own container, and supports container monitoring and MCP service token verification to reduce security risks and enable rapid deployment. It supports the SSE, STDIO, and streamable HTTP access protocols.
k8s
K8s-mcp-server is a Model Context Protocol (MCP) server that enables AI assistants like Claude to securely execute Kubernetes commands. It provides a bridge between language models and essential Kubernetes CLI tools including kubectl, helm, istioctl, and argocd, allowing AI systems to assist with cluster management, troubleshooting, and deployments.
kom
kom is a tool for Kubernetes operations: an SDK-level wrapper around kubectl and client-go that can also run as an MCP server for managing Kubernetes. It provides functions for managing Kubernetes resources, including create, update, delete, and get, and even supports querying resources with SQL. The project covers many Kubernetes resource types and handles custom resource definitions (CRDs). With kom you can easily perform CRUD operations on resources, fetch logs, and work with files inside pods.
k8s-gpu
NVIDIA GPU hardware introspection for Kubernetes clusters via MCP