spring-ai-playground
A self-hosted web UI that simplifies AI experimentation and testing for Java developers. It provides playgrounds for all major vector databases and MCP tools, supports intuitive interaction with LLMs, and enables rapid development and testing of RAG workflows, MCP integrations, and unified chat experiences.
claude mcp add --transport stdio jm-lab-spring-ai-playground -- \
  docker run -d -p 8282:8282 --name spring-ai-playground \
    -e SPRING_AI_OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    -v spring-ai-playground:/home \
    --restart unless-stopped \
    jmlab/spring-ai-playground:latest
How to use
Spring AI Playground is a self-hosted web UI that provides an integrated environment for experimenting with large language models, vector databases, prompt engineering, and MCP (Model Context Protocol) integrations. It supports multiple model providers (including Ollama locally and major cloud providers) and includes tooling to test retrieval-augmented generation (RAG) workflows, manage model configurations, and evaluate MCP transports. The MCP playground area lets you explore conversations and data flows that involve external tools and context sharing, making it easier to prototype Spring AI-based applications with richer contextual awareness.
To use it with Docker as described, start the container and open the UI in your browser at the exposed port (default: http://localhost:8282). The UI exposes model selection (Ollama or other providers), embeddings, RAG workflows, and MCP-specific panels that show how context is managed and transported between components. You can configure which chat and embedding models are available, adjust default options, and experiment with different MCP transports to see how context is threaded through prompts and tool calls.
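After starting the container, a quick way to confirm the UI is reachable is to probe the endpoint from the host. This is a minimal sketch that assumes the default port mapping of 8282; it prints the HTTP status code (000 if nothing is listening yet):

```shell
# Probe the playground UI; prints the HTTP status code (000 if unreachable)
PLAYGROUND_URL="http://localhost:8282"
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$PLAYGROUND_URL" || true)
echo "HTTP status: ${STATUS:-000}"
```

A 200 means the UI is up; 000 usually means the container is still starting or the port mapping is wrong.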
How to install
Prerequisites:
- Docker installed and running
- Optional: Java 21+ if building locally from source
Option A: Run with Docker (recommended)
- Clone the repository (optional, for reference):
  git clone https://github.com/JM-Lab/spring-ai-playground.git
  cd spring-ai-playground
- Build the Docker image (as shown in Quick Start):
  ./mvnw spring-boot:build-image -Pproduction -DskipTests=true \
    -Dspring-boot.build-image.imageName=jmlab/spring-ai-playground:latest
- Run the container:
  docker run -d -p 8282:8282 --name spring-ai-playground \
    -e SPRING_AI_OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    -v spring-ai-playground:/home \
    --restart unless-stopped \
    jmlab/spring-ai-playground:latest
- Open http://localhost:8282 in your browser.
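Before running the container, it can help to confirm that nothing else is already bound to the default port. This is a hedged sketch using nc, which may not be present on every host:

```shell
# Report whether the playground's default port is already taken on this host
PORT=8282
if command -v nc >/dev/null 2>&1 && nc -z localhost "$PORT" 2>/dev/null; then
  echo "Port $PORT is already in use"
else
  echo "Port $PORT looks free"
fi
```

If the port is taken, change the left side of the -p mapping (e.g., -p 9090:8282) and open that port in your browser instead.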
Option B: Run locally from source (without Docker)
- Ensure Java 21+ is installed.
- Build the project: ./mvnw clean install -Pproduction -DskipTests=true
- Run the application locally: ./mvnw spring-boot:run
- Open http://localhost:8282 in your browser.
Additional notes
Environment variables and configuration options:
- SPRING_AI_OLLAMA_BASE_URL must point to your Ollama instance when using Ollama locally or in Docker networking scenarios. The recommended value for Docker runs is http://host.docker.internal:11434, but you may need to adjust for Linux hosts (e.g., using the host IP or --network=host).
- The MCP transport (STDIO) may require running outside of containers for reliable direct process-to-process communication. If testing MCP STDIO transport, consider running locally instead of in Docker.
- Data persists in a Docker volume (spring-ai-playground in the example); back up the volume if the data matters. When running locally, data is stored under your project directory unless configured otherwise.
- If you modify model sources (e.g., switching from Ollama to OpenAI), update dependencies in pom.xml accordingly as described in the OpenAI switching notes in the README.
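The host.docker.internal note above can be scripted. This sketch picks an Ollama base URL depending on the host OS; the Linux branch is an assumption-laden fallback that guesses the first address from hostname -I (which may not be the address your containers can reach) and otherwise defaults to Docker's standard bridge gateway:

```shell
# Choose an Ollama base URL the container can reach (default Ollama port 11434)
OLLAMA_PORT=11434
if [ "$(uname -s)" = "Linux" ]; then
  # host.docker.internal is not available by default on Linux; fall back to a host IP
  # (172.17.0.1 is Docker's default bridge gateway, an assumption that may not hold)
  HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
  OLLAMA_BASE_URL="http://${HOST_IP:-172.17.0.1}:${OLLAMA_PORT}"
else
  OLLAMA_BASE_URL="http://host.docker.internal:${OLLAMA_PORT}"
fi
echo "$OLLAMA_BASE_URL"
```

Pass the result via -e SPRING_AI_OLLAMA_BASE_URL="$OLLAMA_BASE_URL", or sidestep the issue on Linux with --network=host and http://localhost:11434.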
Related MCP Servers
SearChat
Search + Chat = SearChat (AI chat with search). Supports OpenAI/Anthropic/VertexAI/Gemini, DeepResearch, the SearXNG meta-search engine, and one-click Docker deployment.
daan
✨Lightweight LLM Client with MCP 🔌 & Characters 👤
AutoDocs
We handle what engineers and IDEs won't: generating and maintaining technical documentation for your codebase, while also providing search with dependency-aware context to help your AI tools understand your codebase and its conventions.
spring-ai
From Java Dev to AI Engineer: Spring AI Fast Track
langchain-client
This Streamlit application provides a user interface for connecting to MCP (Model Context Protocol) servers and interacting with them using different LLM providers (OpenAI, Anthropic, Google, Ollama).
mcp-templates
A flexible platform that provides Docker & Kubernetes backends, a lightweight CLI (mcpt), and client utilities for seamless MCP integration. Spin up servers from templates, route requests through a single endpoint with load balancing, and support both deployed (HTTP) and local (stdio) transports — all with sensible defaults and YAML-based configs.