omni-lpr
A multi-interface (REST and MCP) server for automatic license plate recognition 🚗
Quick start (register as a local MCP server in Claude Code):
claude mcp add --transport stdio habedi-omni-lpr python -m omni_lpr
How to use
Omni-LPR is a self-hostable, Python-based automatic license plate recognition (ALPR) server that exposes its capabilities through both a REST API and the Model Context Protocol (MCP). It can run as a standalone service or be integrated into AI agents and LLM workflows via MCP. The server supports several hardware acceleration backends: CPU via ONNX Runtime, OpenVINO for Intel CPUs, and CUDA for NVIDIA GPUs. Its tools let you list the available detector and OCR models and recognize license plates from raw image data or from image paths. Use the REST endpoints for quick testing, or connect an MCP-compatible client (such as an AI agent or LM Studio) to drive the same tools programmatically.
Key capabilities include:
- REST API access to the list_models, recognize_plate, and detect_and_recognize_plate tools, plus their path-based variants.
- MCP access to the same tools, enabling integration with agents and workflows that follow the MCP protocol.
- Multiple interfaces and pre-built Docker images for easy deployment.
- Asynchronous, high-performance I/O suitable for concurrent requests.
To use the MCP tools, connect an MCP-compatible client (such as MCP Inspector or LM Studio) to the MCP endpoint at http://127.0.0.1:8000/mcp/ and browse or invoke the available tools (e.g., list_models, recognize_plate_from_path, detect_and_recognize_plate_from_path). For quick testing over plain HTTP, call the corresponding REST endpoints under /api/v1/tools.
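As a sketch of calling a tool over the REST interface, the snippet below base64-encodes an image and posts it to the recognize_plate endpoint. The exact endpoint path and the "image" field name are assumptions inferred from the tool names above; check the server's API docs for the real request schema.

```python
import base64
import json
from urllib import request

# Endpoint path assumed from the /api/v1/tools prefix mentioned above.
API_URL = "http://127.0.0.1:8000/api/v1/tools/recognize_plate"


def build_payload(image_path: str) -> dict:
    """Read an image file and wrap its bytes as a base64 JSON payload.

    The "image" field name is an assumption, not a documented contract.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {"image": encoded}


def recognize_plate(image_path: str) -> dict:
    """POST the payload to the recognize_plate tool and return the parsed reply."""
    body = json.dumps(build_payload(image_path)).encode("utf-8")
    req = request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server running locally, `recognize_plate("car.jpg")` would return the tool's JSON response.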
How to install
Prerequisites:
- Python 3.10 or newer
- pip (Python package manager)
- Optional: Docker if you prefer containerized deployment
Install from PyPI and run:
# Install the Omni-LPR package
pip install omni-lpr
# Start the Omni-LPR server (MCP and REST enabled by default)
omni-lpr
By default, the server listens on http://127.0.0.1:8000. You can verify it's running with:
curl http://127.0.0.1:8000/api/health
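If you start the server from a script, a small readiness poll against the health endpoint avoids racing the first request. This is a generic sketch; it only assumes the /api/health URL shown above returns HTTP 200 once the server is up.

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str = "http://127.0.0.1:8000/api/health",
                    timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll the health endpoint until it answers 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry until the deadline
        time.sleep(interval)
    return False
```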
If you prefer containerized deployment, pull the pre-built Docker image and run it:
# Example (Docker)
docker run --rm -p 8000:8000 ghcr.io/habedi/omni-lpr:<tag>
For LM Studio integration or MCP-based workflows, ensure your client points to the MCP endpoint at http://127.0.0.1:8000/mcp/.
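For reference, an MCP client configuration entry might look like the following. The exact file name and schema vary by client (LM Studio, Claude Code, etc.), so treat the key names here as assumptions; only the URL comes from this document.

```json
{
  "mcpServers": {
    "omni-lpr-local": {
      "url": "http://127.0.0.1:8000/mcp/"
    }
  }
}
```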
Additional notes
- Omni-LPR exposes both a REST API and an MCP interface. The MCP endpoint is available at /mcp/ and can be explored with MCP Inspector or LM Studio.
- The default HTTP port is 8000; you can adapt host/port configuration as needed when running in different environments.
- The project is in active development; expect API changes and occasional breaking changes in new releases. If you depend on a stable contract, pin a specific version and test after upgrades.
- Hardware acceleration options (ONNX CPU, OpenVINO, CUDA) may require additional dependencies or runtime libraries on the host to leverage the respective backends.
- When starting via MCP, you can give your server instance a name (e.g., omni-lpr-local) and reference that name in your MCP client configuration.
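The backend-selection idea behind the acceleration options can be sketched using ONNX Runtime's standard execution-provider names. How omni-lpr actually chooses a backend may differ; the preference order below is an illustration, not the project's logic.

```python
# Preference order: CUDA first, then OpenVINO, then plain CPU.
PREFERRED = [
    "CUDAExecutionProvider",
    "OpenVINOExecutionProvider",
    "CPUExecutionProvider",
]


def pick_provider(available: list[str]) -> str:
    """Return the first preferred ONNX Runtime provider the host supports."""
    for name in PREFERRED:
        if name in available:
            return name
    # CPUExecutionProvider ships with every standard onnxruntime build.
    return "CPUExecutionProvider"


# With onnxruntime installed, you would query the host like this:
# import onnxruntime
# provider = pick_provider(onnxruntime.get_available_providers())
```

The CUDA and OpenVINO providers only appear when the matching runtime libraries (and the corresponding onnxruntime build) are installed on the host, which is why those backends may need extra dependencies.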