omni-nli

A multi-interface (REST and MCP) server for natural language inference

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio cogitatortech-omni-nli python -m omni_nli

How to use

Omni-NLI exposes natural language inference capabilities through both a traditional REST API and the MCP interface used by AI agents. The REST API accepts a JSON payload containing a premise and a hypothesis and returns a predicted label (entailment, contradiction, or neutral) along with a confidence score and model/backend information. The MCP interface gives agents a programmatic way to request inferences, retrieve results, and integrate NLI checks into automated workflows or reasoning pipelines.

The server is designed to be self-hostable, scalable, and configurable, with built-in caching to speed up repeated inferences and support for several backends, including Hugging Face models, Ollama, and OpenRouter. To get started, run the Omni-NLI server locally, then query either the REST endpoints or the MCP endpoints, depending on your integration needs.
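As a sketch of the REST side, a minimal Python client might look like the following. The endpoint path comes from the verification step below; the request and response field names (premise, hypothesis, label, confidence) are assumptions based on the description above, so check the server's API documentation for the authoritative schema.

```python
import json
from urllib import request


def evaluate_nli(premise: str, hypothesis: str,
                 base_url: str = "http://127.0.0.1:8000") -> dict:
    """POST a premise/hypothesis pair to the NLI endpoint and
    return the parsed JSON response."""
    payload = json.dumps({"premise": premise, "hypothesis": hypothesis})
    req = request.Request(
        f"{base_url}/api/v1/nli/evaluate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


def top_prediction(response: dict) -> tuple:
    """Pull the predicted label and confidence score out of a
    response dict of the assumed shape."""
    return response["label"], response["confidence"]
```

Parsing is kept separate from transport so the same helper works whether the response comes from the REST API or an MCP tool result.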

How to install

Prerequisites:

  • Python 3.10 or newer
  • pip (comes with Python)
  • Optional: access to HuggingFace models (internet access) or local model backends

Installation steps:

  1. Create a virtual environment (recommended):

    python -m venv venv
    source venv/bin/activate    # On Windows use: venv\Scripts\activate

  2. Install the Omni-NLI package (with optional HuggingFace extras; the quotes keep shells like zsh from expanding the brackets):

    pip install "omni-nli[huggingface]"

  3. Run the Omni-NLI server

    This uses the module entry point configured by the package.

    python -m omni_nli

  4. (Optional) Run tests or verify installation

    Example: once the server is running, send a request to the REST API at http://127.0.0.1:8000/api/v1/nli/evaluate (for instance with curl).

Notes:

  • If you want a different backend or additional features, install with the appropriate extras or consult the documentation for configuration options.

Additional notes

Tips and common considerations:

  • The MCP interface is intended for agent-based integrations; use the MCP endpoints to perform inferences as part of an automated reasoning workflow.
  • The server provides caching to improve throughput for repeated inferences; if needed, configure cache size and expiry via the options described in the documentation.
  • Backend models can range from public HuggingFace models to private/gated deployments; ensure proper authentication and access as required by your chosen backend.
  • If you run into port conflicts or need to expose the MCP REST endpoints, check the configuration documentation for host/port settings.
  • For production deployments, consider containerizing the service (Docker) and tuning resources based on your workload; refer to the official docs for deployment patterns.
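On the caching point above: independently of the server's built-in cache, a client can also memoize repeated premise/hypothesis pairs on its own side. This is an illustrative sketch only; it uses a placeholder inference function instead of a real HTTP call so it stays self-contained.

```python
from functools import lru_cache


@lru_cache(maxsize=1024)
def cached_nli(premise: str, hypothesis: str) -> str:
    """Return an NLI label for the pair, memoizing repeated calls.

    The body is a stand-in for a real request to the Omni-NLI
    server; a substring check simulates a deterministic result.
    """
    return "entailment" if hypothesis in premise else "neutral"


cached_nli("A dog runs in the park.", "A dog runs")  # computed once
cached_nli("A dog runs in the park.", "A dog runs")  # served from the cache
```

`cached_nli.cache_info()` reports hit/miss counts, which is a quick way to confirm repeated pairs are not re-evaluated.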
