
openinference

OpenTelemetry Instrumentation for AI Observability

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio arize-ai-openinference python -m openinference \
  --env OTEL_TRACES_EXPORTER="otlp" \
  --env OTEL_METRICS_EXPORTER="otlp" \
  --env OPENINFERENCE_LOG_LEVEL="info" \
  --env OTEL_EXPORTER_OTLP_ENDPOINT="URL of the OpenTelemetry collector (e.g., https://localhost:4317)"

How to use

OpenInference provides a collection of instrumentation libraries and semantic conventions that enable end-to-end tracing of AI applications. Running the server module enables OpenTelemetry-compatible instrumentation for LLM-driven workflows, covering model invocations, retrieval from vector stores, and tool usage such as API or search calls. Once started, your applications can emit traces that capture the invocation context, surrounding application state, and external calls. This MCP server exposes the OpenInference instrumentation so you can adopt standardized tracing across your AI stack, integrate with existing backends, and visualize latency, dependencies, and payloads in your observability tooling.

To use the tooling, ensure your application imports and integrates with the OpenInference instrumentation where applicable. When you run the server, it will register the semantic conventions and instrumentation hooks, allowing your OpenTelemetry SDKs to propagate trace context and span data through LLM calls, retrieval steps, and tool interactions. You can configure your OTLP exporters to forward traces and metrics to your collector, observability platform, or dashboard for analysis and alerting.

How to install

Prerequisites:

  • Python 3.8+ installed on your system
  • Access to install Python packages from PyPI
  1. Create and activate a virtual environment (optional but recommended):

    python -m venv venv
    source venv/bin/activate    # on macOS/Linux
    .\venv\Scripts\activate     # on Windows

  2. Install the OpenInference package:

    pip install openinference

  3. (Optional) Install additional instrumentations or semantic conventions if needed by your project:

    pip install openinference-semantic-conventions openinference-instrumentation

  4. Run the MCP server that hosts OpenInference instrumentation:

    python -m openinference

  5. Verify the server is reachable and emitting traces by configuring your OTLP exporter in your OpenTelemetry setup to point at the server’s endpoint.
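
For step 5, the exporter is typically configured through standard OpenTelemetry environment variables before starting the server. The endpoint below is a placeholder assumption; substitute your collector's actual address.

```shell
# Point the OTLP exporters at your collector (placeholder endpoint shown).
export OTEL_TRACES_EXPORTER=otlp
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Start the server; emitted traces should now flow to the collector.
python -m openinference
```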

Additional notes

  • Ensure your OTLP endpoint is reachable from the environment where you run the MCP server. Network egress or firewall rules can affect trace delivery.
  • If you need to customize the logging level, set OPENINFERENCE_LOG_LEVEL to debug, info, warn, or error.
  • You can enable or customize specific instrumentations by adjusting environment variables or code imports as documented in the OpenInference repository.
  • For production deployments, consider running behind a reverse proxy or load balancer and using a proper TLS configuration for the OTLP endpoint.
  • If you encounter import errors, verify that the installed package versions are compatible with your Python version and that your virtual environment (if used) is active.
  • This server exposes OpenTelemetry-compatible endpoints; you can switch exporters (OTLP, console, or third-party) by changing OTEL_TRACES_EXPORTER, OTEL_EXPORTER_OTLP_ENDPOINT, and related OTEL_ variables.
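
As a concrete example of the exporter switch mentioned above, the console exporter is handy for debugging because it needs no collector; this is a sketch assuming the standard OpenTelemetry exporter-selection variables are honored.

```shell
# Debugging without a collector: route traces and metrics to stdout.
export OTEL_TRACES_EXPORTER=console
export OTEL_METRICS_EXPORTER=console
python -m openinference
```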
