openinference
OpenTelemetry Instrumentation for AI Observability
```shell
claude mcp add --transport stdio arize-ai-openinference python -m openinference \
  --env OTEL_TRACES_EXPORTER="otlp" \
  --env OTEL_METRICS_EXPORTER="otlp" \
  --env OPENINFERENCE_LOG_LEVEL="info" \
  --env OTEL_EXPORTER_OTLP_ENDPOINT="URL of the OpenTelemetry collector (e.g., https://localhost:4317)"
```
How to use
OpenInference provides a collection of instrumentation and semantic conventions to enable end-to-end tracing of AI applications. By running the server module, you enable OpenTelemetry-compatible instrumentation for LLM-driven workflows, including model invocations, retrieval from vector stores, and tool usage (APIs or search). Once started, your applications can emit traces that capture the invocation context, including the surrounding application state, tool usage, and external API calls. This MCP server exposes the OpenInference instrumentation so you can adopt standardized tracing across your AI stack, integrate with your existing backends, and visualize latency, dependencies, and payloads in your observability tooling.
To use the tooling, ensure your application imports and integrates with the OpenInference instrumentation where applicable. When you run the server, it will register the semantic conventions and instrumentation hooks, allowing your OpenTelemetry SDKs to propagate trace context and span data through LLM calls, retrieval steps, and tool interactions. You can configure your OTLP exporters to forward traces and metrics to your collector, observability platform, or dashboard for analysis and alerting.
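As a concrete sketch, the environment variables from the install command above can also be exported directly in the shell before launching the server (the endpoint URL is a placeholder; substitute your collector's OTLP endpoint):

```shell
# Assumed configuration, mirroring the variables in the command above.
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_METRICS_EXPORTER="otlp"
export OPENINFERENCE_LOG_LEVEL="info"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://localhost:4317"

# Start the server with the configuration in effect.
python -m openinference
```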
How to install
Prerequisites:
- Python 3.8+ installed on your system
- Access to install Python packages from PyPI
- Create and activate a virtual environment (optional but recommended):

  ```shell
  python -m venv venv
  source venv/bin/activate   # on macOS/Linux
  .\venv\Scripts\activate    # on Windows
  ```

- Install the OpenInference package:

  ```shell
  pip install openinference
  ```

- (Optional) Install additional instrumentation or semantic-convention packages if your project needs them:

  ```shell
  pip install openinference-semantic-conventions openinference-instrumentation
  ```

- Run the MCP server that hosts the OpenInference instrumentation:

  ```shell
  python -m openinference
  ```

- Verify the server is reachable and emitting traces by pointing the OTLP exporter in your OpenTelemetry setup at the server's endpoint.
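Before wiring up the exporter, it can help to confirm the endpoint accepts connections at all. A minimal reachability check using only the Python standard library (the helper name is ours; 4317 is the conventional gRPC OTLP port):

```python
import socket


def otlp_endpoint_reachable(host: str, port: int = 4317, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False
```

This only proves the TCP port is open, not that the collector accepts OTLP payloads, but it quickly rules out the firewall and egress issues mentioned in the notes below.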
Additional notes
- Ensure your OTLP endpoint is reachable from the environment where you run the MCP server. Network egress or firewall rules can affect trace delivery.
- If you need to customize the logging level, set OPENINFERENCE_LOG_LEVEL to debug, info, warn, or error.
- You can enable or customize specific instrumentations by adjusting environment variables or code imports as documented in the OpenInference repository.
- For production deployments, consider running behind a reverse proxy or load balancer and using a proper TLS configuration for the OTLP endpoint.
- If you encounter import errors, verify that the installed package versions are compatible with your Python version and that your virtual environment (if used) is active.
- This server exposes OpenTelemetry-compatible endpoints; you can switch exporters (OTLP, console, or third-party) by changing OTEL_EXPORTER_OTLP_ENDPOINT and related OTEL_ variables.
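As an illustration of the log-level note above, a server could map OPENINFERENCE_LOG_LEVEL onto Python's standard logging levels roughly like this (a hypothetical sketch; the actual package may parse the variable differently):

```python
import logging
import os

# Accepted values per the note above: debug, info, warn, error.
_LEVELS = {
    "debug": logging.DEBUG,
    "info": logging.INFO,
    "warn": logging.WARNING,
    "error": logging.ERROR,
}


def resolve_log_level(default: str = "info") -> int:
    """Map OPENINFERENCE_LOG_LEVEL to a logging level, falling back to the default."""
    raw = os.environ.get("OPENINFERENCE_LOG_LEVEL", default).strip().lower()
    return _LEVELS.get(raw, _LEVELS[default])
```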
Related MCP Servers
mindsdb
Query Engine for AI Analytics: Build self-reasoning agents across all your live data
gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with 1 fast & friendly API.
paperbanana
Open source implementation and extension of Google Research’s PaperBanana for automated academic figures, diagrams, and research visuals, expanded to new domains like slide generation.
open-ptc-agent
An open source implementation of code execution with MCP (Programmatic Tool Calling)
tome
A magical LLM desktop client that makes it easy for *anyone* to use LLMs and MCP
nexus
Govern & Secure your AI