by Arize-ai
Arize Phoenix is an open-source AI and LLM observability and evaluation tool. It helps developers and data scientists inspect traces, manage and evaluate prompts, curate datasets, and run experiments for their LLM applications.
Integrate the Phoenix library into your Python application to start logging traces and data. You can then visualize and analyze them in the Phoenix UI, which can be run locally.
Arize Phoenix is an open-source project available on GitHub. Phoenix is designed to integrate with popular libraries such as LangChain and LlamaIndex, as well as SDKs from providers like OpenAI and Anthropic.
Phoenix is a lightweight, open-source tool focused on local development and evaluation, while the Arize enterprise platform provides a more comprehensive, production-grade observability solution.
Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting. It provides:

- Tracing for inspecting LLM application runs
- Prompt management and evaluation
- Dataset curation and experiments
Phoenix is vendor and language agnostic with out-of-the-box support for popular frameworks (🦙LlamaIndex, 🦜⛓LangChain, Haystack, 🧩DSPy, 🤗smolagents) and LLM providers (OpenAI, Bedrock, MistralAI, VertexAI, LiteLLM, Google GenAI and more). For details on auto-instrumentation, check out the OpenInference project.
Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.
Install Phoenix via pip:

```shell
pip install arize-phoenix
```

It is also available via conda.
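After installation, a minimal sketch of getting started locally might look like the following (the project name is an example; `px.launch_app()` serves the Phoenix UI at http://localhost:6006 by default):

```python
import phoenix as px
from phoenix.otel import register

# Start a local Phoenix instance and serve the UI.
session = px.launch_app()

# Register an OpenTelemetry tracer provider with Phoenix-aware defaults;
# traces sent through it show up under the given project in the UI.
tracer_provider = register(project_name="my-llm-app")  # example project name

print(session.url)  # URL of the local Phoenix UI
```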
Phoenix container images are available via Docker Hub and can be deployed using Docker or Kubernetes.
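For example, a single-container deployment could look like this (the `arizephoenix/phoenix` image is the one published on Docker Hub; 6006 serves the UI and 4317 is the OTLP gRPC collector port, per the image's defaults):

```shell
docker pull arizephoenix/phoenix:latest
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```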
The `arize-phoenix` package includes the entire Phoenix platform. However, if you have already deployed the Phoenix platform, there are lightweight Python sub-packages and TypeScript packages that can be used in conjunction with it:
| Package | Language | Description |
|---|---|---|
| `arize-phoenix-otel` | Python | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| `arize-phoenix-client` | Python | Lightweight client for interacting with the Phoenix server via its OpenAPI REST interface |
| `arize-phoenix-evals` | Python | Tooling to evaluate LLM applications, including RAG relevance, answer relevance, and more |
| `@arizeai/phoenix-client` | JavaScript | Client for the Arize Phoenix API |
| `@arizeai/phoenix-mcp` | JavaScript | MCP server implementation for Arize Phoenix, providing a unified interface to Phoenix's capabilities |
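As an illustrative sketch, `arize-phoenix-evals` can run an LLM-as-a-judge check for RAG relevance (the dataframe contents and the `gpt-4o` judge model below are assumptions made for the example):

```python
import pandas as pd
from phoenix.evals import (
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    llm_classify,
)

# Example query/document pairs; the column names match the
# variables expected by the RAG relevancy prompt template.
df = pd.DataFrame(
    {
        "input": ["What is Arize Phoenix?"],
        "reference": ["Phoenix is an open-source AI observability platform."],
    }
)

# Constrain the judge's output to the template's allowed labels.
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())

relevance = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o"),  # example judge model
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=rails,
)
print(relevance["label"])
```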
Phoenix is built on top of OpenTelemetry and is vendor, language, and framework agnostic. For details about tracing integrations and example applications, see the OpenInference project.
Python Integrations
| Integration | Package |
|---|---|
| OpenAI | `openinference-instrumentation-openai` |
| OpenAI Agents | `openinference-instrumentation-openai-agents` |
| LlamaIndex | `openinference-instrumentation-llama-index` |
| DSPy | `openinference-instrumentation-dspy` |
| AWS Bedrock | `openinference-instrumentation-bedrock` |
| LangChain | `openinference-instrumentation-langchain` |
| MistralAI | `openinference-instrumentation-mistralai` |
| Google GenAI | `openinference-instrumentation-google-genai` |
| Google ADK | `openinference-instrumentation-google-adk` |
| Guardrails | `openinference-instrumentation-guardrails` |
| VertexAI | `openinference-instrumentation-vertexai` |
| CrewAI | `openinference-instrumentation-crewai` |
| Haystack | `openinference-instrumentation-haystack` |
| LiteLLM | `openinference-instrumentation-litellm` |
| Groq | `openinference-instrumentation-groq` |
| Instructor | `openinference-instrumentation-instructor` |
| Anthropic | `openinference-instrumentation-anthropic` |
| Smolagents | `openinference-instrumentation-smolagents` |
| Agno | `openinference-instrumentation-agno` |
| MCP | `openinference-instrumentation-mcp` |
| Pydantic AI | `openinference-instrumentation-pydantic-ai` |
| Autogen AgentChat | `openinference-instrumentation-autogen-agentchat` |
| Portkey | `openinference-instrumentation-portkey` |
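Each of these packages follows the same pattern: install it alongside Phoenix, then attach its instrumentor to a Phoenix-registered tracer provider. A sketch for the OpenAI integration (the endpoint and project name are example values for a local Phoenix instance):

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Point the tracer provider at a running Phoenix collector.
tracer_provider = register(
    project_name="my-llm-app",                   # example project name
    endpoint="http://localhost:6006/v1/traces",  # local Phoenix HTTP trace endpoint
)

# From here on, calls made with the OpenAI SDK emit OpenInference
# spans that appear in the Phoenix UI.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```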
JavaScript Integrations

| Integration | Package |
|---|---|
| OpenAI | `@arizeai/openinference-instrumentation-openai` |
| LangChain.js | `@arizeai/openinference-instrumentation-langchain` |
| Vercel AI SDK | `@arizeai/openinference-vercel` |
| BeeAI | `@arizeai/openinference-instrumentation-beeai` |
| Mastra | `@arizeai/openinference-mastra` |
Phoenix has native integrations with LangFlow, LiteLLM Proxy, and BeeAI.
Join our community to connect with thousands of AI builders.
See the migration guide for a list of breaking changes.
Copyright 2025 Arize AI, Inc. All Rights Reserved.
Portions of this code are patent protected by one or more U.S. Patents. See the IP_NOTICE.
This software is licensed under the terms of the Elastic License 2.0 (ELv2). See LICENSE.
Discover more MCP servers with similar functionality and use cases
by netdata
Real-time, per‑second infrastructure monitoring platform that provides instant insights, auto‑discovery, edge‑based machine‑learning anomaly detection, and lightweight visualizations without requiring complex configuration.
by msgbyte
Provides website analytics, uptime monitoring, and server status in a single self‑hosted application.
by grafana
Provides programmatic access to Grafana dashboards, datasources, alerts, incidents, and related operational data through a Model Context Protocol server, enabling AI assistants and automation tools to query and manipulate Grafana resources.
by dynatrace-oss
Provides a local server that enables real‑time interaction with the Dynatrace observability platform, exposing tools for problem retrieval, DQL execution, Slack notifications, workflow automation, and AI‑assisted troubleshooting.
by pydantic
Provides tools to retrieve, query, and visualize OpenTelemetry traces and metrics from Pydantic Logfire via a Model Context Protocol server.
by VictoriaMetrics-Community
Access VictoriaMetrics instances through Model Context Protocol, enabling AI assistants and tools to query metrics, explore labels, debug configurations, and retrieve documentation without leaving the conversational interface.
by axiomhq
Axiom MCP Server implements the Model Context Protocol (MCP) for Axiom, enabling AI agents to query logs, traces, and other event data using the Axiom Processing Language (APL). It allows AI agents to perform monitoring, observability, and natural language analysis of data for debugging and incident response.
by GeLi2001
Datadog MCP Server is a Model Context Protocol (MCP) server that interacts with the official Datadog API. It enables users to access and manage various Datadog functionalities, including monitoring, dashboards, metrics, events, logs, and incidents.
by last9
Provides AI agents with real‑time production context—logs, metrics, and traces—through a Model Context Protocol server that can be queried from development environments.