by pydantic
Provides tools to retrieve, query, and visualize OpenTelemetry traces and metrics from Pydantic Logfire via a Model Context Protocol server.
Logfire MCP enables AI assistants and developers to access telemetry data—traces, exceptions, and metrics—collected by Pydantic Logfire. It exposes a set of MCP tools that let users query recent exceptions, run arbitrary SQL‑like queries, generate UI links for specific traces, and explore the underlying database schema.
LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN uvx logfire-mcp@latest

You can also place the token in a .env file or pass it with the --read-token flag.

Start the server with uvx logfire-mcp@latest and the required LOGFIRE_READ_TOKEN environment variable as shown in the README, then call find_exceptions_in_file, arbitrary_query, logfire_link, or schema_reference through your MCP-enabled IDE or assistant.

Q: Do I need to install Python to run Logfire MCP?
A: Yes, the server is a Python package run via uvx; installing uv (the package manager) is sufficient.

Q: How long can I look back when querying data?
A: The age parameter accepts values in minutes, up to 7 days (10,080 minutes).

Q: Can I run Logfire MCP in a Docker container?
A: Absolutely, just ensure uv and the LOGFIRE_READ_TOKEN environment variable are available inside the container (a minimal container sketch follows this FAQ).

Q: What if I use a self-hosted Logfire instance?
A: Set LOGFIRE_BASE_URL (or use --base-url) to point to your custom endpoint.

Q: Which MCP clients are supported?
A: The README provides configuration snippets for Cursor, Claude (Code & Desktop), Cline, VS Code, and Zed.
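For example, here is a minimal sketch of running the server in a container, assuming the stock python:3.12-slim image with uv installed via pip; the -i flag keeps stdin open, which the stdio-based MCP server needs:

docker run --rm -i \
  -e LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN \
  python:3.12-slim \
  sh -c "pip install uv && uvx logfire-mcp@latest"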
This repository contains a Model Context Protocol (MCP) server with tools that can access the OpenTelemetry traces and metrics you've sent to Pydantic Logfire.
This MCP server enables LLMs to retrieve your application's telemetry data, analyze distributed traces, and make use of the results of arbitrary SQL queries executed using the Pydantic Logfire APIs.
find_exceptions_in_file - Get the details about the 10 most recent exceptions in the file.
- filepath (string) - The path to the file to find exceptions in.
- age (integer) - Number of minutes to look back, e.g. 30 for the last 30 minutes. Maximum allowed value is 7 days (10,080 minutes).

arbitrary_query - Run an arbitrary query on the Pydantic Logfire database.
- query (string) - The query to run, as a SQL string.
- age (integer) - Number of minutes to look back, e.g. 30 for the last 30 minutes. Maximum allowed value is 7 days (10,080 minutes).

logfire_link - Creates a link to help the user view the trace in the Logfire UI.
- trace_id (string) - The trace ID to link to.

schema_reference - The database schema for the Logfire DataFusion database.
uv

The first thing to do is make sure uv is installed, as uv is used to run the MCP server.

For installation instructions, see the uv installation docs.

If you already have an older version of uv installed, you might need to update it with uv self update.
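For reference, the standalone installer documented by uv can be run as follows on macOS/Linux (a convenience sketch; check the linked installation docs for platform-specific instructions):

curl -LsSf https://astral.sh/uv/install.sh | sh

Installing from PyPI with pip install uv also works if you prefer a Python-based install.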
In order to make requests to the Pydantic Logfire APIs, the Pydantic Logfire MCP server requires a "read token".
You can create one under the "Read Tokens" section of your project settings in Pydantic Logfire: https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens
[!IMPORTANT] Pydantic Logfire read tokens are project-specific, so you need to create one for the specific project you want to expose to the Pydantic Logfire MCP server.
Once you have uv installed and have a Pydantic Logfire read token, you can manually run the MCP server using uvx (which is provided by uv).

You can specify your read token using the LOGFIRE_READ_TOKEN environment variable:
LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN uvx logfire-mcp@latest
You can also set LOGFIRE_READ_TOKEN in a .env file:
LOGFIRE_READ_TOKEN=pylf_v1_us_...
NOTE: for this to work, the MCP server needs to run with the directory containing the .env file as its working directory.
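For instance, when running the server manually you could start it from that directory (paths are illustrative):

cd /path/to/your/project   # directory containing the .env file
uvx logfire-mcp@latest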
or using the --read-token flag:
uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
[!NOTE] If you are using Cursor, Claude Desktop, Cline, or other MCP clients that manage your MCP servers for you, you do NOT need to manually run the server yourself. The next section will show you how to configure these clients to make use of the Pydantic Logfire MCP server.
If you are running Logfire in a self-hosted environment, you need to specify the base URL. This can be done using the LOGFIRE_BASE_URL environment variable:
LOGFIRE_BASE_URL=https://logfire.my-company.com uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
You can also use the --base-url argument:
uvx logfire-mcp@latest --base-url=https://logfire.my-company.com --read-token=YOUR_READ_TOKEN
Create a .cursor/mcp.json file in your project root:
{
"mcpServers": {
"logfire": {
"command": "uvx",
"args": ["logfire-mcp@latest", "--read-token=YOUR-TOKEN"]
}
}
}
Cursor doesn't accept the env field, so you need to use the --read-token flag instead.
For Claude Code, run the following command:
claude mcp add logfire -e LOGFIRE_READ_TOKEN=YOUR_TOKEN -- uvx logfire-mcp@latest
For Claude Desktop, add to your Claude settings:
{
"command": ["uvx"],
"args": ["logfire-mcp@latest"],
"type": "stdio",
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
}
}
Add to your Cline settings in cline_mcp_settings.json:
{
"mcpServers": {
"logfire": {
"command": "uvx",
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
},
"disabled": false,
"autoApprove": []
}
}
}
Make sure you have enabled MCP support in VS Code.

Create a .vscode/mcp.json file in your project's root directory:
{
"servers": {
"logfire": {
"type": "stdio",
"command": "uvx", // or the absolute /path/to/uvx
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
}
}
}
}
Create a .zed/settings.json file in your project's root directory:
{
"context_servers": {
"logfire": {
"source": "custom",
"command": "uvx",
"args": ["logfire-mcp@latest"],
"env": {
"LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
},
"enabled": true
}
}
}
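If you use a self-hosted Logfire instance, you can pass the base URL through the same client configuration. A minimal sketch using the Cline-style env block shown above (adapt the outer structure for your client; the URL is the example self-hosted endpoint from earlier):

{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp@latest"],
      "env": {
        "LOGFIRE_READ_TOKEN": "YOUR_TOKEN",
        "LOGFIRE_BASE_URL": "https://logfire.my-company.com"
      }
    }
  }
}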
Example request to find the most recent exceptions in a specific file over the last 24 hours (age is given in minutes):

{
"name": "find_exceptions_in_file",
"arguments": {
"filepath": "app/api.py",
"age": 1440
}
}
Response:
[
{
"created_at": "2024-03-20T10:30:00Z",
"message": "Failed to process request",
"exception_type": "ValueError",
"exception_message": "Invalid input format",
"function_name": "process_request",
"line_number": "42",
"attributes": {
"service.name": "api-service",
"code.filepath": "app/api.py"
},
"trace_id": "1234567890abcdef"
}
]
Example request to run a custom SQL query over error-level records from the last 24 hours:

{
"name": "arbitrary_query",
"arguments": {
"query": "SELECT trace_id, message, created_at, attributes->>'service.name' as service FROM records WHERE severity_text = 'ERROR' ORDER BY created_at DESC LIMIT 10",
"age": 1440
}
}
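A shareable link for viewing a particular trace in the Logfire UI can then be requested with the logfire_link tool, for example using the trace_id returned in the exception result above (illustrative value):

{
  "name": "logfire_link",
  "arguments": {
    "trace_id": "1234567890abcdef"
  }
}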
First, obtain a Pydantic Logfire read token from: https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens
Run the MCP server:
uvx logfire-mcp@latest --read-token=YOUR_TOKEN
Configure your preferred client (Cursor, Claude Desktop, Cline, VS Code, or Zed) using the configuration examples above.
Start using the MCP server to analyze your OpenTelemetry traces and metrics!
We welcome contributions to help improve the Pydantic Logfire MCP server. Whether you want to add new trace analysis tools, enhance metrics querying functionality, or improve documentation, your input is valuable.
For examples of other MCP servers and implementation patterns, see the Model Context Protocol servers repository.
Pydantic Logfire MCP is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License.
Discover more MCP servers with similar functionality and use cases
by netdata
Real-time, per‑second infrastructure monitoring platform that provides instant insights, auto‑discovery, edge‑based machine‑learning anomaly detection, and lightweight visualizations without requiring complex configuration.
by Arize-ai
Arize Phoenix is an open-source AI and LLM observability tool for inspecting traces, managing prompts, curating datasets, and running experiments.
by msgbyte
Provides website analytics, uptime monitoring, and server status in a single self‑hosted application.
by grafana
Provides programmatic access to Grafana dashboards, datasources, alerts, incidents, and related operational data through a Model Context Protocol server, enabling AI assistants and automation tools to query and manipulate Grafana resources.
by dynatrace-oss
Provides a local server that enables real‑time interaction with the Dynatrace observability platform, exposing tools for problem retrieval, DQL execution, Slack notifications, workflow automation, and AI‑assisted troubleshooting.
by VictoriaMetrics-Community
Access VictoriaMetrics instances through Model Context Protocol, enabling AI assistants and tools to query metrics, explore labels, debug configurations, and retrieve documentation without leaving the conversational interface.
by axiomhq
Axiom MCP Server implements the Model Context Protocol (MCP) for Axiom, enabling AI agents to query logs, traces, and other event data using the Axiom Processing Language (APL). It allows AI agents to perform monitoring, observability, and natural language analysis of data for debugging and incident response.
by GeLi2001
Datadog MCP Server is a Model Context Protocol (MCP) server that interacts with the official Datadog API. It enables users to access and manage various Datadog functionalities, including monitoring, dashboards, metrics, events, logs, and incidents.
by last9
Provides AI agents with real‑time production context—logs, metrics, and traces—through a Model Context Protocol server that can be queried from development environments.