by honeycombio
Provides a self‑hosted Model Context Protocol server that enables large language models to query and analyze Honeycomb observability data across multiple environments.
Honeycomb MCP is a self‑hosted server that implements the Model Context Protocol, allowing LLMs such as Claude to directly interact with Honeycomb datasets, alerts, dashboards, SLOs, and triggers. It bridges production telemetry with code‑level context for enterprise customers.
Installation: run pnpm install (Node.js 18+ required), then pnpm run build; the compiled artifact appears in the build folder.

Register the server in your MCP config:

{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": ["/full/path/to/honeycomb-mcp/build/index.mjs"],
      "env": { "HONEYCOMB_API_KEY": "<YOUR_API_KEY>" }
    }
  }
}

Tools cover datasets, queries, SLOs, triggers, and more (list_datasets, run_query, list_slos, …). Datasets are addressed as resources via the honeycomb://{environment}/{dataset} URI scheme (e.g., honeycomb://prod/api-requests).
Q: Do I need a Honeycomb Enterprise license?
A: Yes, the server requires an Enterprise account with full API permissions.
Q: Is the server authentication-free?
A: The server itself runs unauthenticated; access control is enforced by the API keys supplied via environment variables.
Q: Can I run the server in a container?
A: Absolutely. Build the project, copy the build directory into an image, set the required env vars, and start the Node process (see the sketch after this FAQ).
Q: How does caching affect data freshness?
A: Cached resources have configurable TTLs (default 5 minutes). Queries are never cached, ensuring real-time analytics.
Q: What clients are officially supported?
A: Claude Desktop, Claude Code, Cursor, Windsurf, Goose, and any MCP-compatible client.
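For the container route, a minimal Dockerfile sketch is shown below; the base image and copy strategy are illustrative assumptions, not an official Honeycomb image:

# Illustrative sketch only; not an official Honeycomb image.
FROM node:18-alpine
WORKDIR /app
# Copy the compiled server produced by `pnpm run build` on the host,
# along with its installed dependencies.
COPY build/ ./build/
COPY node_modules/ ./node_modules/
# Supply the key at runtime, e.g. `docker run -e HONEYCOMB_API_KEY=...`
CMD ["node", "build/index.mjs"]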
⚠️ DEPRECATED: This self-hosted MCP server is deprecated. Please migrate to the hosted Honeycomb Model Context Protocol (MCP) solution at Honeycomb MCP Documentation.
A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.
Honeycomb MCP is effectively a complete alternative interface to Honeycomb, and thus you need broad permissions for the API.
Currently, this is only available for Honeycomb Enterprise customers.
Today, this is a single server process that you must run on your own computer. It is not authenticated. All communication between your client and the server happens over STDIO.
pnpm install
pnpm run build
The build artifact goes into the /build folder.
To use this MCP server, you need to provide Honeycomb API keys via environment variables in your MCP config.
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": [
        "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
      ],
      "env": {
        "HONEYCOMB_API_KEY": "your_api_key"
      }
    }
  }
}
For multiple environments:
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": [
        "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
      ],
      "env": {
        "HONEYCOMB_ENV_PROD_API_KEY": "your_prod_api_key",
        "HONEYCOMB_ENV_STAGING_API_KEY": "your_staging_api_key"
      }
    }
  }
}
Important: These environment variables must be set in the env block of your MCP config.
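For multi-environment setups, the server presumably resolves each environment name to its key via the HONEYCOMB_ENV_<NAME>_API_KEY convention shown above. A minimal TypeScript sketch of that lookup; the helper name and fallback behavior are assumptions, not the server's actual code:

// Hypothetical lookup, assuming the HONEYCOMB_ENV_<NAME>_API_KEY convention.
function resolveApiKey(environment: string): string {
  const perEnv = process.env[`HONEYCOMB_ENV_${environment.toUpperCase()}_API_KEY`];
  const fallback = process.env.HONEYCOMB_API_KEY; // single-environment key
  const key = perEnv ?? fallback;
  if (!key) {
    throw new Error(`No Honeycomb API key configured for environment "${environment}"`);
  }
  return key;
}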
EU customers must also set a HONEYCOMB_API_ENDPOINT
configuration, since the MCP defaults to the non-EU instance.
# Optional custom API endpoint (defaults to https://api.honeycomb.io)
HONEYCOMB_API_ENDPOINT=https://api.eu1.honeycomb.io/
The MCP server implements caching for all non-query Honeycomb API calls to improve performance and reduce API usage. Caching can be configured using these environment variables:
# Enable/disable caching (default: true)
HONEYCOMB_CACHE_ENABLED=true
# Default TTL in seconds (default: 300)
HONEYCOMB_CACHE_DEFAULT_TTL=300
# Resource-specific TTL values in seconds (defaults shown)
HONEYCOMB_CACHE_DATASET_TTL=900 # 15 minutes
HONEYCOMB_CACHE_COLUMN_TTL=900 # 15 minutes
HONEYCOMB_CACHE_BOARD_TTL=900 # 15 minutes
HONEYCOMB_CACHE_SLO_TTL=900 # 15 minutes
HONEYCOMB_CACHE_TRIGGER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_MARKER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_RECIPIENT_TTL=900 # 15 minutes
HONEYCOMB_CACHE_AUTH_TTL=3600 # 1 hour
# Maximum cache size (items per resource type)
HONEYCOMB_CACHE_MAX_SIZE=1000
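As a rough illustration of the behavior these variables control (per-resource TTLs plus a size cap), here is a minimal TypeScript sketch of a TTL cache; the class name, API, and eviction policy are illustrative, not the server's actual implementation:

// Minimal TTL-cache sketch; illustrative only.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private defaultTtlSeconds = 300, // HONEYCOMB_CACHE_DEFAULT_TTL
    private maxSize = 1000           // HONEYCOMB_CACHE_MAX_SIZE
  ) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, ttlSeconds = this.defaultTtlSeconds): void {
    if (this.store.size >= this.maxSize) {
      // Evict the oldest insertion once the per-resource cap is reached.
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}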
Honeycomb MCP has been tested with the following clients: Claude Desktop, Claude Code, Cursor, Windsurf, and Goose.
It will likely work with other MCP-compatible clients.
Access Honeycomb datasets using URIs in the format:
honeycomb://{environment}/{dataset}
For example:
honeycomb://production/api-requests
honeycomb://staging/backend-services
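A short TypeScript sketch of how a client might parse this scheme; parseHoneycombUri is a hypothetical helper, not part of the server's API:

// Hypothetical helper for the honeycomb://{environment}/{dataset} scheme.
function parseHoneycombUri(uri: string): { environment: string; dataset: string } {
  const match = /^honeycomb:\/\/([^/]+)\/(.+)$/.exec(uri);
  if (!match) {
    throw new Error(`Invalid Honeycomb resource URI: ${uri}`);
  }
  return { environment: match[1], dataset: match[2] };
}

// Example: yields { environment: "production", dataset: "api-requests" }
parseHoneycombUri("honeycomb://production/api-requests");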
The resource response includes the dataset's metadata.
list_datasets: List all datasets in an environment
{ "environment": "production" }
get_columns: Get column information for a dataset
{
  "environment": "production",
  "dataset": "api-requests"
}
run_query: Run analytics queries with rich options
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" },
    { "op": "P95", "column": "duration_ms" }
  ],
  "breakdowns": ["service.name"],
  "time_range": 3600
}
analyze_columns: Analyzes specific columns in a dataset by running statistical queries and returning computed metrics.
list_slos: List all SLOs for a dataset
{
  "environment": "production",
  "dataset": "api-requests"
}
get_slo: Get detailed SLO information
{
  "environment": "production",
  "dataset": "api-requests",
  "sloId": "abc123"
}
list_triggers: List all triggers for a dataset
{
  "environment": "production",
  "dataset": "api-requests"
}
get_trigger: Get detailed trigger information
{
  "environment": "production",
  "dataset": "api-requests",
  "triggerId": "xyz789"
}
get_trace_link: Generate a deep link to a specific trace in the Honeycomb UI
get_instrumentation_help: Provides OpenTelemetry instrumentation guidance
{
  "language": "python",
  "filepath": "app/services/payment_processor.py"
}
Once configured, you can ask Claude natural-language questions about your Honeycomb data, such as which datasets are available, where latency is highest, or which SLOs and triggers need attention.
All tool responses are optimized to reduce context window usage while maintaining essential information:
This optimization ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.
run_query
The run_query tool supports a comprehensive query specification:
calculations: Array of operations to perform
{"op": "HEATMAP", "column": "duration_ms"}
filters: Array of filter conditions
{"column": "error", "op": "=", "value": true}
filter_combination: "AND" or "OR" (default is "AND")
breakdowns: Array of columns to group results by
["service.name", "http.status_code"]
orders: Array specifying how to sort results
{"op": "COUNT", "order": "descending"}
time_range: Relative time range in seconds (e.g., 3600 for last hour)
start_time and end_time: UNIX timestamps for absolute time ranges
having: Filter results based on calculation values
{"calculate_op": "COUNT", "op": ">", "value": 100}
Here are some real-world example queries:
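Slowest root spans by endpoint (latency heatmap plus maximum duration):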
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "column": "duration_ms", "op": "HEATMAP" },
    { "column": "duration_ms", "op": "MAX" }
  ],
  "filters": [
    { "column": "trace.parent_id", "op": "does-not-exist" }
  ],
  "breakdowns": ["http.target", "name"],
  "orders": [
    { "column": "duration_ms", "op": "MAX", "order": "descending" }
  ]
}
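Duration distribution of database queries over the past week: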
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "column": "duration_ms", "op": "HEATMAP" }
  ],
  "filters": [
    { "column": "db.statement", "op": "exists" }
  ],
  "breakdowns": ["db.statement"],
  "time_range": 604800
}
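Most frequent exceptions, grouped by message and parent span: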
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" }
  ],
  "filters": [
    { "column": "exception.message", "op": "exists" },
    { "column": "parent_name", "op": "exists" }
  ],
  "breakdowns": ["exception.message", "parent_name"],
  "orders": [
    { "op": "COUNT", "order": "descending" }
  ]
}
pnpm install
pnpm run build
MIT
Discover more MCP servers with similar functionality and use cases
by netdata
Real-time, per‑second infrastructure monitoring platform that provides instant insights, auto‑discovery, edge‑based machine‑learning anomaly detection, and lightweight visualizations without requiring complex configuration.
by Arize-ai
Arize Phoenix is an open-source AI and LLM observability tool for inspecting traces, managing prompts, curating datasets, and running experiments.
by msgbyte
Provides website analytics, uptime monitoring, and server status in a single self‑hosted application.
by grafana
Provides programmatic access to Grafana dashboards, datasources, alerts, incidents, and related operational data through a Model Context Protocol server, enabling AI assistants and automation tools to query and manipulate Grafana resources.
by dynatrace-oss
Provides a local server that enables real‑time interaction with the Dynatrace observability platform, exposing tools for problem retrieval, DQL execution, Slack notifications, workflow automation, and AI‑assisted troubleshooting.
by pydantic
Provides tools to retrieve, query, and visualize OpenTelemetry traces and metrics from Pydantic Logfire via a Model Context Protocol server.
by VictoriaMetrics-Community
Access VictoriaMetrics instances through Model Context Protocol, enabling AI assistants and tools to query metrics, explore labels, debug configurations, and retrieve documentation without leaving the conversational interface.
by axiomhq
Axiom MCP Server implements the Model Context Protocol (MCP) for Axiom, enabling AI agents to query logs, traces, and other event data using the Axiom Processing Language (APL). It allows AI agents to perform monitoring, observability, and natural language analysis of data for debugging and incident response.
by GeLi2001
Datadog MCP Server is a Model Context Protocol (MCP) server that interacts with the official Datadog API. It enables users to access and manage various Datadog functionalities, including monitoring, dashboards, metrics, events, logs, and incidents.