by last9
Provides AI agents with real‑time production context—logs, metrics, and traces—through a Model Context Protocol server that can be queried from development environments.
Enables seamless access to observability data (exceptions, service performance, Prometheus metrics, logs, alerts, and drop‑rule management) via a standard protocol that AI assistants can invoke directly from IDEs such as Claude Desktop, Cursor, Windsurf, and VS Code.
Install via Homebrew (brew install last9-mcp) or the preferred npm approach: npx @last9/mcp-server
LAST9_BASE_URL – URL of the Last9 API endpoint.
LAST9_AUTH_TOKEN – Authentication token for read‑only access.
LAST9_REFRESH_TOKEN – Refresh token with write permissions for control‑plane actions.
Implements MCP tools (such as get_service_summary, prometheus_range_query, and get_logs) to fetch or modify observability data on demand. Most APM tools accept an env parameter.
Q: Do I need a Last9 account? A: Yes. The server requires a Last9 API URL and authentication tokens generated in the Last9 Control Plane.
Q: Can I run the server locally? A: Absolutely. The npm command starts a local STDIO server that AI tools communicate with.
Q: Which IDEs are supported? A: Claude Desktop, Cursor, Windsurf, and VS Code (Copilot chat) are officially supported.
Q: How are secrets handled? A: Provide the tokens via environment variables. Do not commit them to source control.
Q: What if I need to add custom tools? A: The server follows the Model Context Protocol specification, so additional tool definitions can be added by extending the JSON schema used for tool registration.
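Since the server speaks the Model Context Protocol over STDIO, clients talk to it with JSON-RPC 2.0 messages. Below is a minimal sketch of the two requests a client would send to invoke a Last9 tool; the protocol version string and capabilities object are assumptions that depend on the client, not details from this document.

```python
import json

def jsonrpc(method, params, id_):
    """Wrap a method call in a JSON-RPC 2.0 envelope, as MCP requires."""
    return {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}

# First message: the MCP initialize handshake.
# The protocolVersion and clientInfo values here are illustrative assumptions.
init_req = jsonrpc("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
}, 1)

# Second message: call one of the Last9 tools.
# env defaults to 'prod' per the tool documentation below.
call_req = jsonrpc("tools/call", {
    "name": "get_service_summary",
    "arguments": {"env": "prod"},
}, 2)

print(json.dumps(call_req))
```

In practice the IDE builds and sends these messages for you; the sketch only shows the wire format a custom client would need.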
A Model Context Protocol server implementation for Last9 that enables AI agents to seamlessly bring real-time production context — logs, metrics, and traces — into your local environment to auto-fix code faster.
Works with the Claude Desktop app, or the Cursor, Windsurf, and VS Code (GitHub Copilot) IDEs. Implements the following MCP tools:
Observability & APM Tools:
get_exceptions: Get the list of exceptions.
get_service_summary: Get service summary with throughput, error rate, and response time.
get_service_environments: Get available environments for services.
get_service_performance_details: Get detailed performance metrics for a service.
get_service_operations_summary: Get operations summary for a service.
get_service_dependency_graph: Get service dependency graph showing incoming/outgoing dependencies.

Prometheus/PromQL Tools:
prometheus_range_query: Execute PromQL range queries for metrics data.
prometheus_instant_query: Execute PromQL instant queries for metrics data.
prometheus_label_values: Get label values for PromQL queries.
prometheus_labels: Get available labels for PromQL queries.

Logs Management:
get_logs: Get logs filtered by service name and/or severity level.
get_drop_rules: Get drop rules for logs that determine what logs get filtered out at the Last9 Control Plane.
add_drop_rule: Create a drop rule for logs at the Last9 Control Plane.
get_service_logs: Get raw log entries for a specific service over a time range. Can apply filters on severity and body.

Alert Management:
get_alert_config: Get alert configurations (alert rules) from Last9.
get_alerts: Get currently active alerts from the Last9 monitoring system.

get_exceptions

Retrieves server-side exceptions over a specified time range.
Parameters:
limit (integer, optional): Maximum number of exceptions to return. Default: 20.
lookback_minutes (integer, recommended): Number of minutes to look back from now. Default: 60. Examples: 60, 30, 15.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to use lookback_minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
span_name (string, optional): Name of the span to filter by.

get_service_summary

Get service summary over a given time range. Includes service name, environment, throughput, error rate, and response time. All values are p95 quantiles over the time range.
Parameters:
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to end_time_iso - 1 hour.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
env (string, optional): Environment to filter by. Defaults to 'prod'.

get_service_environments

Get available environments for services. Returns an array of environments that can be used with other APM tools.
Parameters:
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to end_time_iso - 1 hour.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.

Note: All other APM tools that retrieve service information (like get_service_performance_details, get_service_dependency_graph, get_service_operations_summary, get_service_summary) require an env parameter. This parameter must be one of the environments returned by this tool. If this tool returns an empty array, use an empty string "" for the env parameter.
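The env-selection rule above can be sketched as a small helper; resolve_env is a hypothetical name, not part of the server, and the preference for 'prod' mirrors the default mentioned in the other tools' docs.

```python
def resolve_env(environments, preferred="prod"):
    """Pick an env value from the array returned by get_service_environments.

    Empty array -> empty string "", as the note above requires.
    Otherwise prefer 'prod' (the documented default) when available.
    """
    if not environments:
        return ""
    if preferred in environments:
        return preferred
    return environments[0]

print(resolve_env([]))                    # empty array -> ""
print(resolve_env(["staging", "prod"]))   # -> "prod"
```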
get_service_performance_details

Get detailed performance metrics for a specific service over a given time range.
Parameters:
service_name (string, required): Name of the service to get performance details for.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
env (string, optional): Environment to filter by. Defaults to 'prod'.

get_service_operations_summary

Get a summary of operations inside a service over a given time range. Returns operations like HTTP endpoints, database queries, messaging producer calls, and HTTP client calls.
Parameters:
service_name (string, required): Name of the service to get operations summary for.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
env (string, optional): Environment to filter by. Defaults to 'prod'.

get_service_dependency_graph

Get details of the throughput, response times, and error rates of incoming, outgoing, and infrastructure components of a service. Useful for analyzing cascading effects of errors and performance issues.
Parameters:
service_name (string, optional): Name of the service to get the dependency graph for.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
env (string, optional): Environment to filter by. Defaults to 'prod'.

prometheus_range_query

Perform a Prometheus range query to get metrics data over a specified time range. It is recommended to check available labels first using the prometheus_labels tool.
Parameters:
query (string, required): The range query to execute.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.

prometheus_instant_query

Perform a Prometheus instant query to get metrics data at a specific point in time. Queries should typically use rollup functions like sum_over_time, avg_over_time, or quantile_over_time over a time window.
Parameters:
query (string, required): The instant query to execute.
time_iso (string, optional): Time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.

prometheus_label_values

Return the label values for a particular label and PromQL filter query. Similar to the Prometheus /label_values call.
Parameters:
match_query (string, required): A valid PromQL filter query.
label (string, required): The label to get values for.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.

prometheus_labels

Return the labels for a given PromQL match query. Similar to the Prometheus /labels call.
Parameters:
match_query (string, required): A valid PromQL filter query.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - 60 minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.

get_logs

Gets logs filtered by optional service name and/or severity level within a specified time range.
Parameters:
service (string, optional): Name of the service to get logs for.
severity (string, optional): Severity of the logs to get.
lookback_minutes (integer, recommended): Number of minutes to look back from now. Default: 60. Examples: 60, 30, 15.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to use lookback_minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
limit (integer, optional): Maximum number of logs to return. Default: 20.

get_drop_rules

Gets drop rules for logs, which determine what logs get filtered out from reaching Last9.
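The lookback_minutes/start_time_iso/end_time_iso convention used by get_logs (and the other tools above) can be sketched as follows; window_iso is a hypothetical helper illustrating the documented defaults, not part of the server.

```python
from datetime import datetime, timedelta, timezone

# The ISO format the tools document: YYYY-MM-DD HH:MM:SS.
FMT = "%Y-%m-%d %H:%M:%S"

def window_iso(lookback_minutes=60, start_time_iso="", end_time_iso=""):
    """Resolve a (start, end) time window per the documented defaults:
    empty end -> current time; empty start -> end - lookback_minutes."""
    end = (datetime.strptime(end_time_iso, FMT) if end_time_iso
           else datetime.now(timezone.utc).replace(tzinfo=None))
    start = (datetime.strptime(start_time_iso, FMT) if start_time_iso
             else end - timedelta(minutes=lookback_minutes))
    return start.strftime(FMT), end.strftime(FMT)

start, end = window_iso(lookback_minutes=30, end_time_iso="2024-01-01 12:00:00")
print(start, end)  # 2024-01-01 11:30:00 2024-01-01 12:00:00
```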
add_drop_rule

Adds a new drop rule to filter out specific logs at the Last9 Control Plane.
Parameters:
name (string, required): Name of the drop rule.
filters (array, required): List of filter conditions to apply. Each filter has:
  key (string, required): The key to filter on. Only attributes and resource.attributes keys are supported. For resource attributes, use the format resource.attributes[key_name]; for log attributes, use the format attributes[key_name]. Double quotes in key names must be escaped.
  value (string, required): The value to filter against.
  operator (string, required): The operator used for filtering. Valid values:
conjunction (string, required): The logical conjunction between filters. Valid values:
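The key-formatting rules above can be sketched as a small helper. filter_key and drop_rule_filter are hypothetical names for illustration; the valid operator and conjunction strings are defined by Last9, so they are taken as opaque inputs here.

```python
def filter_key(name, resource=False):
    """Format a drop-rule filter key per the documented convention:
    resource.attributes[key_name] for resource attributes,
    attributes[key_name] for log attributes, with double quotes escaped."""
    escaped = name.replace('"', '\\"')
    prefix = "resource.attributes" if resource else "attributes"
    return f"{prefix}[{escaped}]"

def drop_rule_filter(key, value, operator, resource=False):
    """Build one entry of the filters array; operator must be one of the
    valid values documented by Last9 (passed through unchanged here)."""
    return {"key": filter_key(key, resource), "value": value, "operator": operator}

print(filter_key("service.name", resource=True))  # resource.attributes[service.name]
print(filter_key("level"))                        # attributes[level]
```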
get_alert_config

Get alert configurations (alert rules) from Last9. Returns all configured alert rules including their conditions, labels, and annotations.
Parameters:
None - This tool retrieves all available alert configurations.
Returns information about:

get_alerts

Get currently active alerts from the Last9 monitoring system. Returns all alerts that are currently firing or have fired recently within the specified time window.
Parameters:
timestamp (integer, optional): Unix timestamp for the query time. Leave empty to default to current time.
window (integer, optional): Time window in seconds to look back for alerts. Defaults to 900 seconds (15 minutes). Range: 60-86400 seconds.

Returns information about:

get_service_logs

Get raw log entries for a specific service over a time range. Can apply filters on severity and body.
Parameters:
service (string, required): Name of the service to get logs for.
start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to now - lookback_minutes.
end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time.
lookback_minutes (integer, recommended): Number of minutes to look back from now. Default: 60. Examples: 60, 30, 15.
limit (integer, optional): Maximum number of logs to return. Default: 20.
severity_filters (array, optional): List of severity filters to apply. Valid values: "debug", "info", "warn", "error", "fatal".
body_filters (array, optional): List of body filters to apply.

You can install the Last9 Observability MCP server using either Homebrew or npm:
# Add the Last9 tap
brew tap last9/tap
# Install the Last9 MCP CLI
brew install last9-mcp
# Install globally
npm install -g @last9/mcp-server
# Or run directly with npx
npx @last9/mcp-server
The Last9 MCP server requires the following environment variables:
LAST9_BASE_URL: (required) Last9 API URL from OTel integration.
LAST9_AUTH_TOKEN: (required) Authentication token for the Last9 MCP server from OTel integration.
LAST9_REFRESH_TOKEN: (required) Refresh token with write permissions, needed for accessing control plane APIs from API Access.

Configure the Claude Desktop app to use the MCP server by adding the following to your claude_desktop_config.json file:
{
"mcpServers": {
"last9": {
"command": "/opt/homebrew/bin/last9-mcp",
"env": {
"LAST9_BASE_URL": "<last9_otlp_host>",
"LAST9_AUTH_TOKEN": "<last9_otlp_auth_token>",
"LAST9_REFRESH_TOKEN": "<last9_write_refresh_token>"
}
}
}
}
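Since the three env entries in the config above are all required, a pre-flight check can catch a missing one before the server fails to authenticate. check_env is a hypothetical helper, not part of the Last9 tooling.

```python
import os

# The three variables the Last9 MCP server requires.
REQUIRED = ["LAST9_BASE_URL", "LAST9_AUTH_TOKEN", "LAST9_REFRESH_TOKEN"]

def check_env(env=None):
    """Raise if any required variable is missing or empty."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required variables: {', '.join(missing)}")
    return True

# Example with an explicit mapping instead of the real environment:
fake = {"LAST9_BASE_URL": "https://otlp.example",
        "LAST9_AUTH_TOKEN": "token",
        "LAST9_REFRESH_TOKEN": "refresh"}
print(check_env(fake))  # True
```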
Configure Cursor to use the MCP server:
{
"mcpServers": {
"last9": {
"command": "/opt/homebrew/bin/last9-mcp",
"env": {
"LAST9_BASE_URL": "<last9_otlp_host>",
"LAST9_AUTH_TOKEN": "<last9_otlp_auth_token>",
"LAST9_REFRESH_TOKEN": "<last9_write_refresh_token>"
}
}
}
}
Configure Windsurf to use the MCP server by adding the following to your windsurf_config.json file:
{
"mcpServers": {
"last9": {
"command": "/opt/homebrew/bin/last9-mcp",
"env": {
"LAST9_BASE_URL": "<last9_otlp_host>",
"LAST9_AUTH_TOKEN": "<last9_otlp_auth_token>",
"LAST9_REFRESH_TOKEN": "<last9_write_refresh_token>"
}
}
}
}
Note: MCP support in VS Code is available starting v1.99 and is currently in preview. For advanced configuration options and alternative setup methods, view the VS Code MCP documentation.
{
"mcp": {
"servers": {
"last9": {
"type": "stdio",
"command": "/opt/homebrew/bin/last9-mcp",
"env": {
"LAST9_BASE_URL": "<last9_otlp_host>",
"LAST9_AUTH_TOKEN": "<last9_otlp_auth_token>",
"LAST9_REFRESH_TOKEN": "<last9_write_refresh_token>"
}
}
}
}
}
{
  "mcpServers": {
    "last9": {
      "command": "npx",
      "args": ["@last9/mcp-server"],
      "env": {
        "LAST9_BASE_URL": "<YOUR_BASE_URL>",
        "LAST9_AUTH_TOKEN": "<YOUR_AUTH_TOKEN>",
        "LAST9_REFRESH_TOKEN": "<YOUR_REFRESH_TOKEN>"
      }
    }
  }
}