by dynatrace-oss
Provides a local server that enables real‑time interaction with the Dynatrace observability platform, exposing tools for problem retrieval, DQL execution, Slack notifications, workflow automation, and AI‑assisted troubleshooting.
Enables developers to fetch, analyze, and act on Dynatrace telemetry directly from their IDE or terminal without leaving the development workflow.
{
  "mcpServers": {
    "dynatrace-mcp": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_PLATFORM_TOKEN": "<YOUR_PLATFORM_TOKEN>",
        "DT_ENVIRONMENT": "https://<YOUR_ENV>.apps.dynatrace.com",
        "SLACK_CONNECTION_ID": "<OPTIONAL_SLACK_CONNECTION_ID>"
      }
    }
  }
}
Run the server with the --http flag for HTTP transport. Key tools include list_problems, execute_dql, and send_slack_message; you can also interact with the AI-powered assistants for natural-language to DQL conversion.
Q: Do I incur costs when using the server?
A: The server itself is free, but any execute_dql queries that scan Dynatrace Grail storage may generate usage-based charges. Start with short time windows and use built-in buckets to limit scanned data.
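For example, a cost-conscious query might combine a short timeframe, a scan limit, and a bucket filter. This is a sketch; the bucket name default_logs and the limits are illustrative, so adjust them to your environment:
fetch logs, from: now() - 30m, scanLimitGBytes: 1
| filter dt.system.bucket == "default_logs"
| limit 20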
Q: Which authentication methods are supported?
A: Platform Tokens (recommended) or OAuth client credentials. Set DT_PLATFORM_TOKEN and DT_ENVIRONMENT for token-based auth, or OAUTH_CLIENT_ID and OAUTH_CLIENT_SECRET for OAuth.
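For example, a minimal .env file for token-based auth might look like this (placeholder values matching the formats documented below):
DT_ENVIRONMENT=https://abc12345.apps.dynatrace.com
DT_PLATFORM_TOKEN=dt0s16.SAMPLE.abcd1234
# or, for OAuth instead:
# OAUTH_CLIENT_ID=dt0s02.SAMPLE
# OAUTH_CLIENT_SECRET=dt0s02.SAMPLE.abcd1234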
Q: What scopes are required?
A: At minimum, app-engine:apps:run and app-engine:functions:run. Additional scopes depend on features (e.g., storage:logs:read for DQL logs, automation:workflows:* for workflow management, davis-copilot:* for AI assistants, etc.).
Q: Can I run the server as a web service?
A: Yes. Use the --http flag (default port 3000) or specify a custom host/port with --host and --port.
Q: How do I add the rule set for AI assistants?
A: Copy the rules/ directory into the appropriate rules folder for your assistant (e.g., .amazonq/rules/, .cursor/rules/, .clinerules/, etc.) and initialize with load dynatrace mcp in the chat.
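For Cursor, for instance, this might look like the following sketch (substitute your assistant's rules folder):
mkdir -p .cursor/rules
cp -r rules/* .cursor/rules/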
Q: What should I do if authentication fails?
A: Verify that the token/client ID is valid, has not expired, and includes all required scopes. Use the minimal test API call shown in the README to confirm connectivity.
This local MCP server allows interaction with the Dynatrace observability platform. Bring real-time observability data directly into your development workflow.
Important: While this local MCP server is provided for free, using certain capabilities to access data in Dynatrace Grail may incur additional costs based on your Dynatrace consumption model. This affects the execute_dql tool and other capabilities that query Dynatrace Grail storage; costs depend on the volume of data scanned (GB).
Before using this MCP server extensively, please make sure you understand the costs that may occur. To see how many bytes have been queried from Grail (Logs, Events, etc.) by this server, execute the following DQL statement in a notebook:
fetch dt.system.events
| filter event.kind == "QUERY_EXECUTION_EVENT" and contains(client.client_context, "dynatrace-mcp")
| sort timestamp desc
| fields timestamp, query_id, query_string, scanned_bytes, table, bucket, user.id, user.email, client.client_context
| makeTimeseries sum(scanned_bytes), by: { user.email, user.id, table }
Note: While Davis CoPilot AI is generally available (GA), the Davis CoPilot APIs are currently in preview. For more information, visit the Davis CoPilot Preview Community.
Enhance your AI assistant with comprehensive Dynatrace observability analysis capabilities through our streamlined workshop rules. These rules provide hierarchical workflows for security, compliance, incident response, and distributed systems investigation.
Copy the comprehensive rule files from the rules/ directory to your AI assistant's rules directory:
IDE-Specific Locations:
- Amazon Q: .amazonq/rules/ (project) or ~/.aws/amazonq/rules/ (global)
- Cursor: .cursor/rules/ (project) or via Settings → Rules (global)
- Windsurf: .windsurfrules/ (project) or via Customizations → Rules (global)
- Cline: .clinerules/ (project) or ~/Documents/Cline/Rules/ (global)
- GitHub Copilot: .github/copilot-instructions.md (project only)
Then initialize the agent in your AI chat:
load dynatrace mcp
The workshop rules unlock advanced observability analysis modes and are organized in a context-window-optimized structure:
rules/
├── DynatraceMcpIntegration.md # 🎯 MAIN ORCHESTRATOR
├── workflows/ # 🔧 ANALYSIS WORKFLOWS
│ ├── incidentResponse.md # Core incident investigation
│ ├── DynatraceSecurityCompliance.md # Security & compliance analysis
│ ├── DynatraceDevOpsIntegration.md # CI/CD automation
│ └── dataSourceGuides/ # 📊 DATA ANALYSIS GUIDES
│ ├── dataInvestigation.md # Logs, services, processes
│ └── DynatraceSpanAnalysis.md # Transaction tracing
└── reference/ # 📚 TECHNICAL DOCUMENTATION
├── DynatraceQueryLanguage.md # DQL syntax foundation
├── DynatraceExplore.md # Field discovery patterns
├── DynatraceSecurityEvents.md # Security events schema
└── DynatraceProblemsSpec.md # Problems schema reference
For detailed information about the workshop rules and their key architectural benefits, see the Rules README.
You can add this MCP server (using STDIO) to your MCP client, such as VS Code, Claude, Cursor, Amazon Q Developer CLI, Windsurf, or GitHub Copilot, via the package @dynatrace-oss/dynatrace-mcp-server.
We recommend always setting it up for your current workspace instead of using it globally.
VS Code
{
  "servers": {
    "npx-dynatrace-mcp-server": {
      "command": "npx",
      "cwd": "${workspaceFolder}",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "envFile": "${workspaceFolder}/.env"
    }
  }
}
Please note: this config uses the ${workspaceFolder} variable, which only works if the config is stored in the current workspace, e.g., <your-repo>/.vscode/mcp.json. Alternatively, the config can be stored in user settings, where you can define env as follows:
{
  "servers": {
    "npx-dynatrace-mcp-server": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_PLATFORM_TOKEN": "",
        "DT_ENVIRONMENT": ""
      }
    }
  }
}
Claude Desktop
{
  "mcpServers": {
    "dynatrace-mcp": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_PLATFORM_TOKEN": "",
        "DT_ENVIRONMENT": ""
      }
    }
  }
}
Amazon Q Developer CLI
The Amazon Q Developer CLI provides an interactive chat experience directly in your terminal. You can ask questions, get help with AWS services, troubleshoot issues, and generate code snippets without leaving your command line environment.
{
  "mcpServers": {
    "dynatrace-mcp": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_PLATFORM_TOKEN": "",
        "DT_ENVIRONMENT": ""
      }
    }
  }
}
This configuration should be stored in <your-repo>/.amazonq/mcp.json.
For scenarios where you need to run the MCP server as an HTTP service instead of using stdio (e.g., for stateful sessions, load balancing, or integration with web clients), you can use the HTTP server mode:
Running as HTTP server:
# Get help and see all available options
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --help
# Run with HTTP server on default port 3000
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http
# Run with custom port (using short or long flag)
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http -p 8080
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http --port 3001
# Run with custom host/IP (using short or long flag)
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http --host 127.0.0.1
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http -H 192.168.0.1
# Check version
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --version
Configuration for MCP clients that support HTTP transport:
{
  "mcpServers": {
    "dynatrace-http": {
      "url": "http://localhost:3000",
      "transport": "http"
    }
  }
}
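Note that in HTTP mode the server reads DT_ENVIRONMENT and DT_PLATFORM_TOKEN from its own process environment rather than from the client config, so export them before starting it. A sketch with placeholder values:
export DT_ENVIRONMENT="https://abc12345.apps.dynatrace.com"
export DT_PLATFORM_TOKEN="dt0s16.SAMPLE.abcd1234"
npx -y @dynatrace-oss/dynatrace-mcp-server@latest --http --port 3000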
For efficient result retrieval from Dynatrace, consider creating a rule file (e.g., .github/copilot-instructions.md, .amazonq/rules/) that instructs coding agents on how to get more details for your component/app/service. Here is an example for easytrade; please adapt the names and filters to fit your use cases and components:
# Observability
We use Dynatrace as an Observability solution. This document provides instructions on how to get data for easytrade from Dynatrace using DQL.
## How to get any data for my App
Depending on the query and tool used, the following filters can be applied to narrow down results:
* `contains(entity.name, "easytrade")`
* `contains(affected_entity.name, "easytrade")`
* `contains(container.name, "easytrade")`
For best results, you can combine these filters with an `OR` operator.
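For example (a sketch; adjust to the record type you query): `... | filter contains(entity.name, "easytrade") OR contains(affected_entity.name, "easytrade") OR contains(container.name, "easytrade")`.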
## Logs
To fetch logs for easytrade, execute `fetch logs | filter contains(container.name, "easytrade")`.
For fetching just error-logs, add `| filter loglevel == "ERROR"`.
You can set up authentication via Platform Tokens (recommended) or an OAuth Client, using the following environment variables:
- DT_ENVIRONMENT (string, e.g., https://abc12345.apps.dynatrace.com) - URL of your Dynatrace Platform (do not use Dynatrace classic URLs like abc12345.live.dynatrace.com)
- DT_PLATFORM_TOKEN (string, e.g., dt0s16.SAMPLE.abcd1234) - Recommended: Dynatrace Platform Token
- OAUTH_CLIENT_ID (string, e.g., dt0s02.SAMPLE) - Alternative: Dynatrace OAuth Client ID (for advanced use cases)
- OAUTH_CLIENT_SECRET (string, e.g., dt0s02.SAMPLE.abcd1234) - Alternative: Dynatrace OAuth Client Secret (for advanced use cases)
Platform Tokens are recommended for most use cases as they provide a simpler authentication flow. OAuth Clients should only be used when specific OAuth features are required.
For more information, please have a look at the documentation about creating a Platform Token in Dynatrace, as well as creating an OAuth Client in Dynatrace for advanced scenarios.
In addition, depending on the features you use, the following variables can be configured:
- SLACK_CONNECTION_ID (string) - connection ID of a Slack Connection
Depending on the features you are using, the following scopes are needed:
Available for both Platform Tokens and OAuth Clients:
- app-engine:apps:run - needed for almost all tools
- app-engine:functions:run - needed for almost all tools
- environment-api:entities:read - for retrieving ownership details from monitored entities (currently not available for Platform Tokens)
- automation:workflows:read - read Workflows
- automation:workflows:write - create and update Workflows
- automation:workflows:run - run Workflows
- storage:buckets:read - needed for the execute_dql tool to read all system data stored on Grail
- storage:logs:read - needed for the execute_dql tool to read logs for reliability guardian validations
- storage:metrics:read - needed for the execute_dql tool to read metrics for reliability guardian validations
- storage:bizevents:read - needed for the execute_dql tool to read bizevents for reliability guardian validations
- storage:spans:read - needed for the execute_dql tool to read spans from Grail
- storage:entities:read - needed for the execute_dql tool to read Entities from Grail
- storage:events:read - needed for the execute_dql tool to read Events from Grail
- storage:security.events:read - needed for the execute_dql tool to read Security Events from Grail
- storage:system:read - needed for the execute_dql tool to read System Data from Grail
- storage:user.events:read - needed for the execute_dql tool to read User events from Grail
- storage:user.sessions:read - needed for the execute_dql tool to read User sessions from Grail
- davis-copilot:conversations:execute - execute conversational skill (chat with Copilot)
- davis-copilot:nl2dql:execute - execute Davis Copilot Natural Language (NL) to DQL skill
- davis-copilot:dql2nl:execute - execute DQL to Natural Language (NL) skill
- settings:objects:read - needed for reading ownership information and Guardians (SRG) from settings
Note: Please ensure that settings:objects:read is used, and not the similarly named scope app-settings:objects:read.
Important: Some features requiring environment-api:entities:read will only work with OAuth Clients. For most use cases, Platform Tokens provide all necessary functionality.
Use these example prompts as a starting point. Just copy them into your IDE or agent setup, adapt them to your services/stack/architecture, and extend them as needed. They're here to help you imagine how real-time observability and automation work together in the MCP context in your IDE.
Write a DQL query from natural language:
Show me error rates for the payment service in the last hour
Explain a DQL query:
What does this DQL do?
fetch logs | filter dt.source_entity == 'SERVICE-123' | summarize count(), by:{severity} | sort count() desc
Chat with Davis CoPilot:
How can I investigate slow database queries in Dynatrace?
Multi-phase incident response:
Our checkout service is experiencing high error rates. Start a systematic 4-phase incident investigation:
1. Detect and triage the active problems
2. Assess user impact and affected services
3. Perform cross-data source analysis (problems → spans → logs)
4. Identify root cause with file/line-level precision
Cross-service failure analysis:
We have cascading failures across our microservices architecture.
Analyze the entity relationships and trace the failure propagation from the initial problem
through all downstream services. Show me the correlation timeline.
Latest-scan vulnerability assessment:
Perform a comprehensive security analysis using the latest scan data:
- Check for new vulnerabilities in our production environment
- Focus on critical and high-severity findings
- Provide evidence-based remediation paths
- Generate risk scores with team-specific guidance
Multi-cloud compliance monitoring:
Run a compliance assessment across our AWS, Azure, and Kubernetes environments.
Check for configuration drift and security posture changes in the last 24 hours.
Deployment health gate analysis:
Our latest deployment is showing performance degradation.
Run deployment health gate analysis with:
- Golden signals monitoring (Rate, Errors, Duration, Saturation)
- SLO/SLI validation with error budget calculations
- Generate automated rollback recommendation if needed
Infrastructure as Code remediation:
Generate Infrastructure as Code templates to remediate the current alert patterns.
Include automated scaling policies and resource optimization recommendations.
Business logic error investigation:
Our payment processing is showing intermittent failures.
Perform advanced transaction analysis:
- Extract exception details with full stack traces
- Correlate with deployment events and ArgoCD changes
- Identify the exact code location causing the issue
Performance correlation analysis:
Analyze the performance impact across our distributed system for the slow checkout flow.
Show me the complete trace analysis with business context and identify bottlenecks.
Find open vulnerabilities on production, set up an alert:
I have this code snippet here in my IDE, where I get a dependency vulnerability warning for my code.
Check if I see any open vulnerability/CVE on production.
Analyze a specific production problem.
Set up a workflow that sends Slack alerts to the #devops-alerts channel when availability problems occur.
Debug intermittent 503 errors:
Our load balancer is intermittently returning 503 errors during peak traffic.
Pull all recent problems detected for our front-end services and
run a query to correlate error rates with service instance health indicators.
I suspect we have circuit breakers triggering, but need confirmation from the telemetry data.
Correlate memory issue with logs:
There's a problem with high memory usage on one of our hosts.
Get the problem details and then fetch related logs to help me understand
what's causing the memory spike. Which file in this repo is this related to?
Trace request flow analysis:
Our users are experiencing slow checkout processes.
Can you execute a DQL query to show me the full request trace for our checkout flow,
so I can identify which service is causing the bottleneck?
Analyze Kubernetes cluster events:
Our application deployments seem to be failing intermittently.
Can you fetch recent events from our "production-cluster"
to help identify what might be causing these deployment issues?
In most cases, authentication issues are related to missing scopes or invalid tokens. Please ensure that you have added all required scopes as listed above.
For Platform Tokens: verify that the token is valid, has not expired, and includes all required scopes.
For OAuth Clients: in case of OAuth-related problems, you can troubleshoot SSO/OAuth issues based on our Dynatrace Developer Documentation.
It is recommended to test access with the following API call, which requires only the minimal scopes app-engine:apps:run and app-engine:functions:run:
curl --request POST 'https://sso.dynatrace.com/sso/oauth2/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id={your-client-id}' \
--data-urlencode 'client_secret={your-client-secret}' \
--data-urlencode 'scope=app-engine:apps:run app-engine:functions:run'
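If you have jq installed, you can capture the token directly with a convenience sketch like this (not part of the official docs):
ACCESS_TOKEN=$(curl -s --request POST 'https://sso.dynatrace.com/sso/oauth2/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=client_credentials' \
  --data-urlencode 'client_id={your-client-id}' \
  --data-urlencode 'client_secret={your-client-secret}' \
  --data-urlencode 'scope=app-engine:apps:run app-engine:functions:run' \
  | jq -r '.access_token')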
Use the access_token from the response of the above call as the bearer token in the next call:
curl -X GET https://abc12345.apps.dynatrace.com/platform/management/v1/environment \
-H 'accept: application/json' \
-H 'Authorization: Bearer {your-bearer-token}'
A successful call returns the environment details, e.g.:
{
  "environmentId": "abc12345",
  "createTime": "2023-01-01T00:10:57.123Z",
  "blockTime": "2025-12-07T00:00:00Z",
  "state": "ACTIVE"
}
Grail has a dedicated section about permissions in the Dynatrace Docs. Please refer to https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/data-model/assign-permissions-in-grail for more details.
For local development purposes, you can use VSCode and GitHub Copilot.
First, enable Copilot for your workspace in .vscode/settings.json:
{
  "github.copilot.enable": {
    "*": true
  }
}
and make sure that you are using Agent Mode in Copilot.
Second, add the MCP server to .vscode/mcp.json:
{
  "servers": {
    "my-dynatrace-mcp-server": {
      "command": "node",
      "args": ["--watch", "${workspaceFolder}/dist/index.js"],
      "envFile": "${workspaceFolder}/.env"
    }
  }
}
Third, create a .env file in this repository (you can copy from .env.template) and configure the environment variables as described above.
Finally, make changes to your code and compile it with npm run build, or just run npm run watch to auto-compile on change.
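Put together, a typical local development loop might look like this sketch (npm install assumed as the usual first step):
cp .env.template .env    # then fill in DT_ENVIRONMENT and DT_PLATFORM_TOKEN
npm install              # install dependencies
npm run watch            # auto-compiles on every change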
When you are preparing for a release, you can use GitHub Copilot to guide you through the preparations.
In Visual Studio Code, you can use /release in the chat with Copilot in Agent Mode, which will execute release.prompt.md.
You may include additional information such as the version number; if not specified, you will be asked.
This product is not officially supported by Dynatrace. Please contact us via GitHub Issues if you have feature requests, questions, or need help.
Discover more MCP servers with similar functionality and use cases
by netdata
Real-time, per‑second infrastructure monitoring platform that provides instant insights, auto‑discovery, edge‑based machine‑learning anomaly detection, and lightweight visualizations without requiring complex configuration.
by Arize-ai
Arize Phoenix is an open-source AI and LLM observability tool for inspecting traces, managing prompts, curating datasets, and running experiments.
by msgbyte
Provides website analytics, uptime monitoring, and server status in a single self‑hosted application.
by grafana
Provides programmatic access to Grafana dashboards, datasources, alerts, incidents, and related operational data through a Model Context Protocol server, enabling AI assistants and automation tools to query and manipulate Grafana resources.
by pydantic
Provides tools to retrieve, query, and visualize OpenTelemetry traces and metrics from Pydantic Logfire via a Model Context Protocol server.
by VictoriaMetrics-Community
Access VictoriaMetrics instances through Model Context Protocol, enabling AI assistants and tools to query metrics, explore labels, debug configurations, and retrieve documentation without leaving the conversational interface.
by axiomhq
Axiom MCP Server implements the Model Context Protocol (MCP) for Axiom, enabling AI agents to query logs, traces, and other event data using the Axiom Processing Language (APL). It allows AI agents to perform monitoring, observability, and natural language analysis of data for debugging and incident response.
by GeLi2001
Datadog MCP Server is a Model Context Protocol (MCP) server that interacts with the official Datadog API. It enables users to access and manage various Datadog functionalities, including monitoring, dashboards, metrics, events, logs, and incidents.
by last9
Provides AI agents with real‑time production context—logs, metrics, and traces—through a Model Context Protocol server that can be queried from development environments.