by marctheshark3
ergo-mcp provides a standardized interface for AI assistants to access Ergo blockchain data. It bridges the gap between AI models and the Ergo blockchain ecosystem by offering structured blockchain data in AI-friendly formats and enabling complex blockchain analysis through natural language queries.
```shell
git clone https://github.com/ergo-mcp/ergo-explorer-mcp.git
cd ergo-explorer-mcp
pip install -r requirements.txt
```

Configure your environment variables (ERGO_EXPLORER_API, ERGO_NODE_API, ERGO_NODE_API_KEY), then run the server:

```shell
python -m ergo_explorer.server
```

Alternatively, run it with Docker:

```shell
docker build -t ergo-explorer-mcp .
docker run -d -p 8000:8000 \
  -e ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1" \
  -e ERGO_NODE_API="http://your-node-address:9053" \
  -e ERGO_NODE_API_KEY="your-api-key" \
  --name ergo-mcp ergo-explorer-mcp
```
The MCPResponseStandardizer class transforms various output formats (JSON, Markdown, plaintext) into a consistent JSON structure:
```python
from mcp_response_standardizer import MCPResponseStandardizer

standardizer = MCPResponseStandardizer()
standardized = standardizer.standardize_response(endpoint_name, response_content, status_code)
```
It can also be invoked from the command line:

```shell
python mcp_response_standardizer.py blockchain_status response.txt
```
```python
from ergo_explorer.api import make_request

response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})
```
```
[Tool: openwebui_entity_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
```
Q: What problem does ergo-mcp solve regarding API responses? A: The MCP API returns responses in inconsistent formats (JSON, Markdown, plaintext, mixed), making integration difficult. ergo-mcp standardizes these responses into a consistent JSON structure.
Q: What kind of data can AI assistants access through ergo-mcp? A: AI assistants can access structured blockchain data, including blocks, transactions, network statistics, address balances, transaction history, token information, and more.
Q: Does ergo-mcp support historical data analysis? A: Yes, ergo-mcp includes comprehensive functionality for tracking the historical ownership of tokens and analyzing how distribution changes over time, including complete token history and block height tracking.
Q: How does ergo-mcp help with token estimation for LLMs? A: It includes built-in token estimation capabilities that provide an estimate of the number of tokens in each response for various LLM models, helping AI assistants optimize their context window usage.
Q: Are there any external dependencies required for ergo-mcp? A: The MCPResponseStandardizer itself has no external dependencies. However, the full ergo-mcp project requires Python 3.8+ and access to the Ergo Explorer API, with optional access to an Ergo Node API.
A standardization tool for Ergo MCP API responses that transforms various output formats (JSON, Markdown, plaintext) into a consistent JSON structure for improved integration and usability.
The MCP API returns responses in inconsistent formats: some endpoints return JSON, others Markdown, plaintext, or a mix. This inconsistency makes it difficult to integrate with other systems and requires custom handling for each endpoint.
The MCPResponseStandardizer transforms all responses into a consistent JSON structure:
```json
{
  "success": true,
  "data": {
    // Standardized response data extracted from the original
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
For error responses:
```json
{
  "success": false,
  "error": {
    "code": 400,
    "message": "Error message"
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
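The two envelopes above can be sketched as a small wrapper function. This is a minimal illustration of the envelope shape only, not the actual MCPResponseStandardizer implementation, which also detects and parses Markdown, plaintext, and mixed content:

```python
import json
from datetime import datetime, timezone

def wrap_response(endpoint, payload, status_code, detected_format="json"):
    """Wrap raw endpoint output in the standardized success/error envelope.

    Sketch only: the real standardizer also performs format detection
    and extraction before wrapping.
    """
    meta = {
        "format": detected_format,
        "endpoint": endpoint,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if 200 <= status_code < 300:
        return {"success": True, "data": payload, "meta": meta}
    return {
        "success": False,
        "error": {"code": status_code, "message": str(payload)},
        "meta": meta,
    }

# Success and error cases produce the two envelope shapes shown above.
ok = wrap_response("blockchain_status", {"height": 1200000}, 200)
err = wrap_response("blockchain_status", "Bad request", 400)
```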
```python
from mcp_response_standardizer import MCPResponseStandardizer

# Initialize the standardizer
standardizer = MCPResponseStandardizer()

# Standardize a response
endpoint_name = "blockchain_status"
response_content = "..."  # Content from the MCP API
status_code = 200  # HTTP status code from the API call

# Get standardized response
standardized = standardizer.standardize_response(
    endpoint_name,
    response_content,
    status_code
)

# Access the standardized data
if standardized["success"]:
    data = standardized["data"]
    # Use the standardized data...
else:
    error = standardized["error"]
    print(f"Error {error['code']}: {error['message']}")
```
You can also use the standardizer from the command line:
```shell
python mcp_response_standardizer.py blockchain_status response.txt
```

Where:

- blockchain_status is the endpoint name
- response.txt is a file containing the response content

A test script, test_standardizer.py, is provided to demonstrate the standardizer with sample responses:

```shell
python test_standardizer.py
```

This script runs the standardizer against the sample responses in the sample_responses directory, detecting each response's format (JSON, Markdown, plaintext, or mixed) and converting it into the consistent JSON structure described above.
Ergo Explorer Model Context Protocol (MCP) is a comprehensive server that provides AI assistants with direct access to Ergo blockchain data through a standardized interface.
This project bridges the gap between AI assistants and the Ergo blockchain ecosystem by offering structured blockchain data in AI-friendly formats and enabling complex blockchain analysis through natural language queries.
All endpoints in the Ergo Explorer MCP implement a standardized response format system that uses the @standardize_response decorator for automatic format conversion:

```json
{
  "status": "success", // or "error"
  "data": {
    // Endpoint-specific structured data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789
  }
}
```
For more information on response standardization, see RESPONSE_STANDARDIZATION.md.
The Ergo Explorer MCP provides advanced entity identification capabilities through address clustering algorithms. This feature helps identify groups of addresses likely controlled by the same entity.
The following endpoints are available for entity identification:
- /address_clustering/identify
- /address_clustering/visualize
- /address_clustering/openwebui_entity_tool
- /address_clustering/openwebui_viz_tool
Ergo Explorer MCP integrates with Open WebUI to provide enhanced visualization and interaction capabilities.
To identify entities related to an address:
```python
from ergo_explorer.api import make_request

# Identify entities for an address
response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Get visualization for an address
viz_response = make_request("address_clustering/visualize", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Access entity clusters
entities = response["data"]["clusters"]
for entity_id, entity_data in entities.items():
    print(f"Entity {entity_id}: {len(entity_data['addresses'])} addresses")
    print(f"Confidence: {entity_data['confidence_score']}")
```
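Building on the cluster fields shown above (addresses and confidence_score per entity), a small helper can pick out the highest-confidence cluster. The sample data here is illustrative, not real clustering output:

```python
def top_cluster(clusters):
    """Return (entity_id, entity_data) for the cluster with the highest
    confidence_score. Assumes the cluster shape documented above."""
    return max(clusters.items(), key=lambda kv: kv[1]["confidence_score"])

# Illustrative sample mirroring the documented response fields.
sample = {
    "e1": {"addresses": ["9gUD...", "9hdc..."], "confidence_score": 0.91},
    "e2": {"addresses": ["9abc..."], "confidence_score": 0.42},
}
entity_id, entity_data = top_cluster(sample)
```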
To use the Open WebUI tools:
```
[Tool: openwebui_entity_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
```

```
[Tool: openwebui_viz_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
```
The Ergo Explorer MCP includes built-in token estimation capabilities to help AI assistants optimize their context window usage. This feature provides an estimate of the number of tokens in each response for various LLM models.
Token counts are computed with tiktoken where available, with a simpler fallback estimate when tiktoken is not available. Token estimation is included in the metadata section of all standardized responses:
```json
{
  "status": "success",
  "data": {
    // Response data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789,
    "token_breakdown": {
      "data": 650,
      "metadata": 89,
      "status": 50
    }
  }
}
```
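When tiktoken is unavailable, a character-based fallback along these lines gives a rough estimate. The 4-characters-per-token heuristic below is an assumption for illustration, not necessarily the project's exact fallback:

```python
import json

def rough_token_estimate(payload, chars_per_token=4):
    """Rough token estimate from serialized length.

    Assumption: ~4 characters per token, a common heuristic for
    English text; the project's actual fallback may differ.
    """
    text = payload if isinstance(payload, str) else json.dumps(payload)
    return max(1, len(text) // chars_per_token)

estimate = rough_token_estimate({"status": "success", "data": {"height": 1200000}})
```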
To access token estimates in responses:
```python
from ergo_explorer.api import make_request

# Make a request to any endpoint
response = make_request("blockchain/status")

# Access token estimation information
token_count = response["metadata"]["token_estimate"]
is_truncated = response["metadata"]["is_truncated"]

print(f"Response contains approximately {token_count} tokens")
if is_truncated:
    print("Response was truncated to fit within token limits")
```
You can specify which LLM model to use for token estimation:
```python
from ergo_explorer.api import make_request

# Request with a specific model type for token estimation
response = make_request("blockchain/address_info",
                        {"address": "9hdcMw4eRpJPJGx8RJhvdRgFRsE1URpQCsAWM3wG547gQ9awZgi"},
                        model_type="gpt-4")

# The token_estimate will be calculated based on GPT-4's tokenization
```
| Response Type | Target Token Range | Optimization Strategy |
|---|---|---|
| Simple queries | < 500 tokens | Full response without truncation |
| Standard queries | 500-2000 tokens | Selective field inclusion |
| Complex queries | 2000-5000 tokens | Pagination or truncated response |
| Data-intensive | > 5000 tokens | Summary with optional detail retrieval |
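The thresholds in the table above map naturally to a dispatch function. The strategy names here simply paraphrase the table's rightmost column; this is a sketch, not the server's internal logic:

```python
def optimization_strategy(token_estimate):
    """Pick an optimization strategy from the target-token-range table above."""
    if token_estimate < 500:
        return "full_response"           # Simple queries
    if token_estimate <= 2000:
        return "selective_fields"        # Standard queries
    if token_estimate <= 5000:
        return "paginate_or_truncate"    # Complex queries
    return "summary_with_detail_retrieval"  # Data-intensive
```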
The Ergo Explorer MCP includes comprehensive functionality for tracking the historical ownership of tokens and analyzing how distribution changes over time:
```
// Simple request with just essential parameters
GET /token/historical_token_holders
{
  "token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
  "max_transactions": 200
}
```
Response format includes detailed token transfer history and snapshots of token distribution at various points in time (or block heights).
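Given a distribution snapshot from such a response, each holder's share of the supply can be computed locally. The field layout in this sample (a plain address-to-amount mapping) is an assumption for illustration and may differ from the actual response schema:

```python
def distribution_shares(balances):
    """Compute each address's fraction of total supply from a balance snapshot.

    `balances` maps address -> token amount; this shape is assumed for
    illustration, not taken from the documented schema.
    """
    total = sum(balances.values())
    return {addr: amount / total for addr, amount in balances.items()}

# Illustrative snapshot of holdings at one point in time.
snapshot = {"9gUD...": 600, "9hdc...": 300, "9abc...": 100}
shares = distribution_shares(snapshot)
```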
Clone the repository:

```shell
git clone https://github.com/ergo-mcp/ergo-explorer-mcp.git
cd ergo-explorer-mcp
```

Install dependencies:

```shell
pip install -r requirements.txt
```

Configure your environment:

```shell
# Set up environment variables
export ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1"
export ERGO_NODE_API="http://your-node-address:9053"  # Optional
export ERGO_NODE_API_KEY="your-api-key"  # Optional
```

Run the MCP server:

```shell
python -m ergo_explorer.server
```
Build the Docker image:

```shell
docker build -t ergo-explorer-mcp .
```

Run the container:

```shell
docker run -d -p 8000:8000 \
  -e ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1" \
  -e ERGO_NODE_API="http://your-node-address:9053" \
  -e ERGO_NODE_API_KEY="your-api-key" \
  --name ergo-mcp ergo-explorer-mcp
```
To contribute to the project, install the runtime and test dependencies and run the test suite:

```shell
pip install -r requirements.txt
pip install -r requirements.test.txt
pytest
```
For comprehensive documentation, see RESPONSE_STANDARDIZATION.md.
This project is licensed under the MIT License - see the LICENSE file for details.