by LucasHild
mcp-server-bigquery is a Python-based Model Context Protocol (MCP) server that enables large language models (LLMs) to interact with Google BigQuery. It allows LLMs to inspect database schemas and execute SQL queries on BigQuery, facilitating natural language database interaction and automated data analysis.
Q: What is the Model Context Protocol (MCP)?
A: The Model Context Protocol is a standard that allows LLMs to interact with external tools and services, providing them with context and capabilities beyond their core language generation.
Q: What are the required configuration parameters?
A: The `--project` (or `BIGQUERY_PROJECT`) and `--location` (or `BIGQUERY_LOCATION`) parameters are required to specify your GCP project ID and location, respectively.

Q: Can I limit the BigQuery datasets the server considers?
A: Yes, you can use the `--dataset` argument or the `BIGQUERY_DATASETS` environment variable to specify particular datasets. If not provided, all datasets in the project will be considered.

Q: How can I debug mcp-server-bigquery?
A: It is recommended to use the MCP Inspector for debugging. You can launch it with `npx @modelcontextprotocol/inspector uv --directory {{PATH_TO_REPO}} run mcp-server-bigquery`.

Q: How can I install mcp-server-bigquery?
A: You can install it via Smithery using `npx -y @smithery/cli install mcp-server-bigquery --client claude`, or configure it manually for Claude Desktop.
A Model Context Protocol server that provides access to BigQuery. This server enables LLMs to inspect database schemas and execute queries.
The server implements three tools:

- `execute-query`: Executes a SQL query using the BigQuery dialect
- `list-tables`: Lists all tables in the BigQuery database
- `describe-table`: Describes the schema of a specific table

The server can be configured either with command line arguments or environment variables.
| Argument | Environment Variable | Required | Description |
|---|---|---|---|
| `--project` | `BIGQUERY_PROJECT` | Yes | The GCP project ID. |
| `--location` | `BIGQUERY_LOCATION` | Yes | The GCP location (e.g. `europe-west9`). |
| `--dataset` | `BIGQUERY_DATASETS` | No | Only take specific BigQuery datasets into consideration. Several datasets can be specified by repeating the argument (e.g. `--dataset my_dataset_1 --dataset my_dataset_2`) or by joining them with a comma in the environment variable (e.g. `BIGQUERY_DATASETS=my_dataset_1,my_dataset_2`). If not provided, all datasets in the project will be considered. |
| `--key-file` | `BIGQUERY_KEY_FILE` | No | Path to a service account key file for BigQuery. If not provided, the server will use the default credentials. |
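To illustrate the precedence described in the table, here is a minimal sketch of how the dataset filter could be resolved from a repeated CLI argument or the comma-separated environment variable. This is not the server's actual implementation; the function name `resolve_datasets` is hypothetical.

```python
import os


def resolve_datasets(cli_datasets, env=None):
    """Resolve the dataset filter.

    Repeated --dataset flags arrive as a list and take precedence;
    BIGQUERY_DATASETS is a comma-separated string used as a fallback.
    An empty result means "consider all datasets in the project".
    """
    if env is None:
        env = os.environ
    if cli_datasets:
        return list(cli_datasets)
    raw = env.get("BIGQUERY_DATASETS", "")
    return [d.strip() for d in raw.split(",") if d.strip()]


# Repeated CLI flags win over the environment variable:
print(resolve_datasets(["my_dataset_1", "my_dataset_2"]))
# Environment fallback, comma-joined:
print(resolve_datasets([], {"BIGQUERY_DATASETS": "my_dataset_1,my_dataset_2"}))
# Neither given: the empty list means all datasets are considered.
print(resolve_datasets([], {}))
```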
To install BigQuery Server for Claude Desktop automatically via Smithery:

```shell
npx -y @smithery/cli install mcp-server-bigquery --client claude
```

To configure the server manually, add it to your Claude Desktop configuration file:

- On MacOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
Development/Unpublished Servers Configuration
```json
"mcpServers": {
  "bigquery": {
    "command": "uv",
    "args": [
      "--directory",
      "{{PATH_TO_REPO}}",
      "run",
      "mcp-server-bigquery",
      "--project",
      "{{GCP_PROJECT_ID}}",
      "--location",
      "{{GCP_LOCATION}}"
    ]
  }
}
```
Published Servers Configuration
```json
"mcpServers": {
  "bigquery": {
    "command": "uvx",
    "args": [
      "mcp-server-bigquery",
      "--project",
      "{{GCP_PROJECT_ID}}",
      "--location",
      "{{GCP_LOCATION}}"
    ]
  }
}
```
Replace {{PATH_TO_REPO}}
, {{GCP_PROJECT_ID}}
, and {{GCP_LOCATION}}
with the appropriate values.
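If you prefer not to edit the JSON by hand, the substitution can be scripted. The sketch below merges a `bigquery` entry (the development/unpublished variant shown above) into an existing configuration; the helper name `add_bigquery_server` and the example values are hypothetical.

```python
import json


def add_bigquery_server(config_text, path_to_repo, project_id, location):
    """Merge a 'bigquery' server entry into a Claude Desktop config.

    Existing entries under "mcpServers" are preserved; the "bigquery"
    entry is added or overwritten with the given placeholder values.
    """
    config = json.loads(config_text) if config_text.strip() else {}
    servers = config.setdefault("mcpServers", {})
    servers["bigquery"] = {
        "command": "uv",
        "args": [
            "--directory", path_to_repo,
            "run", "mcp-server-bigquery",
            "--project", project_id,
            "--location", location,
        ],
    }
    return json.dumps(config, indent=2)


print(add_bigquery_server("{}", "/path/to/mcp-server-bigquery",
                          "my-project", "europe-west9"))
```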
To prepare the package for distribution:

1. Increase the version number in `pyproject.toml`.
2. Sync dependencies and update the lockfile: `uv sync`
3. Build source and wheel distributions into the `dist/` directory: `uv build`
4. Publish to PyPI: `uv publish`

Note: You'll need to set PyPI credentials via environment variables or command flags:

- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via `npm` with this command:

```shell
npx @modelcontextprotocol/inspector uv --directory {{PATH_TO_REPO}} run mcp-server-bigquery
```
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
Discover more MCP servers with similar functionality and use cases:

- by googleapis: Provides a configurable MCP server that abstracts connection pooling, authentication, observability, and tool management to accelerate development of database-backed AI tools.
- by bytebase: DBHub is a universal database gateway that implements the Model Context Protocol (MCP) server interface, enabling MCP-compatible clients to interact with various databases.
- by neo4j-contrib: Provides Model Context Protocol servers for interacting with Neo4j databases, managing Aura instances, and handling personal knowledge graph memory through natural-language interfaces.
- by mongodb-js: Provides a Model Context Protocol server that connects to MongoDB databases and Atlas clusters, exposing a rich set of tools for querying, managing, and administering data and infrastructure.
- by benborla: A Model Context Protocol (MCP) server that provides read-only access to MySQL databases, enabling Large Language Models (LLMs) to inspect database schemas and execute read-only queries.
- by ClickHouse: Provides tools that let AI assistants run read-only SQL queries against ClickHouse clusters or the embedded chDB engine, plus a health-check endpoint for service monitoring.
- by elastic: Provides direct, natural-language access to Elasticsearch indices via the Model Context Protocol, allowing AI agents to query and explore data without writing DSL.
- by motherduckdb: Provides an MCP server that enables SQL analytics on DuckDB and MotherDuck databases, allowing AI assistants and IDEs to execute queries via a unified interface.
- by redis: Provides a natural language interface for agentic applications to manage and search data in Redis efficiently.