Firecrawl MCP Server is an official Model Context Protocol (MCP) server implementation that integrates with Firecrawl to provide powerful web scraping capabilities to Large Language Models (LLMs) through clients such as Cursor, Claude Desktop, and other MCP-compatible tools. It acts as a bridge between LLMs and the web, allowing them to access and process web content for various tasks.
Firecrawl MCP Server can be used by integrating it with your LLM client. Installation methods include:

* Running with npx: env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
* Installing globally: npm install -g firecrawl-mcp
* Running in SSE mode: env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp, then connect via http://localhost:3000/sse
* Installing for Claude Desktop via Smithery: npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
Configuration can be further customized using environment variables for retry behavior (e.g., FIRECRAWL_RETRY_MAX_ATTEMPTS, FIRECRAWL_RETRY_INITIAL_DELAY) and credit usage monitoring (e.g., FIRECRAWL_CREDIT_WARNING_THRESHOLD, FIRECRAWL_CREDIT_CRITICAL_THRESHOLD).
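For example, a one-off launch that raises the retry limit and the credit warning threshold could look like this (the values are illustrative, not recommendations):

env FIRECRAWL_RETRY_MAX_ATTEMPTS=5 FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp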
Firecrawl MCP Server provides a suite of tools for various web data extraction and research tasks:

* firecrawl_scrape: Single page content extraction when the URL is known.
* firecrawl_batch_scrape: Efficiently scrape content from multiple known URLs.
* firecrawl_map: Discover all indexed URLs on a website.
* firecrawl_search: Search the web for specific information and optionally extract content from results.
* firecrawl_crawl: Asynchronously crawl a website to extract content from multiple related pages (use with caution due to potential token limits).
* firecrawl_extract: Extract structured information from web pages using LLM capabilities based on a defined schema.
* firecrawl_deep_research: Conduct in-depth web research on a query using intelligent crawling, search, and LLM analysis.
* firecrawl_generate_llmstxt: Generate a standardized llms.txt file for a given domain, defining how LLMs should interact with the site.

Q: How do I choose the right tool for my task?
A: A quick reference table is provided in the documentation:

* scrape: For single page content.
* batch_scrape: For multiple known URLs.
* map: For discovering URLs on a site.
* crawl: For multi-page extraction (with limits).
* search: For web search for information.
* extract: For structured data from pages.
* deep_research: For in-depth, multi-source research.
* generate_llmstxt: For generating an LLMs.txt file for a domain.
Q: What are the common mistakes when using firecrawl_batch_scrape or firecrawl_crawl?

A: For batch_scrape, using too many URLs at once may hit rate limits or token overflow. For crawl, setting limit or maxDepth too high can cause token overflow, and it is not recommended for single pages (use scrape instead).
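For example, a conservative crawl request keeps both parameters small (the exact values below are illustrative):

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 20
  }
}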
Q: How does Firecrawl MCP Server handle rate limiting and batch processing?

A: It utilizes Firecrawl's built-in capabilities, including automatic rate limit handling with exponential backoff, efficient parallel processing for batch operations, smart request queuing and throttling, and automatic retries for transient errors.

Q: What kind of logging and error handling does the server provide?

A: The server includes comprehensive logging for operation status, performance, credit usage, rate limits, and errors. It also provides robust error handling with automatic retries, rate limit handling with backoff, detailed error messages, credit usage warnings, and network resilience.
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @cawstudios for the initial implementation!
Play around with our MCP Server on MCP.so's playground or on Klavis AI.
To run with npx:

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

To install manually:

npm install -g firecrawl-mcp
Configuring Cursor 🖥️

Note: Requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide.
To configure Firecrawl MCP in Cursor v0.48.6:
{
"mcpServers": {
"firecrawl-mcp": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR-API-KEY"
}
}
}
}
To configure Firecrawl MCP in Cursor v0.45.6:
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
If you are using Windows and are running into issues, try
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:
env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Use the URL: http://localhost:3000/sse
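To quickly verify that the endpoint is up, you can stream it with curl (-N disables output buffering so events appear as they arrive):

curl -N http://localhost:3000/sse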
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
{
"mcp": {
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
}
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:
{
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
* FIRECRAWL_API_KEY: Your Firecrawl API key (required when using the cloud API; optional for self-hosted instances)
* FIRECRAWL_API_URL (optional): Custom API endpoint for self-hosted instances, e.g. https://firecrawl.your-domain.com
Retry Configuration:

* FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
* FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
* FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
* FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)

Credit Usage Monitoring:

* FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
* FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits
For self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
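To make the interplay of these parameters concrete, here is a small sketch (not taken from the server's source) of how the delay before each retry is typically derived from them:

// Sketch only: delay before the nth retry attempt (1-based),
// based on the CONFIG values above. The real implementation
// may differ in details such as jitter.
function retryDelay(attempt) {
  const { initialDelay, backoffFactor, maxDelay } = CONFIG.retry;
  return Math.min(initialDelay * Math.pow(backoffFactor, attempt - 1), maxDelay);
}
// With the defaults: attempt 1 -> 1000 ms, 2 -> 2000 ms, 3 -> 4000 ms.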
These configurations control the server's retry behavior and credit usage monitoring.
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities: automatic rate limit handling with exponential backoff, efficient parallel processing for batch operations, smart request queuing and throttling, and automatic retries for transient errors.
Use this guide to select the right tool for your task:
| Tool | Best for | Returns |
| --- | --- | --- |
| scrape | Single page content | markdown/html |
| batch_scrape | Multiple known URLs | markdown/html[] |
| map | Discovering URLs on a site | URL[] |
| crawl | Multi-page extraction (with limits) | markdown/html[] |
| search | Web search for info | results[] |
| extract | Structured data from pages | JSON |
| deep_research | In-depth, multi-source research | summary, sources |
| generate_llmstxt | LLMs.txt for a domain | text |
Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.
Best for: Single page content extraction when you know exactly which page contains the information you need.

Not recommended for: Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover them).

Common mistakes: Using scrape for a list of URLs (use batch_scrape instead).
Prompt Example:
"Get the content of the page at https://example.com."
Usage Example:
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["markdown"],
"onlyMainContent": true,
"waitFor": 1000,
"timeout": 30000,
"mobile": false,
"includeTags": ["article", "main"],
"excludeTags": ["nav", "footer"],
"skipTlsVerification": false
}
}
Returns: The page content as markdown or HTML, depending on the requested formats.
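Responses follow the same content shape as the other tools in this document; an illustrative (not captured) example:

{
  "content": [
    {
      "type": "text",
      "text": "# Example Page\n\nPage content converted to markdown..."
    }
  ],
  "isError": false
}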
Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
Best for: Retrieving content from multiple known URLs.

Not recommended for: Discovering URLs (use map first if you don't know them) or crawling entire sites (use crawl).

Common mistakes: Using batch_scrape with too many URLs at once, which may hit rate limits or token overflow.
Prompt Example:
"Get the content of these three blog posts: [url1, url2, url3]."
Usage Example:
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
Map Tool (firecrawl_map)

Map a website to discover all indexed URLs on the site.
Best for: Discovering URLs on a website before deciding what to scrape, or finding specific sections of a site.

Not recommended for: Extracting page content (map returns URLs only; pair it with batch_scrape).

Common mistakes: Using crawl to discover URLs instead of map.
Prompt Example:
"List all URLs on example.com."
Usage Example:
{
"name": "firecrawl_map",
"arguments": {
"url": "https://example.com"
}
}
Returns: Array of URLs found on the site.
Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.
Best for: Finding specific information across multiple websites when you don't know which page contains it.

Not recommended for: When you already know which website to look at (use scrape or map instead).

Common mistakes: Using crawl or map for open-ended questions (use search instead).

Prompt Example:

"Find the latest research papers on AI published in 2023."
Usage Example:
{
"name": "firecrawl_search",
"arguments": {
"query": "latest AI research papers 2023",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns: Array of search results, optionally including scraped content in markdown for each result.
Crawl Tool (firecrawl_crawl)

Starts an asynchronous crawl job on a website and extracts content from all pages.
Best for: Extracting content from multiple related pages when you need comprehensive coverage.

Not recommended for: Single pages (use scrape instead) or very large sites where token limits become a problem.
Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
Common mistakes: Setting limit or maxDepth too high, causing token overflow.
Prompt Example:
"Get all blog posts from the first two levels of example.com/blog."
Usage Example:
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com/blog/*",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
Returns:
{
"content": [
{
"type": "text",
"text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
}
],
"isError": false
}
Check Crawl Status (firecrawl_check_crawl_status)

Check the status of a crawl job.
{
"name": "firecrawl_check_crawl_status",
"arguments": {
"id": "550e8400-e29b-41d4-a716-446655440000"
}
}
Returns: The crawl job's status and progress, including any content scraped so far.
Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
Best for: Extracting specific structured data (e.g., names, prices, descriptions) from pages according to a schema.

Not recommended for: When you need the full page content (use scrape instead).
Arguments:

* urls: Array of URLs to extract information from
* prompt: Custom prompt for the LLM extraction
* systemPrompt: System prompt to guide the LLM
* schema: JSON schema for structured data extraction
* allowExternalLinks: Allow extraction from external links
* enableWebSearch: Enable web search for additional context
* includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.

Prompt Example:
"Extract the product name, price, and description from these product pages."
Usage Example:
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Returns:
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
Best for: Complex, open-ended research questions that require multiple sources and analysis.

Not recommended for: Simple questions that a single page or search can answer (use search or scrape instead).
Arguments:

* query: The research question or topic to explore
* maxDepth (optional): Maximum recursive depth for crawling and search
* timeLimit (optional): Time limit for the research session, in seconds
* maxUrls (optional): Maximum number of URLs to analyze
Prompt Example:
"Research the environmental impact of electric vehicles versus gasoline vehicles."
Usage Example:
{
"name": "firecrawl_deep_research",
"arguments": {
"query": "What are the environmental impacts of electric vehicles compared to gasoline vehicles?",
"maxDepth": 3,
"timeLimit": 120,
"maxUrls": 50
}
}
Returns: A final analysis generated by an LLM, along with the sources used.
Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
Best for: Creating machine-readable permission and interaction guidelines for a domain.

Not recommended for: Extracting page content (use scrape or crawl instead).
Arguments:

* url: The base URL of the website to analyze
* maxUrls (optional): Maximum number of URLs to include
* showFullText (optional): Whether to include llms-full.txt contents in the response
Prompt Example:
"Generate an LLMs.txt file for example.com."
Usage Example:
{
"name": "firecrawl_generate_llmstxt",
"arguments": {
"url": "https://example.com",
"maxUrls": 20,
"showFullText": true
}
}
Returns: The generated llms.txt file contents (and optionally llms-full.txt).
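For context, generated llms.txt files typically follow the format proposed at llmstxt.org; a minimal illustrative sketch of such a file:

# Example Domain
> One-sentence summary of what the site offers.

## Docs
- [Getting Started](https://example.com/docs/start): Introduction to the product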
The server includes comprehensive logging:

* Operation status and progress
* Performance metrics
* Credit usage monitoring
* Rate limit tracking
* Error conditions
Example log messages:
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
The server provides robust error handling:

* Automatic retries for transient errors
* Rate limit handling with backoff
* Detailed error messages
* Credit usage warnings
* Network resilience
Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
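Client code consuming these results can branch on the isError flag; a minimal sketch (handleToolResult is a hypothetical helper, not part of this server):

// Sketch: surface tool errors to the caller, otherwise return the text.
// Assumes the { content, isError } result shape shown above.
function handleToolResult(result) {
  const text = result.content.map((item) => item.text).join("\n");
  if (result.isError) {
    throw new Error(`Firecrawl tool error: ${text}`);
  }
  return text;
}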
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
Thanks to @vrknetha, @cawstudios for the initial implementation!
Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.
MIT License - see LICENSE file for details