by video-db
Automatically synchronizes SDK versions, documentation, and example notebooks into context files (llms.txt and llms-full.txt) for seamless consumption by LLMs and AI agents, and distributes them through an MCP server.
Provides continuously updated, modular context files that expose VideoDB SDKs, docs, and real‑world examples to large language models and autonomous agents. The toolkit generates both a lightweight metadata file (llms.txt) and a comprehensive context bundle (llms-full.txt) and makes them available via a Model Context Protocol (MCP) server.
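Since both files are served from stable public URLs, pulling them into a project needs nothing beyond the Python standard library. A minimal sketch (the local file names are arbitrary choices):

```python
import urllib.request

# Hosted context files referenced on this page.
CONTEXT_FILES = {
    "llms.txt": "https://videodb.io/llms.txt",           # lightweight metadata
    "llms-full.txt": "https://videodb.io/llms-full.txt", # full context bundle
}

for filename, url in CONTEXT_FILES.items():
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)  # cache locally, e.g. for offline prompt assembly
    print(f"saved {filename} ({len(text)} chars)")
```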
git clone https://github.com/video-db/agent-toolkit.git
cd agent-toolkit
Or run the MCP server directly (via npx):
npx -y videodb-director-mcp
Set the required API key via the API_KEY environment variable before running:
export API_KEY=<YOUR_API_KEY>
npx -y videodb-director-mcp
Quick recommendations:
Add https://videodb.io/llms.txt to your agent's discovery list for quick metadata.
Fetch https://videodb.io/llms-full.txt when deep integration is needed.
Edit config.yaml to include or exclude specific docs, notebooks, or SDK sections.
Use the lightweight llms.txt for discovery and the full-featured llms-full.txt for deep integration (see the sketch after this list).
Customize generation through config.yaml (patterns, custom LLM prompts, layout).
Consume the llms.txt and llms-full.txt files directly from the hosted URLs; run the MCP server only if you need dynamic context retrieval.
Edit the config.yaml file to provide custom LLM prompts for each content type.
The VideoDB Agent Toolkit exposes VideoDB context to LLMs and agents. It enables integration with AI-driven IDEs like Cursor and chat agents like Claude Code. The toolkit automates context generation, maintenance, and discoverability: it auto-syncs SDK versions, docs, and examples, and is distributed through MCP and llms.txt.
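As a sketch of that discovery-first pattern: ship only the lightweight metadata with every session and escalate to the full bundle on demand. The routing trigger below is a hypothetical placeholder; a real agent would use better signals.

```python
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Lightweight metadata is cheap enough to load for every session.
discovery_context = fetch("https://videodb.io/llms.txt")

def context_for(task: str) -> str:
    """Escalate to the full bundle only when the task needs VideoDB depth."""
    # Hypothetical trigger; real agents would use better routing signals.
    if "videodb" in task.lower():
        return fetch("https://videodb.io/llms-full.txt")
    return discovery_context
```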
The toolkit offers context files designed for use with LLMs, structured around key components:
llms-full.txt — Comprehensive context for deep integration.
llms.txt — Lightweight metadata for quick discovery.
MCP (Model Context Protocol) — A standardized protocol for connecting AI agents to tools and context.
These components leverage automated workflows to ensure your AI applications always operate with accurate, up-to-date context.
llms-full.txt consolidates everything your LLM agent needs, including:
Comprehensive VideoDB overview.
Complete SDK usage instructions and documentation.
Detailed integration examples and best practices.
Real-world examples, such as the code-assistant agent (View Implementation).
Integrate llms-full.txt directly into your LLM-powered workflows, agent systems, or AI coding environments (see the sketch below).
llms.txt is a streamlined file following the Answer.AI llms.txt proposal, ideal for quick metadata exposure and LLM discovery.
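One way to wire llms-full.txt into a chat workflow is to load it as system context. This is a minimal sketch, not the toolkit's own integration code: the openai client and the model name are assumptions, and any OpenAI-compatible client works the same way.

```python
import urllib.request
from openai import OpenAI  # assumption: any OpenAI-compatible client works similarly

# Pull the full context bundle from the hosted URL mentioned above.
with urllib.request.urlopen("https://videodb.io/llms-full.txt") as resp:
    videodb_context = resp.read().decode("utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prepend the bundle as system context so the model answers with
# up-to-date knowledge of the VideoDB SDK and docs.
reply = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whatever model you run
    messages=[
        {"role": "system", "content": videodb_context},
        {"role": "user", "content": "Show me how to upload a video with the VideoDB SDK."},
    ],
)
print(reply.choices[0].message.content)
```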
ℹ️ Recommendation: Use llms.txt for lightweight discovery and metadata integration. Use llms-full.txt for complete functionality.
The VideoDB MCP Server connects with the Director backend framework, providing a single tool for many workflows. For development, it can be installed and used via uvx for isolated environments. For more details on MCP, please visit here.
Install uv
We need to install uv first.
For macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
For Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
You can also find detailed installation steps for uv here.
Run the MCP Server
You can run the MCP server with uvx using the following command:
uvx videodb-director-mcp --api-key=VIDEODB_API_KEY
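If your MCP client is configured with a JSON mcpServers block (like the npx example at the end of this page), a uvx-based entry might look like the sketch below. The exact schema depends on your client, and the "videodb" key is just a label:

```json
{
  "mcpServers": {
    "videodb": {
      "command": "uvx",
      "args": ["videodb-director-mcp", "--api-key=<VIDEODB_API_KEY>"]
    }
  }
}
```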
Update VideoDB Director MCP package
To ensure you're using the latest version of the MCP server with uvx, start by clearing the cache:
uv cache clean
This command removes any outdated cached packages of videodb-director-mcp, allowing uvx to fetch the most recent version.
If you always want to use the latest version of the MCP server, update your command as follows:
uvx videodb-director-mcp@latest --api-key=<VIDEODB_API_KEY>
LLM context files in VideoDB are modular, automatically generated, and continuously updated from multiple sources:
Instructions — Best practices and prompt guidelines View »
SDK Context — SDK structure, classes, and interface definitions View »
Docs Context — Summarized product documentation View »
Examples Context — Real-world notebook examples View »
Each of these sources is configured via the config.yaml file. Automatic context generation ensures your applications always have the latest information in llms-full.txt and llms.txt.
config.yaml
The config.yaml file centralizes all configuration, allowing easy customization. The llms_full_txt_file section of config.yaml defines how llms-full.txt is assembled:
llms_full_txt_file:
  input_files:
    - name: Instructions
      file_path: "context/instructions/prompt.md"
    - name: SDK Context
      file_path: "context/sdk/context/index.md"
    - name: Docs Context
      file_path: "context/docs/docs_context.md"
    - name: Examples Context
      file_path: "context/examples/examples_context.md"
  output_files:
    - name: llms_full_txt
      file_path: "context/llms-full.txt"
    - name: llms_full_md
      file_path: "context/llms-full.md"
  layout: |
    {{FILE1}}
    {{FILE2}}
    {{FILE3}}
    {{FILE4}}
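To make the layout mechanics concrete, here is a hedged sketch of how such a config could be applied: read each input file in order, substitute it into the {{FILEn}} placeholders, and write the outputs. This illustrates the config semantics, not the toolkit's actual build script; it assumes PyYAML is installed and that the context/ directories already exist.

```python
import yaml  # assumption: PyYAML is available (pip install pyyaml)

# Illustrative re-implementation of the llms_full_txt_file semantics.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)["llms_full_txt_file"]

rendered = cfg["layout"]
for i, spec in enumerate(cfg["input_files"], start=1):
    with open(spec["file_path"]) as f:
        # {{FILE1}}..{{FILE4}} map to input_files in order.
        rendered = rendered.replace("{{FILE%d}}" % i, f.read())

for out in cfg["output_files"]:
    with open(out["file_path"], "w") as f:
        f.write(rendered)
```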
By following these practices, you ensure your AI applications have reliable, relevant, and up-to-date context—critical for effective agent performance and developer productivity.
Clone the toolkit repository and follow the setup instructions in config.yaml to start integrating VideoDB contexts into your LLM-powered applications today.
{ "mcpServers": { "videodb": { "command": "npx", "args": [ "-y", "videodb-director-mcp" ], "env": { "API_KEY": "<YOUR_API_KEY>" } } } }
Discover more MCP servers with similar functionality and use cases
by zed-industries
Provides real-time collaborative editing powered by Rust, enabling developers to edit code instantly across machines with a responsive, GPU-accelerated UI.
by cline
Provides autonomous coding assistance directly in the IDE, enabling file creation, editing, terminal command execution, browser interactions, and tool extension with user approval at each step.
by continuedev
Provides continuous AI assistance across IDEs, terminals, and CI pipelines, offering agents, chat, inline editing, and autocomplete to accelerate software development.
by github
Enables AI agents, assistants, and chatbots to interact with GitHub via natural‑language commands, providing read‑write access to repositories, issues, pull requests, workflows, security data and team activity.
by block
Automates engineering tasks by installing, executing, editing, and testing code using any large language model, providing end‑to‑end project building, debugging, workflow orchestration, and external API interaction.
by RooCodeInc
An autonomous coding agent that lives inside VS Code, capable of generating, refactoring, debugging code, managing files, running terminal commands, controlling a browser, and adapting its behavior through custom modes and instructions.
by lastmile-ai
A lightweight, composable framework for building AI agents using Model Context Protocol and simple workflow patterns.
by firebase
Provides a command‑line interface to manage, test, and deploy Firebase projects, covering hosting, databases, authentication, cloud functions, extensions, and CI/CD workflows.
by gptme
Empowers large language models to act as personal AI assistants directly inside the terminal, providing capabilities such as code execution, file manipulation, web browsing, vision, and interactive tool usage.