by topoteretes
Enables AI agents to store, retrieve, and reason over past conversations, documents, images, and audio transcriptions by loading data into graph and vector databases with minimal code.
Cognee provides a modular, scalable pipeline (Extract, Cognify, Load) that turns raw data into a knowledge graph and vector store, allowing agents to query their own memory instead of relying on traditional Retrieval‑Augmented Generation (RAG) setups.
pip install cognee
Set your LLM API key via an environment variable or a .env file, then run the minimal pipeline:

import asyncio
import os

import cognee

os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"

async def main():
    await cognee.add("Natural language processing (NLP) …")
    await cognee.cognify()
    results = await cognee.search("Tell me about NLP")
    for r in results:
        print(r)

asyncio.run(main())
Q: Which Python versions are supported? A: 3.10 – 3.13.
Q: Do I need a specific database? A: Cognee can work with any compatible graph or vector database; Neo4j is the primary example.
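For example, a minimal sketch of pointing Cognee at a Neo4j instance through environment variables; the variable names follow cognee's .env template and should be treated as assumptions to verify against the docs:

import os

# Assumed variable names from cognee's .env template; check the template before use.
os.environ["GRAPH_DATABASE_PROVIDER"] = "neo4j"
os.environ["GRAPH_DATABASE_URL"] = "bolt://localhost:7687"
os.environ["GRAPH_DATABASE_USERNAME"] = "neo4j"
os.environ["GRAPH_DATABASE_PASSWORD"] = "your_password"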
Q: Can I run Cognee locally without an LLM provider? A: Yes, by configuring an open‑source LLM endpoint (e.g., Ollama) and setting the appropriate environment variables.
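A minimal sketch of that setup, assuming an Ollama server on its default port; the variable names are taken from cognee's provider documentation and are assumptions to verify:

import os

# Assumed variable names for a local Ollama endpoint; verify against cognee's docs.
os.environ["LLM_PROVIDER"] = "ollama"
os.environ["LLM_ENDPOINT"] = "http://localhost:11434/v1"
os.environ["LLM_MODEL"] = "llama3.1"
os.environ["LLM_API_KEY"] = "ollama"  # placeholder; local endpoints typically ignore the key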
Q: How can I contribute? A: See CONTRIBUTING.md in the repo; the project welcomes first‑time contributors and sponsors.
Q: Is there a hosted service? A: Cogwit beta offers a fully hosted AI memory platform; sign up at https://platform.cognee.ai/.
cognee - Memory for AI Agents in 5 lines of code
🚀 We launched Cogwit beta (Fully-hosted AI Memory): Sign up here! 🚀
Build dynamic memory for Agents and replace RAG using scalable, modular ECL (Extract, Cognify, Load) pipelines.
Get started quickly with a Google Colab notebook, Deepnote notebook, or starter repo.
Your contributions are at the core of making this a true open source project. Any contributions you make are greatly appreciated. See CONTRIBUTING.md for more information.
You can install Cognee using pip, poetry, uv, or any other Python package manager. Cognee supports Python 3.10 to 3.13.
pip install cognee
You can also install Cognee from a local clone of the repo using pip, poetry, or uv. For a local pip installation, make sure your pip version is above 21.3.
uv sync --all-extras
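For a local pip install from a cloned repo, an editable install is the usual route; this is likely why pip above 21.3 is required, since older versions cannot do editable installs of pyproject-based projects:

pip install -e .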
import os
os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"
You can also set the variables by creating a .env file, using our template. To use different LLM providers, check out our documentation for more info.
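As a minimal sketch, a .env file in your working directory could look like this; any keys beyond LLM_API_KEY should be copied from the template rather than guessed:

# .env - minimal sketch; see the template in the repo for the full set of keys
LLM_API_KEY="YOUR_OPENAI_API_KEY"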
This script will run the default pipeline:
import cognee
import asyncio

async def main():
    # Add text to cognee
    await cognee.add("Natural language processing (NLP) is an interdisciplinary subfield of computer science and information retrieval.")

    # Generate the knowledge graph
    await cognee.cognify()

    # Query the knowledge graph
    results = await cognee.search("Tell me about NLP")

    # Display the results
    for result in results:
        print(result)

if __name__ == '__main__':
    asyncio.run(main())
Example output:
Natural Language Processing (NLP) is a cross-disciplinary and interdisciplinary field that involves computer science and information retrieval. It focuses on the interaction between computers and human language, enabling machines to understand and process natural language.
You can also cognify your files and query them using the cognee UI.
Try the cognee UI out locally here.
We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.
Thanks to the following companies for sponsoring the ongoing development of cognee.
Discover more MCP servers with similar functionality and use cases
by basicmachines-co
Basic Memory is a local-first knowledge management system that allows users to build a persistent semantic graph from conversations with AI assistants. It addresses the ephemeral nature of most LLM interactions by providing a structured, bi-directional knowledge base that both humans and LLMs can read and write to.
by smithery-ai
mcp-obsidian is a connector that allows Claude Desktop to read and search an Obsidian vault or any directory containing Markdown notes.
by qdrant
Provides a semantic memory layer on top of the Qdrant vector search engine, enabling storage and retrieval of information via the Model Context Protocol.
by GreatScottyMac
A database‑backed MCP server that stores project decisions, progress, architecture, custom data, and vector embeddings, allowing AI assistants in IDEs to retrieve precise, up‑to‑date context for generation tasks.
by StevenStavrakis
Enables AI assistants to read, create, edit, move, delete, and organize notes and tags within an Obsidian vault.
by mem0ai
Provides tools to store, retrieve, and semantically search coding preferences via an SSE endpoint for integration with MCP clients.
by graphlit
Enables integration between MCP clients and the Graphlit platform, providing ingestion, retrieval, RAG, and publishing capabilities across a wide range of data sources and tools.
by chroma-core
Provides vector, full‑text, and metadata‑based retrieval powered by Chroma for LLM applications, supporting in‑memory, persistent, HTTP, and cloud clients as well as multiple embedding functions.
by andrea9293
MCP Documentation Server is a TypeScript-based server that provides local document management and AI-powered semantic search capabilities, designed to bridge the AI knowledge gap.