by skydeckai
An MCP Server to enable global access to Rememberizer, facilitating enhanced knowledge retrieval for Large Language Models.
mcp-server-rememberizer is a Model Context Protocol (MCP) server designed to interact with Rememberizer.ai, a document and knowledge management API. It enables Large Language Models (LLMs) to seamlessly search, retrieve, and manage documents and integrations within the Rememberizer ecosystem.
mcp-server-rememberizer can be installed and used in several ways:
npx @michaellatman/mcp-get@latest install mcp-server-rememberizer
npx -y @smithery/cli install mcp-server-rememberizer --client claude
Configuration:
Set the REMEMBERIZER_API_TOKEN environment variable with your API token from Rememberizer.ai. For Claude Desktop, add it to claude_desktop_config.json; for the SkyDeck AI Helper app, set REMEMBERIZER_API_TOKEN within the app.
Once configured, you can interact with the server through your LLM (e.g., Claude Desktop, SkyDeck AI GenStudio) by asking questions like "What is my Rememberizer account?" or "List all documents that I have there."
Tools:
retrieve_semantically_similar_internal_knowledge: Retrieves semantically similar text chunks from your Rememberizer knowledge base.
smart_search_internal_knowledge: Performs agentic searches across various sources like Slack, Gmail, Dropbox, Google Drive, and uploaded files.
list_internal_knowledge_systems: Lists available integrations and sources of internal knowledge.
rememberizer_account_information: Retrieves your Rememberizer.ai account details.
list_personal_team_knowledge_documents: Provides a paginated list of all documents in your knowledge system.
remember_this: Saves text information into your Rememberizer.ai knowledge system for future retrieval.

FAQ:
Q: What kind of resources can I access through this server?
A: The server provides access to Documents and Slack discussions from your Rememberizer.ai knowledge system.
Q: How can I filter search results by date?
A: The retrieve_semantically_similar_internal_knowledge and smart_search_internal_knowledge tools accept from_datetime_ISO8601 and to_datetime_ISO8601 parameters to filter results by date.
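For illustration, a date-bounded call to smart_search_internal_knowledge might pass arguments like the following sketch (the query text is a made-up placeholder; the dates reuse the ISO 8601 examples from the parameter reference below):

{
  "query": "discussions about the product roadmap",
  "n_results": 10,
  "from_datetime_ISO8601": "2023-01-01T00:00:00Z",
  "to_datetime_ISO8601": "2024-01-01T00:00:00Z"
}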
Q: Can I save new information using this server?
A: Yes, the remember_this tool allows you to save text information into your Rememberizer.ai knowledge system.
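As a sketch, the arguments for remember_this are simply a name used to identify the information later and the content to store; both values below are made-up placeholders:

{
  "name": "release-planning-notes",
  "content": "The team agreed to move the release date to June."
}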
A Model Context Protocol server for interacting with Rememberizer's document and knowledge management API. This server enables Large Language Models to search, retrieve, and manage documents and integrations through Rememberizer.
Please note that mcp-server-rememberizer is currently in development and the functionality may be subject to change.
The server provides access to two types of resources: Documents and Slack discussions.
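Because these are exposed as standard MCP resources, any MCP client can enumerate them; a generic resources/list request (part of the MCP protocol itself, not specific to this server) looks roughly like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/list"
}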
Tools:

retrieve_semantically_similar_internal_knowledge
- match_this (string): Up to a 400-word sentence for which you wish to find semantically similar chunks of knowledge
- n_results (integer, optional): Number of semantically similar chunks of text to return. Use 'n_results=3' for up to 5, and 'n_results=10' for more information
- from_datetime_ISO8601 (string, optional): Start date in ISO 8601 format with timezone (e.g., 2023-01-01T00:00:00Z). Use this to filter results from a specific date
- to_datetime_ISO8601 (string, optional): End date in ISO 8601 format with timezone (e.g., 2024-01-01T00:00:00Z). Use this to filter results until a specific date

smart_search_internal_knowledge
- query (string): Up to a 400-word sentence for which you wish to find semantically similar chunks of knowledge
- user_context (string, optional): Additional context for the query. You might need to summarize the conversation up to this point for better context-aware results
- n_results (integer, optional): Number of semantically similar chunks of text to return. Use 'n_results=3' for up to 5, and 'n_results=10' for more information
- from_datetime_ISO8601 (string, optional): Start date in ISO 8601 format with timezone (e.g., 2023-01-01T00:00:00Z). Use this to filter results from a specific date
- to_datetime_ISO8601 (string, optional): End date in ISO 8601 format with timezone (e.g., 2024-01-01T00:00:00Z). Use this to filter results until a specific date

list_internal_knowledge_systems

rememberizer_account_information

list_personal_team_knowledge_documents
- page (integer, optional): Page number for pagination, starts at 1 (default: 1)
- page_size (integer, optional): Number of documents per page, range 1-1000 (default: 100)

remember_this
- name (string): Name of the information. This is used to identify the information in the future
- content (string): The information you wish to memorize

Installation:
npx @michaellatman/mcp-get@latest install mcp-server-rememberizer
npx -y @smithery/cli install mcp-server-rememberizer --client claude
If you have the SkyDeck AI Helper app installed, you can search for "Rememberizer" and install mcp-server-rememberizer from there.
The following environment variables are required:
REMEMBERIZER_API_TOKEN: Your Rememberizer API token. You can register an API key by creating your own Common Knowledge in Rememberizer.
Add this to your claude_desktop_config.json:
"mcpServers": {
"rememberizer": {
"command": "uvx",
"args": ["mcp-server-rememberizer"],
"env": {
"REMEMBERIZER_API_TOKEN": "your_rememberizer_api_token"
}
},
}
If you use the SkyDeck AI Helper app instead, add the REMEMBERIZER_API_TOKEN environment variable to mcp-server-rememberizer within the app.
With support from the Rememberizer MCP server, you can now ask the following questions in your Claude Desktop app or SkyDeck AI GenStudio:
What is my Rememberizer account?
List all documents that I have there.
Give me a quick summary about "..."
and so on...
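Under the hood, a question like "What is my Rememberizer account?" typically results in the client sending an MCP tools/call request to this server. A minimal sketch of such a request (the id value is arbitrary) might look like:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "rememberizer_account_information",
    "arguments": {}
  }
}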
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Discover more MCP servers with similar functionality and use cases
by topoteretes
Enables AI agents to store, retrieve, and reason over past conversations, documents, images, and audio transcriptions by loading data into graph and vector databases with minimal code.
by basicmachines-co
Basic Memory is a local-first knowledge management system that allows users to build a persistent semantic graph from conversations with AI assistants. It addresses the ephemeral nature of most LLM interactions by providing a structured, bi-directional knowledge base that both humans and LLMs can read and write to.
by smithery-ai
mcp-obsidian is a connector that allows Claude Desktop to read and search an Obsidian vault or any directory containing Markdown notes.
by qdrant
Provides a semantic memory layer on top of the Qdrant vector search engine, enabling storage and retrieval of information via the Model Context Protocol.
by GreatScottyMac
A database‑backed MCP server that stores project decisions, progress, architecture, custom data, and vector embeddings, allowing AI assistants in IDEs to retrieve precise, up‑to‑date context for generation tasks.
by StevenStavrakis
Enables AI assistants to read, create, edit, move, delete, and organize notes and tags within an Obsidian vault.
by mem0ai
Provides tools to store, retrieve, and semantically search coding preferences via an SSE endpoint for integration with MCP clients.
by graphlit
Enables integration between MCP clients and the Graphlit platform, providing ingestion, retrieval, RAG, and publishing capabilities across a wide range of data sources and tools.
by chroma-core
Provides vector, full‑text, and metadata‑based retrieval powered by Chroma for LLM applications, supporting in‑memory, persistent, HTTP, and cloud clients as well as multiple embedding functions.