by knocklabs
Provides utilities for integrating Knock notification APIs into AI agent frameworks such as OpenAI, Vercel AI SDK, LangChain, and Model Context Protocol clients.
Enables developers to embed Knock's cross‑channel notification capabilities directly into LLM‑driven workflows and agents, offering ready‑made tool definitions that can be called from models.
npm install @knocklabs/agent-toolkit
Retrieve the full set of tools with toolkit.getAllTools() and attach them to the LLM call. Execute any returned tool calls with toolkit.handleToolCall or the framework's tool invocation pattern.
Example (AI SDK):
import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";
const toolkit = await createKnockToolkit({ serviceToken: "kst_12345", permissions: { workflows: { read: true, run: true } } });
const result = streamText({ model: openai("gpt-4o"), messages, tools: toolkit.getAllTools() });
Defaults for environment, userId, and tenantId can be set for all tool calls.
Q: Do I need a Knock account?
A: Yes, a Knock account and a service token are required.
Q: Can I limit which tools are exposed?
A: Yes, use the --tools flag with the CLI or configure permissions when creating the toolkit (see the sketch after this FAQ).
Q: How do I run the MCP server locally?
A: npx -y @knocklabs/agent-toolkit -p local-mcp --service-token <YOUR_SERVICE_TOKEN>
Q: What environments can I target?
A: Set the environment option (e.g., development, production) to scope API calls.
Q: Is there TypeScript support?
A: Yes, the package is written in TypeScript and provides full typings.
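For instance, a toolkit that only exposes read access to workflows could be created like this (a minimal sketch; the ai-sdk entry point and environment variable name are illustrative):

import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";

// Only workflow read tools are exposed to the model; run and manage
// permissions are omitted, so those tools are never registered.
const toolkit = await createKnockToolkit({
  serviceToken: process.env.KNOCK_SERVICE_TOKEN!,
  permissions: {
    workflows: { read: true },
  },
});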
The Knock Agent toolkit enables popular agent frameworks including OpenAI and Vercel's AI SDK to integrate with Knock's APIs using tools (otherwise known as function calling). It also allows you to integrate Knock into a Model Context Protocol (MCP) client such as Cursor, Windsurf, or Claude Code.
Using the Knock agent toolkit allows you to build powerful agent systems that are capable of sending cross-channel notifications to the humans who need to be in the loop. As a developer, it also helps you build Knock integrations and manage your Knock account.
You can read more in the documentation.
The Knock Agent Toolkit provides five main entry points:
@knocklabs/agent-toolkit/ai-sdk: Helpers for integrating with Vercel's AI SDK.
@knocklabs/agent-toolkit/langchain: Helpers for integrating with LangChain's JS SDK.
@knocklabs/agent-toolkit/openai: Helpers for integrating with the OpenAI SDK.
@knocklabs/agent-toolkit/mastra: Helpers for integrating with the Mastra framework.
@knocklabs/agent-toolkit/modelcontextprotocol: Low-level helpers for integrating with the Model Context Protocol (MCP).
The agent toolkit exposes a large subset of the Knock Management API and the Knock API that you might need to invoke via an agent. You can see the full list of tools in the source code.
It's possible to pass additional context to the configuration of each library to help scope the calls made by the agent toolkit to Knock. The available properties are (see the sketch after this list):
environment: The slug of the Knock environment you wish to execute actions in by default, such as development.
userId: The user ID of the current user. When set, this will be the default passed to user tools.
tenantId: The ID of the current tenant. When set, this will be the default passed to any tool that accepts a tenant.
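A minimal sketch of passing this context when creating a toolkit (the ai-sdk entry point and the specific values shown are illustrative):

import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";

// Scope all tool calls to the development environment, a default
// user, and a default tenant.
const toolkit = await createKnockToolkit({
  serviceToken: process.env.KNOCK_SERVICE_TOKEN!,
  environment: "development",
  userId: "user_123",
  tenantId: "acme-corp",
  permissions: {
    workflows: { read: true, run: true },
  },
});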
To start using the Knock MCP as a local server, you must start it with a service token. You can run it using npx:
npx -y @knocklabs/agent-toolkit -p local-mcp --service-token kst_12345
By default, the MCP server will expose all tools to the LLM. To limit the tools available, use the --tools (-t) flag:
# Pass all tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools="*"
# Expose a specific category
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.*"
# Expose specific tools
npx -y @knocklabs/agent-toolkit -p local-mcp --tools "workflows.triggerWorkflow"
If you wish to enable workflows-as-tools within the MCP server, you must set the --workflows flag to pass in a list of approved workflow keys to expose. This ensures that you keep the number of tools exposed to your MCP client to a minimum.
npx -y @knocklabs/agent-toolkit -p local-mcp --workflows comment-created activate-account
It's also possible to pass environment, userId, and tenant to the local MCP server to set default values. Use the --help flag to view additional server options.
The agent toolkit provides a createKnockToolkit helper under the /ai-sdk path for easily integrating with the AI SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it when calling your LLM:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { systemPrompt } from "@/lib/ai/prompts";
export const maxDuration = 30;
export async function POST(req: Request) {
  const { messages } = await req.json();

  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      workflows: { read: true, run: true, manage: true },
    },
  });

  const result = streamText({
    model: openai("gpt-4o"),
    system: systemPrompt,
    messages,
    tools: {
      // The tools given here are determined by the `permissions`
      // list above in the configuration. For instance, here we're
      // only exposing the workflows tools.
      ...toolkit.getAllTools(),
    },
  });

  return result.toDataStreamResponse();
}
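On the client, the streamed response can be consumed with the AI SDK's useChat hook. A minimal sketch, assuming the route above is mounted at /api/chat and that @ai-sdk/react is installed (both are assumptions, not part of the original example):

"use client";

import { useChat } from "@ai-sdk/react";

export default function Chat() {
  // useChat posts messages to the route above and renders the
  // streamed assistant output as it arrives.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}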
The agent toolkit provides a createKnockToolkit helper under the /openai path for easily integrating with the OpenAI SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it when calling your LLM:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/openai";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  // Example conversation; the prompt here is illustrative
  const messages = [
    { role: "user" as const, content: "Trigger my workflow for user 10." },
  ];

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    // The tools given here are determined by the `permissions`
    // list above in the configuration. For instance, here we're
    // only exposing the workflows tools.
    tools: toolkit.getAllTools(),
  });

  // The assistant's reply may contain tool calls to execute
  const message = completion.choices[0].message;

  // Execute the tool calls
  const toolMessages = await Promise.all(
    (message.tool_calls ?? []).map((tc) => toolkit.handleToolCall(tc))
  );
}
main();
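To have the model incorporate the tool results, send them back in a second completion. A sketch continuing inside main() above; it assumes handleToolCall returns messages in OpenAI's tool-message format (suggested by, but not confirmed in, the example):

  // Feed the assistant message and tool results back to the model
  // so it can produce a final, user-facing answer.
  const followUp = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [...messages, message, ...toolMessages],
  });

  console.log(followUp.choices[0].message.content);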
The agent toolkit provides a createKnockToolkit helper under the /langchain path for easily integrating with the LangChain JS SDK and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it when calling your LLM:
import { createKnockToolkit } from "@knocklabs/agent-toolkit/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { LangChainAdapter } from "ai";
const systemPrompt = `You are a helpful assistant.`;
export const maxDuration = 30;
export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Optional - get the auth context from the request
  // (assumes an `auth` helper from your auth provider, e.g. Clerk)
  const authContext = await auth.protect();

  // Instantiate a new Knock toolkit
  const toolkit = await createKnockToolkit({
    serviceToken: "kst_12345",
    permissions: {
      // (optional but recommended): Set the permissions of the tools to expose
      workflows: { read: true, run: true, manage: true },
    },
  });

  const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });
  const modelWithTools = model.bindTools(toolkit.getAllTools());

  const messages = [new SystemMessage(systemPrompt), new HumanMessage(prompt)];
  const aiMessage = await modelWithTools.invoke(messages);
  messages.push(aiMessage);

  for (const toolCall of aiMessage.tool_calls || []) {
    // Call the selected tool by its `name`
    const selectedTool = toolkit.getToolMap()[toolCall.name];
    const toolMessage = await selectedTool.invoke(toolCall);
    messages.push(toolMessage);
  }

  // To simplify the setup, this example uses the ai-sdk langchain adapter
  // to stream the results back to the /langchain page.
  // For more details, see: https://sdk.vercel.ai/providers/adapters/langchain
  const stream = await modelWithTools.stream(messages);
  return LangChainAdapter.toDataStreamResponse(stream);
}
The agent toolkit provides a createKnockToolkit helper under the /mastra path for easily integrating with the Mastra framework and returning tools ready for use.
npm install @knocklabs/agent-toolkit
Import the createKnockToolkit helper, configure it, and use it when defining your agent:
import { anthropic } from "@ai-sdk/anthropic";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { createKnockToolkit } from "@knocklabs/agent-toolkit/mastra";
const toolkit = await createKnockToolkit({
  serviceToken: "knock_st_",
  permissions: {
    // (optional but recommended): Set the permissions of the tools to expose
    workflows: { read: true, run: true, manage: true },
  },
  userId: "10",
});

export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.`,
  model: anthropic("claude-3-5-sonnet-20241022"),
  tools: toolkit.getAllTools(),
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:../mastra.db", // path is relative to the .mastra/output directory
    }),
  }),
});
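Once defined, the agent can be invoked like any other Mastra agent, with the Knock tools available during generation. A minimal usage sketch (the prompt is illustrative):

// The agent can call the exposed Knock workflow tools while
// answering, e.g. to notify the user about severe weather.
const response = await weatherAgent.generate(
  "Send a weather alert to user 10 if there is a storm warning in Boston."
);

console.log(response.text);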
{ "mcpServers": { "knock-local-mcp": { "command": "npx", "args": [ "-y", "@knocklabs/agent-toolkit", "-p", "local-mcp", "--service-token", "<YOUR_SERVICE_TOKEN>" ], "env": { "SERVICE_TOKEN": "<YOUR_SERVICE_TOKEN>" } } } }