by Klavis-AI
Klavis AI provides open-source MCP (Model Context Protocol) integrations for AI applications, along with a hosted API that manages secure MCP servers. This simplifies authentication, removes the need for client-side plumbing, and lets developers connect their AI applications to a wide range of third-party services.
Klavis AI can be used in two primary ways:
If you have an existing MCP client implementation, you can integrate Klavis by installing the Python or TypeScript SDK (pip install klavis or npm install klavis). After obtaining an API key from klavis.ai, you can create server instances for specific services (e.g., YouTube, Gmail) and interact with them programmatically. For OAuth-based services, Klavis provides the OAuth URL for user authentication.
Klavis AI can be integrated directly with LLM providers or AI agent frameworks using function calling. After creating a server instance, you can retrieve available tools in a format compatible with your LLM (e.g., OpenAI format). The LLM can then call these tools, and Klavis handles the execution and returns the results. This enables AI models to interact with external services and perform actions based on user prompts.
For those who prefer to host their own MCP servers, Klavis provides the open-source code. You can clone the repository and run specific MCP servers using Docker.
Common questions about purpose and usage are covered in the "What is Klavis AI?" and "How to use Klavis AI?" sections; for more detail, see the linked API Documentation, SDK Documentation, MCP Protocol Guide, and Authentication Guide.
Klavis AI provides open-source MCP integrations for AI applications. Our API offers hosted, secure MCP servers, eliminating auth management and client-side code.
Python
pip install klavis
TypeScript/JavaScript
npm install klavis
Sign up at klavis.ai and create your API key.
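The SDK takes the key as a constructor argument; a common pattern (a minimal sketch, and the KLAVIS_API_KEY variable name is just our convention, not something the SDK requires) is to read it from the environment rather than hard-coding it:
import os
from klavis import Klavis

# Read the API key from an environment variable instead of embedding it in source.
# KLAVIS_API_KEY is an arbitrary name chosen for this example.
klavis_client = Klavis(api_key=os.environ["KLAVIS_API_KEY"])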
If you already have an MCP client implementation in your codebase:
Python Example
from klavis import Klavis
from klavis.types import McpServerName

klavis_client = Klavis(api_key="your-klavis-key")

# Create a YouTube MCP server instance
youtube_server = klavis_client.mcp_server.create_server_instance(
    server_name=McpServerName.YOUTUBE,
    user_id="user123",      # Change to the user id in your platform
    platform_name="MyApp"   # Change to your platform name
)

print(f"Server created: {youtube_server.server_url}")
TypeScript Example
import { KlavisClient, Klavis } from 'klavis';

const klavisClient = new KlavisClient({ apiKey: 'your-klavis-key' });

// Create Gmail MCP server with OAuth
const gmailServer = await klavisClient.mcpServer.createServerInstance({
    serverName: Klavis.McpServerName.Gmail,
    userId: "user123",
    platformName: "MyApp"
});

// Gmail needs an OAuth flow; send the user to the returned URL
window.open(gmailServer.oauthUrl);
Integrate directly with your LLM provider or AI agent framework using function calling:
Python + OpenAI Example
import json
from openai import OpenAI
from klavis import Klavis
from klavis.types import McpServerName, ToolFormat

OPENAI_MODEL = "gpt-4o-mini"

openai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
klavis_client = Klavis(api_key="YOUR_KLAVIS_API_KEY")

# Create server instance
youtube_server = klavis_client.mcp_server.create_server_instance(
    server_name=McpServerName.YOUTUBE,
    user_id="user123",
    platform_name="MyApp"
)

# Get available tools in OpenAI format
tools = klavis_client.mcp_server.list_tools(
    server_url=youtube_server.server_url,
    format=ToolFormat.OPENAI,
)

# Initial conversation
messages = [{"role": "user", "content": "Summarize this video: https://youtube.com/watch?v=..."}]

# First OpenAI call with function calling
response = openai_client.chat.completions.create(
    model=OPENAI_MODEL,
    messages=messages,
    tools=tools.tools
)
messages.append(response.choices[0].message)

# Handle tool calls
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        result = klavis_client.mcp_server.call_tools(
            server_url=youtube_server.server_url,
            tool_name=tool_call.function.name,
            tool_args=json.loads(tool_call.function.arguments),
        )

        # Add tool result to conversation
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": str(result)
        })

# Second OpenAI call to process tool results and generate final response
final_response = openai_client.chat.completions.create(
    model=OPENAI_MODEL,
    messages=messages
)

print(final_response.choices[0].message.content)
TypeScript + OpenAI Example
import OpenAI from 'openai';
import { KlavisClient, Klavis } from 'klavis';

// Constants
const OPENAI_MODEL = "gpt-4o-mini";
const EMAIL_RECIPIENT = "john@example.com";
const EMAIL_SUBJECT = "Hello from Klavis";
const EMAIL_BODY = "This email was sent using Klavis MCP Server!";

const openaiClient = new OpenAI({ apiKey: 'your-openai-key' });
const klavisClient = new KlavisClient({ apiKey: 'your-klavis-key' });

// Create server and get tools
const gmailServer = await klavisClient.mcpServer.createServerInstance({
    serverName: Klavis.McpServerName.Gmail,
    userId: "user123",
    platformName: "MyApp"
});

// Handle OAuth authentication for Gmail
if (gmailServer.oauthUrl) {
    console.log("Please complete OAuth authorization:", gmailServer.oauthUrl);
    window.open(gmailServer.oauthUrl);
}

const tools = await klavisClient.mcpServer.listTools({
    serverUrl: gmailServer.serverUrl,
    format: Klavis.ToolFormat.Openai
});

// Initial conversation
const messages = [{
    role: "user",
    content: `Please send an email to ${EMAIL_RECIPIENT} with subject "${EMAIL_SUBJECT}" and body "${EMAIL_BODY}"`
}];

// First OpenAI call with function calling
const response = await openaiClient.chat.completions.create({
    model: OPENAI_MODEL,
    messages: messages,
    tools: tools.tools
});
messages.push(response.choices[0].message);

// Handle tool calls
if (response.choices[0].message.tool_calls) {
    for (const toolCall of response.choices[0].message.tool_calls) {
        const result = await klavisClient.mcpServer.callTools({
            serverUrl: gmailServer.serverUrl,
            toolName: toolCall.function.name,
            toolArgs: JSON.parse(toolCall.function.arguments)
        });

        // Add tool result to conversation
        messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(result)
        });
    }
}

// Second OpenAI call to process tool results and generate final response
const finalResponse = await openaiClient.chat.completions.create({
    model: OPENAI_MODEL,
    messages: messages
});

console.log(finalResponse.choices[0].message.content);
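The first call, execute tools, second call pattern is the same in both examples; if you repeat it in several places, a small helper can keep the round-trip in one spot. This is our own sketch and naming, not a Klavis or OpenAI API, and it assumes the server_url, tools, and clients are set up as shown above:
import json

def run_with_klavis_tools(openai_client, klavis_client, server_url, tools, messages, model="gpt-4o-mini"):
    """Ask the model once, execute any requested Klavis tools, then ask again with the results."""
    response = openai_client.chat.completions.create(
        model=model, messages=messages, tools=tools
    )
    message = response.choices[0].message
    messages.append(message)

    if not message.tool_calls:
        return message.content  # The model answered without needing a tool.

    for tool_call in message.tool_calls:
        result = klavis_client.mcp_server.call_tools(
            server_url=server_url,
            tool_name=tool_call.function.name,
            tool_args=json.loads(tool_call.function.arguments),
        )
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": str(result),
        })

    final = openai_client.chat.completions.create(model=model, messages=messages)
    return final.choices[0].message.content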
We're constantly expanding our MCP server ecosystem. Next on the roadmap: more examples and docs for integrating popular AI platforms and LLMs with the Klavis AI SDK.
Want to see a specific integration? Let us know or contribute to help build it!
Many MCP servers require authentication. Klavis handles this seamlessly:
# For OAuth services (Gmail, Google Drive, etc.)
server = klavis_client.mcp_server.create_server_instance(
    server_name=McpServerName.GMAIL,
    user_id="user123",
    platform_name="MyApp"
)

# Option 1 - an OAuth URL is provided in server.oauth_url; redirect the user there to authenticate
import webbrowser
webbrowser.open(server.oauth_url)

# Option 2 - for API-key-based services, set the token directly
klavis_client.mcp_server.set_auth_token(
    instance_id=server.instance_id,
    auth_token="your-service-api-key"
)
Combine multiple MCP servers for complex workflows:
# Create multiple servers
github_server = klavis_client.mcp_server.create_server_instance(...)
slack_server = klavis_client.mcp_server.create_server_instance(...)

# Use tools from both servers in a single AI conversation
all_tools = []
all_tools.extend(klavis_client.mcp_server.list_tools(github_server.server_url).tools)
all_tools.extend(klavis_client.mcp_server.list_tools(slack_server.server_url).tools)

# Initialize conversation
messages = [{"role": "user", "content": "Create a GitHub issue and notify the team on Slack"}]

# Loop to let the LLM work with multiple tools
max_iterations = 5
for iteration in range(max_iterations):
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=all_tools
    )
    messages.append(response.choices[0].message)

    # Check if the LLM wants to use tools
    if response.choices[0].message.tool_calls:
        for tool_call in response.choices[0].message.tool_calls:
            # Determine which server to use based on the tool name
            server_url = github_server.server_url if "github" in tool_call.function.name else slack_server.server_url

            # Execute tool
            result = klavis_client.mcp_server.call_tools(
                server_url=server_url,
                tool_name=tool_call.function.name,
                tool_args=json.loads(tool_call.function.arguments)
            )

            # Add tool result to conversation
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result)
            })
    else:
        # LLM finished the task
        print(f"Task completed in {iteration + 1} iterations")
        print(response.choices[0].message.content)
        break
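Routing on a substring of the tool name works for this example but gets brittle once servers expose similarly named tools. One alternative, sketched here under the assumption that OpenAI-format tool entries are dicts shaped like {"function": {"name": ...}}, is to build a name-to-server map up front:
# Map each tool name to the server that provides it.
tool_to_server = {}
for server in (github_server, slack_server):
    listed = klavis_client.mcp_server.list_tools(
        server_url=server.server_url,
        format=ToolFormat.OPENAI,
    )
    for tool in listed.tools:
        # Assumes each entry looks like {"type": "function", "function": {"name": ...}}.
        tool_to_server[tool["function"]["name"]] = server.server_url

# Then, inside the tool-call loop:
# server_url = tool_to_server[tool_call.function.name]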
Want to run MCP servers yourself? All our servers are open-source:
# Clone the repository
git clone https://github.com/klavis-ai/klavis.git
cd klavis
# Run a specific MCP server
cd mcp_servers/github
docker build -t klavis-github .
docker run -p 8000:8000 klavis-github
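Once the container is up, any MCP client can talk to it over the mapped port. Reusing the earlier SSE sketch (the /sse path and SSE transport are assumptions here; check the server's README under mcp_servers/github for its actual endpoint):
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # "http://localhost:8000/sse" is an assumed endpoint for the local container.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print(await session.list_tools())

asyncio.run(main())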
Build custom integrations with our MCP clients.
We welcome contributions! Check the repository for guidelines on how to get started.
This project is licensed under the MIT License - see the LICENSE file for details.