by stippi
Provides an LLM‑powered autonomous coding assistant with CLI, GUI, and MCP server modes for real‑time code analysis, modification, and project‑wide context management.
Code Assistant enables autonomous code exploration and modification by leveraging large language models. It operates in three interfaces—graphical UI, terminal, and headless MCP server—so developers can interact with their codebase through natural language commands.
git clone https://github.com/stippi/code-assistant
cd code-assistant
cargo build --release
The executable is placed at target/release/code-assistant. Register your projects in ~/.config/code-assistant/projects.json with their paths.
code-assistant --ui
code-assistant --task "Explain this module"
code-assistant server
Additional flags let you choose the provider (--provider openai), the model (--model gpt-4o), and the tool syntax (--tool-syntax xml).
Q: Which LLM providers are supported?
A: Anthropic (default), OpenAI, SAP AI Core, Ollama, Vertex AI, OpenRouter, Groq, MistralAI, and any provider exposing a compatible API.
Q: How do I choose a tool‑syntax mode?
A: Use --tool-syntax native for providers with built-in function calling, --tool-syntax xml for XML-style tags, or --tool-syntax caret for triple-caret blocks. The choice is fixed for the lifetime of a session.
Q: Can I run Code Assistant without a GUI?
A: Yes, the terminal mode works fully headless, and the server subcommand launches an MCP server for client integration.
Q: Where are project configurations stored?
A: In ~/.config/code-assistant/projects.json. Temporary projects are created automatically for unregistered folders.
Q: Is there sandboxing for tool execution?
A: Currently tools reject absolute paths but do not enforce strict sandboxing. Future releases aim to improve security with git-aware and sandboxed execution.
An AI coding assistant built in Rust that provides both command-line and graphical interfaces for autonomous code analysis and modification.
Multi-Modal Tool Execution: Adapts to different LLM capabilities with pluggable tool invocation modes - native function calling, XML-style tags, and triple-caret blocks - ensuring compatibility across various AI providers.
Real-Time Streaming Interface: Advanced streaming processors parse and display tool invocations as they stream from the LLM, with smart filtering to prevent unsafe tool combinations.
Session-Based Project Management: Each chat session is tied to a specific project and maintains persistent state, working memory, and draft messages with attachment support.
Multiple Interface Options: Choose between a modern GUI built on Zed's GPUI framework, traditional terminal interface, or headless MCP server mode for integration with MCP clients such as Claude Desktop.
Intelligent Project Exploration: Autonomously builds understanding of codebases through working memory that tracks file structures, dependencies, and project context.
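The working-memory idea described above can be pictured as a small per-session data structure that accumulates what the agent has learned so far. The following Python sketch is purely illustrative; the names (WorkingMemory, record_file, has_read) are invented for this example and are not the actual Rust types.

```python
from dataclasses import dataclass, field

# Illustrative model of a per-session working memory: it tracks the file
# structure seen so far, file contents the agent has read, and free-form notes.
@dataclass
class WorkingMemory:
    file_tree: dict = field(default_factory=dict)     # dir -> list of entries
    loaded_files: dict = field(default_factory=dict)  # path -> file content
    notes: list = field(default_factory=list)         # accumulated context

    def record_file(self, path: str, content: str) -> None:
        self.loaded_files[path] = content

    def has_read(self, path: str) -> bool:
        # Useful for tool filtering: editing a file the agent never read
        # is a red flag (see the roadmap notes below on replace_in_file).
        return path in self.loaded_files

mem = WorkingMemory()
mem.record_file("src/main.rs", "fn main() {}")
print(mem.has_read("src/main.rs"))  # True
```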
git clone https://github.com/stippi/code-assistant
cd code-assistant
cargo build --release
The binary will be available at target/release/code-assistant.
Create ~/.config/code-assistant/projects.json to define available projects:
{
"code-assistant": {
"path": "/Users/<username>/workspace/code-assistant"
},
"my-project": {
"path": "/Users/<username>/workspace/my-project"
}
}
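To see the shape of this file programmatically, here is a small Python sketch that loads and sanity-checks a projects.json like the one above. The schema (name mapping to an object with a "path") comes from the example; the validation rules are our own assumptions, not something the tool is documented to enforce.

```python
import json
import pathlib
import tempfile

def load_projects(config_path: str) -> dict:
    """Return a name -> absolute-path mapping from a projects.json file."""
    raw = json.loads(pathlib.Path(config_path).read_text())
    projects = {}
    for name, entry in raw.items():
        path = entry.get("path")
        # Assumed rule: paths should be absolute, matching the examples above.
        if not path or not path.startswith("/"):
            raise ValueError(f"project {name!r} needs an absolute 'path'")
        projects[name] = path
    return projects

# Demo against a throwaway config file:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"my-project": {"path": "/tmp/my-project"}}, f)

print(load_projects(f.name))  # -> {'my-project': '/tmp/my-project'}
```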
# Start with graphical interface
code-assistant --ui
# Start GUI with initial task
code-assistant --ui --task "Analyze the authentication system"
# Basic usage
code-assistant --task "Explain the purpose of this codebase"
# With specific provider and model
code-assistant --task "Add error handling" --provider openai --model gpt-5
code-assistant server
Configure in Claude Desktop settings (Developer tab → Edit Config):
{
"mcpServers": {
"code-assistant": {
"command": "/path/to/code-assistant/target/release/code-assistant",
"args": ["server"],
"env": {
"PERPLEXITY_API_KEY": "pplx-...", // optional, enables perplexity_ask tool
"SHELL": "/bin/zsh" // your login shell, required when configuring "env" here
}
}
}
}
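Under the hood, an MCP client such as Claude Desktop spawns the configured command and speaks JSON-RPC 2.0 with it over stdin/stdout. The sketch below shows the initialize request a client sends first; the field names follow the Model Context Protocol specification, while the clientInfo values are made up for illustration.

```python
import json

# First message an MCP client writes to the server's stdin (JSON-RPC 2.0).
# Field names per the MCP spec; the protocolVersion and clientInfo values
# here are illustrative placeholders.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
line = json.dumps(initialize)
print(line)
```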
Anthropic (default):
export ANTHROPIC_API_KEY="sk-ant-..."
code-assistant --provider anthropic --model claude-sonnet-4-20250514
OpenAI:
export OPENAI_API_KEY="sk-..."
code-assistant --provider openai --model gpt-4o
SAP AI Core:
Create ~/.config/code-assistant/ai-core.json:
{
"auth": {
"client_id": "<service-key-client-id>",
"client_secret": "<service-key-client-secret>",
"token_url": "https://<your-url>/oauth/token",
"api_base_url": "https://<your-url>/v2/inference"
},
"models": {
"claude-sonnet-4": "<deployment-id>"
}
}
Ollama:
code-assistant --provider ollama --model llama2 --num-ctx 4096
Other providers: Vertex AI (Google), OpenRouter, Groq, MistralAI
Tool Syntax Modes:
--tool-syntax native: Use the provider's built-in tool calling (most reliable, but streaming of parameters depends on the provider)
--tool-syntax xml: XML-style tags that allow streaming of parameters
--tool-syntax caret: Triple-caret blocks for token efficiency and streaming of parameters
Session Recording:
# Record session (Anthropic only)
code-assistant --record session.json --task "Optimize database queries"
# Playback session
code-assistant --playback session.json --fast-playback
Other Options:
--continue-task: Resume from previous session state
--use-diff-format: Enable alternative diff format for file editing
--verbose: Enable detailed logging
--base-url: Custom API endpoint
The code-assistant features several innovative architectural decisions:
Adaptive Tool Syntax: Automatically generates different system prompts and streaming processors based on the target LLM's capabilities, allowing the same core logic to work across providers with varying function calling support.
Smart Tool Filtering: Real-time analysis of tool invocation patterns prevents logical errors like attempting to edit files before reading them, with the ability to truncate responses mid-stream when unsafe combinations are detected.
Multi-Threaded Streaming: Sophisticated async architecture that handles real-time parsing of tool invocations while maintaining responsive UI updates and proper state management across multiple chat sessions.
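The smart-filtering idea above can be modeled in a few lines. This is a toy Python sketch, not the actual Rust implementation: the tool name replace_in_file appears in this document, while read_files and the (tool, path) stream shape are assumptions made for the example.

```python
# Toy model of smart tool filtering: as tool invocations stream in, drop
# everything from the first unsafe call onward (editing a file that was
# never read), mirroring the mid-stream truncation described above.
def filter_invocations(invocations, already_read=()):
    read = set(already_read)
    allowed = []
    for tool, path in invocations:
        if tool == "read_files":
            read.add(path)
            allowed.append((tool, path))
        elif tool == "replace_in_file" and path not in read:
            break  # truncate the response at the unsafe invocation
        else:
            allowed.append((tool, path))
    return allowed

stream = [("read_files", "src/lib.rs"),
          ("replace_in_file", "src/lib.rs"),
          ("replace_in_file", "src/main.rs")]  # never read -> truncated
print(filter_invocations(stream))
# -> [('read_files', 'src/lib.rs'), ('replace_in_file', 'src/lib.rs')]
```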
Contributions are welcome! The codebase demonstrates advanced patterns in async Rust, AI agent architecture, and cross-platform UI development.
This section is not really a roadmap, as the items are in no particular order. Below are some topics that are likely the next focus.
When the LLM streams a replace_in_file invocation, we know which file it targets quite early. If we also know this file has changed since the LLM last read it, we can block the attempt with an appropriate error message.
The execute_command tool runs a shell with the provided command line, which at the moment is completely unchecked.
Search blocks are already matched with some leniency (normalized \n line endings, no trailing white space). This increases the success rate of matching search blocks quite a bit, but certain ways of fuzzy matching might increase the success rate even more.
Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the replace_in_file tool includes the complete file and tells the LLM not to re-read it.
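The matching leniency mentioned in the roadmap (normalized \n line endings, stripped trailing whitespace) can be sketched as follows. The function name find_block and its return convention are invented for this illustration; the normalization rules are the ones the text describes.

```python
# Sketch of lenient search-block matching: normalize \r\n to \n and strip
# trailing whitespace on every line before comparing line sequences.
def normalize(text: str) -> list:
    return [line.rstrip() for line in text.replace("\r\n", "\n").split("\n")]

def find_block(file_text: str, search_block: str) -> int:
    """Return the starting line index of the block, or -1 if not found."""
    haystack, needle = normalize(file_text), normalize(search_block)
    for i in range(len(haystack) - len(needle) + 1):
        if haystack[i:i + len(needle)] == needle:
            return i
    return -1

# CRLF endings and trailing spaces in the file do not break the match:
file_text = "fn main() {\r\n    println!(\"hi\");   \r\n}\n"
search = 'fn main() {\n    println!("hi");\n}'
print(find_block(file_text, search))  # 0
```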
Discover more MCP servers with similar functionality and use cases
by zed-industries
Provides real-time collaborative editing powered by Rust, enabling developers to edit code instantly across machines with a responsive, GPU-accelerated UI.
by cline
Provides autonomous coding assistance directly in the IDE, enabling file creation, editing, terminal command execution, browser interactions, and tool extension with user approval at each step.
by continuedev
Provides continuous AI assistance across IDEs, terminals, and CI pipelines, offering agents, chat, inline editing, and autocomplete to accelerate software development.
by github
Enables AI agents, assistants, and chatbots to interact with GitHub via natural‑language commands, providing read‑write access to repositories, issues, pull requests, workflows, security data and team activity.
by block
Automates engineering tasks by installing, executing, editing, and testing code using any large language model, providing end‑to‑end project building, debugging, workflow orchestration, and external API interaction.
by RooCodeInc
An autonomous coding agent that lives inside VS Code, capable of generating, refactoring, debugging code, managing files, running terminal commands, controlling a browser, and adapting its behavior through custom modes and instructions.
by lastmile-ai
A lightweight, composable framework for building AI agents using Model Context Protocol and simple workflow patterns.
by firebase
Provides a command‑line interface to manage, test, and deploy Firebase projects, covering hosting, databases, authentication, cloud functions, extensions, and CI/CD workflows.
by gptme
Empowers large language models to act as personal AI assistants directly inside the terminal, providing capabilities such as code execution, file manipulation, web browsing, vision, and interactive tool usage.