Code Assistant is an LLM-powered, autonomous coding assistant built in Rust. It is designed to help developers with various code-related tasks by intelligently exploring codebases, reading and writing files, and interacting with users for better decision-making. It also offers an MCP (Model Context Protocol) server mode, allowing it to function as a plugin for MCP client applications like Claude Desktop.
FAQ

Q: What LLM providers does Code Assistant support?
A: Code Assistant supports Anthropic, OpenAI, Ollama, Vertex, and OpenRouter.
Q: Can I use Code Assistant with a GUI?
A: Yes, you can start Code Assistant with a GUI using the --ui flag.
Q: How do I configure Code Assistant for use with Claude Desktop?
A: You need to configure projects.json to define your projects and claude_desktop_config.json to add Code Assistant as an MCP server in Claude Desktop's settings.
Q: Is there a way to record and play back sessions?
A: Yes, you can record API responses to a file using --record (currently for the Anthropic provider only) and play them back using --playback.
Q: What are the security considerations when using Code Assistant?
A: The project notes that it has insufficient protection against prompt injections and should be used with trusted repositories only. The execute_command tool runs a shell with the provided command line, which is currently unchecked. Future improvements aim to enhance security by sandboxing tool execution and restricting file access.
Installation

Ensure you have Rust installed on your system. Then:
# Clone the repository
git clone https://github.com/stippi/code-assistant
# Navigate to the project directory
cd code-assistant
# Build the project
cargo build --release
# The binary will be available in target/release/code-assistant
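To sanity-check the build, you can run the binary straight from the target directory; the symlink below is just one optional way of putting it on your PATH (the /usr/local/bin location is an assumption, not a requirement):

# Print usage to confirm the binary runs
./target/release/code-assistant --help

# Optionally link it into a directory on your PATH (may require sudo)
ln -s "$(pwd)/target/release/code-assistant" /usr/local/bin/code-assistant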
code-assistant implements the Model Context Protocol by Anthropic. This means it can be added as a plugin to MCP client applications such as Claude Desktop.
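For a sense of what this looks like on the wire: MCP clients exchange JSON-RPC 2.0 messages with the server, so a client request invoking the list_projects tool could look roughly like this (a sketch following the MCP tools/call envelope; the id is arbitrary):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_projects",
    "arguments": {}
  }
}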
Create a file ~/.config/code-assistant/projects.json. This file defines the projects available in MCP server mode (via the list_projects tool and the file operation tools). It has the following structure:
{
"code-assistant": {
"path": "/Users/<username>/workspace/code-assistant"
},
"asteroids": {
"path": "/Users/<username>/workspace/asteroids"
},
"zed": {
"path": "Users/<username>/workspace/zed"
}
}
Note: the file maps project names to their absolute paths.
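A minimal sketch of creating the file from the shell (the project name and path below are placeholders for your own):

mkdir -p ~/.config/code-assistant
cat > ~/.config/code-assistant/projects.json << 'EOF'
{
  "my-project": {
    "path": "/Users/<username>/workspace/my-project"
  }
}
EOF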
Next, configure Claude Desktop: open its settings, select the Developer tab, and click Edit Config. A Finder window opens highlighting the file claude_desktop_config.json. Open that file in your favorite text editor and add a configuration for code-assistant under mcpServers. An example configuration is given below (the // comments are explanatory only; strict JSON does not allow comments):
{
"mcpServers": {
"code-assistant": {
"command": "/Users/<username>/workspace/code-assistant/target/release/code-assistant",
"args": [
"server"
],
"env": {
"PERPLEXITY_API_KEY": "pplx-...", // optional, enables perplexity_ask tool
"SHELL": "/bin/zsh" // your login shell, required when configuring "env" here
}
}
}
}
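On macOS, Claude Desktop keeps this file under ~/Library/Application Support/Claude. After editing, a quick way to confirm the JSON still parses (assuming jq is installed) is:

jq . "$HOME/Library/Application Support/Claude/claude_desktop_config.json"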
Code Assistant can run in two modes:

1. Agent Mode (Default)

Run tasks directly from the command line:

code-assistant --task <TASK> [OPTIONS]
Available options:
--path <PATH>: Path to the code directory to analyze (default: current directory)
-t, --task <TASK>: Task to perform on the codebase (required in terminal mode, optional with --ui)
--ui: Start with GUI interface
--continue-task: Continue from previous state
-v, --verbose: Enable verbose logging
-p, --provider <PROVIDER>: LLM provider to use [ai-core, anthropic, open-ai, ollama, vertex, open-router] (default: anthropic)
-m, --model <MODEL>: Model name to use (provider-specific defaults: anthropic="claude-sonnet-4-20250514", open-ai="gpt-4o", vertex="gemini-2.5-pro-preview-06-05", open-router="anthropic/claude-3-7-sonnet", ollama=required)
--base-url <BASE_URL>: API base URL for the LLM provider to use
--tools-type <TOOLS_TYPE>: Type of tool declaration [native, xml] (default: xml); native = tools via API, xml = custom system message
--num-ctx <NUM_CTX>: Context window size in tokens (default: 8192, only relevant for Ollama)
--record <RECORD>: Record API responses to a file (currently supported for the Anthropic provider only)
--playback <PLAYBACK>: Play back a recorded session from a file
--fast-playback: Fast playback mode; ignore chunk timing when playing recordings

Environment variables:
ANTHROPIC_API_KEY: Required when using the Anthropic provider
OPENAI_API_KEY: Required when using the OpenAI provider
GOOGLE_API_KEY: Required when using the Vertex provider
OPENROUTER_API_KEY: Required when using the OpenRouter provider
PERPLEXITY_API_KEY: Required to use the Perplexity search API tools

Examples:
# Analyze code in current directory using Anthropic's Claude
code-assistant --task "Explain the purpose of this codebase"
# Use a different provider and model
code-assistant --task "Review this code for security issues" --provider openai --model gpt-4o
# Analyze a specific directory with verbose logging
code-assistant --path /path/to/project --task "Add error handling" --verbose
# Start with GUI interface
code-assistant --ui
# Start GUI with an initial task
code-assistant --ui --task "Refactor the authentication module"
# Use Ollama with a local model
code-assistant --task "Document this API" --provider ollama --model llama2 --num-ctx 4096
# Record a session for later playback (Anthropic only)
code-assistant --task "Optimize database queries" --record ./recordings/db-optimization.json
# Play back a recorded session with fast-forward (no timing delays)
code-assistant --playback ./recordings/db-optimization.json --fast-playback
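Since the API keys are read from the environment, a complete first run with the default Anthropic provider might look like this (the key value is a placeholder):

export ANTHROPIC_API_KEY="sk-ant-..."
code-assistant --task "Explain the purpose of this codebase"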
2. Server Mode

Runs as a Model Context Protocol server:
code-assistant server [OPTIONS]
Available options:
-v, --verbose: Enable verbose logging
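For example, to exercise the server with verbose logging outside of an MCP client (normally Claude Desktop launches it for you via the configuration above):

code-assistant server --verbose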
Roadmap

This section is not really a roadmap, as the items are in no particular order. Below are some topics that are likely the next focus.
Security: The execute_command tool runs a shell with the provided command line, which at the moment is completely unchecked.
Fuzzy matching of search blocks: Currently, search blocks must match exactly after light normalization (\n line endings, no trailing white space). This increases the success rate of matching search blocks quite a bit, but certain ways of fuzzy matching might increase the success even more. Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the replace_in_file tool includes the complete file and tells the LLM not to re-read the file.

Contributions are welcome! Please feel free to submit a Pull Request.