by jamsocket
Run arbitrary Python code securely in persistent, stateful sandboxes that remain available indefinitely.
ForeverVM provides an API and CLI for creating machines—isolated, stateful Python processes—that can execute arbitrary code one instruction at a time. Machines are automatically swapped to disk when idle and re‑hydrated on demand, enabling them to run "forever" without manual management.
Run `npx forevervm login` to authenticate, then `npx forevervm repl` to create a new machine, or `npx forevervm repl <machine_name>` to reconnect to an existing one. List your machines with `npx forevervm machine list`.

Install the SDK (`npm i @forevervm/sdk`) and use the ForeverVM class to create machines, execute code, stream output, and filter by tags or memory limits:

import { ForeverVM } from '@forevervm/sdk'

const fvm = new ForeverVM({ token: process.env.FOREVERVM_TOKEN })
const repl = fvm.repl()
const result = await repl.exec('4 + 4')
console.log('result:', await result.result)

Set memory_mb per machine to control resource usage.

Q: Do I need to manually stop a machine?
A: No. Machines automatically persist to disk when idle and are cleaned up according to the service's retention policy.

Q: How is security enforced?
A: Each machine runs in a sandbox isolated from the host system, and execution is restricted to pure Python code without filesystem or network access.

Q: Can I set resource limits?
A: Yes. Use the memory_mb field when creating a machine to cap its memory usage.

Q: What languages are supported?
A: Currently only Python is supported.

Q: How do I authenticate the SDK?
A: Export the token as FOREVERVM_TOKEN or pass it directly when constructing ForeverVM.
foreverVM provides an API for running arbitrary, stateful Python code securely.
The core concepts in foreverVM are machines and instructions.
Machines represent a stateful Python process. You interact with a machine by running instructions (Python statements and expressions) on it, and receiving the results. A machine processes one instruction at a time.
You will need an API token (if you need one, reach out to paul@jamsocket.com).
The easiest way to try out foreverVM is using the CLI. First, you will need to log in:
npx forevervm login
Once logged in, you can open a REPL interface with a new machine:
npx forevervm repl
When foreverVM starts your machine, it gives it an ID that you can later use to reconnect to it. You can reconnect to a machine like this:
npx forevervm repl [machine_name]
You can list your machines (in reverse order of creation) like this:
npx forevervm machine list
You don't need to terminate machines -- foreverVM will automatically swap them from memory to disk when they are idle, and then automatically swap them back when needed. This is what allows foreverVM to run REPLs "forever".
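The idle-swap behavior can be pictured as a tiny state machine: a machine is either live in memory or persisted to disk, and any access transparently re-hydrates it first. The sketch below is purely illustrative -- the class, states, and methods are invented for this example and are not part of the SDK:

```typescript
type State = 'in-memory' | 'on-disk'

// Illustrative stand-in for a machine that survives being swapped to disk.
class SketchMachine {
  private state: State = 'in-memory'
  private vars = new Map<string, number>()

  // Swap to disk when idle (an explicit call here, automatic in foreverVM).
  suspend(): void {
    this.state = 'on-disk'
  }

  // Any instruction re-hydrates the machine first, so callers
  // never have to manage the lifecycle themselves.
  set(name: string, value: number): void {
    if (this.state === 'on-disk') this.state = 'in-memory' // re-hydrate
    this.vars.set(name, value)
  }

  get(name: string): number | undefined {
    if (this.state === 'on-disk') this.state = 'in-memory' // re-hydrate
    return this.vars.get(name)
  }
}

const m = new SketchMachine()
m.set('x', 42)
m.suspend() // idle: swapped to disk
console.log(m.get('x')) // state survives the swap: 42
```

The point of the sketch is that swapping is invisible to the caller: state set before the swap is still there afterward, which is what lets a foreverVM machine stay usable indefinitely.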
import { ForeverVM } from '@forevervm/sdk'
const token = process.env.FOREVERVM_TOKEN
if (!token) {
throw new Error('FOREVERVM_TOKEN is not set')
}
// Initialize foreverVM
const fvm = new ForeverVM({ token })
// Connect to a new machine.
const repl = fvm.repl()
// Execute some code
let execResult = repl.exec('4 + 4')
// Get the result
console.log('result:', await execResult.result)
// We can also print stdout and stderr
execResult = repl.exec('for i in range(10):\n print(i)')
for await (const output of execResult.output) {
console.log(output.stream, output.data)
}
process.exit(0)
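The `for await` loop above consumes `execResult.output` as an async iterable of `{ stream, data }` chunks. Here is a self-contained sketch of that consumption pattern using a local stand-in generator (`fakeOutput` is invented for illustration; it is not part of the SDK):

```typescript
interface OutputChunk {
  stream: 'stdout' | 'stderr'
  data: string
}

// Local stand-in for execResult.output: yields chunks as they "arrive".
async function* fakeOutput(): AsyncGenerator<OutputChunk> {
  for (let i = 0; i < 3; i++) {
    yield { stream: 'stdout', data: String(i) }
  }
}

async function main() {
  const lines: string[] = []
  // Same consumption pattern as the SDK example above.
  for await (const output of fakeOutput()) {
    lines.push(`${output.stream}: ${output.data}`)
  }
  console.log(lines) // [ 'stdout: 0', 'stdout: 1', 'stdout: 2' ]
}

main()
```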
You can create machines with tags and filter machines by tags:
import { ForeverVM } from '@forevervm/sdk'
const fvm = new ForeverVM({ token: process.env.FOREVERVM_TOKEN })
// Create a machine with tags
const machineResponse = await fvm.createMachine({
tags: {
env: 'production',
owner: 'user123',
project: 'demo'
}
})
// List machines filtered by tags
const productionMachines = await fvm.listMachines({
tags: { env: 'production' }
})
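Filtering by tags behaves like a subset match: a machine matches the filter when every key/value pair in the filter appears in its tags (assuming exact-match semantics; this is a local sketch of that logic, not the SDK's implementation):

```typescript
type Tags = Record<string, string>

// A machine matches when every filter entry appears in its tags.
function matchesTags(machineTags: Tags, filter: Tags): boolean {
  return Object.entries(filter).every(([k, v]) => machineTags[k] === v)
}

const machines = [
  { name: 'a', tags: { env: 'production', owner: 'user123' } },
  { name: 'b', tags: { env: 'staging' } },
]

const production = machines.filter((m) => matchesTags(m.tags, { env: 'production' }))
console.log(production.map((m) => m.name)) // [ 'a' ]
```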
You can create machines with memory limits by specifying the memory size in megabytes:
// Create a machine with 512MB memory limit
const machineResponse = await fvm.createMachine({
memory_mb: 512,
})
{
  "mcpServers": {
    "forevervm": {
      "command": "npx",
      "args": ["-y", "forevervm"],
      "env": { "FOREVERVM_TOKEN": "<YOUR_API_KEY>" }
    }
  }
}