The AI app prototype maker that boots your VM before you type
Every other tool in this space sells the feeling of “instant.” mk0r ships the system that makes instant possible: a Firestore-backed pool of pre-warmed sandboxes, silently topped up on every page-open, so your first prompt runs on a VM that was already alive before you typed a word.
Why “instant” usually lies
Most tools that promise an instant AI prototype maker do two things. They skip signup, and they start booting your sandbox after you click the button. The first is a cosmetic win. The second is where time actually hides.
Booting a fresh E2B sandbox for a React + Vite + Playwright stack is not cheap. The container has to come up, a persistent node has to accept an ACP initialize call, a session has to be registered with MCP servers attached, a model has to be selected, and a git repo has to be initialized inside /app. On a cold path, that is tens of seconds before your first token is allowed to flow.
The honest way to collapse that gap is to pay the cost earlier, not to hide it behind a spinner. mk0r pays it at page-open time, for every visitor, regardless of whether they ever type a prompt. The pool is the product choice behind “no setup, no account.”
What the mount ping triggers
The anchor fact: there is a Firestore collection called vm_pool
The collection name is the constant POOL_COLLECTION in src/core/e2b.ts, defined on line 1846. Each document represents one E2B sandbox and carries this shape:
- status: “warming” while the VM is still finishing its boot pipeline; “ready” once ACP init, session creation, model pinning, and git provisioning are done.
- specHash: e2b-template-${E2B_TEMPLATE_ID}. When the template is rebuilt, old docs are evicted in cleanupStalePool before they can be claimed.
- sandboxId, host, acpUrl, previewUrl: the live coordinates of a running E2B sandbox.
- sessionId, modes, models, agentCapabilities: the ACP session already negotiated inside that sandbox.
- createdAt, readyAt: timestamps used by cleanupStalePool, bounded by POOL_MAX_AGE_MS = 45 * 60 * 1000.
None of that is a diagram of what could happen in theory. That is the document shape the server writes every time it warms a sandbox.
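As a reader's sketch, those fields fit a small TypeScript interface. The field names come from the list above; the concrete types and the isClaimable helper are assumptions for illustration, not the real source:

```typescript
// Sketch of a vm_pool document. Field names follow the list above;
// types are inferred from usage and may differ from the real code.
type PoolStatus = "warming" | "ready";

interface PoolDoc {
  status: PoolStatus;
  specHash: string;                 // e2b-template-${E2B_TEMPLATE_ID}
  sandboxId: string;
  host: string;
  acpUrl: string;
  previewUrl: string;
  sessionId: string;
  modes: string[];
  models: string[];
  agentCapabilities: Record<string, unknown>;
  createdAt: number;                // epoch ms, checked against POOL_MAX_AGE_MS
  readyAt?: number;                 // set when status flips to "ready"
}

const POOL_MAX_AGE_MS = 45 * 60 * 1000;

// Hypothetical helper: a doc is claimable only when it is ready, fresh,
// and booted from the currently active template.
function isClaimable(doc: PoolDoc, currentSpecHash: string, now: number): boolean {
  return (
    doc.status === "ready" &&
    doc.specHash === currentSpecHash &&
    now - doc.createdAt < POOL_MAX_AGE_MS
  );
}
```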
The six things that finish before you hit send
When prewarmSession() flips a doc from warming to ready, every step below has already completed on the sandbox you will eventually claim.
E2B sandbox booted
createSandbox() returns a running container keyed by E2B_TEMPLATE_ID. Template IDs differ per environment: 2yi5lxazr1abcs2ew6h8 for production, 2dc172fstwlsuw8j5t55 for staging.
ACP initialize done
POST to /initialize with a shared ANTHROPIC_API_KEY returns the agent capabilities. The real user's key is rebound on claim.
Session created
POST /session/new registers cwd=/app, attaches the Playwright MCP server (npx @playwright/mcp with CDP endpoint :9222), and applies the default app-builder system prompt.
Model pinned to haiku
POST /session/set_model with modelId=haiku sets the Quick-start default so the first response streams in seconds.
Git initialized in /app
ensureSessionRepo() provisions a repo under a throwaway pw-${poolDocId} key. Every prompt the user later sends writes a real commit on top of this base.
Doc flipped to ready
The final act is a Firestore write that changes status from 'warming' to 'ready'. Only then is the sandbox eligible for a claim.
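The sequencing above can be sketched in a few lines. This is an illustrative model, not the real prewarmSession: the five stub stages stand in for real E2B/ACP/git calls (which are asynchronous in production), and markReady stands in for the final Firestore write:

```typescript
// Illustrative model of the warm-up sequence; real steps are async
// network calls (E2B, ACP, git), stubbed here as synchronous no-ops.
interface Stage { name: string; run: () => void }

function prewarmSketch(stages: Stage[], markReady: () => void): string[] {
  const completed: string[] = [];
  for (const stage of stages) {
    stage.run();              // any throw aborts before the doc can flip
    completed.push(stage.name);
  }
  markReady();                // step six: Firestore write 'warming' -> 'ready'
  return completed;
}

// The first five stages from the section above, as no-op stubs.
const stages: Stage[] = [
  "createSandbox",            // E2B container boots from E2B_TEMPLATE_ID
  "acpInitialize",            // POST /initialize with the shared key
  "sessionNew",               // POST /session/new, cwd=/app, Playwright MCP
  "setModelHaiku",            // POST /session/set_model, modelId=haiku
  "ensureSessionRepo",        // git init in /app under a pw-${poolDocId} key
].map((name) => ({ name, run: () => {} }));
```

Only the markReady callback touches the doc, which is why a crash at any earlier stage leaves the document in 'warming' and ineligible for a claim.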
The claim is a single atomic transaction
The moment you submit your first prompt, the server runs a Firestore transaction that reads one ready doc matching the current specHash, deletes it, and returns it to the caller. Two concurrent users cannot claim the same sandbox; the losing transaction just falls through to a fresh boot path.
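The one-winner guarantee can be modeled with an in-memory stand-in for the transaction. Everything here is hypothetical except the semantics described above: read one ready doc for the current specHash, delete it in the same atomic step, and return null to the loser:

```typescript
// In-memory model of the claim: find one ready doc on the current
// specHash and delete it in the same step. In production this is a
// Firestore transaction; a Map works here because the whole claim is
// one synchronous block.
interface ReadyDoc { specHash: string; sandboxId: string }

function claimSketch(pool: Map<string, ReadyDoc>, specHash: string): ReadyDoc | null {
  for (const [id, doc] of pool) {
    if (doc.specHash === specHash) {
      pool.delete(id);        // read + delete happen as one atomic unit
      return doc;             // winner gets the live sandbox coordinates
    }
  }
  return null;                // loser falls through to the fresh boot path
}
```

Run two claims against a pool holding one doc and exactly one caller gets a sandbox; the other gets null and boots cold, which matches the behavior described above.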
From mount to first commit
0 sec of onboarding before your first prompt
A quick sanity check you can run
GET /api/vm/prewarm returns the same pool state the server uses to decide whether to spawn more sandboxes. The body exposes the current target, ready count, warming count, stale count, active specHash, and the host of one ready sandbox. If the pool is doing its job, ready is at or above target and warming has space to refill as claims drain it.
That endpoint is not a marketing diagram; it is the same pool state topupPool() reads on every landing-page mount.
Why the 45-minute cutoff, and not something rounder
The constant that governs pool freshness is POOL_MAX_AGE_MS = 45 * 60 * 1000. The comment in the source is honest about why: the E2B sandbox itself times out at one hour, and a doc that is 45 minutes old has at most 15 minutes of life left once a user claims it. That is the floor for “enough time to do useful prototyping without the VM dying under the prompt.” Shorter would churn sandboxes too hard. Longer would hand out dying VMs.
The other half of freshness is the specHash. Every ready doc carries the hash of the template it was booted from. When the template is rebuilt, the new prewarm calls tag fresh docs with the new hash, and cleanupStalePool kills any doc still sitting on the old one. That is how the pool rolls forward without a deploy window where a stale sandbox could be claimed.
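Both freshness rules collapse into one eviction predicate. A sketch of what cleanupStalePool must decide, per the description above — the constant and the two rules come from the text, while the function shape is an assumption:

```typescript
// Sketch of the eviction pass described above.
const POOL_MAX_AGE_MS = 45 * 60 * 1000;   // E2B sandbox itself dies at 60 min

interface PooledDoc { specHash: string; createdAt: number }

function cleanupSketch(
  docs: Map<string, PooledDoc>,
  currentSpecHash: string,
  now: number,
): string[] {
  const evicted: string[] = [];
  for (const [id, doc] of docs) {
    const wrongTemplate = doc.specHash !== currentSpecHash;  // template rolled
    const tooOld = now - doc.createdAt >= POOL_MAX_AGE_MS;   // <15 min of life left
    if (wrongTemplate || tooOld) {
      docs.delete(id);        // safe: Map iterators tolerate deleting the current entry
      evicted.push(id);
    }
  }
  return evicted;
}
```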
How this compares to the rest of the field
Most tools in this space fall into one of two buckets. Some require signup and route you into a dashboard before the prototype maker ever sees your prompt. Others skip signup but boot a fresh sandbox on first submit, which means the latency gap you thought you escaped is still there, hidden under a loading screen.
How mk0r handles the zero-to-first-prompt path
| Feature | Typical AI app prototype maker | mk0r |
|---|---|---|
| Account required to try | Usually yes, or at least an email gate | No |
| Sandbox booting strategy | Cold start after user clicks build | Pool warmed on every page mount via topupPool() |
| ACP / session init time paid by user | Full init on every first run | Zero: already done before claim |
| First prompt lands on | An empty sandbox still installing packages | A running VM with dev server, Chromium, Playwright, git |
| Pool observability | Opaque | GET /api/vm/prewarm returns live counts + specHash |
| Rollover when the template changes | Manual flush or user-visible stale boots | cleanupStalePool evicts by specHash automatically |
Reading the pool yourself, in three steps
Open the network panel, then open mk0r.com
You will see an XHR POST to /api/vm/prewarm fire on the landing page's mount. The response body is {ok, ready, warming, spawned}.
Hit GET /api/vm/prewarm
Same route, different method. Returns target, ready, warming, stale, specHash, and readyHost. The host is the live E2B proxy for one ready sandbox.
Submit a prompt
Watch the claim transaction fire. The ready count drops, topupPool() runs again in the background to refill, and your prompt streams against the already-booted VM.
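Step two above can be scripted. A minimal sketch, with the fetcher injected so the logic can run without the live endpoint; the field names come from the GET body described above, while the function name and base URL default are hypothetical:

```typescript
// Sketch: read the pool status body described above. The fetcher is
// injected so this can be exercised against a stub.
interface PoolStatusBody {
  target: number;
  ready: number;
  warming: number;
  stale: number;
  specHash: string;
  readyHost?: string;
}

async function readPoolStatus(
  fetcher: (url: string) => Promise<{ json(): Promise<unknown> }>,
  base = "https://mk0r.com",              // hypothetical base URL
): Promise<PoolStatusBody> {
  const res = await fetcher(`${base}/api/vm/prewarm`);
  return (await res.json()) as PoolStatusBody;
}
```

Passing the global fetch as the fetcher hits the real route; passing a stub lets you test the "ready at or above target" check offline.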
When a pre-warmed pool is the wrong answer
If the product is a plain LLM-in-a-text-box, a pool is overkill. There is nothing to warm. The pool matters when the unit of output is a running application: a dev server, a real browser, a file tree, a persistent process that takes real seconds to come up. That is what an AI app prototype maker produces, and that is why the architecture behind vm_pool is the differentiator, not the wording of the landing page.
You can verify the whole thing without trusting a screenshot. The source is in the repo, the endpoint is a single GET away, and the Firestore collection name is spelled out on line 1846 of src/core/e2b.ts.
Want to walk the pool with an engineer?
Twenty minutes, a live vm_pool doc on screen, and a sandbox you keep.
Frequently asked questions
What is an AI app prototype maker, and how is mk0r different?
An AI app prototype maker takes a plain-English description and returns a running app: either a single-file HTML page or a full Vite + React + TypeScript project. mk0r's difference is not in the prompt handling; it is in what happens before you even type. The moment you open mk0r.com, the landing page fires a POST to /api/vm/prewarm, which calls topupPool() on the server. topupPool reads the Firestore vm_pool collection and makes sure at least VM_POOL_TARGET_SIZE sandboxes are sitting in a 'ready' state. Each ready doc is a fully booted E2B sandbox that has already executed ACP initialize, created an ACP session with the Playwright MCP config, pinned the model to haiku, and run git init on /app. When you click send, claimPrewarmedSession pops one of those docs inside a Firestore transaction, re-runs /initialize with your credentials, and hands the sandbox to your session.
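The top-up decision that answer describes reduces to counting. A sketch under stated assumptions — VM_POOL_TARGET_SIZE and the ready/warming states come from the text, while the illustrative target value and the function name do not:

```typescript
// Illustrative target; the real VM_POOL_TARGET_SIZE is configured server-side.
const VM_POOL_TARGET_SIZE = 3;

// How many sandboxes a top-up should start: enough for ready + warming
// to reach the target, never a negative number.
function spawnCount(ready: number, warming: number, target = VM_POOL_TARGET_SIZE): number {
  return Math.max(0, target - ready - warming);
}
```

Counting docs already in 'warming' toward the target is what keeps repeated mount pings from over-spawning while earlier boots are still in flight.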
Does no signup really mean I can hit the prototype maker straight from a link?
Yes. There is no login wall on the landing page. A session key is created locally in your browser and written to localStorage as mk0r_session_key on first visit. The server's prewarm endpoint accepts unauthenticated POSTs when POOL_ADMIN_TOKEN is unset, which is the shipped configuration for mount pings, because the only side effect is creating VMs up to the configured pool target size. That is the line in src/app/api/vm/prewarm/route.ts that encodes the policy.
How fresh are the sandboxes the pool hands out?
Every pool doc carries a specHash equal to computeSpecHash(), which returns 'e2b-template-${E2B_TEMPLATE_ID}'. On every top-up the server calls cleanupStalePool, which deletes docs with a stale specHash or a createdAt older than POOL_MAX_AGE_MS. That constant is 45 minutes, chosen because the underlying E2B sandbox times out at one hour. So a claimed sandbox is always at least 15 minutes under its own lifetime ceiling, and the pool rolls forward as the E2B template changes. If the template rebuilds, stale docs are evicted before the next claim.
What does the sandbox contain by the time it reaches me?
A Vite + React + TypeScript + Tailwind v4 project at /app with the dev server running on port 5173. A headful Chromium launched against an Xvfb virtual display on :99 at 1600x900, with its remote debugging port on 9222. Playwright MCP bound to 127.0.0.1:3001 and attached to Chromium via CDP. An ACP bridge running Claude inside the VM. A persistent Chromium profile at /root/.chromium-profile so cookies survive across turns. A git repo initialized at /app so every prompt you send writes a real commit. None of that boots after you click send; all of it is already running when the claim transaction succeeds.
Why use Firestore for the pool and not an in-process map?
Because Cloud Run services scale to zero and can receive any request on any instance. An in-process map would give the next instance an empty pool every cold start. Firestore is the shared source of truth across instances, which means a prewarm POST to one container can produce a sandbox that a claim on a different container picks up. The transaction in claimPrewarmedSession uses a Firestore runTransaction to atomically read and delete a 'ready' doc, so two concurrent users cannot end up claiming the same sandbox.
What is Quick mode versus VM mode?
Quick mode streams a single HTML/CSS/JS file from Claude Haiku, starts returning tokens in a few seconds, and finishes in under a minute. It does not use the pool. VM mode is the full Vite + React + TypeScript sandbox, and it does use the pool. When the user picks VM mode after landing on the page, the claim path usually finds a ready doc and skips the five to ten minute first-boot cost.
Can I see the pool for myself?
The same /api/vm/prewarm route responds to GET with the current pool status: target size, ready count, warming count, stale count, specHash, and readyHost. The pool is backed by the vm_pool Firestore collection; each document has a status of 'warming' or 'ready', a sandboxId, acpUrl, previewUrl, sessionId, and the modes/models/agentCapabilities captured at boot. If you want evidence of the mechanism, that endpoint is the cleanest place to look.
Start a prototype on a VM that is already warm
No signup. No cold start. Firestore has a sandbox with your name on it the second the page loads.
Open mk0r