
One-shot AI prototypes: the race that starts before you type

The interesting part of a one-shot AI prototype is not the prompt. It is the work that happens between the moment you load the page and the moment you click Send. Most tools do nothing in that window. mk0r boots a sandbox, initializes Claude, opens an agent session, and provisions a git repo, all before your first character. A single fetch fired on page mount is what starts the race.

mk0r engineering
9 min
4.9 from builders shipping their first one-shot on mk0r
  • Page mount fires /api/vm/prewarm before your first keypress
  • Five operations done ahead of the prompt: boot, init, session, model, repo
  • Anonymous session via crypto.randomUUID(), no signup, no email
  • Pool target size is 1, refilled in the background after every claim

What everyone calls one-shot, and what almost no one explains

Open the existing playbooks on this topic and they all converge on the same definition: a one-shot AI prototype is an app produced from a single user prompt. The user types once and gets working code. The articles will then talk about prompt engineering, model choice, and how to phrase your description so the agent does what you want. None of that is wrong. It is also not where the time goes.

Where the time goes is sandbox boot, agent session open, model initialization, and git repo provisioning. On a tool with no preparation, those happen between your click and your first token. On mk0r they happen between your page-load and your click. The user-perceived latency is the difference between the two.

That difference is what this page is about. It exists because of one fetch in a useEffect, and one decision about how to generate the session identity.

The two lines that win the race

The landing page does two things on mount that most one-shot tools cannot do. First, it generates an anonymous session key with crypto.randomUUID(). Second, with no auth check in the way, it fires POST /api/vm/prewarm. That second call sets the entire prewarm machinery in motion on the server.

src/app/(landing)/page.tsx
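A minimal sketch of what that mount logic does, assembled only from the details on this page (the helper names, injectable parameters, and structure are illustrative, not the real component):

```typescript
// Illustrative sketch, not the real src/app/(landing)/page.tsx.
// Two mount-time actions: mint or reuse the anonymous key, then fire prewarm.

type KVStore = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const SESSION_STORAGE_KEY = "mk0r_session_key";

// First visit mints a crypto.randomUUID(); later visits reuse the stored key.
function getOrCreateSessionKey(
  storage: KVStore,
  mintUuid: () => string = () => crypto.randomUUID()
): string {
  const existing = storage.getItem(SESSION_STORAGE_KEY);
  if (existing) return existing;
  const fresh = mintUuid();
  storage.setItem(SESSION_STORAGE_KEY, fresh);
  return fresh;
}

// Fire-and-forget: no auth header, no body, errors swallowed,
// because prewarm is an optimization, not a precondition.
function firePrewarm(fetchFn: typeof fetch = fetch): void {
  void fetchFn("/api/vm/prewarm", { method: "POST" }).catch(() => {});
}
```

On mount, a useEffect would call getOrCreateSessionKey(window.localStorage) and then firePrewarm().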

The order matters. A signup-first tool cannot fire prewarm on mount because it does not yet know who the user is. mk0r does not care: the random UUID is the user, for now. If the visitor later signs in, the key is rebound to their account and their in-flight build comes with them.

What page mount actually starts

Browser mount → useEffect → POST /api/vm/prewarm → createSandbox → ACP /initialize → ACP /session/new → set_model haiku → ensureSessionRepo

The five operations done before your first character

The server-side handler for prewarm is prewarmSession(). Open the file and they are listed in order, each with its own abort signal so a wedged step cannot stall the rest.

src/core/e2b.ts
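A sketch of that ordering, assuming only the step names and timeout windows described on this page (the PrewarmStep scaffolding is illustrative, and the createSandbox and repo budgets below are placeholders, not documented values):

```typescript
// Illustrative shape of prewarmSession(), not the real src/core/e2b.ts.
// Five sequential steps, each with its own AbortSignal so a wedged step
// cannot stall the rest past its budget.

type PrewarmStep = {
  name: string;
  timeoutMs: number;
  run: (signal: AbortSignal) => Promise<void>;
};

async function runPrewarmSteps(steps: PrewarmStep[]): Promise<string[]> {
  const completed: string[] = [];
  for (const step of steps) {
    // AbortSignal.timeout gives each step an independent deadline.
    await step.run(AbortSignal.timeout(step.timeoutMs));
    completed.push(step.name);
  }
  return completed;
}

// The five operations in order. The 30 s / 30 s / 10 s windows are the
// documented ones; the 60 s budgets for boot and repo are placeholders.
const noop = async (_signal: AbortSignal) => {};
const prewarmSteps: PrewarmStep[] = [
  { name: "createSandbox", timeoutMs: 60_000, run: noop },
  { name: "acp_initialize", timeoutMs: 30_000, run: noop },
  { name: "acp_session_new", timeoutMs: 30_000, run: noop }, // step 3: the slow one
  { name: "set_model_haiku", timeoutMs: 10_000, run: noop },
  { name: "ensureSessionRepo", timeoutMs: 60_000, run: noop },
];
```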

Step 3 is the one that pays off. ACP session/new is the slowest step in a cold one-shot path: it spins up an agent process inside the sandbox, hands it your system prompt, and waits for a session ID. Doing it ahead of time means the chat route, when it eventually runs, just claims the ready session out of the pool and forwards your prompt.

The two timelines, side by side

User → Browser: load mk0r.com
Browser → Server: POST /api/vm/prewarm
Server → Sandbox: createSandbox(), sandbox ready in ~2.5 s
Server → Sandbox: ACP /initialize + /session/new, session ready
User → Browser: type prompt + Send
Browser → Server: POST /api/chat
Server: claimPrewarmedSession (instant)
Server → Browser: stream tokens

Why signup-gated tools cannot do this

Tools that gate the first prompt behind email-and-password, magic links, or OAuth have a structural problem. They do not know the user when the page loads, so they cannot bind work to that user yet. They also will not pay to spin up sandbox capacity for visitors who might bounce on the signup form. The economics push them to wait until after auth to start the sandbox, which means every one-shot prompt pays the full cold boot.

mk0r’s anonymous-by-default model removes the constraint in both directions: there is no auth in the path, so prewarm runs, and the cost exposure is at most one warm sandbox (pool target size 1) regardless of whether the visitor converts. The signup, when it eventually happens, is a project-claim step, not an entry gate.

Time spent before the first generated token

In a signup-gated tool, the user lands on the marketing page, clicks 'Try it,' and is asked for an email. Then a verification step. Then a plan picker. Then onboarding modals. Only then does the first prompt input render, at which point the sandbox boot, agent init, and session creation all still have to happen. The first token is several minutes from page-load.

  • Email + verification before any prompt UI
  • Plan or workspace picker in the critical path
  • Sandbox cold-boots only after auth resolves
  • First token: minutes after landing

Anonymous session

crypto.randomUUID() in src/app/(landing)/page.tsx line 47. Key persists in localStorage as mk0r_session_key. PostHog identifies on this same key.

Prewarm trigger

fetch('/api/vm/prewarm', { method: 'POST' }) in a useEffect at src/app/(landing)/page.tsx line 64. Fire-and-forget, no auth header, no body.

Pool target

VM_POOL_TARGET_SIZE defaults to 1. One warm sandbox per template hash, refilled in the background after every claim.

Free model preselected

FREE_MODEL = 'haiku' in src/app/api/chat/model/route.ts. Anonymous and free-tier users land here automatically; no model picker before the first prompt.

Five operations

createSandbox, ACP /initialize, ACP /session/new, set_model haiku, ensureSessionRepo. All done in prewarmSession() in src/core/e2b.ts line 1911.

Pool age window

POOL_MAX_AGE_MS = 45 minutes. Anything older is reaped before being handed to a user. Sandboxes themselves time out at 60 minutes.

Template ID

mk0r-app-builder, ID 2yi5lxazr1abcs2ew6h8. Pre-baked image with Vite, React, Playwright, ACP bridge already running.

Boot signal

boot_progress events: session_check, pool_claim, vm_boot, acp_init, acp_session, repo. The chat route streams them to the browser so you can see exactly which path you took.

The pool sizing decision

One warm sandbox at a time. That is the default. The constants are tiny, easy to find, and have a comment explaining their relationship.

src/core/e2b.ts
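The relationship those constants encode can be sketched like this (names match the ones quoted on this page; the isClaimable helper and surrounding structure are illustrative, not the actual file):

```typescript
// Illustrative constants mirroring the documented relationship, not the real e2b.ts.
const VM_POOL_TARGET_SIZE = Number(process.env.VM_POOL_TARGET_SIZE || "1"); // one warm sandbox
const SANDBOX_LIFETIME_MS = 60 * 60 * 1000; // E2B hard-kills the sandbox at 60 min
const POOL_MAX_AGE_MS = 45 * 60 * 1000;     // pool entries are reaped 15 min earlier

// Any sandbox handed out of the pool therefore has at least this much life left.
const MIN_REMAINING_LIFE_MS = SANDBOX_LIFETIME_MS - POOL_MAX_AGE_MS; // 15 min

// Reap check: entries older than the pool max age are never handed to a user.
function isClaimable(entryAgeMs: number): boolean {
  return entryAgeMs <= POOL_MAX_AGE_MS;
}
```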

The 45-minute pool max age sits 15 minutes below the one-hour E2B sandbox lifetime. That gap is the safety margin: any sandbox handed out of the pool always has at least 15 minutes of life left on it, comfortably more than the longest one-shot prompt budget. So the user never claims a sandbox that is about to die under them.

Numbers behind the pre-prompt race

Operations before first keypress: 5
Default pool size: 1
Auth checks in prewarm path: 0
Pool entry max age: 45 min
Sandbox lifetime: 60 min
ACP init / session abort window: 30 s
set_model abort window: 10 s

What page-load looks like in the boot stream

The chat route emits a structured boot_progress stream when a prompt actually runs, naming each phase and how long it took. For a session that hit a prewarmed pool entry, the trace is short: the slow steps are already done.

Prewarmed one-shot, first prompt
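An illustrative shape for that trace, using the boot_progress phase names documented above (the statuses and durations here are invented for illustration, not captured output):

```typescript
// Illustrative boot_progress events for a prompt that claimed a prewarmed session.
// Phase names are the documented ones; durations are made up.
type BootProgress = {
  phase: "session_check" | "pool_claim" | "vm_boot" | "acp_init" | "acp_session" | "repo";
  status: "start" | "done";
  duration_ms?: number;
};

const prewarmedTrace: BootProgress[] = [
  { phase: "session_check", status: "done", duration_ms: 15 },
  { phase: "pool_claim", status: "done", duration_ms: 42 },
  // No vm_boot / acp_init / acp_session here: the prewarm already paid for them.
];
```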

How to run the same trace yourself

Everything described here is observable from a normal browser window with DevTools open. Five steps and you can see the same race we are describing.

1

Open mk0r.com in a fresh tab

Use a private window to skip any cached session key, so the localStorage path runs end-to-end and crypto.randomUUID() actually fires.

2

Open DevTools, go to the Network panel

Filter by Fetch/XHR. Reload. The first POST to /api/vm/prewarm should appear within milliseconds, before any user interaction. The body is empty.

3

Watch the localStorage tab

Application > Storage > Local Storage > mk0r.com. The mk0r_session_key entry appears on first paint and contains a UUID v4. That is the anonymous identity prewarm is now boosting.

4

Hit GET /api/vm/prewarm to read the pool state

It returns ready, warming, target, specHash, readyHost. After your prewarm completes, ready=1 means the sandbox is waiting for you specifically. Before then, warming=1 is the state.

5

Type a prompt and click Send

The chat stream now emits boot_progress events. pool_claim status=done with a small duration_ms is what you want to see — it means the prewarm landed in time and you skipped the 2.5 second cold boot.
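Step 4 can also be scripted; a small hypothetical helper for reading the pool state from the DevTools console (the field names are the documented ones, the helper itself is not from the repo):

```typescript
// Hypothetical helper for polling GET /api/vm/prewarm; fields as documented in step 4.
type PoolState = {
  ready: number;    // prewarmed sessions waiting to be claimed
  warming: number;  // prewarm runs currently in flight
  target: number;   // VM_POOL_TARGET_SIZE
  specHash: string;
  readyHost?: string;
};

async function readPoolState(fetchFn: typeof fetch = fetch): Promise<PoolState> {
  const res = await fetchFn("/api/vm/prewarm"); // plain GET, no auth, no body
  return (await res.json()) as PoolState;
}
```

From the console, await readPoolState() until ready is 1, then send the prompt and watch for pool_claim in the boot stream.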

Where the time goes for one-shot AI prototypes

The signup-gated path puts paperwork between landing and the first prompt. mk0r moves the same work to page mount, with no gate.

Feature | Signup-first one-shot tools | mk0r (anonymous prewarm)
Identity at page-load | Unknown; auth happens later | crypto.randomUUID() in localStorage, anonymous
Sandbox state at page-load | None; booted after first prompt | Warming or ready in Firestore via /api/vm/prewarm
Operations done before first keypress | Zero; the input form may not even be reachable yet | Five: boot, init, session, model, repo
Friction before the first build | Email, verification, plan, onboarding | None; input visible at first paint
Model defaulted | Often a picker step | FREE_MODEL = 'haiku' in src/app/api/chat/model/route.ts
Cost shape | Sandbox cost paid only on real users (post-signup) | Pool target 1: at most one warm sandbox per template
First-token latency for the user | Cold boot + agent init + session in the critical path | Cold boot + agent init + session already done; only the prompt remains
Where to verify | Vendor blog post | Open src/app/(landing)/page.tsx and src/core/e2b.ts and grep prewarm
Operations done before your first keypress: 5
Default warm sandboxes in the pool: 1
Auth checks in the prewarm path: 0
Pool entry max age: 45 min

What the anchor fact actually buys you

Five operations on page mount sounds small. The reason it shapes the entire product is that any of those five would otherwise sit in the critical path between your click and your first token. Cold-boot a sandbox: 2.5 seconds. Initialize the ACP bridge: typically a few hundred milliseconds, but with a 30 second worst case. Open a Claude session: another few hundred milliseconds, same 30 second ceiling. Set the free model. Provision a git repo. None of that is interesting to the user. All of it would be visible to the user as latency if it ran after the click instead of before.
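A back-of-envelope sum of those budgets, using only the numbers above (repo provisioning is left out because no budget is stated for it; the "typical" figure is a rough reading of the prose, not a measurement):

```typescript
// Worst-case latency moved off the click-to-token path, from the documented windows.
const COLD_BOOT_MS = 2_500;        // sandbox boot from the pre-baked template
const ACP_INIT_WORST_MS = 30_000;  // ACP /initialize abort window
const SESSION_WORST_MS = 30_000;   // /session/new abort window
const SET_MODEL_WORST_MS = 10_000; // set_model abort window

const WORST_CASE_HIDDEN_MS =
  COLD_BOOT_MS + ACP_INIT_WORST_MS + SESSION_WORST_MS + SET_MODEL_WORST_MS; // 72.5 s

// Typical case is far smaller: ~2.5 s boot plus a few hundred ms per ACP step.
const TYPICAL_HIDDEN_MS = COLD_BOOT_MS + 300 + 300;
```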

The decision to make the session anonymous is what makes the prewarm legal in the first place. Without an auth check on the way to the prewarm route, the page can fire it unconditionally on mount. With one, it could not. The differentiator and the optimization are the same decision.

Want to watch the pre-prompt race in your own browser?

Book 20 minutes and we will load mk0r.com together with DevTools open, watch /api/vm/prewarm fire on mount, and walk the prewarmSession() path line by line in src/core/e2b.ts.

Frequently asked questions

What is a one-shot AI prototype?

A one-shot AI prototype is a working app produced from a single user prompt with no follow-up turn required. The user types one description like 'build me a habit tracker with localStorage' and the tool returns runnable code, a live preview, and ideally a sharable URL. The 'one shot' part is about the human input shape, not the model: under the hood the agent may take dozens of internal tool calls. What matters to the user is that they prompted once and got an app, not a conversation.

Why does mk0r prewarm the sandbox before the prompt?

Because the slowest parts of building a one-shot prototype are not the model. They are booting an isolated sandbox, initializing the agent control protocol, opening a session with Claude, selecting the model, and provisioning a git repo. On mk0r those five operations happen on page mount via the call to /api/vm/prewarm in src/app/(landing)/page.tsx line 64, fire-and-forget. By the time the user clicks Send, the VM is alive, the agent is ready, and only the prompt itself still has to flow. The pre-prompt race is invisible until you measure it.

Where does mk0r generate the session identity, and is it really anonymous?

Yes, anonymous by default. The session key is created client-side via crypto.randomUUID() on first visit at src/app/(landing)/page.tsx line 47, then stored in localStorage under mk0r_session_key. There is no email, no signup, no account creation. PostHog identifies the user by that random UUID. If you want to claim your work later, you can sign in and the existing key migrates to your user record. But the first prototype runs entirely on the random ID. This is the structural reason mk0r can prewarm and signup-gated tools cannot: there is no auth check between page-load and prewarm, so the work can start before the user does.

How long does the prewarm actually take?

The sandbox boot from a pre-baked template runs roughly 2.5 seconds on E2B (template name mk0r-app-builder, ID 2yi5lxazr1abcs2ew6h8 in src/core/e2b.ts line 1). After that the prewarm function does ACP /initialize (30 second abort window), ACP /session/new (another 30 second abort), set_model haiku (10 second abort), and ensureSessionRepo. In practice the whole sequence finishes in a few seconds. The user is still scanning the prompt placeholder while the pool entry is already marked ready in Firestore.

What happens if I send a prompt before the prewarm finishes?

Nothing bad. The chat route falls through to the normal cold path: it tries the pool, finds nothing yet, and boots a fresh sandbox itself. The boot_progress stream surfaces this honestly with a vm_boot event instead of pool_claim. The cost is one extra cold boot of about 2.5 seconds. Prewarm is an optimization, not a precondition. The session works either way.

Why is anonymous-by-default a one-shot prototype concern, not a privacy concern?

Both, but the prototype angle is concrete: any tool that requires signup before a one-shot prompt is forced to put account creation, email verification, plan selection, or onboarding between the user and the first generated app. That is exactly the surface area where one-shot stops being one-shot. mk0r's design moves all of that out of the critical path. You arrive, you type, you get an app. If you want to upgrade to a paid model, sign in. If you want to publish, sign in. The signup never blocks the first build.

What is the actual difference between Quick mode and VM mode for one-shot prototyping?

Quick mode streams a single self-contained HTML file from Claude Haiku and renders it in an iframe. It is what powers the very first 'watch it become a real app' on the landing page. VM mode forwards the prompt to the ACP bridge inside the prewarmed sandbox, where a full Vite + React + Playwright environment runs. Quick mode is fastest for a static page; VM mode is necessary for anything that needs npm packages, file uploads, or persistent state. Both run on the same prewarmed session infrastructure, the difference is which path the route takes once the prompt arrives.

How do I verify the prewarm is actually happening?

Open mk0r.com in a fresh browser, open DevTools Network panel, and reload. The first POST to /api/vm/prewarm fires within milliseconds of mount, before any user input. The server-side log shows e2b.pool.warming_start followed by e2b.pool.ready_set when the sandbox finishes provisioning. You can also call GET /api/vm/prewarm to read the current pool status: ready count, warming count, target size. The number 1 you see for ready is the sandbox waiting on you specifically.

Does anonymous prewarm cost real money for users who never come back?

Yes, which is why the pool target is exactly one by default (process.env.VM_POOL_TARGET_SIZE || '1' in src/core/e2b.ts line 1852). The pool keeps one sandbox warm at a time. When a user claims it, topupPool runs in the background to refill the slot. Sandboxes that idle past 45 minutes get reaped by the cleanup pass; sandboxes that hit the one-hour wall-clock limit are killed by E2B itself. The cost of being wrong about a visitor is bounded: at most one warm sandbox per template hash, lifetime capped at 60 minutes.

What does 'no setup' actually mean in source?

It means the path between landing on mk0r.com and the first generated app contains zero forms, zero modal dialogs, and zero auth checks. The landing page renders an input box and a Send button immediately. The session key is created automatically. The sandbox is being warmed in the background. The model is preselected to haiku for the free tier in src/app/api/chat/model/route.ts line 5. Anything that would normally happen 'before the prompt' has been moved either to page mount (prewarm) or after the first build (sign in to save). The build itself never asks you to set anything up.

Can I see the same prewarm code in the repo myself?

The two anchor lines are: src/app/(landing)/page.tsx line 64 (the fetch call to /api/vm/prewarm in a useEffect), and src/core/e2b.ts line 1911 (the prewarmSession function that does the five sequential operations). Open both files and the entire mechanism is visible: which timeouts apply, which Firestore document holds the pool state, what gets baked into the template, where the model defaults to haiku. Most one-shot tools treat their session boot as an internal detail. mk0r ships it as the load-bearing optimization the product is built on.

mk0r.AI app builder
© 2026 mk0r. All rights reserved.