Internals

The vibe coding feedback loop, measured in milliseconds

Most posts about iterating with AI app builders treat the feedback loop as a black box: type a change, wait, see the preview. The loop is not a black box. It is a chain of timed events with budgets you can read. In mk0r the budgets are 300 milliseconds and 800 milliseconds, the bridge is a small Vite plugin that posts five message types to the parent window, and the white flash everyone hates is killed by mounting the next iframe behind the previous one. Here is what one iteration looks like.

Matthew Diakonov · 6 min
Direct answer (verified 2026-05-01)

Each iteration in mk0r runs a two-timer cycle. A 300ms debounce coalesces file writes from a single agent turn into one preview refresh. The preview then waits up to 800ms for an in-iframe Vite HMR bridge to post hmr:after; if it does, the iframe never reloads. If it does not, the parent cache-busts the iframe src for a hard reload, but keeps the previous iframe mounted underneath so there is no white flash. The numbers and the bridge are visible in source: see src/components/phone-preview.tsx, src/app/(landing)/page.tsx, src/core/vm-scripts.ts.

The numbers, stated up front

Four constants govern the feel of the loop. None of them are configurable from the UI; they are tuned for the common case where the agent writes a handful of files in one turn and Vite has time to swap them in place.

  • 300ms · File-write debounce
  • 800ms · HMR wait window
  • 5 · Bridge message types
  • 0 · White-flash frames

The first two are timers. The third is the count of distinct message types the in-iframe bridge can post. The fourth is the number of frames where both iframes are empty, which is what causes the flash on every other AI app builder; mk0r mounts the next iframe behind the previous one, so this number stays at zero.

One iteration, frame by frame

Picture turn 11 of a session. You typed "make the streak page red." The agent has been streaming for a few seconds. It is about to write a file. Here is what the next two seconds look like inside the host page.

Turn 11, between agent and pixels


t = 0ms · agent writes /app/src/StreakPage.tsx

The ACP stream emits an onFileChange event. The host page calls schedulePreviewRefresh("file-change", info). A 300ms timer is armed.

The bridge, in five message types

The whole reason the loop can skip a reload is that the host page hears from the iframe directly. The voice the iframe uses is a small Vite plugin written into every generated project at boot. It is not optional plumbing; the boot script always installs it. The relevant constant is MK0R_BRIDGE_TS in src/core/vm-scripts.ts. It posts these five things, all tagged source: "mk0r-preview":

  • ready on first boot, with the current location. Tells the host the dev server is up.
  • hmr:before with the count of modules about to update. Useful for ordering log output around an update.
  • hmr:after with the count of modules updated. The host watches for this one to decide whether to skip a hard reload.
  • hmr:error with the error string when Vite emits vite:error. The host can show the error inline instead of a blank page; the agent reads it and patches.
  • hmr:full-reload when Vite asks for a full page reload (HMR could not handle this change, e.g. a route module change).

Every message also carries a millisecond timestamp, so the host can compute "HMR painted in Nms" without holding its own clock.
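The five message types can be written down as a discriminated union, together with the source-tag filter the prose describes. A sketch under assumptions: the type and source strings come from the article, but the exact payload field names (href, updates, error) are illustrative, not mk0r's confirmed shapes:

```typescript
// Hypothetical shapes for the five bridge messages, all tagged
// source: "mk0r-preview" and carrying a millisecond timestamp.
type BridgeMessage =
  | { source: "mk0r-preview"; type: "ready"; href: string; ts: number }
  | { source: "mk0r-preview"; type: "hmr:before"; updates: number; ts: number }
  | { source: "mk0r-preview"; type: "hmr:after"; updates: number; ts: number }
  | { source: "mk0r-preview"; type: "hmr:error"; error: string; ts: number }
  | { source: "mk0r-preview"; type: "hmr:full-reload"; ts: number };

// The host drops anything not tagged with the bridge's source,
// which is how it filters cross-origin postMessage noise.
function isBridgeMessage(data: unknown): data is BridgeMessage {
  return (
    typeof data === "object" &&
    data !== null &&
    (data as { source?: unknown }).source === "mk0r-preview"
  );
}
```

The source tag matters because window "message" events arrive from every embedded frame and extension on the page; one guard at the top of the listener keeps the rest of the handler honest.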

The handshake between three processes

The loop spans three different runtimes: your tab (the mk0r host page), the iframe (the Vite dev server inside an isolated E2B sandbox), and the agent (the ACP session editing files in /app). The handshake between them is a few message types and two timers, and it looks like this on a single edit.

One file write, traced through the stack

  • Agent (ACP): writes /app/src/StreakPage.tsx
  • Host page: receives the onFileChange streamed event, calls schedulePreviewRefresh (debounce 300ms)
  • Vite in iframe: hot update, swaps modules (vite:beforeUpdate, vite:afterUpdate)
  • Bridge: postMessage hmr:after { updates: 3 }
  • Host page: skips the hard reload (HMR painted in 40ms)

Note who never speaks to whom. The host never speaks to Vite directly; it only listens to the bridge. The agent never speaks to the iframe; it only writes to the filesystem. Vite never knows who is watching it. Each process keeps a narrow contract, and the timers cover the case where one of them does not respond in time.
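The host's half of that narrow contract can be sketched as a single dispatch: drop anything not tagged by the bridge, act only on the message types it owns. The function and outcome names below are hypothetical; the real listener lives in src/components/phone-preview.tsx:

```typescript
// Hypothetical host-side dispatch over bridge messages.
type Outcome = "skip-reload" | "show-error" | "hard-reload" | "ignore";

function handleBridgeMessage(data: { source?: string; type?: string }): Outcome {
  if (data.source !== "mk0r-preview") return "ignore"; // cross-origin noise
  switch (data.type) {
    case "hmr:after":
      return "skip-reload"; // HMR painted: cancel the 800ms fallback timer
    case "hmr:error":
      return "show-error"; // surface Vite's error inline, let the agent patch
    case "hmr:full-reload":
      return "hard-reload"; // Vite itself asked for a full page reload
    default:
      return "ignore"; // ready and hmr:before are informational
  }
}
```

Note that the host never needs a Vite API: every decision it makes is a reaction to one of these messages or to its own timers expiring.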

The fallback path: what happens when HMR cannot keep up

Some changes cannot be hot-swapped: a change to the root layout, a change to a route module, a structural edit that crosses a module boundary HMR cannot bridge. On those edits Vite emits vite:beforeFullReload, the bridge posts hmr:full-reload, and even if the host had not heard from the bridge in time, the 800ms timer would also fall through to a hard reload. The reload itself is the boring part. The non-boring part is what happens to the iframe during it.

// from src/components/phone-preview.tsx
timerRef.current = setTimeout(() => {
  if (!awaitingRef.current) return;
  awaitingRef.current = false;
  log(`HMR didn't paint within ${HMR_WAIT_MS}ms - hard reloading iframe`);
  setPrevNonce(liveNonce);    // keep the OLD iframe mounted
  setLiveNonce(refreshNonce); // mount the NEW iframe with a new key
}, HMR_WAIT_MS);

The two state setters are the trick. The previous iframe stays in the DOM under prevNonce while the new iframe boots in the foreground under liveNonce. Both render at the same z-index, but the new one is in front; it covers the old one as soon as it paints. There is never a frame where both are empty. The hard reload path is real, but the user-visible flash everyone associates with iframe reloads is gone.
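The layering trick reduces to two pure state transitions. This is a sketch of the invariant, not the component's actual code (which does this with React state setters and keyed iframes): at no point between the two transitions is the screen without a mounted iframe.

```typescript
// Hypothetical pure-state model of the two-iframe layering trick.
interface PreviewState {
  liveNonce: number;         // key of the iframe rendered in front
  prevNonce: number | null;  // key of the old iframe kept underneath
}

// Hard reload: demote the live iframe to the background layer and
// mount a fresh one in front. The old pixels stay visible until
// the new iframe paints over them.
function hardReload(state: PreviewState, refreshNonce: number): PreviewState {
  return { prevNonce: state.liveNonce, liveNonce: refreshNonce };
}

// Once the new iframe has painted, the background copy is dropped.
function onNewIframePainted(state: PreviewState): PreviewState {
  return { ...state, prevNonce: null };
}
```

The invariant to check is that between hardReload and onNewIframePainted, prevNonce is non-null: something is always painted, which is exactly why the white flash never appears.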

What this honestly does not solve

Three real limits worth saying out loud.

  • The model is still the long pole. On a typical edit the wall clock is dominated by token streaming, not by the 300/800 timers. Tightening the timers would not make iteration feel faster; switching to a smaller model would. mk0r defaults to Claude Haiku for that reason.
  • Big projects can blow past the 800ms window. On a project with a lot of dependencies, Vite's HMR can take longer than 800ms to swap a module. When that happens you get the (flash-free but slower) hard reload path more often. Raising the constant would help; tightening it would hurt.
  • State across iterations is bytes, not objects. The HMR-skip path preserves whatever local React state the preview happened to be holding. The hard-reload path does not. If the change you made resets a form you were debugging, the agent did not lose your work; the hard-reload fallback did. The two-timer cycle minimizes how often this happens but does not abolish it.

Want to see your own iteration loop traced layer by layer?

Happy to walk through how mk0r handles the 300/800 cycle on whatever you are building, and where the limits land for your project. 30 minutes, no pitch.

Frequently asked questions

What does the vibe coding feedback loop actually mean, in one sentence?

It is the round trip from typing a follow-up prompt to seeing the change show up in the preview. Most coverage of this topic stops at 'you describe a change and the AI does it', which describes the user-facing experience but not the loop. The loop is a chain of timed events: the agent writes a file, the file system notices, Vite's HMR fires, an in-iframe bridge posts a message to the parent window, and the parent decides whether to skip a hard reload or fall through to one. Each link in that chain has a budget. The end-to-end feel of vibe coding is mostly determined by those budgets, not by the model.

What are the actual numbers in the mk0r loop?

Two timers. The first one is a 300 millisecond debounce on file writes, set in src/app/(landing)/page.tsx around line 266 in the schedulePreviewRefresh callback. Multiple file writes from a single agent turn coalesce into one preview refresh. The second one is HMR_WAIT_MS = 800, declared at the top of src/components/phone-preview.tsx line 24. After the parent bumps the refresh nonce, the preview component waits up to 800ms for the iframe to post 'hmr:after' before falling back to a hard cache-bust reload. If the bridge paints in time, the iframe never reloads at all.

What is the in-iframe HMR bridge?

A small TypeScript file the sandbox boot script writes into every Vite project under /app, defined as the MK0R_BRIDGE_TS constant in src/core/vm-scripts.ts (the constant starts around line 1559). It listens to Vite's hot-update events and posts five message types to window.parent: 'ready' on first boot, 'hmr:before' and 'hmr:after' around each hot update with the count of modules updated, 'hmr:error' on a build error with the error string, and 'hmr:full-reload' when Vite asks for a full page reload. Every message is tagged source: 'mk0r-preview' so the parent can filter cross-origin noise. The parent listens for those messages in src/components/phone-preview.tsx around line 53 and only acts on the ones it owns.

Why does the parent wait instead of reloading immediately?

Because a hard reload is expensive (full page boot, Vite reconnects, React mounts from scratch, any local component state is lost) and most edits do not need it. Vite's HMR can swap a single React component in place in tens of milliseconds. The parent gives HMR an 800ms head start; if the in-iframe bridge posts 'hmr:after' inside that window, the parent logs 'HMR painted in <Nms> — skipping hard reload' and stops the timer. If 800ms expires with no message, the parent assumes HMR could not handle this change and cache-busts the iframe src. The cost of that wait is a slightly delayed paint on edits that were going to need a hard reload anyway. The benefit is no flash and preserved state on every edit that was going to HMR cleanly.
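The head start described above is, in effect, a race between the bridge and a timer. A minimal sketch, with a hypothetical onHmrAfter subscription standing in for the real postMessage listener; mk0r's actual code uses refs and a timeout rather than a Promise:

```typescript
// Hypothetical race: whichever resolves first decides the path.
const HMR_WAIT_MS = 800;

function waitForHmr(
  onHmrAfter: (cb: () => void) => void, // subscribe to the bridge's hmr:after
  waitMs: number = HMR_WAIT_MS,
): Promise<"hmr" | "hard-reload"> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve("hard-reload"), waitMs);
    onHmrAfter(() => {
      clearTimeout(timer); // bridge painted in time: skip the reload
      resolve("hmr");      // a second resolve from the timer is a no-op
    });
  });
}
```

The asymmetry of the race is the design choice: losing it costs at most waitMs of delay on an edit that needed a hard reload anyway, while winning it saves a full page boot and preserves component state.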

What about the white flash everyone hates? How does mk0r kill it?

When the 800ms wait expires and a hard reload is unavoidable, the parent does not blank out the old iframe before booting the new one. It mounts the new iframe behind a new key and keeps the previous one rendered underneath. The new iframe loads in the background; once it paints, the old one is dropped. There is no moment where both iframes are blank, so the user never sees a white flash. The pattern is a few lines in the same phone-preview.tsx file: setPrevNonce(liveNonce) right before setLiveNonce(refreshNonce). It is a small detail and it is the difference between iteration that feels like a native dev loop and iteration that feels like a slideshow.

What triggers a refresh in the first place?

Three event types arriving from the agent stream, all routed through the same schedulePreviewRefresh callback: a file-change event from the ACP onFileChange hook every time the agent writes or edits a file inside /app, a version event from onVersion when the route writes the per-turn git commit, and a done event from onDone when the agent stream finishes. Each fires the 300ms debounce. The debounce is what coalesces ten quick writes from a single agent turn into one preview refresh, so the preview does not flicker through every intermediate state.

How is this different from how most AI app builders handle the loop?

Most builders are running one of two simpler shapes. Either they hard-reload the preview on every change (which is what you would do if you had no HMR bridge to talk to), or they rebuild the project from the prompt list each turn and re-render a fresh preview from string output (no real Vite, no HMR at all, the loop is the model and a stringification step). Builders running real Vite inside an in-browser WebContainer get HMR, but the WebContainer fights the parent for the same memory and a tab refresh in the wrong moment can desync the project. mk0r's setup uses a real Vite dev server inside an isolated E2B sandbox with the bridge speaking to the host page over postMessage, so the loop matches what you would get if you were running Vite on your own machine.

What slows the loop down? Where does the time actually go?

Almost all of it goes to the model. The 300ms debounce and the 800ms HMR wait are two upper bounds, but the bridge usually posts 'hmr:after' in well under 100ms, so on most edits the user-visible cycle is dominated by token-streaming latency and tool-call wall clock. The persistent ACP session removes the boot cost on turn 2 onward (the session is created once during prewarm, see src/core/e2b.ts around line 1958, and reused with the same sessionId for the lifetime of the sandbox). The git commit at the end of each turn runs git add -A and git commit in /app and finishes in a few hundred milliseconds on a small project. The whole turn closure sits inside the route's 800-second budget at the top of src/app/api/chat/route.ts.

Does this loop fall apart at scale? When does iteration stop feeling fast?

Two practical ceilings. First, the conversation buffer keeps growing across turns (the persistent ACP session is a feature on turn three and a liability on turn forty). The fix is a memory layer the agent writes to on every correction; mk0r installs a CLAUDE.md inside the sandbox that instructs the agent to do exactly that, but no memory layer fully cancels buffer pressure. Second, on a really large project, Vite's HMR can take longer than 800ms to swap a module, in which case the parent falls back to a hard reload and you see the (flash-free but slower) reload path more often. Neither ceiling is the model's fault. Both are honest limits worth knowing about before iteration thirty.

Can I read this in source myself?

Yes. The repo is github.com/m13v/appmaker. The three relevant files are src/components/phone-preview.tsx (the parent-side listener and the 800ms timer), src/app/(landing)/page.tsx (the schedulePreviewRefresh debounce wired to onFileChange, onVersion, onDone), and src/core/vm-scripts.ts (the MK0R_BRIDGE_TS constant that gets injected into every Vite project so the in-iframe bridge can post lifecycle messages). The names match. The numbers match. The whole loop is about 60 lines of glue across those three files.

mk0r · AI app builder
© 2026 mk0r. All rights reserved.