
The AI app prototype is the production app, if you build it in mk0r.

Every other guide on this topic walks you through exporting a prototype from one tool and redeploying it somewhere else for production. That step only exists because the tool that built your prototype did not put a real database, a real domain, and a real Vite server inside the prototype. mk0r does.

mk0r · 8 min read · 4.8 from 10K+ creators
  • One VM, prototype = production
  • Neon, Resend, PostHog in .env from boot
  • *.mk0r.com wildcard HTTPS, day one

The assumption every other writeup on this topic makes

Read any of the popular articles on AI app prototype versus production and the shape is identical. You build a prototype in an AI tool. You export the code. You set up a new project on Vercel, Netlify, or Cloud Run. You migrate or stand up a database. You wire up real auth. You move from mock email to Resend or Postmark. You add analytics. You point a custom domain at the new deploy. Then your app is "in production."

That whole pipeline exists for one reason: the tool that built the prototype never had a real production stack inside it. It had a preview. Maybe a preview with an in-memory SQLite. Maybe a preview that sends email to the console. The prototype was never meant to be served to users, so it was never given the things a served app needs.

mk0r inverts that assumption. The sandbox has the stack your production would have, because your production lives in the sandbox.

Anchor fact

Port 3000 inside the sandbox is the proxy. Port 5173 is Vite. The public URL https://<vmId>.mk0r.com hits the same process that was serving the prototype three seconds ago.

See docker/e2b/files/opt/startup.sh lines 44 to 72 and the wildcard cert appmaker-wildcard-cert pinned to static IP 35.186.212.31 in the repo's CLAUDE.md. There is no separate build target.

What counts as "production-ready," in numbers

Six things are usually on the checklist. mk0r's prototype already has all six, provisioned at session start by src/core/service-provisioning.ts.

  • 1 Neon Postgres project, per app
  • 1 Resend API key, per app
  • 1 PostHog app id, per app
  • 1 GitHub repo, per app
  • 0 rebuild steps between prototype and production
  • 5 env vars in the VM before the first prompt

Provisioning lives in src/core/service-provisioning.ts lines 10 to 31. Env names are listed in docker/e2b/files/app/CLAUDE.md.

What a request to your prototype actually touches

A request hits the HTTPS load balancer at 35.186.212.31, the wildcard cert matches *.mk0r.com, and the backend forwards to your E2B VM. Inside the VM, the proxy on port 3000 fans out to the processes that compose your app.

(Diagram) Same VM, same app, prototype and production: your browser, the agent's edits, and the Chromium + Playwright MCP pair all reach the E2B sandbox proxy; behind it sits the Vite dev server, with Neon Postgres, Resend, and PostHog as the backing services.

Prototype vs production, line by line

I picked the parts of the handoff that usually cost real time when you go from a prototype tool to a production deploy. On mk0r, most of those lines fold into one.

  • Where the prototype runs. Typical AI prototype tool: vendor sandbox, not user-addressable. mk0r: E2B VM exposed at <vmId>.mk0r.com over HTTPS.
  • Production build step. Typical: export, rebuild, redeploy to Vercel/Netlify/etc. mk0r: none; the Vite dev server on port 5173 is the served app.
  • Database for the prototype. Typical: in-memory or mock, you wire Postgres later. mk0r: per-app Neon project, DATABASE_URL in .env from boot.
  • Email for the prototype. Typical: stubbed, real provider added before production. mk0r: Resend API key in .env from boot.
  • Analytics for the prototype. Typical: none, added after production cutover. mk0r: VITE_POSTHOG_KEY + VITE_POSTHOG_APP_ID in .env from boot.
  • Custom domain. Typical: deploy to platform X, add the domain in X's dashboard. mk0r: one email to i@m13v.com, mapped to your VM.
  • Source of truth for code. Typical: platform project, export to git later. mk0r: real GitHub repo created at session start.

The usual shape of the handoff, and mk0r's version

Set the two models side by side and count how much of the work stays on your plate in each.

The traditional prototype-to-production handoff

You build in an AI tool, then you are on your own to ship it:

  • Export prototype code as a zip or git repo
  • Create a Vercel / Netlify / Cloud Run project
  • Migrate or stand up a Postgres
  • Swap in a real email provider API key
  • Add analytics SDK and wire events
  • Point a custom domain at the new deploy
  • Re-run CI, wait for the build, chase env var drift

mk0r's version of the same list has one entry: type a domain into the publish modal.

What the startup script actually boots

The easiest way to see "prototype equals production" is to read the script that brings the sandbox up. Every process here is part of the served app. None of them get replaced later.

/opt/startup.sh (abridged, boot order)
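Only the caption for this block survived extraction, so here is a sketch of the boot order, assembled from the ports and flags this article cites elsewhere (Chromium CDP on 9222, Playwright MCP on 3001, ACP on 3002, Vite on 5173, proxy on 3000). It is not the verbatim file; treat every launch line as an approximation.

```shell
#!/bin/bash
# Sketch of /opt/startup.sh boot order -- reconstructed, not verbatim.

# Browser + MCP bridge: the agent tests the live app in real Chromium
chromium --remote-debugging-port=9222 &
npx @playwright/mcp --port 3001 --cdp-endpoint http://localhost:9222 &

# ACP listens on 3002 (launch line omitted here; see the full file)

# The served app: the Vite dev server is the production server
cd /app && npx vite --host 0.0.0.0 --port 5173 &

# The only externally exposed port -- what sandbox.getHost(3000) resolves
node /opt/proxy.js &   # listens on 3000, forwards to Vite on 5173

wait
```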

Full file: docker/e2b/files/opt/startup.sh. The only externally exposed port is 3000; that is the port sandbox.getHost(3000) resolves, and the one <vmId>.mk0r.com maps to.

Five things that happen before the model writes any code

1. Session boot provisions real services, not stubs

src/core/service-provisioning.ts creates a Neon Postgres project, a Resend API key, a PostHog app id, and a GitHub repo for your session. The returned env vars (DATABASE_URL, RESEND_API_KEY, VITE_POSTHOG_KEY, VITE_POSTHOG_APP_ID, GITHUB_REPO_URL) land in the VM's .env file before the agent gets control.
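As a sketch of what that hand-off looks like, here is a hypothetical shape for the provisioned services and the .env rendering step. The interface field names are inferred from the env vars this article lists, not copied from the real src/core/service-provisioning.ts.

```typescript
// Hypothetical sketch -- field names inferred, not from the real file.
export interface ProvisionedServices {
  databaseUrl: string;   // Neon Postgres connection string
  resendApiKey: string;  // Resend transactional email
  posthogKey: string;    // PostHog project key
  posthogAppId: string;  // PostHog app id
  githubRepoUrl: string; // per-session GitHub repo
}

// Render the services as the .env the VM sees before the agent runs.
export function toEnvFile(s: ProvisionedServices): string {
  return [
    `DATABASE_URL=${s.databaseUrl}`,
    `RESEND_API_KEY=${s.resendApiKey}`,
    `VITE_POSTHOG_KEY=${s.posthogKey}`,
    `VITE_POSTHOG_APP_ID=${s.posthogAppId}`,
    `GITHUB_REPO_URL=${s.githubRepoUrl}`,
  ].join("\n");
}
```

The point of the shape: every field is required, so a session cannot start with a stubbed service.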

2. Vite comes up on port 5173 as the app server

docker/e2b/files/opt/startup.sh runs `npx vite --host 0.0.0.0 --port 5173` inside /app. There is no `npm run build` step later. The same Vite process that serves the hot-reloading prototype is the one that serves the public domain. HMR even stays connected over wss (vite.config.ts sets clientPort: 443, protocol: 'wss').
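The HMR detail is worth seeing as config. This is a sketch of the relevant fragment of vite.config.ts based only on the clientPort and protocol values named above; the real file certainly contains more.

```typescript
// Sketch of the HMR-over-the-edge fragment of vite.config.ts (assumed shape).
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    host: "0.0.0.0", // reachable from the proxy inside the VM
    port: 5173,
    hmr: {
      protocol: "wss", // the HMR websocket rides the HTTPS edge...
      clientPort: 443, // ...so the browser connects back via <vmId>.mk0r.com
    },
  },
});
```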

3. Proxy on port 3000 is what the public domain talks to

/opt/proxy.js is the externally exposed port. sandbox.getHost(3000) returns <port>-<sandboxId>.e2b.app, and mk0r's wildcard DNS + load balancer map <vmId>.mk0r.com to that host. Your prototype is already behind a production HTTPS edge.
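A minimal stand-in for what /opt/proxy.js plausibly does: accept everything on the externally exposed port and forward it to Vite on 5173. This is a hypothetical sketch; the real file may also fan out to the other local ports this article mentions.

```typescript
import http from "node:http";

// Assumed upstream: the Vite dev server inside the same VM.
const UPSTREAM = { host: "127.0.0.1", port: 5173 };

// Build the forwarded request options for one inbound request.
export function upstreamOptions(req: http.IncomingMessage): http.RequestOptions {
  return {
    host: UPSTREAM.host,
    port: UPSTREAM.port,
    path: req.url,
    method: req.method,
    headers: req.headers,
  };
}

// One-hop reverse proxy: pipe the request up, pipe the response back.
export function createProxy(): http.Server {
  return http.createServer((req, res) => {
    const up = http.request(upstreamOptions(req), (upRes) => {
      res.writeHead(upRes.statusCode ?? 502, upRes.headers);
      upRes.pipe(res);
    });
    up.on("error", () => res.writeHead(502).end());
    req.pipe(up);
  });
}

// createProxy().listen(3000); // what sandbox.getHost(3000) would resolve
```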

4. Chromium and Playwright MCP share the same VM

Same startup script launches Chromium with --remote-debugging-port=9222 and @playwright/mcp on port 3001. The agent tests your running app in a real browser on the same network as your Vite server. Same loopback, same cookies, same session state.

5. Custom domain is a human-in-the-loop email

src/app/api/publish/route.ts at line 101 composes an email to i@m13v.com with your domain, session key, VM id, user email, and preview URL. A human maps it. No CDN propagation wait, no CI run, no staging tier for your app. Your VM is the origin.
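For concreteness, here is a hypothetical shape for that email. The field list (domain, session key, VM id, user email, preview URL, recipient i@m13v.com) comes from this article; the structure and function name are invented.

```typescript
// Hypothetical payload shape -- fields from the article, structure invented.
interface DeployRequest {
  domain: string;
  sessionKey: string;
  vmId: string;
  userEmail: string;
  previewUrl: string;
}

// Compose the human-in-the-loop deploy email.
export function buildDeployEmail(r: DeployRequest) {
  return {
    to: "i@m13v.com",
    subject: `Map ${r.domain} -> VM ${r.vmId}`,
    text: [
      `domain: ${r.domain}`,
      `session: ${r.sessionKey}`,
      `vm: ${r.vmId}`,
      `user: ${r.userEmail}`,
      `preview: ${r.previewUrl}`,
    ].join("\n"),
  };
}
```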

When you would still want a traditional production deploy

I am not claiming mk0r replaces every serving model. If any of these are hard requirements, you likely want to run your own infra on top of the repo mk0r hands you.

Multi-region replicas with automatic failover

A single E2B VM is a single VM. If you need the app to survive a region outage, you need a deploy surface with replication.

Per-tenant autoscaling

The sandbox is fixed at 4 vCPUs and 4 GB of RAM. Past a certain load, you want a platform that adds replicas on its own.

Strict uptime SLAs

mk0r pauses idle VMs. Acceptable for most apps; not acceptable if you have contractual uptime numbers.

Full isolation from the prototype workflow

If non-technical teammates should not be able to reach the running production VM through the chat, a separate deploy makes more sense.

The one-file tour, if you want to verify this

  • src/core/service-provisioning.ts lines 10 to 31: the ProvisionedServices shape. Neon, Resend, PostHog, GitHub are real fields, not optional stubs.
  • docker/e2b/files/app/CLAUDE.md: the pre-provisioned services table, listing DATABASE_URL, RESEND_API_KEY, VITE_POSTHOG_KEY, VITE_POSTHOG_APP_ID, GITHUB_REPO_URL.
  • docker/e2b/files/opt/startup.sh lines 44 to 72: Playwright MCP on 3001, ACP on 3002, Vite on 5173, proxy on 3000.
  • src/app/api/publish/route.ts line 101: the deploy action fires an email to i@m13v.com; there is no separate build pipeline to trigger.
  • src/core/e2b.ts line 11 and line 33: proxy host comes from sandbox.getHost(3000); VM idle timeout is 1 hour.
  • CLAUDE.md infrastructure section: static IP 35.186.212.31, wildcard cert *.mk0r.com, load balancer URL map appmaker-lb. Your prototype VM sits behind the same edge as mk0r.com itself.

No account, no plan picker, no second deploy target. Describe the app you want and let the prototype ship itself.

Open mk0r

Want to watch the prototype = production story live?

Book 20 minutes with the team. We'll spin up a session in front of you and open the .env, the startup script, and the served URL side by side.

Book a call

Frequently asked questions

Is the prototype I build on mk0r the same thing that goes to production?

Yes. The E2B sandbox that builds your app is the same one that serves it. Vite dev server listens on port 5173, an internal proxy listens on port 3000, and the external URL https://<vmId>.mk0r.com terminates TLS on mk0r's HTTPS load balancer using the *.mk0r.com wildcard certificate. There is no separate production build target.

What about the database, email, and analytics? Do I have to migrate those when I go to production?

No. src/core/service-provisioning.ts provisions a per-app Neon Postgres project, a Resend API key, a PostHog app id, and a GitHub repo during session start. The VM's .env already contains DATABASE_URL, RESEND_API_KEY, VITE_POSTHOG_KEY, VITE_POSTHOG_APP_ID, and GITHUB_REPO_URL before you type your first prompt. Your prototype writes to the same Postgres your production reads from, because they are the same Postgres.

How do I attach a custom domain?

You type a domain into the publish modal. That posts to /api/publish with action='deploy'. The handler in src/app/api/publish/route.ts at line 101 sends an email to i@m13v.com with your domain, session key, VM id, and preview URL. A human on the mk0r team maps the domain to your VM. There is no CDN rebuild, no redeploy, no asset copy. Your sandbox becomes the origin for the new hostname.

Does the sandbox fall asleep after I leave?

The VM has a 1-hour idle timeout (E2B_TIMEOUT_MS = 3_600_000 in src/core/e2b.ts). On idle, the sandbox is paused and its state is checkpointed. When the next request arrives, it is resumed via Sandbox.connect() and keeps serving on the same vmId hostname. A paused VM is not a dead VM.
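The pause/resume path described above can be sketched with the e2b JS SDK. The method names are from the public SDK; the real src/core/e2b.ts may differ, and the create-versus-connect branching here is an assumption.

```typescript
import { Sandbox } from "e2b";

// 1 hour idle timeout, per src/core/e2b.ts (E2B_TIMEOUT_MS = 3_600_000).
const E2B_TIMEOUT_MS = 3_600_000;

// Sketch: reconnect to a checkpointed sandbox if we have its id,
// otherwise start a fresh one with the idle timeout applied.
async function resumeOrCreate(sandboxId: string | null): Promise<Sandbox> {
  if (sandboxId) {
    // A paused VM is not a dead VM: this resumes the checkpointed state,
    // and the app keeps serving on the same vmId hostname.
    return await Sandbox.connect(sandboxId);
  }
  return await Sandbox.create({ timeoutMs: E2B_TIMEOUT_MS });
}
```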

What stack is inside the sandbox, and can I leave?

Vite, React, TypeScript, and Tailwind v4. The project layout lives under /app in the VM (see docker/e2b/files/app/). The GitHub repo that gets provisioned is yours; you can clone it and run npm install and npx vite and have the same app running on your laptop. The prototype is a normal Vite project, not a proprietary container format.
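The escape hatch in that answer, as commands. GITHUB_REPO_URL stands in for the per-session repo URL from the VM's .env; the clone directory name is arbitrary.

```shell
# The same app, off mk0r and onto your laptop.
git clone "$GITHUB_REPO_URL" my-app
cd my-app
npm install
npx vite   # serves on localhost:5173, same as inside the sandbox
```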

Is Chromium actually running next to my app?

Yes. docker/e2b/files/opt/startup.sh launches Chromium on the virtual display with --remote-debugging-port=9222, then runs @playwright/mcp on port 3001 wired to that CDP endpoint. The agent can click, type, and screenshot your live app in a real browser. Same VM, same network, same running app.

When would I still want a separate production deployment?

If you need multi-region failover, autoscaled replicas, or a deploy surface with its own SLAs, run your own infrastructure. The GitHub repo mk0r gives you is a standard Vite project; you can point Cloud Run, Fly, Render, or whatever else at it. mk0r's point is that most apps do not need that step on day one: the prototype is already serving real users on a real domain with real services.