Guide

A code generator app that hands you a real Postgres, a real mail key, and a real repo, before the agent has written a line.

The thing almost nobody writing about this topic explains is what a generated app is actually plugged into when it starts running. Most answers stop at "it outputs code." mk0r's answer is a 389-line provisioning file that asks Neon for a dedicated Postgres project, asks Resend for an audience and a restricted API key, tags an isolated PostHog app id, and creates a private GitHub repo, all in parallel, all before the agent reads its first prompt. Here is the file, the events it fires, and the row your agent sees in /app/.env on turn one.

Matthew Diakonov
10 min read
4.8 from 10K+ creators
389-line provisioning module
4 backend services per app
Env injected before turn one

What other tools in this category stop explaining

Open ten other guides that cover this topic. They compare generator apps on the same handful of axes: how many templates, how fast the preview, how many languages, how the free tier looks. They rarely answer the one question that matters as soon as you want the app to do anything useful: when the generator is done, what is the app plugged into?

The usual answer is nothing. A generated frontend can hold state in memory. A generated component can render an email form and throw the submission into the void. If you want the app to remember what you typed yesterday, you need a database. If you want it to tell you when someone signed up, you need a mail sender. If you want to know what users actually do inside it, you need analytics. If you want the creator to own the code, you need a repo. Most generators leave those four cliffs sitting between you and a useful app.

mk0r bridges all four, and the bridge is a single file.

The orchestrator, in one excerpt

The main entry point is provisionServices(sessionKey) at src/core/service-provisioning.ts. Three things to notice: PostHog is handled synchronously, because the project is shared and per-app isolation is just an appId tag; Resend, Neon, and GitHub run through Promise.allSettled, so one failure does not block the others; and the return value is a plain map of env vars that a later step writes to /app/.env.

src/core/service-provisioning.ts
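The file itself is not reproduced in this excerpt, but the pattern it describes can be sketched. Everything below is an assumption about shape rather than the actual mk0r source: the injectable provisioner list, the return type, and the error collection are illustrative, while the synchronous PostHog tag and the Promise.allSettled fan-out follow the article's description.

```typescript
// Sketch of the orchestration pattern described above (not the real file).
type EnvVars = Record<string, string>;
type Provisioner = (sessionKey: string) => Promise<EnvVars>;

async function provisionServices(
  sessionKey: string,
  provisioners: Provisioner[], // e.g. [provisionResend, provisionNeon, provisionGithub]
): Promise<{ envVars: EnvVars; errors: string[] }> {
  // PostHog is synchronous: the project is shared, isolation is just a tag.
  const envVars: EnvVars = {
    VITE_POSTHOG_APP_ID: `mk0r-${sessionKey.slice(0, 12)}`,
  };
  const errors: string[] = [];

  // allSettled, not all: one failed upstream must not block the others.
  const results = await Promise.allSettled(provisioners.map((p) => p(sessionKey)));
  for (const r of results) {
    if (r.status === "fulfilled") Object.assign(envVars, r.value);
    else errors.push(String(r.reason));
  }
  return { envVars, errors };
}
```

The payoff of this shape is that the caller always gets a usable envVars map back, plus a list of whatever failed, instead of a rejected promise.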
389 lines in service-provisioning.ts
4 services provisioned per app
3 run in parallel via Promise.allSettled
0 accounts the user has to create

Where each credential lands

The three async provisioners return structured objects. The orchestrator flattens them into an envVars map, and the sandbox startup then writes those vars into a real /app/.env file inside the VM. The agent reads that file on boot because docker/e2b/files/app/CLAUDE.md instructs it to check for pre-provisioned services before asking for credentials.

Three upstreams, one hub, five env groups your agent inherits

Neon API ───┐
Resend API ─┼──▶ provisionServices() ──▶ DATABASE_URL
GitHub API ─┘                            RESEND_API_KEY
                                         RESEND_AUDIENCE_ID
                                         GITHUB_REPO
                                         VITE_POSTHOG_APP_ID

The Neon call, not summarized

mk0r does not borrow your database. It asks Neon for a brand-new project with a hard-coded region and Postgres version, and gets back the connection uri along with the role password. The region choice is deliberate: the app sandbox runs in us-central on E2B, and aws-us-east-2 on Neon keeps the round trip short. The Postgres version is pinned to 17 so the agent never writes against a SQL feature the server does not support.

src/core/service-provisioning.ts (provisionNeon)
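The provisionNeon source is not reproduced here; the sketch below is an assumption about its shape. The endpoint and request envelope follow Neon's public v2 API (POST /projects with a project object), and the region, version, and naming come from this article, but the helper names and the return mapping are invented for illustration.

```typescript
// Hedged sketch of the Neon call described above (not the actual mk0r code).
function buildNeonProjectRequest(slug: string) {
  return {
    url: "https://console.neon.tech/api/v2/projects",
    body: {
      project: {
        name: `mk0r-${slug}`,
        region_id: "aws-us-east-2", // close to the E2B us-central sandbox
        pg_version: 17,             // pinned so the agent targets one SQL dialect
      },
    },
  };
}

async function provisionNeon(slug: string, apiKey: string) {
  const { url, body } = buildNeonProjectRequest(slug);
  const res = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  // The connection uri is the whole story: it becomes DATABASE_URL.
  return { DATABASE_URL: data.connection_uris[0].connection_uri };
}
```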

Five values come back. One of them, the connection uri, is the whole story: the generated app imports process.env.DATABASE_URL and it just works. No paste. No local proxy. No free-tier cluster that goes to sleep after five minutes of idle.

The Resend call, also not summarized

Two API calls, in sequence. First a POST to /audiences to create a per-app audience. Second a POST to /api-keys with permission sending_access so the key this app is handed can send mail but cannot list other audiences or rotate other keys. That last detail matters: if a generated app leaks its Resend key, the worst case is unwanted sends. The key cannot see anything else in the Resend account.

src/core/service-provisioning.ts (provisionResend)
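The two-call sequence can be sketched as follows. The endpoints and the sending_access permission exist in Resend's public API; everything on the mk0r side (helper names, the exact audience naming, the return shape) is an assumption for illustration.

```typescript
// Hedged sketch of the sequential Resend calls described above.
function resendRequests(slug: string) {
  return [
    {
      url: "https://api.resend.com/audiences",
      body: { name: `mk0r-${slug} Users` }, // per-app audience
    },
    {
      url: "https://api.resend.com/api-keys",
      body: { name: `mk0r-${slug}`, permission: "sending_access" }, // restricted key
    },
  ] as const;
}

async function provisionResend(slug: string, provisioningKey: string) {
  const headers = {
    Authorization: `Bearer ${provisioningKey}`,
    "Content-Type": "application/json",
  };
  const [audienceReq, keyReq] = resendRequests(slug);

  // 1. Create the audience first, so the app has its own contact list.
  const audience = await fetch(audienceReq.url, {
    method: "POST", headers, body: JSON.stringify(audienceReq.body),
  }).then((r) => r.json());

  // 2. Then mint the restricted key: it can send mail, nothing else.
  const key = await fetch(keyReq.url, {
    method: "POST", headers, body: JSON.stringify(keyReq.body),
  }).then((r) => r.json());

  return { RESEND_AUDIENCE_ID: audience.id, RESEND_API_KEY: key.token };
}
```

The order matters: the audience has to exist before anything refers to it, which is why these two calls are sequential while the three services as a whole run in parallel.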

How the env file gets written

The provisioning call runs on Cloud Run. The sandbox runs on E2B. The handoff is a single printf > /app/.env fired over the ACP bridge, with a ten-second timeout and a newline-delimited KEY=VALUE format. The agent reads it the first time it inspects the project directory.

src/core/e2b.ts (createSession, around line 1299)
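The handoff is simple enough to sketch. The function names buildEnvFileContent and execInVm appear in this article, but the bodies below are assumptions: a pure KEY=VALUE formatter plus a single-quoted printf command of the kind the ACP bridge would execute.

```typescript
// Sketch of the env handoff: format KEY=VALUE lines, pipe them into the VM.
function buildEnvFileContent(envVars: Record<string, string>): string {
  return Object.entries(envVars)
    .map(([k, v]) => `${k}=${v}`)
    .join("\n") + "\n";
}

function envWriteCommand(envVars: Record<string, string>): string {
  // Single-quoted so the shell does not expand anything inside the values.
  const content = buildEnvFileContent(envVars).replace(/'/g, `'\\''`);
  return `printf '%s' '${content}' > /app/.env`;
}

// In the real flow this command would be executed over the ACP bridge with
// a ten-second timeout, e.g. execInVm(command, { timeoutMs: 10_000 }).
```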

The rule that tells the agent these exist

Provisioning the services is half the job. The other half is making sure the agent actually uses them rather than asking the user for an API key. The sandbox ships with a project-level CLAUDE.md that names the services and their env vars in a table, so the agent treats them as the default rather than a surprise.

docker/e2b/files/app/CLAUDE.md
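The actual file is not reproduced in this excerpt. As a hedged sketch, a section of that CLAUDE.md might look something like the following; the wording and layout are invented, but the env var names are the ones this article documents.

```markdown
## Pre-provisioned services (check before asking for credentials)

| Service   | Env var(s)                         | Status        |
| --------- | ---------------------------------- | ------------- |
| Postgres  | DATABASE_URL                       | ready on boot |
| Email     | RESEND_API_KEY, RESEND_AUDIENCE_ID | ready on boot |
| Analytics | VITE_POSTHOG_APP_ID                | ready on boot |
| Git       | GITHUB_REPO                        | ready on boot |
```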

A real boot trace, abbreviated

Here is what the service logs look like when a session is created and the agent picks up the first prompt. The [provisioning] lines come from the orchestrator; the final agent line is the very first turn against the sandbox.

Cloud Run, createSession flow

From a prompt to a persistent app, step by step

The sequence is the same every time. The interesting part is that steps two through five finish before the user types a second sentence. The agent lands on a project that already has everything it needs to build something persistent.

1

Prompt submitted on mk0r.com

No account. No modal. The user types a description of the app and hits enter.

2

Sandbox boot begins

A warm E2B sandbox is pulled from the pool. startup.sh begins and Vite spins up to bind port 5173.

3

provisionServices() fires

src/core/e2b.ts:1299 calls the 389-line orchestrator. PostHog is sync, Resend plus Neon plus GitHub run in Promise.allSettled.

4

Upstream APIs return

Neon returns a connection_uri, Resend returns an audience id and a restricted key token, GitHub returns a repo full name.

5

/app/.env is written inside the VM

buildEnvFileContent formats the KEY=VALUE lines, and execInVm pipes them into /app/.env with a ten-second timeout.

6

Agent inherits the env

The agent reads /app/CLAUDE.md, sees the pre-provisioned services table, reads /app/.env, and plans its first write accordingly.

7

First component lands

src/App.tsx is written, HMR fires, the preview rebuilds in place, and the agent already knows the app can talk to Postgres.

8

Second turn goes persistent

On the next turn the agent adds a table migration, wires a query, and the app remembers what the user typed across reloads.

What this unlocks in practice

The same prompt reads very differently depending on what the underlying project is plugged into. Below are five prompts that most code generator apps would turn into static frontends, and what they become when the target is a project with a real /app/.env.

A habit tracker with streaks that survive a refresh

The agent reads DATABASE_URL, writes a habits table and a check_ins table, wires a server action to insert, and the streak count persists across sessions. No 'hook up your own database' moment.

A pricing page with a real email capture

The agent posts into the pre-created Resend audience using the restricted key. You see the contact land in the mk0r-<slug> audience, not in a shared list.

An internal dashboard with event counts

The PostHog app id is already in /app/.env. The first chart the agent writes counts the events your app is currently emitting, filtered by app_id.

A signup form that books people in

Two turns later the agent has a users table in your dedicated Postgres and a welcome email flowing through Resend, and both live in the repo that was provisioned for you.

A waitlist that emails the owner on every signup

Dedicated Postgres for the list, restricted Resend key for the notification. The owner email is the only thing the agent asks you for.

Side by side with the standard pattern

Most of this category still ships pure frontend output. That is not wrong, it is just a different product. The distinction is visible on every row the moment you ask what the generator hands to the agent.

Feature | Typical code generator apps | mk0r
What you get back from the generator | A file, a zip, or a preview iframe | A running app plus /app/.env with real credentials for four services
Database | None, or a shared sandbox database | Dedicated Neon Postgres project, aws-us-east-2, pg_version 17
Email sending | Mailto link or "add your SMTP later" | Restricted Resend API key with its own audience, scoped to this app
Analytics | Nothing, or a tracker you have to paste in | PostHog app id injected as VITE_POSTHOG_APP_ID for event isolation
Source control | Download, create a repo yourself, push | Private GitHub repo created under m13v/ before the first turn
First turn capability | Pure frontend, no persistence, no email | Can write a users table, insert rows, send welcome emails, log events
Account setup to reach that first turn | Email, password, plan selection, often a card | None; /app/.env is pre-filled from mk0r's service accounts

The part worth copying

If you are building a code generator app, the lesson is not the specific mix of services. The lesson is that the real unit of generation is not a file; it is a project that the generated file can already talk to. Neon has an API, Resend has an API, GitHub has an API. You can call all three in parallel on session creation, write the answers into /app/.env, and your agent starts every turn with infrastructure instead of a blank slate. The total cost is 389 lines of TypeScript and three HTTP calls in Promise.allSettled.

Want a walkthrough of a session where the env file is already full?

Book twenty minutes. I will share a sandbox and you can cat /app/.env before the agent's first turn.

Book a call

Frequently asked questions

What does a code generator app actually give you?

Most of them give you code. You paste a prompt, a file or a zip comes back, you copy it somewhere and wire up the rest of the world yourself. That rest of the world is usually the slow part: a database, a way to send email, a place to track analytics, a repo to push to. mk0r treats those four as part of the generation. By the time the agent writes the first React component, /app/.env already has a live connection string, a restricted email key, a PostHog app id, and a remote the repo was initialized against.

Where in the repo is the provisioning logic?

src/core/service-provisioning.ts, 389 lines. It exports one main entry point, provisionServices(sessionKey). PostHog is configured synchronously because the project key is shared across all apps and per-app isolation happens via an appId. Resend, Neon, and GitHub run in parallel through Promise.allSettled, so one failure does not block the others. The result gets written to /app/.env inside the sandbox via execInVm at src/core/e2b.ts:1299-1313.

What exactly does mk0r ask Neon for?

A full dedicated project, not a shared database. The POST to Neon's /projects endpoint specifies region_id aws-us-east-2, pg_version 17, and org_id org-steep-sunset-62973058 (the mk0r Neon org). Neon responds with a project id, a connection_uri, a default database, a role, and an endpoint. All five land in envVars as DATABASE_URL, NEON_HOST, NEON_DB_NAME, NEON_ROLE_NAME, NEON_ROLE_PASSWORD. The agent reads DATABASE_URL the moment it needs persistence. No SQL console, no copy-paste, no free-tier shared cluster.

Why a restricted Resend key per app instead of one shared key?

Blast radius. If the generated app leaks its own key, it can only send mail; it cannot list audiences, rotate keys, or touch other apps. The provisioning call creates an audience first (mk0r-<slug> Users), then creates a Resend API key with permission sending_access scoped to that audience. Both values land in /app/.env as RESEND_API_KEY and RESEND_AUDIENCE_ID. When the agent writes the newsletter signup for your app, it adds the contact to your audience, not a shared one.

Does the PostHog setup really isolate each generated app?

The project key is shared, but every generated app gets its own VITE_POSTHOG_APP_ID of the form mk0r-<first twelve chars of the session key>. The in-VM PostHog client sets that as a group or property on every event, so the dashboard can filter by app without running a separate PostHog project per generation. That is a deliberate choice: Neon and Resend are resources that benefit from per-app isolation, PostHog benefits from a shared project with tagged events.
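The tagging scheme is mechanical enough to sketch. The appId derivation follows the format stated above (mk0r- plus the first twelve characters of the session key); the tagEvent helper and the exact event shape are assumptions, not mk0r's actual client code.

```typescript
// Sketch of the isolation scheme: shared PostHog project, per-app tag.
function appIdFor(sessionKey: string): string {
  return `mk0r-${sessionKey.slice(0, 12)}`;
}

function tagEvent(
  sessionKey: string,
  event: string,
  props: Record<string, unknown> = {},
) {
  // Every event carries app_id, so one dashboard can filter per generated app
  // without running a separate PostHog project per generation.
  return { event, properties: { ...props, app_id: appIdFor(sessionKey) } };
}
```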

Does the GitHub repo get created empty or pushed to?

Created empty and private. The provisioning call POSTs to api.github.com/user/repos with auto_init: false and private: true. The generated repo name is mk0r-<twelve-char slug>. The sandbox's git setup runs separately (see ensureSessionRepo in e2b.ts) and wires the local /app directory as the initial commit on first push. GITHUB_REPO and GITHUB_REPO_URL end up in /app/.env so the agent can show the user where their code lives without asking them to create an account.

What happens if one of the upstreams is down?

Promise.allSettled, not Promise.all. A single failed service does not block the other three. The code pushes an entry into errors[] and keeps going. A skipped Neon gives you an app without a DATABASE_URL, but you still get a working app, an email sender, an analytics stream, and a repo. The calling path in e2b.ts logs the partial state and continues. Nothing about the user's prompt is blocked on provisioning success.

Do I need to configure any of this to try mk0r?

No. Open mk0r.com and type a description. The sandbox boots, provisioning runs in parallel, and the agent inherits whatever /app/.env ends up holding. There is no signup, no payment, no onboarding flow that asks you to paste credentials. The credentials come from mk0r's service accounts and are scoped per app, so your generation starts with real infrastructure and you are not the one paying for it.

Is this only available on the paid tier?

No. Provisioning is attempted for every session. The only reason a service is skipped is that the corresponding provisioning key is missing on the Cloud Run service (POSTHOG_PROJECT_KEY, RESEND_PROVISIONING_KEY, NEON_PROVISIONING_KEY, GITHUB_PROVISIONING_TOKEN). On the live service those are all set, which is why a fresh mk0r session arrives with DATABASE_URL populated and you can ask the agent for a dashboard that reads from Postgres on the very first turn.

Can I read the provisioning file without cloning the repo?

Yes. The appmaker repo is public. The file is at src/core/service-provisioning.ts. The call site is at src/core/e2b.ts, line 1299, inside the createSession flow. The .env writer is buildEnvFileContent at the bottom of the same provisioning file. The sandbox CLAUDE.md that documents which env vars the agent can expect lives at docker/e2b/files/app/CLAUDE.md, lines 37 to 50.