Guide

Google AI Studio vibe coding vs a sandbox with Postgres already wired

Google AI Studio launched a vibe coding flow that builds React apps from a prompt. It is clean. It also begins after a Google login and ends at a front end. mk0r runs the same describe-to-build flow with no account, and the agent opens your first message already holding env vars for a Postgres database, a transactional email key, an analytics project, and a GitHub repo, all of which exist for that session before you type.

Matthew Diakonov
9 min
4.8 from 10K+ creators
No Google login
4 services pre-provisioned per session
DATABASE_URL in /app/.env before turn one

The account wall is the first difference

To run Google AI Studio's Build Apps flow you need a Google account, you need to accept the AI Studio terms, and you need the flow to pass the per-account gate Google applies. If you are on a Workspace account with certain AI policies turned off by an admin, the Build tab never lights up. None of those conditions exist on mk0r. There is no signup form. There is no OAuth modal. The landing page fires one POST on mount, and by the time you finish reading the hero, a booted E2B sandbox from the pre-warm pool is ready to claim.

That matters because the friction is not theoretical. Every extra step between "I have an idea" and "the app is rendering" is a checkpoint where you can drop out. AI Studio is the best-in-class example of what vibe coding looks like when the vendor happens to already have your identity. mk0r is the version for people who do not want to give it up.

First 60 seconds after 'build me a waitlist'

Sign in with Google. Accept AI Studio terms. Wait for the Build tab to hydrate. Paste the prompt. Gemini generates a React app. You have a form with fake submit and no storage.

  • Login required
  • Terms prompt
  • Output: client React only
  • Email capture is a console.log, no DB, no email provider

What the agent knows before you type

The reason mk0r can ship a real waitlist on turn one is not a better model. It is a longer contract loaded into the session before your prompt arrives. The project-level CLAUDE.md documents, in a literal markdown table, four services that are already wired to the VM with named env vars. The agent is told to read /app/.env before it asks you for any credential.

src/core/vm-claude-md.ts (projectClaudeMd, lines 1177-1192)

This table is not a marketing claim on the landing page. It is the exact text Claude sees when the session starts. The rule it produces is predictable: when the user asks for email capture, the agent reaches for RESEND_API_KEY; when the user asks to store data, the agent reaches for DATABASE_URL. No side quest for credentials.
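For illustration only, a table of that shape might look like the sketch below. The env var names are the ones this page attributes to the provisioner; the exact wording and layout belong to the repo's vm-claude-md.ts, not to this reconstruction.

```markdown
| Service | Purpose                                | Env vars                                                 |
| ------- | -------------------------------------- | -------------------------------------------------------- |
| Neon    | Postgres database, one per session     | DATABASE_URL, NEON_HOST, NEON_DB_NAME                    |
| Resend  | Transactional email                    | RESEND_API_KEY, RESEND_AUDIENCE_ID                       |
| PostHog | Analytics                              | VITE_POSTHOG_KEY, VITE_POSTHOG_HOST, VITE_POSTHOG_APP_ID |
| GitHub  | Per-session source repo                | GITHUB_REPO, GITHUB_REPO_URL                             |
```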

How those env vars actually get there

The agent gets env vars because a Node provisioner ran before the sandbox was handed to you. The module at src/core/service-provisioning.ts exports a typed surface for the four providers. Each one has its own real API call out to the provider to cut a per-session resource, then returns a record of env vars that the VM boot writes to /app/.env.

src/core/service-provisioning.ts

Look at the neon field. It returns six values, including a connectionUri and a role password. That is a genuine Postgres project, not a shared demo DB; each session gets its own. If you want to read the actual SQL, you can.
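A hedged sketch of that typed surface follows. Only the env var names come from this page; the type names, function names, and stubbed values are illustrative, not the repo's real API in src/core/service-provisioning.ts.

```typescript
// Hypothetical sketch of the provisioner's shape. Each provider cuts a
// per-session resource via its real API, then returns env vars that the
// VM boot writes to /app/.env.

type EnvRecord = Record<string, string>;

interface Provisioner {
  // Calls out to the provider and returns the env vars for this session.
  provision(sessionSlug: string): Promise<EnvRecord>;
}

// Flatten all provider results into the text of /app/.env.
function renderEnvFile(records: EnvRecord[]): string {
  return records
    .flatMap((record) => Object.entries(record))
    .map(([key, value]) => `${key}=${value}`)
    .join("\n");
}

// Stubbed results shaped like the page describes.
const neon: EnvRecord = { DATABASE_URL: "postgres://..." };
const resend: EnvRecord = { RESEND_API_KEY: "re_..." };
console.log(renderEnvFile([neon, resend]));
// DATABASE_URL=postgres://...
// RESEND_API_KEY=re_...
```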

You can verify all of it inside the VM, right after boot: open /app/.env and every key is there.
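The quickest check is to cat /app/.env from the VM's shell. As a minimal sketch in code, assuming a plain KEY=value file (the parsing logic here is ours, not the repo's):

```typescript
import { readFileSync } from "node:fs";

// Parse KEY=value lines from a dotenv-style file into a record.
// Skips blank lines and comments; ignores quoting edge cases.
function parseEnv(text: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    out[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return out;
}

// Inside the VM, right after boot, this surfaces every pre-provisioned
// key without asking anyone for a credential.
function readVmEnv(path = "/app/.env"): Record<string, string> {
  return parseEnv(readFileSync(path, "utf8"));
}
```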

The shape, end to end

If it helps to see the dependency graph, this is where every one of those env vars comes from and where it lands. The agent is the only thing that reads it all. You are not the plumber.

Provisioning, boot, and the first prompt

Neon API --------+
Resend API ------+
PostHog project -+--> /app/.env ---------+
GitHub REST -----+                       +--> Claude Code agent --> Vite dev server --> Your app
projectClaudeMd -------------------------+

Side by side on the dimensions that matter

This is not a "which is better" grid. Both tools are good at what they do. This is a straight read of where the flows diverge, so you can pick the one that matches what you are actually trying to ship.

Feature                       | Google AI Studio                       | mk0r
Account required to start     | Google account + AI Studio terms       | None
Model                         | Gemini                                 | Claude, via ACP inside an E2B sandbox
First-prompt runtime          | Preview iframe in AI Studio UI         | Vite + React + Tailwind on localhost:5173
Database on turn one          | Not provisioned                        | Neon Postgres (DATABASE_URL in /app/.env)
Transactional email on turn one | Not provisioned                      | Resend (RESEND_API_KEY in /app/.env)
Analytics on turn one         | Not provisioned                        | PostHog app ID wired as VITE_POSTHOG_KEY
Source control                | Download zip, connect GitHub yourself  | Per-session repo at m13v/mk0r-app-<slug>
Agent tooling                 | Build Apps runtime (sandboxed preview) | Shell, file edit, HMR, Playwright, Chromium on CDP
Where the rulebook lives      | Not user-visible                       | vm-claude-md.ts in the public repo (2,354 lines)

The numbers, concrete

Every number below points at a file or a table you can open. No benchmarks from a slide deck.

0 Accounts required to start on mk0r
4 Services auto-provisioned per session
10 Env vars written to /app/.env
2,354 Lines in the agent rulebook

Put together: 4 real API calls to real providers before your first prompt lands, 10 env var names documented in the project CLAUDE.md so the agent uses them by default, and 0 sign-in steps between you and a running session.

What you can ship while you would still be on Google's login

These are specific app shapes that rely on the pre-provisioned stack. Each one works on turn one because the agent does not have to ask you for any credential. On a front-end-only vibe coding tool, each one requires you to go sign up somewhere else and paste a key back.

Waitlist with real email capture

Agent inserts into the pre-provisioned Neon table, fires a Resend confirmation, logs a PostHog event. All three keys already in /app/.env.

Lead magnet with download email

PDF or file sits in /app/public, a form POST triggers Resend with the asset link. GITHUB_REPO keeps the source as you iterate.

User-generated gallery

Postgres table for submissions, PostHog events for views, simple moderation UI. No third-party form builder involved.

Class sign-up with reminders

One Neon table for sessions, another for attendees. Resend sends the reminder. Schedule drives the UI.

Feedback widget

Insert to Postgres, email a digest daily via Resend, chart responses with PostHog. Reads like a real product by the second turn.

Webhook dashboard

Accept a POST, log to Postgres, show a live table. Includes a PostHog funnel so you know which endpoints got called.
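As an illustration of the waitlist shape above, here is a hedged sketch of what turn-one agent code might build: the Postgres insert, the Resend payload, and the PostHog event, each keyed off an env var that is already in /app/.env. Every function name here is hypothetical; only the env var names and the three providers come from this page.

```typescript
// Hypothetical turn-one waitlist wiring; all names are illustrative.

interface WaitlistSignup {
  email: string;
}

// Parameterized insert for the pre-provisioned Postgres (DATABASE_URL).
function buildInsert(s: WaitlistSignup) {
  return {
    text: "insert into waitlist (email) values ($1) on conflict do nothing",
    values: [s.email],
  };
}

// Body for Resend's POST /emails call (authorized with RESEND_API_KEY).
function buildConfirmation(s: WaitlistSignup) {
  return {
    from: "waitlist@example.com", // placeholder sender address
    to: [s.email],
    subject: "You're on the list",
    text: "Thanks for signing up. We'll be in touch.",
  };
}

// PostHog capture event (sent with VITE_POSTHOG_KEY on the client).
function buildCapture(s: WaitlistSignup) {
  return { event: "waitlist_signup", properties: { email: s.email } };
}
```

Turn one only has to thread these three payloads through DATABASE_URL, RESEND_API_KEY, and VITE_POSTHOG_KEY, which is why the agent never has to pause and ask you for a credential.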

The four steps, start to ship

The difference with AI Studio is not that mk0r has more steps. It is that the ones you would do yourself are already done.

mk0r's vibe coding loop

1. Open mk0r.com
The page fires POST /api/vm/prewarm on mount. A pre-booted E2B sandbox from the Firestore pool is reserved for you.

2. Describe the app
Your first prompt is routed to a Claude session that has already read the 2,354-line CLAUDE.md plus six named skills and seen /app/.env.

3. The agent writes against real services
When the output needs a DB, the agent writes SQL against DATABASE_URL. When it needs email, it imports resend and uses RESEND_API_KEY. No credential requests back to you.

4. Publish or push
Hit publish to get a subdomain, or keep the GITHUB_REPO and push from the VM. History is preserved either way.
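Step one leans on the pre-warm pool. A conceptual sketch of claiming a booted sandbox from such a pool follows; the real pool is Firestore-backed and would claim atomically, and every name here is made up.

```typescript
interface Sandbox {
  id: string;
  status: "booting" | "ready" | "claimed";
}

// Claim the first ready sandbox and mark it so no one else gets it.
// A real implementation would do this atomically (e.g. inside a
// Firestore transaction); this in-memory version only shows the shape.
function claimSandbox(pool: Sandbox[]): Sandbox | undefined {
  const ready = pool.find((s) => s.status === "ready");
  if (ready) ready.status = "claimed";
  return ready;
}

const pool: Sandbox[] = [
  { id: "vm-1", status: "booting" },
  { id: "vm-2", status: "ready" },
];
console.log(claimSandbox(pool)?.id); // vm-2
```

Because the boot already happened before you arrived, the only wait left at claim time is the model's own generation.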

Open a session and watch the env file load

No Google login. The agent opens your first prompt already holding DATABASE_URL, RESEND_API_KEY, and a GitHub repo for the session.

Start building

When to pick which

If you are already in the Google cloud, want Gemini specifically, and the shape you are building is a polished client-side React app, AI Studio is a good fit. It is tightly integrated with the rest of Google's AI tooling and the preview is excellent.

If you want to ship something with a backend behind it, or you do not want to sign in, or the app you have in mind collects data, sends email, or needs its own repo, pick mk0r. The key fact in this whole page is that /app/.env is populated before the agent reads your prompt. That is what makes a real waitlist on turn one possible without asking you for anything.

Want the provisioner walked through live?

Book a 20-minute call. We will open a fresh session, cat /app/.env inside the VM, and show how the agent writes against each service on the first turn.

Frequently asked questions

What is Google AI Studio's vibe coding in one paragraph?

Google AI Studio's Build Apps feature lets you describe an app in natural language and get a working React front end powered by Gemini. You sign in with a Google account, type a prompt, and a preview renders in an iframe. You can iterate, download the code, or deploy it yourself. It is a polished flow for generating a front end. It does not hand you a running backend.

What does mk0r do differently for vibe coding?

Two things. First, no Google login or any login at all. You land on mk0r.com, describe your app, the session starts. Second, the first prompt runs against a VM that already has four services provisioned for that specific session: a Neon Postgres database, a Resend API key, a PostHog project, and a GitHub repo. The agent is told in its project CLAUDE.md to read /app/.env before asking you for any credentials, so a waitlist that stores real emails and sends a real confirmation can ship on turn one.

Which services are provisioned per session, and what are the env var names?

The provisioner at src/core/service-provisioning.ts creates: a Neon project with DATABASE_URL, NEON_HOST, and NEON_DB_NAME; a Resend audience and a per-app restricted API key exposed as RESEND_API_KEY and RESEND_AUDIENCE_ID; a PostHog app ID as VITE_POSTHOG_KEY, VITE_POSTHOG_HOST, VITE_POSTHOG_APP_ID; and a GitHub repo at m13v/mk0r-app-<sessionSlug> as GITHUB_REPO and GITHUB_REPO_URL. The project CLAUDE.md documents this table in src/core/vm-claude-md.ts at lines 1177 to 1192.
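Those ten names can be pinned down in a type. A hedged sketch, where the type and helper are ours and only the key names come from the provisioner description above:

```typescript
// The ten env var names this page attributes to the per-session provisioner.
const REQUIRED_ENV_KEYS = [
  "DATABASE_URL",
  "NEON_HOST",
  "NEON_DB_NAME",
  "RESEND_API_KEY",
  "RESEND_AUDIENCE_ID",
  "VITE_POSTHOG_KEY",
  "VITE_POSTHOG_HOST",
  "VITE_POSTHOG_APP_ID",
  "GITHUB_REPO",
  "GITHUB_REPO_URL",
] as const;

type SessionEnv = Record<(typeof REQUIRED_ENV_KEYS)[number], string>;

// Report which pre-provisioned keys are absent from a parsed env record.
function missingKeys(env: Partial<SessionEnv>): string[] {
  return REQUIRED_ENV_KEYS.filter((key) => !(key in env));
}

console.log(missingKeys({ DATABASE_URL: "postgres://..." }).length); // 9
```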

Do I need a Google account to use mk0r?

No. There is no signup gate. The landing page fires POST /api/vm/prewarm on mount, so a booted E2B sandbox is already warming from a Firestore-backed pool while you are reading the hero. When you send the first prompt, the session is claimed out of that pool and the agent starts running. Google AI Studio requires a Google account and agreement to Google's AI terms before the first prompt can run.

Can Google AI Studio build a full-stack app?

Its Build Apps output is a client-side app. You can have Gemini generate code that calls external APIs, but you are the one who signs up for those APIs, manages the keys, and deploys the backend. Nothing in the Build flow provisions a database or a transactional email account for you. On mk0r the agent is told which services exist and their env var names, so it writes code that imports from process.env on turn one.

Which model does mk0r use, and how is that different from AI Studio?

mk0r uses Claude inside an E2B sandbox, routed through the Agent Client Protocol. Google AI Studio uses Gemini. The practical difference for vibe coders is less about model IQ and more about what the agent is allowed to do. mk0r's agent has shell, file edit, Playwright, and HMR feedback on a live http://localhost:5173 dev server. AI Studio's output is rendered by its own runtime and the build surface is narrower.

What happens to my code when I am done on mk0r?

Two paths. You can publish to a subdomain (the publish flow calls /api/publish), or you keep the GitHub repo that was provisioned for the session. GITHUB_REPO and GITHUB_REPO_URL are in the env; the agent can push, open pull requests, and maintain history. Google AI Studio lets you download a zip and connect your own GitHub separately.

Is mk0r really faster than AI Studio for 'describe and build'?

The wall clock you care about is page load to first working UI. On mk0r the VM is pre-warmed, so the wait is the LLM generation itself. On AI Studio the wait is sign-in plus consent plus generation. For a waitlist with email capture, the difference is larger, because on mk0r the agent wires Resend directly from the pre-provisioned key instead of asking you to go sign up and paste a key.

Skip the login. The backend is already provisioned.

Start a vibe coding session
Book a walkthrough