Guide

AI app builder multi-screen state, the pattern that actually works

Most articles on this topic open with react-router. That is not what AI app builders actually generate when you ask for a multi-screen app. They lift one piece of state into the root component and conditionally render the matching screen. This page explains why that simpler pattern is more reliable for one-shot AI generation, and shows the exact files where it lives in mk0r.

Matthew Diakonov
8 min read

Direct answer (verified 2026-05-05)

In an AI app builder that generates a single React tree (mk0r, Bolt, v0 in app mode, Claude Artifacts), multi-screen state lives in src/App.tsx as a React useState or useReducer. The agent renders one screen at a time based on that state and passes any cross-screen data down as props. A router is not wired by default. You can install one if you need deep links, but the unprompted output is the lifted-state pattern. The reason it works this way is in the agent's instruction file: every new component must be imported into App.tsx, so App.tsx is already the place where the model knows everything that exists.

Where state actually flows

Your prompt and the agent both end up writing the same file. The screens are leaves.

One file holds the screen state. Every screen reads from it.

Your prompt → Claude (ACP) → src/App.tsx → ListScreen.tsx / EditorScreen.tsx / SettingsScreen.tsx

Why a router is the wrong default for AI generation

React Router is a great library. The pattern it encodes (URL is the state, components are mounted to match) is the right one for production web apps with deep links, browser-back semantics, and permission-gated routes. None of that is true for the first version of a prototype.

More importantly, a router introduces a second source of truth that the agent has to keep coherent across files. To add a screen, the model must (1) write the component, (2) register it in the router config, (3) import it into the config file, (4) write a link to it from somewhere a user can reach. Skip any one of those and the screen is invisible. The agent does not always remember all four.

Lifted state in App.tsx collapses those four steps into two. Write the component. Add a branch in the App.tsx switch and a button that calls setScreen. That is it. There is one file the model has to keep in sync with itself, and that one file is the file it was already going to edit anyway because every component must be imported there.

The instruction that shapes this pattern

The reason mk0r's agent reaches for the lifted-state pattern is one sentence in the agent's CLAUDE.md, which is baked into the E2B template at docker/e2b/files/root/.claude/CLAUDE.md:

“Always import and render new components in src/App.tsx. Components not imported from App.tsx will never appear on screen.”

That instruction exists to prevent the orphan-component bug (agent creates a file, never imports it, page is blank). Its second-order effect is architectural. App.tsx becomes the place where every component name in the project is known. Adding a screen-state variable next to those imports, and conditionally rendering one of them, is the path of least resistance. The mk0r seed scaffold also does not include react-router, so the agent has no router to reach for unless it installs one, which it will only do if you ask. The combination of those two facts (App.tsx is the import root, no router preinstalled) is what produces the lifted-state pattern in almost every multi-screen project mk0r builds.

What the generated code looks like

Here is a minimal but complete version, the kind of App.tsx the agent writes when you ask for a tiny journal app with list, editor, and settings screens. State for “which screen is active” sits in the same component as state for the entries themselves. Every screen receives only the props it needs.

src/App.tsx
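A framework-free sketch of that file's shape, in plain TypeScript: string-returning functions stand in for the JSX screens, and the Entry type, field names, and handler names are illustrative, not the agent's exact output.

```typescript
// Sketch of the lifted-state pattern in App.tsx, minus the JSX.
// Screens are plain functions here; in the real file they are React
// components and the strings below are rendered trees.
type Screen = "list" | "editor" | "settings";
interface Entry { id: number; text: string }

interface AppState {
  screen: Screen;           // single source of truth for navigation
  entries: Entry[];         // cross-screen data sits right next to it
  editingId: number | null; // which entry the editor is showing
}

// Each "screen" receives only the props it needs, as described above.
const ListScreen = (entries: Entry[]) => `list of ${entries.length} entries`;
const EditorScreen = (entry: Entry | undefined) =>
  `editing ${entry?.text ?? "new entry"}`;
const SettingsScreen = () => "settings";

// The conditional the agent writes: one branch per screen.
function renderApp(state: AppState): string {
  switch (state.screen) {
    case "list":
      return ListScreen(state.entries);
    case "editor":
      return EditorScreen(state.entries.find(e => e.id === state.editingId));
    case "settings":
      return SettingsScreen();
  }
}

// Navigation is just a state update, the moral equivalent of setScreen.
function openEditor(state: AppState, id: number): AppState {
  return { ...state, screen: "editor", editingId: id };
}

// What the editor's onSave callback does: App decides what saving means.
function saveEntry(state: AppState, entry: Entry): AppState {
  const others = state.entries.filter(e => e.id !== entry.id);
  return { ...state, entries: [...others, entry], screen: "list" };
}
```

In the real file, renderApp's body is the App component's JSX return, and openEditor and saveEntry are setState callbacks passed down as props.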

A few things to notice. There is no global store, no context provider, no router. The list screen knows nothing about the editor screen except how to ask App to open one. The editor screen knows nothing about persistence; it just calls onSave with the new entry and lets App decide what to do. Each screen is unit-testable on its own with a hand-written props object. And App.tsx is where you go to read the whole story of the app.

One turn, four files, one commit

The other half of why this pattern is reliable for AI generation is the per-turn commit. mk0r runs each prompt to completion, then runs git add -A && git commit inside the sandbox (the function is commitTurn at src/core/e2b.ts:1687). The git add -A stages every modified, added, or deleted file at once. A turn that adds a screen will touch App.tsx (state union), the new screen file, the previous screen file (to add the navigation handler), and sometimes a shared types file. They all land as one SHA.

Why this matters: the screen state and the screen components have to agree on names and shape, and that agreement is now atomic. Undo rolls back all four files together. There is no possible intermediate state where the editor screen exists but the App union does not list it, or where the list screen calls a navigation handler that App has not implemented yet. Either the whole turn shipped, or none of it did.
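A runnable sketch of that atomicity. Nothing here is mk0r's actual code beyond the quoted git add -A && git commit; the file names and commit message are made up for illustration.

```shell
# Illustrative: four files touched in one agent turn, staged and
# committed together so they share a single SHA.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email agent@example.com
git config user.name agent
mkdir -p src/components src/types
echo "screen union + switch"  > src/App.tsx
echo "new screen"             > src/components/SettingsScreen.tsx
echo "navigation handler"     > src/components/ListScreen.tsx
echo "shared Entry type"      > src/types/index.ts
git add -A && git commit -qm "turn: add Settings screen"
# All four files now live in one commit; undo is one checkout of that tree.
```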

What an actual agent turn looks like

The transcript below is what you would see if you watched the agent handle “add a Settings screen with a back button” on the journal app from the previous section. Four files modified, one commit at the end.

Agent turn: add Settings screen

When to graduate to a router

Lifted state is not a forever pattern. The signals that it is time to install react-router (or TanStack Router, or whatever you prefer) are concrete:

  • You want shareable URLs for individual screens (a link to a specific entry, not just the app).
  • You want the browser back button to work the way users expect.
  • You have permission-gated screens and want the route layer to handle the redirect, not the component.
  • App.tsx has grown past 200 lines of pure routing logic and starts to read like a switch statement on rails.

When you hit any of those, prompt the agent: “install react-router-dom and refactor App.tsx so each branch is a Route. Keep the entries state at the App level, lift it to a context if any route needs it.” The agent will install the package, write the router config, refactor the screens, and commit. The lifted-state pattern was never wrong; it was just the right shape for the size the project used to be.
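What that refactor changes, sketched without React: the screen union stops being state and becomes a route table. The paths below are invented for illustration, and in the real refactor react-router owns the matching.

```typescript
// After graduating to a router, the URL is the source of truth and the
// old Screen union maps onto a route table. Paths are illustrative.
type Screen = "list" | "editor" | "settings";

const pathFor: Record<Screen, string> = {
  list: "/",
  editor: "/entries/:id", // deep link to one entry, the thing lifted state lacks
  settings: "/settings",
};

// The inverse of the table: what react-router's matcher does for you.
function screenFromPath(path: string): Screen {
  if (path === "/") return "list";
  if (path.startsWith("/entries/")) return "editor";
  if (path === "/settings") return "settings";
  return "list"; // fallback, like a catch-all route
}
```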

The honest summary

AI app builders are not bad at multi-screen apps. They are good at the version of multi-screen that matches the size of a prototype. The lifted-state pattern in App.tsx is what they reach for by default, and it is more reliable in one-shot generation than any router-based approach because there is exactly one file to keep coherent. When you outgrow it, you do not throw the pattern away; you ask the agent to graduate it. The screens, the data, and the flow you already built come along.

If you want to read the lines of code I described above in a real builder, the source is github.com/m13v/appmaker. The agent instruction is at docker/e2b/files/root/.claude/CLAUDE.md, the per-turn commit logic is in src/core/e2b.ts around line 1687, and you can try the builder itself without an account at mk0r.com.

Want help shaping a multi-screen app idea?

Book a 15-minute call. We will walk through your idea together and figure out whether the lifted-state pattern is enough or whether you need to graduate it now.

Frequently asked questions

What does 'multi-screen state' mean in an AI app builder?

It is everything the running app needs to remember as the user moves between views. Which screen is showing right now. The form values from the previous screen. The list the user just edited that should still be there when they navigate back. The auth status, the cart, the selected item. People assume the answer is a router plus a state library, but in AI-generated apps the answer is usually simpler than that. The model holds one piece of state in the root component and renders the matching child. That is the entire mechanism.

Why don't most AI app builders use react-router by default?

Two reasons, both practical. First, a router introduces a second source of truth (the URL) that the model has to keep in sync with the rendered tree. When the agent adds a new screen, it has to remember to register the route, write the component, import it, and link to it from somewhere. Forgetting any one of those four steps makes the screen invisible. Second, AI-generated apps are short-lived prototypes most of the time. Deep linking and browser-back are not load-bearing for a one-evening side project. mk0r's seed scaffold does not include react-router. The model is told to import every component into src/App.tsx, and the simplest way to put two screens in App.tsx is a useState plus a conditional.

Where does the screen state actually live in a mk0r-generated app?

In src/App.tsx. The agent's instruction file at docker/e2b/files/root/.claude/CLAUDE.md says verbatim 'Always import and render new components in src/App.tsx. Components not imported from App.tsx will never appear on screen.' That single line shapes the architecture. Screens are components. Components are imported into App.tsx. The currently active screen is whichever one the conditional in App.tsx returns. The cleanest expression is a useState typed as a string union of screen names, plus a switch or a small object map that picks the component to render. Cross-screen data sits next to that screen state, in the same component, and gets passed down as props.
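The object-map variant mentioned in that answer, sketched in plain TypeScript; the strings stand in for the JSX the real components return.

```typescript
// Object-map variant of the App.tsx conditional: screen name -> renderer.
type Screen = "list" | "editor" | "settings";

const screens: Record<Screen, () => string> = {
  list: () => "<ListScreen/>",
  editor: () => "<EditorScreen/>",
  settings: () => "<SettingsScreen/>",
};

// In the real App.tsx this is `screens[currentScreen]()` inside the JSX,
// with currentScreen held in a useState typed as the Screen union.
function render(current: Screen): string {
  return screens[current]();
}
```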

What about react-router though, can I still install it?

Yes. The sandbox is a real Vite + React + TypeScript project at /app, with npm available. You can prompt the agent to 'add react-router-dom and split this into proper routes' and it will install the package, write a router config, and refactor the screens. The point is not that you can't use a router. The point is that the default, unprompted output is the lifted-state pattern, and that pattern has fewer moving parts for the agent to keep coherent. People who reach for react-router on day one of a prototype usually outgrow it on day two when they realize they want to deep-link, and people who never reach for it usually never need to.

Doesn't lifting state into App.tsx make App.tsx a god component?

It can, if you let it. The mitigation is the same one any senior React dev would use: extract the screen state plus its associated data into a custom hook (useAppState in src/hooks/) once it grows past 30 lines, and pass the hook's return value down. App.tsx then becomes a router-like switch that calls a hook, reads currentScreen, and renders one of half a dozen children. The total surface area is still smaller than wiring react-router plus a state library, and every screen still works in isolation. mk0r's CLAUDE.md tells the agent to extract components past 100 lines, which applies to App.tsx the same way.

How does the agent keep state and screens in sync across iterations?

Per-turn atomic commits. Every successful agent turn calls commitTurn at src/core/e2b.ts:1687, which runs 'git add -A && git commit' inside the sandbox. The 'git add -A' is the load-bearing piece, it stages every modified, added, or deleted file at once. So a turn that adds a Settings screen, edits the screen-state union in App.tsx, and updates a shared type in src/types/ shows up as one commit. There is no half-applied state where the new screen exists but the union does not include it, or vice versa. Undo runs git checkout on the whole tree, so reverting always restores a coherent multi-file picture.

What is the practical limit of this pattern?

It holds well up to roughly five or six screens with shared state and one or two data lists. Past that you start wanting either react-router (for deep linking and back-button correctness) or a real store like zustand (for state that is read by deeply nested components without prop drilling). The honest framing is: lifted state in App.tsx is a great starting point and a fine final form for utilities, internal tools, and most demos. It is not the right answer for production multi-tenant apps with persistent URLs and complex permissions. AI app builders are best at the first category and noticeably worse at the second one.

Can I see the full pattern in a generated app?

Yes. Open mk0r.com, switch to VM mode, type a prompt like 'a tiny journal app with a list screen and an editor screen, the editor opens when you tap an entry'. The agent will scaffold src/App.tsx with a useState for the active screen plus the entry list, write src/components/ListScreen.tsx and src/components/EditorScreen.tsx, and pass setSelectedEntry down to the list. You can read every line. The whole project is a real Vite app at /app inside the sandbox, and you can pull it down with the GitHub integration to see the file tree.

mk0r · AI app builder
© 2026 mk0r. All rights reserved.