Miro AI Prototyping outputs a spec, not an app
Every page that ranks for this topic describes the clickable canvas as if it were the deliverable. It is not. Miro’s own design-to-code page tells you to authenticate Claude Code with a Miro MCP server and run a second AI through your board to get working software. That second hop is the whole story, and most explainers leave it out.
Direct answer (verified 2026-05-08)
Miro AI Prototyping outputs editable clickable mockups on the Miro canvas (multi-screen flows with auto-wired interactions), not running code. To turn a Miro prototype into a working app, Miro’s own design-to-code documentation recommends connecting Claude Code to a Miro MCP server so the agent can read your board context and generate code in a separate repo. Miro is the spec layer. The working app lives somewhere else.
What lands on the canvas when you hit generate
The Miro Prototype generator (Sidekicks > Formats > Prototype in the current UI) takes a text prompt, a screenshot, or a selection of existing frames and produces a draft directly on the board. You can pick device type (mobile, tablet, desktop) and choose single-screen or multi-screen at generation time. The draft is a stack of canvas frames with arrow connectors between them, plus a few default interaction states the AI guesses (a button tap routes to the next screen, a back arrow returns to the previous one).
What you get on the canvas
- Multi-screen flows generated as editable Miro frames
- Auto-wired arrow connections between screens for clickable preview
- Basic interaction states (taps, navigation, simple toggles)
- A device frame (mobile, tablet, desktop) you choose at generation time
- All of the above as canvas objects, not as a hosted, working app
Last item is the load-bearing one. Everything Miro generates is a canvas object. Drag it, comment on it, restyle it. None of it is a running web app. The clickable preview is a stitched-together flow inside Miro, served from Miro’s board renderer, accessed by sharing a Miro board URL with people who have a Miro account.
Where working code is supposed to come from
Miro publishes a design-to-code workflow page that admits this gap and resolves it with a second tool. The shape of the recommended path:
Miro prototype to working code, the published path
Each arrow is a real piece of work. You build context on the canvas (a problem statement, user stories, frames). You generate the prototype with the AI Sidekick. You authenticate Miro’s MCP server. You connect a coding agent (Claude Code is the one Miro names in the docs) to that MCP server. The agent reads the board, writes code into a repo, you deploy that repo somewhere. The clickable canvas is one step inside a chain of five.
That chain is the whole product, not an edge case. Reading the Miro docs as a working sequence (rather than as marketing) lands on a clear claim: Miro AI Prototyping is a specification tool that hands off to an AI coding agent. It is not, on its own, an app maker. The canvas is the artifact you align around; the running app is downstream.
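To make the handoff concrete: once Claude Code is pointed at Miro's MCP server, "reading the board" means making MCP tool calls against that server, the same calls any MCP client could make. Below is a minimal sketch using the official MCP TypeScript SDK, not Miro's or mk0r's code; the server URL is a placeholder (the real endpoint and OAuth flow come from Miro's docs) and the tool names are whatever Miro actually exposes.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Placeholder endpoint: the real Miro MCP server URL and auth flow come
// from Miro's design-to-code docs, not from this sketch.
const transport = new SSEClientTransport(
  new URL("https://example-miro-mcp.invalid/sse")
);

const client = new Client({ name: "board-to-code", version: "0.1.0" });
await client.connect(transport);

// Discover what the server exposes; the tool names are Miro's to define.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// A coding agent like Claude Code would then call a board-reading tool
// (hypothetical name below) and feed frames, stickies, and the problem
// statement into its context before writing any code.
// const board = await client.callTool({
//   name: "get_board_items",            // hypothetical tool name
//   arguments: { boardId: "your-board" },
// });
```

Claude Code handles this wiring for you once the server is registered; the point is that the prototype reaches the agent as structured board data, not as pixels.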
The case for spec first, code second
Splitting the spec layer from the build layer is a defensible choice. A canvas full of frames, sticky notes, decisions, and rejected variants carries more context than the final code does. When the coding agent runs, it gets to read all of that, not just the latest frame. Teams that already live in Miro for design review get to keep their existing workflow and bolt on code generation at the end, instead of replacing the workflow entirely.
When the canvas is the right tool
- The audience is internal. Designers, PMs, stakeholders who attend a design review and leave with alignment on intent. The Miro canvas is built for that conversation.
- The artifact is the conversation. Comments, stickies, decision threads. None of that survives a code export; all of it survives on the canvas.
- You already have engineering downstream. The team that turns the spec into code exists. The canvas is the brief they read; the MCP handoff is a workflow nicety, not a requirement.
- You want to compare variants visually. Three layouts side by side on a board is a faster format for that decision than three code branches in a repo.
When the canvas step is dead weight
Flip the audience. If the next person who needs to look at the prototype is a user, the canvas mockup is dead weight. A user wants a link they can open on their phone. They cannot open a Miro board on a phone in any useful way; they will not sit through a click-through that is rendered by Miro’s preview engine. They want to tap something that behaves like an app.
The same logic applies to throwaway prototypes you build to settle an internal argument. If the goal is “does this idea work at all,” reading running HTML on a phone gets you to the answer faster than reading a canvas on a laptop. The handoff to MCP and a coding agent doubles the surface area without doubling the signal.
The interesting thing about the second case is that the spec layer becomes a tax instead of an asset. You wrote the prompt twice: once into Miro to generate the canvas, once into Claude Code to generate the code. Often the second prompt drifts away from the first because the agent re-interprets the board. A code-first AI builder collapses both prompts into one input and writes the running app from a single sentence.
What the collapsed loop looks like in mk0r
Type a sentence → live HTML stream → open in browser → iterate with words → share the preview URL.
What gets generated in the collapsed loop
- Real HTML, CSS, and JavaScript files at /app, not canvas geometry
- A live dev server on port 5173 with HMR you can open on your phone
- Playwright MCP available to the agent so it can verify its own output
- Iteration by typing the next sentence; HMR refreshes the running app
- A shareable preview URL with no signup, no canvas, no second tool
Concretely: open mk0r.com, type a sentence, watch HTML stream into a phone-shaped preview pane. The default fast path uses Claude Haiku 4.5 streaming directly into the browser; the deeper path runs the agent inside an E2B sandbox (template ID 2yi5lxazr1abcs2ew6h8) where Vite, React, TypeScript, Tailwind v4, and Playwright MCP are pre-installed at /app, and a dev server is already listening on port 5173. The agent has Playwright MCP wired in via src/core/e2b.ts:170-191 so it can open the running app in a real Chromium instance and verify what it just wrote before handing the iteration back to you.
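For contrast, here is a rough sketch of what the deep path amounts to, written against the public E2B JS SDK and plain Playwright rather than mk0r's actual source (the Playwright MCP wiring in src/core/e2b.ts is not reproduced here); the template ID and port are the ones named above.

```typescript
import { Sandbox } from "e2b";
import { chromium } from "playwright";

// Spin up the prebuilt template: Vite, React, TypeScript, Tailwind v4, and
// Playwright MCP preinstalled at /app, dev server already listening on 5173.
const sandbox = await Sandbox.create("2yi5lxazr1abcs2ew6h8");

// The agent writes generated files straight into the running app; HMR picks
// the change up without a restart.
await sandbox.files.write(
  "/app/src/App.tsx",
  `export default function App() { return <h1>Hello from the sandbox</h1>; }`
);

// The sandbox host for port 5173 is the preview URL you open on a phone.
const previewUrl = `https://${sandbox.getHost(5173)}`;
console.log(previewUrl);

// mk0r gives the agent Playwright MCP for this step; plain Playwright is the
// nearest equivalent of "open the app in real Chromium and look at it".
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto(previewUrl);
await page.screenshot({ path: "verify.png" });
await browser.close();
```

None of that is something you run by hand; it is the loop the agent executes between your sentence and the preview URL.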
The honest decision
Pick the tool that matches the audience for the artifact, not the tool with the most features.
If the artifact is going to a person who lives in Miro and the conversation around the canvas is the value, Miro’s prototype generator earns its credits. The MCP handoff to Claude Code is a clean way to keep that conversation context when you eventually need code. Two tools, two layers, an honest split.
If the artifact is going to a user and the running app is the value, the canvas step is friction. A code-first AI builder writes the running app from one prompt and skips the second tool entirely. Same reason a designer would not draw a Figma mockup of a tweet before sending the tweet: the running version is cheaper to produce than the spec.
“The clickable canvas is the spec. The running app is somewhere else.”
Miro design-to-code workflow, summarized
A small experiment if you are stuck
Pick the next prototype you owe somebody. Build the same idea in both tools. In Miro, generate the canvas, share the board URL with whoever asked. In a code-first builder like mk0r, type one sentence and share the preview URL. Watch what they open first. Watch which one gets feedback you can act on. Watch which one survives the next twenty-four hours.
That experiment, run twice, settles the question for your specific audience faster than any blog post. Including this one.
Trying to skip the canvas step?
If you want the working-app loop without the Miro plus MCP plus Claude Code dance, we can walk through it together in fifteen minutes.
Frequently asked questions
What does Miro AI Prototyping actually output?
Editable canvas mockups, not running code. The output is a set of Miro frames on the board, with auto-wired arrows between them so you can click through a flow, plus basic interaction states the AI guesses for you. The frames behave like any other Miro objects: drag, edit, restyle, comment. The deliverable lives inside Miro and is shared via a Miro board link, not as a hosted web app.
Can Miro generate working code from the prototype?
Not on its own. Miro's own design-to-code page recommends connecting an external AI coding agent (Claude Code is the example they use) to a Miro MCP server. The agent then reads your board (frames, sticky notes, problem statement, the prototype itself) and writes code in a separate repo. Miro is the spec layer; the agent is the build layer; the working app lives somewhere else, usually wherever you deploy the generated code.
Is Miro AI Prototyping free?
It is a paid add-on for Starter, Business, and Enterprise plans, sold as part of a Miro AI credit bundle that is shared across other AI features on the canvas. Free plans do not include the prototype generator. Because the bundle is shared, heavy prototype iteration also adds up: every regeneration draws more credits from the same pool.
What inputs can Miro AI Prototyping take?
Text prompts, screenshots of an existing UI, sketches, sticky notes already on the canvas, and selected frames. The screenshot input is the most useful one in practice; you can paste a competitor screen or a rough Figma export and ask Miro to rebuild it as an editable canvas prototype.
How is mk0r different from Miro AI Prototyping?
Different layer. Miro generates an editable mockup on a canvas you can share inside a Miro board. mk0r generates running HTML, CSS, and JavaScript at /app inside an E2B sandbox (template ID 2yi5lxazr1abcs2ew6h8) with a live Vite dev server on port 5173 and Playwright MCP available to the agent for self-verification. The deliverable is a working app on a real preview URL, no canvas, no signup, no second tool, no MCP wiring.
When should I use Miro AI Prototyping over a code-first AI builder?
When the audience for the prototype is a design or PM team that lives in Miro, or when the value of the artifact is the conversation around the canvas (comments, stickies, alignment workshops) rather than a runnable preview. Anything you would walk through in a design review, where the goal is alignment on intent, fits the Miro flow well.
When is a code-first builder a better fit?
When the next person who needs to look at the prototype is a user, not a teammate. A user wants a phone-ready link they can click. A canvas mockup does not survive that test. If your goal is to put a working app in front of a customer this afternoon, you are paying a tax to start in Miro and finish in a coding agent. mk0r at mk0r.com is one example of skipping the canvas step entirely.
Does mk0r read a Miro board?
No. mk0r reads a sentence. If you already have a Miro board with a prototype, you can paste a screenshot of the frame into mk0r and ask it to build the same thing as a working app. The screenshot path is fast and lossy on purpose; the iteration after that is where you recover the detail. There is no MCP integration with Miro and there is no plan to add one, because the canvas-first workflow is not the gap mk0r is solving.