
I turned Markdown into a protocol for generative UI

52 points - today at 1:42 PM


There's a lot of work happening around both generative UI and code execution for AI agents. I kept wondering: how do you bring them together into a fully featured architecture? I built a prototype:

- Markdown as protocol — one stream carrying text, executable code, and data

- Streaming execution — code fences execute statement by statement as they stream in

- A mount() primitive — the agent creates React UIs with full data flow between client, server, and LLM
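To make the first two points concrete, here's a minimal sketch of the segmentation step (simplified, illustrative names — not the actual implementation): the incoming markdown stream is split into prose and fenced code, and each fence is handed to the executor while the prose renders as text.

```typescript
// Illustrative sketch: split a markdown document into prose and code-fence
// segments so fenced code can be executed and the rest rendered as text.
type Segment = { kind: "text" | "code"; body: string };

const FENCE = "`".repeat(3); // built at runtime to keep this snippet valid markdown

function splitMarkdown(doc: string): Segment[] {
  const segments: Segment[] = [];
  const fence = new RegExp(FENCE + "(?:\\w+)?\\n([\\s\\S]*?)" + FENCE, "g");
  let last = 0;
  for (const m of doc.matchAll(fence)) {
    const idx = m.index ?? 0;
    const before = doc.slice(last, idx).trim();
    if (before) segments.push({ kind: "text", body: before });
    segments.push({ kind: "code", body: m[1].trim() });
    last = idx + m[0].length;
  }
  const tail = doc.slice(last).trim();
  if (tail) segments.push({ kind: "text", body: tail });
  return segments;
}
```

The real prototype does this incrementally as tokens stream in (executing statement by statement), but the text/code split is the core of the protocol.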

Let me know what you think!

Source
  • zeroq

    today at 5:40 PM

    If you're still looking for a name let me suggest "hyper text".

    It embodies the whole idea of having data, code and presentation at the same place.

    If you're open to contributions, I already have an idea for a cascading styles system in mind.

      • altruios

        today at 5:54 PM

        Every turn of the wheel someone wants to make a new one.

        Maybe one day someone will invent a rounder wheel.

          • doublerabbit

            today at 7:26 PM

            Personally I think we should move to Heptagons, they're round enough.

            The wheel is what I would call passé.

              • altruios

                today at 8:22 PM

                Every day the wheel of society turns a little further off course.

                Soon we'll be optimizing for minimizing the sides of a wheel (triangles are not the final form here...) /s

        • noman-land

          today at 7:18 PM

          If HTML happened again except this time it was markdown, maybe more non-nerds would be able to use it? XML just looks gnarly.

          • FabianCarbonara

            today at 6:29 PM

            Ha, history does rhyme ;) Happy if you reach out via mail!

              • heckintime

                today at 7:09 PM

                I think he's talking about CSS

        • z3ugma

          today at 8:42 PM

          I will say I came upon this same design pattern: making all my chats into semantic Markdown that stays backward compatible with plain Markdown. I did:

          ````assistant

          <Short Summary title>

          gemini/3.1-pro - 20260319T050611Z

          Response from the assistant

          ````

          with a similar block for tool calling. This can be parsed semantically as part of the conversation, but is also rendered as a regular Markdown code block when needed.

          Helps me keep AI chats on the filesystem as valid documents, while adding some more semantic meaning atop Markdown.
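          A rough sketch of how a block like that could be parsed (field names and helper are my illustration, not z3ugma's actual code): the fence's info string carries the role, and the first two lines carry the title and the model/timestamp pair.

```typescript
// Illustrative parser for the semantic chat-block format described above.
const F4 = "`".repeat(4); // the four-backtick fence, built to keep this snippet valid markdown

function parseChatBlock(md: string) {
  const m = md.match(new RegExp(F4 + "(\\w+)\\n([\\s\\S]*?)" + F4));
  if (!m) return null; // not a chat block; treat as plain markdown
  const lines = m[2].split("\n").map((l) => l.trim()).filter(Boolean);
  const [title, meta, ...body] = lines;
  const [model, timestamp] = meta.split(" - ");
  return { role: m[1], title, model, timestamp, body: body.join("\n") };
}
```

A non-aware renderer just shows the fence as a code block, which is what makes the format backward compatible.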

          • nthypes

            today at 9:09 PM

            Why not MDX?

            • realrocker

              today at 8:04 PM

              The streamed execution idea is novel to me. Not sure what its significance is?

              I have been working on something with a similar goal:

              https://github.com/livetemplate/tinkerdown

                • FabianCarbonara

                  today at 8:11 PM

                  [dead]

              • pbkhrv

                today at 7:50 PM

                Very cool. I'm imagining using this with Claude Code, allowing it to wire this up to MCP or to CLI commands somehow and using that whole system as an interactive dashboard for administering a kubernetes cluster or something like that - and the hypothetical first feature request is to be able to "freeze" one of these UI snippets and save it as some sort of a "view" that I can access later. Use case: it happens to build a particularly convenient way to do a bunch of calls to kubectl, parse results and present them in some interactive way - and I'd like to reuse that same widget later without explaining/iterating on it again.

                  • FabianCarbonara

                    today at 7:55 PM

                    Exactly this!

                    Right now this uses React for Web but could also see it in the terminal via Ink.

                    And I love the "freeze" idea — maybe then you could even share the mini app.

                • joelres

                  today at 7:41 PM

                  I quite like this! I've been incrementally building similar tooling for a project I've been working on, and I really appreciate the ideas here.

                  I think the key decision for someone implementing a flexible UI system like this is the required level of expressiveness. To me, the chief problem with having agents build custom HTML pages (as another comment suggested) is that it's far too unconstrained. I've been working with a system of pre-registered blocks and callbacks that are very constrained. I quite like this as a middle ground, though it may still be too dynamic for my use case. Will explore a bit more!

                    • FabianCarbonara

                      today at 7:50 PM

                      Thanks! Really interesting to hear you're working on something similar.

                      You're right that the level of expressiveness is the key design decision. There's a real spectrum:

                      - pre-registered blocks (safe, predictable)

                      - code execution with a component library (middle ground)

                      - full arbitrary code (maximum flexibility).

                      My approach can slide along that spectrum: you could constrain the agent to only use a specific set of pre-imported components rather than writing arbitrary JSX. The mount() primitive and data flow patterns still work the same way, you just limit what the LLM is allowed to render.
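                      That constraint could be as simple as a whitelist lookup (a hypothetical sketch — registry contents and names are illustrative):

```typescript
// Illustrative sketch: constrain the agent to a fixed set of pre-imported
// components by resolving names through a registry instead of evaluating
// arbitrary JSX. The "components" here are stubs that return strings.
const registry: Record<string, (props: object) => string> = {
  Button: (props) => "<button>" + JSON.stringify(props) + "</button>",
  Table: (props) => "<table>" + JSON.stringify(props) + "</table>",
};

function resolveComponent(name: string) {
  const component = registry[name];
  if (!component) throw new Error('Component "' + name + '" is not in the allowed set');
  return component;
}
```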

                      Would love to hear what you learn if you explore it!

                        • joelres

                          today at 7:57 PM

                          Will do! I'm using a JSON DSL currently; I wonder if there's a best choice of format that is both at the correct level of expressiveness and also easy enough for the LLM to generate in a valid way. I do think markdown has the advantage of being very trivial for LLMs, but my current JSON-blocks strategy might be better for more complex data.... will play around.

                  • theturtletalks

                    today at 6:11 PM

                    OpenUI and JSON-render are some other players in this space.

                    I’m building an agentic commerce chat that uses MCP-UI and want to start using these new implementations instead of MCP-UI, but can’t wrap my head around how button onClick and actions work. MCP-UI allows onClick events to work since you’re “hard-coding” the UI from the get-go, vs. relying on AI generating nondeterministic JSON and turning that into UI that might be different on every use.

                      • FabianCarbonara

                        today at 6:16 PM

                        In my approach, callbacks are first-class. The agent defines server-side functions and passes them to the UI:

                          const onRefresh = async () => {
                            data.loading = true;
                            data.messages = await loadMessages();
                            data.loading = false;
                          };
                        
                          mount({
                            data,
                            callbacks: { onRefresh },
                            ui: ({ data, callbacks }) => (
                              <Button onClick={callbacks.onRefresh}>Refresh</Button>
                            )
                          });
                        
                        When the user clicks the button, it invokes the server-side function. The callback fetches fresh data, updates state via reactive proxies, and the UI reflects it — all without triggering a new LLM turn.

                        So the UI is generated dynamically by the LLM, but the interactions are real server-side code, not just display. Forms work the same way — "await form.result" pauses execution until the user submits.
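                        A rough sketch of how that pausing can work (illustrative names, not the exact API): form.result is just a promise that the UI's submit handler resolves, so the agent's script blocks on it until the user acts.

```typescript
// Illustrative sketch of the "await form.result" pattern: the agent's
// script awaits a promise that the mounted UI resolves on submit.
function createForm<T>() {
  let submit!: (value: T) => void;
  const result = new Promise<T>((resolve) => { submit = resolve; });
  return { result, submit };
}

// Agent-side usage (mount() wiring elided):
//   const form = createForm<{ name: string }>();
//   mount({ callbacks: { onSubmit: form.submit }, ui: /* form UI */ });
//   const value = await form.result; // resumes when the user submits
```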

                        The article has a full walkthrough of the four data flow patterns (forms, live updates, streaming data, callbacks) with demos.

                    • smahs

                      today at 7:44 PM

                      In an agentic loop, the model can keep calling multiple tools for each specialized artifact (like how claude webapp renders HTML/SVG artifacts within a single turn). Models are already trained for this (tested this approach with qwen 3.5 27B and it was able to follow claude's lead from the previous turns).

                      • tantalor

                        today at 7:08 PM

                        The nice thing about standards is that you have so many to choose from

                        • Lws803

                          today at 8:01 PM

                          I see potential to take over Notion's / Obsidian's business here. Imagine highly customizable notebooks people can generate on the fly, with exactly the kind of UI they need, compared to the fixed blocks in Notion.

                            • rthrfrd

                              today at 8:56 PM

                              That’s what I’m building, along with the invisible unified data model underneath, that is needed to tie everything together. Always glad for feedback, reach out in my profile if it sounds interesting!

                          • eightysixfour

                            today at 5:38 PM

                            There seems to be a lot of movement in this direction, how do you feel about Markdown UI?

                            https://markdown-ui.com/

                              • FabianCarbonara

                                today at 6:07 PM

                                Markdown UI and my approach share the "markdown as the medium" insight, but they're fundamentally different bets:

                                Markdown UI is declarative — you embed predefined widget types in markdown. The LLM picks from a catalog. It's clean and safe, but limited to what the catalog supports.

                                My approach is code-based — the LLM writes executable TypeScript in markdown code fences, which runs on the server and can render any React UI. It also has server-side state, so the UI can do forms, callbacks, and streaming data — not just display widgets.

                                • threatofrain

                                  today at 5:55 PM

                                  I'd much prefer MDX.

                              • 4ndrewl

                                today at 7:47 PM

                                The bots that read the instruction and yet add the emoji to the _beginning_ of the PR title though. Even bigger red flag I guess?

                                • iusethemouse

                                  today at 6:00 PM

                                  There’s definitely a lot of merit to this idea, and the gifs in the article look impressive. My strong opinion is that there’s a lot more to (good) UIs than what an LLM will ever be able to bring (happy to be proven wrong in a few years…), but for utilitarian and on-the-fly UIs there’s definitely a lot of promise

                                    • FabianCarbonara

                                      today at 6:26 PM

                                      [dead]

                                    • dominotw

                                      today at 7:51 PM

                                      Would be nice if it wasn't just UI but other forms too, like voice narration, sounds, etc.

                                      • kevindo9x19

                                        today at 8:30 PM

                                        [dead]

                                          • ZakDavydov30

                                            today at 8:40 PM

                                            [dead]

                                            • wangmander

                                              today at 6:57 PM

                                              [flagged]

                                                • Retr0id

                                                  today at 7:07 PM

                                                  What's the going rate these days for decade-old HN accounts to repurpose as AI spambots?

                                                    • pohl

                                                      today at 8:06 PM

                                                      I don’t know, but feel free to send me an offer.