Agent Infrastructure · agents · llm · ui · open_source


Model UI Protocol (MUP) embeds interactive HTML-based UI directly in LLM chat, enabling both users and LLMs to trigger the same functions and see each other's actions in real time.
Show HN: MUP – Interactive UI inside LLM chat, so anyone can use agentic AI

Agentic AI is powerful, but most people never experience it: it's trapped behind text commands and dev tools.

MUP (Model UI Protocol) lets you embed interactive UI directly in LLM chat. Each MUP is just a single .html file. The same functions can be triggered by the user (clicking a button) or by the LLM (function call). Both sides see each other's actions in real time.
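The core idea, a single set of functions reachable from both a button click and an LLM function call, might be sketched like this. This is a hypothetical illustration, not the actual MUP API; the handler names, the `invoke` entry point, and the shared action log are all assumptions made for the example.

```javascript
// Hypothetical sketch of MUP's symmetry: one function registry that both
// the user (via UI events) and the LLM (via function calls) invoke.
// All names here are illustrative, not the real protocol surface.

const handlers = {};   // function name -> implementation
const actionLog = [];  // shared log, so each side "sees" the other's actions

function register(name, fn) {
  handlers[name] = fn;
}

// Single entry point for both sides; `source` is "user" or "llm".
function invoke(source, name, args) {
  const result = handlers[name](args);
  actionLog.push({ source, name, args }); // visible to both sides
  return result;
}

// Example MUP function: set a pixel in a tiny drawing grid.
const grid = {};
register("setPixel", ({ x, y, color }) => {
  grid[`${x},${y}`] = color;
  return grid[`${x},${y}`];
});

// A button's onclick and an LLM tool call would both route here:
invoke("user", "setPixel", { x: 0, y: 0, color: "red" });
invoke("llm", "setPixel", { x: 1, y: 0, color: "blue" });
```

In a real MUP .html file the "user" path would be wired to DOM event listeners and the "llm" path to the host's function-call dispatch, but both would converge on the same registered handlers.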

The repo includes a PoC host and 9 example MUPs. Demo mode lets you interact with the UI side without an API key. Add an OpenAI key to see full LLM-UI collaboration.

Demo videos in the README show things like: drawing pixel art then charting its colors, a camera that captures a scene and the LLM recreates it, making beats on a drum machine with the LLM.

I'd love feedback on the protocol design.
