On Generative UI and DX as the final UX meta
28 March 2026
I remember the first time I experienced good developer experience. I didn't know the term yet, but I felt it, and I became obsessed. It was the first time I ever deployed a project to Vercel.
It was glaringly good: tightly integrated into Git workflows, with a UI so well crafted, so full of minimalist attention to detail and intentionality. I was enchanted, not only by the pretty pixels in my deployment platform but by the feeling of being empowered to create and share.
Software is changing. It is now abundant and ubiquitous: a set of patterns algorithmically predictable to state-of-the-art intelligence systems. Enter the post-chatbot UI hypothesis, Generative UI: software in a constant state of flux, dynamic, designed with the deepest desire to solve very specific and contextual problems. But a problem arises: non-deterministic UX is an unsolved pattern. A system in constant flux sounds glamorous, futuristic even; our obsession with novelty and "more" could not be better satisfied by the idea.
But a bleak reality remains. The socio-cultural artefacts we produce as designers have a massive impact. Take the $300 million button: a single registration form standing between users and checkout cost one e-commerce company hundreds of millions before anyone noticed the friction. Or the infinite scroll, swallowing whole the ambitions and dreams of millions, an addiction centre in your pocket. Likes and follower counts turn into depression and anxiety. So if humans have even less control over the interfaces delivered to users, what does that world look like, now that we've begun to understand the virtual can be tragic?
And so, in our quest for a more generative and contextual interaction pattern, how much freedom do we give the system over what the interface could become?
At the core of it all is a problem that spans three layers simultaneously — DX for the agents doing the building, UI for the interfaces they generate, and UX for the humans who ultimately use them. We've collapsed the entire design process into a set of parameters we haven't figured out how to write yet.
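One way to picture those unwritten parameters, purely as a hypothetical sketch, is as a constraint contract an agent must satisfy before a generated interface is allowed to ship. Every name below is invented for illustration; there is no real API here, only the shape of the idea: encoding the checkout-form and infinite-scroll lessons as checkable limits on interaction cost, dark patterns, and flux.

```typescript
// Hypothetical sketch: "parameters" for a UI-generating agent,
// expressed as a contract checked before an interface renders.
// All names are illustrative, not a real framework.

interface GeneratedUI {
  interactionCost: number;   // estimated steps to complete the user's task
  usesDarkPatterns: boolean; // e.g. registration walls, infinite scroll
  layoutDelta: number;       // 0..1, how much the UI changed since last render
}

interface UIConstraints {
  maxInteractionCost: number; // friction budget (the $300M-button lesson)
  allowDarkPatterns: boolean;
  maxLayoutDelta: number;     // cap on flux so the UI stays learnable
}

function permitted(ui: GeneratedUI, c: UIConstraints): boolean {
  return (
    ui.interactionCost <= c.maxInteractionCost &&
    (c.allowDarkPatterns || !ui.usesDarkPatterns) &&
    ui.layoutDelta <= c.maxLayoutDelta
  );
}

const constraints: UIConstraints = {
  maxInteractionCost: 3,
  allowDarkPatterns: false,
  maxLayoutDelta: 0.25,
};

// A checkout flow that adds a registration wall fails the contract:
const registrationWall: GeneratedUI = {
  interactionCost: 7,
  usesDarkPatterns: true,
  layoutDelta: 0.8,
};
```

The interesting design question is not the boolean check itself but who writes the constraint values, and whether they can be contextual without being gameable.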
New questions arise. What are the parameters for empowering agents to create these interfaces, assuming this is the future? And I suspect the answer, like most things in life and in working with AI, will depend a whole lot on the context. We've just never had to design for that before.