A few years ago I spoke at IxDA’s Interaction event about “meta-design” — designing the conditions for good design to emerge.1 The core of my talk challenged conventions around designing with humanism at the center—not just in theory, but in practice. Since then, I’ve been gently revisiting those ideas, especially in light of AI’s rising influence on our workflows & expectations.
But first, let’s admit it — we all kinda toss around “human-centered design” like free swag at a tech conference. 😅 Everyone claims to do it, with some variation of “customer-driven” thrown in per corporate lingo (which usually just means doing what a heavily paying customer says). But do our fragile, friction-filled B2B SaaS products, services, and platforms actually feel human? Do they understand, respect, and adapt to the messy realities of real people? Do they embody virtues that resonate with the beleaguered, impatient, frustrated people who must use them daily? 🤨
After 20+ years of designing systems — from enterprise tools to AI-powered platforms — I’ve noticed a pattern. When teams drift from meaningful outcomes, it’s often because they lack one (or more) of these four pillars. Each one demands a different level of, frankly, courage. Each reminds us that human-centricity isn’t a process — it’s a stance backed by conviction. 🙌🏽
You have to believe in it — so let’s go through them…
🔭 Strategic foresight
Design for the person your user will become
This isn’t about predictive crystal balls. It’s about developing a kind of design intuition that connects the dots others miss, heads down in their spreadsheets, blinkered by metrics myopia. It’s how we recognize shifts in behavior, market, and technology — before they show up in dashboards! 🙃
For example, on a project at a web analytics startup, we were stuck optimizing the signup flow. But when we zoomed out — thinking not in days but in years — we realized we weren’t designing a conversion funnel. We were shaping habits & mindsets. We reoriented toward broader lifecycle impact that respected people as agents in the process, not simply data points feeding vanity metrics.
Especially now, with GenAI tools reshaping user expectations, we need foresight more than ever! It’s the difference between riding a hype cycle’s promise of outsized gains, and designing something resilient, useful…and even worth trusting.
🔥 Creative provocation
Discomfort can be a design tool
We often think empathy is the engine of innovation. But just as often, it’s friction! Some of the best design breakthroughs come from asking the questions that make the room squirm.
“What if we flipped this assumption?”
“What if we didn’t do it the way everyone expects?”
These provocations are less about rebellion and more about revealing blind spots. In the age of AI, where speed & automation can blur judgment, creative provocation slows us down just enough to ask about alternate possibilities that aren’t simply “chasing what everyone else is doing.”
💭 Reflection-in-Action
Think deeply while you work
This phrase, borrowed from Donald Schön, calls for real-time thoughtfulness. Not post-mortem regret, but live ethical calibration. It’s how we catch manipulative dark patterns before they go live. It’s how we notice when the AI-generated copy doesn’t just “sound off”—it subtly erodes user trust.
In a world obsessed with velocity, this kind of care is admittedly kinda radical. 😆 But necessary. Because we’re not just shipping features. We’re shaping behavior — and increasingly, belief systems. We’re shipping novel futures.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should,” as Jeff Goldblum’s Dr. Malcolm in Jurassic Park famously said.
👣 Deep Humanism
Stay curious about people—not just “users”
Too often we collapse people into segments, personas, or conversion paths — data points and business objects to be cross-tabbed and SQL-queried. But real humans don’t live inside conversion funnels. 😅 They’re navigating ambiguity, competing priorities, and the broader socio-technical systems we often ignore — all while fighting multiple apps whose notifications demand their fractured attention, and time.
Deep humanism requires us to slow down and to regard technical product complexity as an adverse, silently destructive force that warps people, not merely a problem of simplifying features. It means designing with great humility and care. It’s how we avoid flattening humans into training data or designing only for the “happy path.”
This mindset matters even more as AI systems learn from, reflect, and influence human behavior at scale. If we’re not careful, we risk reducing people to inputs — although let’s face it, that’s been happening for decades, going back to the “Big Data” craze. Deep humanism can help pull us back from that edge!
Please note: none of these pillars stand alone!
Foresight without reflection leads to armchair futurism. Provocation without humanism becomes provocation theater. It’s in the tension & balance between them that truly resonant design emerges. ⚡️
In the AI era, that integration is our new craft. We’re not just pushing pixels or plotting flows — we’re shaping futures, tempering automation with care, and designing systems that still make space for people to feel seen. This is difficult work with emerging patterns, practices, and principles to be developed.
What this demands of us
This isn’t about being precious or perfectionist. It’s about being responsible. As I wrote recently: AI doesn’t replace human judgment—it demands more of it. And that means more of us — our ethics, our empathy, our discernment. ⚖️
When Mark Templeton told our global design team at Citrix, “Designing is caring,” he didn’t mean it in a soft, sentimental way. He meant diligence, rigor, thoughtfulness—stewardship. The four pillars above are critical lenses through which that care becomes visible, and through which our work becomes meaningful, beyond the hype cycle or the latest sprint deliverable.
So the question isn’t whether we can afford to design this way. The real question is: what kind of world are we designing toward — and will people feel belonging in it?
Because in this AI-fueled era, our job isn’t just to speedily ship features for the metric. It’s to shape conditions for dignity, for trust, for possibility. That’s what real design stewardship looks like. 😁
And maybe, if we aspire to these pillars, we won’t just adapt to the future.
We’ll help design one worth arriving in. ✨
1. Seems the 2021 Interaction videos are all down, but you can watch a version I recorded for the Rosenfeld Community here: https://rosenverse.rosenfeldmedia.com/videos/the-rise-of-meta-design-a-starter-playbook-videoconference/