Your design team's "AI readiness"
How to stop panicking about AI & start shaping conditions that move you forward
Oof. It’s already November 2025, and I can feel some of that anxiety radiating from design Slack channels, LinkedIn DMs, and leadership syncs. “OMG, we’re so behind on AI! Everyone else has already figured this out! Time is running out!” 😬 Hold on, folks. The actual reality is: nobody has figured this out yet. Let’s not conflate the frequency of [AI tool name] mentions with confidence that it’s all settled and done. You still have time to figure out your MCP and RAG while spinning up agentic workflows with “progressive trust”. 🙃 (a primer here)
We’re all pioneers stumbling through this continuously shifting frontier. The question isn’t whether you’re “too late”—it’s whether your teams are ready to navigate this messy, uncertain terrain together. And if you’ve built strong internal design practices before, you likely already know more than you think. The parallels between building “design readiness” and building “AI readiness” haven’t changed—they’ve just become a bit more urgent.
The readiness question
Back around 2010 at Citrix, we were a scrappy crew of fewer than ten designers trying to elevate design across a 25-year-old legacy enterprise corp. We couldn’t serve every product team that wanted our help, so we made a deliberate choice: we only engaged with teams that were ready to integrate design into their practice.
Not teams that said the right buzzwords. Not teams with the biggest budgets or executive sponsors. Teams that demonstrated real curiosity, tolerance for ambiguity, and a willingness to do the hard, sometimes backwards-moving work of design—mixed-team workshops, customer research, fail-fast prototyping, etc.
There was one particular team that clearly wasn’t ready—insisting on tight dev schedules, treating design as UI cleanup, basically wanting decoration slapped on. 🙄 We politely agreed to reconnect a year later. Spoiler alert: after a couple of years of maturing their approach, that team got excellent design support and absolutely thrived! 🙌🏽
The lesson? Readiness isn’t about perfection. It’s about preparation.
And now, as we address AI integration, this same principle applies—but the stakes feel higher because the landscape is shifting faster than any of us can comfortably process. A new tool or method every week! And yes, those executives pushing AI mandates while moving quarterly goalposts. I can’t fix that! But here’s what we can do…
Your team’s ability to successfully integrate AI into design practices isn’t about having the latest tools, the perfect workflow, or some 32-point implementation roadmap. 😆 It’s about cultivating mindsets and behaviors that shape the conditions for learning, experimentation, and effective integration. Here’s what I’ve been advising clients and colleagues over the last few months, based on observations and anecdotal evidence.
‣ Principle: Progress over perfect
Just as teams needed curiosity & active listening to embrace design, they now need comfort with incremental learning about AI tools and methods. This means:
Entering AI experiments with genuine curiosity about what works and what doesn’t, not predetermined panicky conclusions about replacement or obsolescence.
Recognizing that best practices don’t exist yet—we’re all still making this up as we go, and that’s OK! 🙃 Improvise & learn.
Accepting that this week’s “cutting-edge” AI workflow might be outdated by next week, and staying flexible with that velocity of change. Resiliency is key.
The teams struggling most are waiting for that definitive guide, the proven playbook, the executive mandate that tells them exactly what to do. Meanwhile, the landscape keeps evolving right underneath them. The teams navigating this well are continuously running scoped experiments, sharing what they learn (wins & setbacks), and adjusting their approach based on real feedback. 🙌🏽
It’s all a work in progress, in real time. Not ideal, but that’s how to frame the opportunity ahead with AI capabilities.
‣ Principle: Everything is a prototype
Remember how the design process requires tolerance for ambiguity, throw-away prototypes, and taking steps backwards to move forward? Well, with AI generating dozens of variations in minutes, this principle becomes absolutely critical. 🤨
Teams need your strong support for the following:
Treat AI-generated concepts, personas, layouts, and copy as starting points for discussion, not finished solutions requiring immediate approval or rejection.
Develop comfort with quickly filtering through high volumes of options without getting paralyzed by choice or rushing to that first “perfect” solution. This is where UX principles, risk/tradeoff criteria, or 2x2 matrices can all help.
Realize that GenAI outputs are prototypes & exploration, not absolute truth or final deliverables. Vibe-coded artifacts are not yet production code.
I’ve heard of some teams just being overwhelmed by the sheer volume of AI-generated options—“Which one is right? How do we decide? What if we pick wrong?”—when the real value is in using those variations to provoke better questions and surface hidden assumptions.
‣ Principle: Respect the designer’s agency
Here’s where it gets nuanced, and where a readiness mindset really matters. AI can play valuable roles in design workflows:
Operational support: Handling admin tasks, documentation, coordination
Supplemental intelligence: Surfacing considerations, catching blind spots, identifying risks
Creative collaborator: Generating options, offering alternative perspectives, accelerating iteration
Synthesis engine: Processing large amounts of information into digestible summaries
But the human designer remains the steward making the critical choices about creating, judging, deciding, and choosing. Teams demonstrating readiness understand this distinction. They don’t treat AI as either a singular magical solution or an existential threat. They see it as a capable instrument that still requires human creativity, contextual understanding, ethical reasoning, and the ability to balance competing human needs & motives—also known as politics. 🙃
Teams that aren’t ready? They either:
Abdicate judgment entirely to AI (“Well, ChatGPT suggested this so...”)
Reject AI tools completely out of fear or territorialism
Use AI to bypass design expertise rather than augment it
Cross-functional collabs in a blurry world
AI tools are wonderfully/terrifyingly blurring traditional role boundaries. Developers might generate UI mockups. Product managers might create user journey visualizations. Designers might write actual functioning code. This is both exciting and rather anxiety-inducing! 😅
Teams demonstrating AI readiness:
Prepare for this fluidity rather than defending territorial expertise, with a formal (yet not burdensome) RACI/DACI model that clarifies final decision-makers while recognizing each discipline’s unique value
Create psychological safety for everyone to experiment with AI in their own way (which inevitably means some failures and setbacks), while respecting deep expertise
Remember: the PM generating a quick mockup in v0 doesn’t negate the designer’s ability to create coherent, accessible, brand-aligned experiences that actually solve user problems. But it might help the PM convey their thinking more clearly, which is always valuable! 😊
The real question is: does your team have the maturity to navigate these blurred boundaries with generosity and clear-eyed assessment of what AI does well versus what humans uniquely contribute?
The intersection that matters most
Here’s what I’m realizing in my client conversations & coaching sessions: truly AI-ready design teams combine individual confidence-building (that frontier pioneer mindset I wrote about earlier this year) with team-level preparation (the readiness practices above). You can’t just tell designers “go learn AI tools!” without creating team conditions that value experimentation, tolerate failure, and respect the messy learning process. And you can’t just mandate “we’re all using AI now!” without building individual designer confidence that their core human abilities—creating concepts, judging viability, deciding direction, choosing solutions—remain irreplaceable and increasingly valuable.
In my view it’s really this intersection that unlocks genuine progress:
Individual designers who believe: “I’m learning to steward AI as a powerful instrument that augments my creativity and judgment, not replaces it. My willingness to experiment matters more than my years of experience.”
+
Teams that demonstrate: “We create space for AI experimentation while respecting design expertise. We provide strategic context without art directing. We’re patient with the learning curve and celebrate insights from both successes and failures.”
=
Organizations that can actually integrate AI thoughtfully rather than chaotically chasing tools or defensively rejecting change.
OK so...what now?
If you’re feeling that November anxiety—“OMG, we’re so behind! Everyone else has figured this out! Time is running out!”—here’s my challenge to you:
Stop asking “Are we using the right AI tools yet?” and start asking “Are we ready to learn together?”
Because “readiness” isn’t about having perfect expert answers. It’s about setting up the conditions, mindsets, and behaviors that enable everyone to discover answers through thoughtful experimentation.
A few specific, practical next steps to get your team moving:
Audit your team’s AI readiness using the principles above. Where are you strong? Where are you struggling? Be honest—this isn’t about blame, it’s about understanding starting conditions. Here’s a GenAI prototype of an audit tool based on client chats and my own experiments. Give it a shot! (There’s also a rough scoring sketch of the same idea right after this list.)
Start small & learn together. Pick one stage in your design process. Choose one AI tool. Run a focused experiment. Most importantly: Capture & share what you learn—the insights, the failures, the unexpected discoveries. Make learning visible and valued.
Create that space for both AI experimentation and human expertise. Encourage your designers to try AI tools in low-stakes situations. Simultaneously, reinforce the irreplaceable human judgment they bring.
Model the “Progress Over Perfect” mindset. As a manager or design leader, share your own AI learning journey—including the awkward fumbles and dead-ends. Give your team permission to be imperfect learners.
Revisit the fundamentals. The same principles that made teams ready for the design process—curiosity, respect for expertise, tolerance for ambiguity, willingness to explore “wrong” options—apply to AI integration. Strengthen these muscles.
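To make that first audit step a bit more concrete (and separate from the prototype linked above), here’s a rough, purely illustrative sketch of what a lightweight team self-check could look like: everyone rates the team 1–5 against the readiness principles in this piece, and the lowest-scoring principle becomes your first experiment area. The principle labels, the 1–5 scale, and the thresholds are my own assumptions for illustration, not a validated instrument; adapt freely.

```python
# Purely illustrative sketch of a lightweight AI-readiness self-check.
# The principle labels, 1-5 scale, and thresholds are assumptions for
# illustration only -- not a validated assessment instrument.

PRINCIPLES = [
    "Progress over perfect",
    "Everything is a prototype",
    "Respect the designer's agency",
    "Cross-functional collaboration",
]


def readiness_summary(scores: dict[str, int]) -> str:
    """Turn 1-5 self-ratings per principle into a short, readable readout."""
    lines = []
    for principle in PRINCIPLES:
        score = scores.get(principle, 0)
        status = (
            "strong" if score >= 4
            else "developing" if score >= 3
            else "needs attention"
        )
        lines.append(f"{principle}: {score}/5 ({status})")
    average = sum(scores.get(p, 0) for p in PRINCIPLES) / len(PRINCIPLES)
    lines.append(f"Overall: {average:.1f}/5 -- start experimenting where the score is lowest.")
    return "\n".join(lines)


if __name__ == "__main__":
    # Example: a team rates itself after a quick retro conversation.
    print(readiness_summary({
        "Progress over perfect": 4,
        "Everything is a prototype": 3,
        "Respect the designer's agency": 5,
        "Cross-functional collaboration": 2,
    }))
```

The point isn’t the code, of course. It’s getting the team to agree on what, say, “a 2 on cross-functional collaboration” actually looks like, and then picking one small experiment to move it.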
The design teams that will thrive aren’t the ones with the most sophisticated AI + UX workflows by Jan 1, 2026. They’re the teams building the conditions for continuous learning, adaptation, and thoughtful integration where it’s relevant, as this wild new landscape keeps evolving. Who knows what will happen next spring or summer, as new tools or models appear with ever more capabilities…and some inevitable flops & setbacks too! 😅 The only constant is ongoing readiness & resiliency to handle whatever emerges next.
The question is: are you ready to shape it thoughtfully? 🙏🏽
What readiness challenges is your team facing with AI integration? What experiments are you running? I’d love to hear about your wins, struggles, and everything in between. Hit reply and let’s learn from each other.

