**AI increases time-on-task by 20 to 40%, error rates are climbing, and users are quietly disengaging. The problem isn't the technology. It's that nobody asked the user's brain.**
Sophie Aldebert is a UX researcher at Citi in London who's spent years studying how people interact with digital products. She has a chemical engineering background, has worked across startups and enterprises, and currently designs research for internal banking systems. In this episode, she brings the cognitive science perspective that most AI conversations skip entirely: what actually happens in a user's head when you ship AI-powered features, and why faster development cycles don't automatically translate to better user experiences.
## About this episode
The conversation starts with a productivity paradox. AI companies are shipping tools at unprecedented speed, but the people using those tools, even tech-savvy practitioners, already feel behind. Sophie argues this isn't just a communication problem. It's a cognitive load problem. Working memory can hold roughly three to seven items at once. Every new AI feature, every changed workflow, every output that needs verification eats into that limited budget.
Change fatigue comes up early and stays throughout. Sophie points to the difference between Apple's approach (keeping core interactions stable while layering in new capabilities) and the Windows 2000 era of ripping everything out and starting fresh. The lesson: keep your core flows recognizable. If you redesign overnight, users don't adapt. They leave.
One of the sharpest observations is about how AI shifts the user's role from doing to verifying. Before AI, you clicked a button and got a predictable result. Now you write a prompt and check whether the output is good enough. That checking takes effort. It requires trust. And trust, Sophie notes, is one of the hardest UX metrics to measure. You can track undo rates, retry counts, and time-on-task, but whether a user actually believes the system is working for them is deeply subjective.
The episode also pushes back on the reflex to put AI everywhere. Sophie makes the point that some features work fine without it, and the decision to add AI should start with the user's actual problem, not with a stakeholder who saw a demo. She singles out Microsoft's Copilot portfolio as a cautionary tale: dozens of products sharing the same name, each doing something slightly different, none of them making users feel more in control.
The conversation closes with practical advice for UX researchers feeling the pressure to master every new AI tool. Sophie's take: be selective, stay curious, lean into the human skills that AI can't replicate (empathy, judgment, stakeholder navigation), and stop feeling guilty for not spending your weekends in Cursor.
## Key takeaways
- AI interfaces increase cognitive load by default. Studies show 20 to 40% more time-on-task even when the AI is well-implemented, because novelty and verification both consume working memory.
- Change fatigue is real and measurable. Keep core user flows stable. Communicate what changed and why. Give users the option to self-serve their way through updates rather than forcing them to call support.
- Trust is a critical UX metric for AI products, but it's subjective and hard to quantify. Proxy metrics like undo rate, retry frequency, and prompt iteration count can help, but they still need qualitative research to interpret.
- Not everything needs AI. The best UX practice emerging is to start with the user's problem, evaluate whether AI actually solves it, and be willing to say no when the answer is simpler without it.
- Human skills matter more, not less. Empathy, critical thinking, collaboration, and the ability to manage opinionated stakeholders who just built a prototype in Claude are the differentiators for UX practitioners going forward.
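The proxy metrics mentioned above can be pulled from ordinary interaction logs. The sketch below is purely illustrative: the event schema, field names (`session`, `type`), and event vocabulary (`prompt`, `undo`, `retry`, `accept`) are assumptions, not any real product's telemetry format.

```python
from collections import Counter

def trust_proxy_metrics(events):
    """Compute rough trust proxies from a list of UI event dicts.

    Each event is assumed to look like {"session": str, "type": str},
    where "type" is one of "prompt", "undo", "retry", "accept".
    The schema is hypothetical, for illustration only.
    """
    counts = Counter(e["type"] for e in events)
    prompts = counts["prompt"]
    actions = sum(counts.values())
    sessions = {e["session"] for e in events}
    return {
        # Share of all actions that reverse a previous AI result.
        "undo_rate": counts["undo"] / actions if actions else 0.0,
        # How often users re-ran the model instead of accepting output.
        "retry_rate": counts["retry"] / prompts if prompts else 0.0,
        # Average prompts per session: higher can signal more iteration,
        # and possibly lower trust in the first-shot output.
        "prompts_per_session": prompts / len(sessions) if sessions else 0.0,
    }

events = [
    {"session": "a", "type": "prompt"},
    {"session": "a", "type": "retry"},
    {"session": "a", "type": "prompt"},
    {"session": "a", "type": "accept"},
    {"session": "b", "type": "prompt"},
    {"session": "b", "type": "undo"},
]
print(trust_proxy_metrics(events))
```

As the takeaway notes, numbers like these only flag where trust might be eroding; qualitative research is still needed to interpret why users are undoing or retrying.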
## About the guest
Sophie Aldebert is a UX researcher at Citi in London with a background in chemical engineering. She has worked across B2B and B2C in both startup and enterprise environments and currently focuses on internal systems research in the banking sector. She co-organizes UX Crunch, a London-based community event for UX practitioners, and actively mentors people entering tech through programmes at UCL and King's College London.
[Sophie's LinkedIn](https://uk.linkedin.com/in/sophie-aldebert) | [Sessionize](https://sessionize.com/sophie-aldebert/)
## Resources mentioned
- [Google NotebookLM](https://notebooklm.google/) — AI-powered research tool Sophie mentioned for compiling and querying documents
- [Nielsen Norman Group: Minimize Cognitive Load](https://www.nngroup.com/articles/minimize-cognitive-load/) — foundational reference on cognitive load in UX
## Listen & subscribe
Find the Stellar Work Podcast on [Spotify, Apple Podcasts, YouTube, and more](https://stellarwork.start.page).
For weekly essays on transformation, flow, and AI in knowledge work, [join the Stellar Work newsletter](https://substack.com/@stellarwork).
---
*The Stellar Work Podcast is hosted by Ben, founder of Stellar Work. Conversations with the people shaping how work actually gets done.*