The Role of Designers in AI Ethics with Tomasz Hollanek
In this episode, we talk to Tomasz Hollanek, researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Tomasz argues that design is central to AI ethics and explores the role designers should play in shaping ethical AI systems. The conversation examines the importance of AI literacy, the responsibilities of journalists in reporting on AI technologies, and how design choices embed social and political values into AI. Together, we reflect on how critical design can challenge existing power dynamics and open up more just and inclusive approaches to human–AI interaction.
Tomasz Hollanek is a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, working at the intersection of AI ethics and critical design. His research focuses on critical approaches to human–AI interaction design, drawing on critical AI studies and broader critical studies of technology. Tomasz examines how established social and economic power dynamics shape technologies and design practices, often leading to marginalization or discrimination, and how these dynamics become embedded in AI systems. Through his work, he aims to question dominant narratives around AI and explore more reflective, responsible, and inclusive design practices.
Reading list:
Leverhulme Centre for the Future of Intelligence https://www.lcfi.ac.uk/
Donald Norman's The Design of Everyday Things https://www.londonreviewbookshop.co.uk/stock/the-design-of-everyday-things-donald-a.-norman
Anthony Dunne and Fiona Raby's innovations in design theory https://dunneandraby.co.uk/content/projects
Aisha Sobey's Social AI Policy Futures workshop https://www.jesus.cam.ac.uk/articles/social-ai-policy-futures-workshop
Philip Agre's critical technical practice https://pages.gseis.ucla.edu/faculty/agre/critical.html
High-Risk EU AI Act Toolkit https://www.lcfi.ac.uk/resources/heat-high-risk-eu-ai-toolkit
AI Act Article 4 https://artificialintelligenceact.eu/article/4/
Transcript:
Kerry McInerney (00:57)
In this episode, we talk to Tomasz Hollanek, a research fellow at the Leverhulme Centre for the Future of Intelligence. Tom introduces us to the field of critical design studies and explores how it aims to make both ordinary users and tech designers more critical. He asks us what a good life with technology looks like and explores how intersectional feminist thought can help us think differently about design and who counts as a designer. We talk about the trade-offs between friction and ease in product design and how design itself is a form of compromise.
Together, we also explore the promise and the perils of toolkits for translating AI ethics, how we should approach the question of AI literacy, and how journalists can report responsibly about AI. We hope you enjoy the show.
Kerry McInerney (01:39)
Thank you so much for joining us here today. We're so delighted to have you here. So just to kick us off, could you tell us a little bit about who you are, what you do, and what brings you to feminism, gender, and technology?
Tomasz Hollanek (01:51)
Hi both, thank you so much for having me on the podcast. I've been a huge fan since you began a few years ago.
Thank you for having me. So my name is Tomasz Hollanek, and I'm a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. I work at the intersection of AI ethics and critical design, primarily thinking about critical approaches to human–AI interaction design. And if I had to tell you in a nutshell what critical design is: critical AI studies, and critical studies of technology more broadly, really focus on questioning established social and economic power dynamics, how these dynamics often lead to the marginalization or discrimination of specific groups of users, but also of producers of technology, and how these established dynamics then get translated into objects of design, including AI systems.
So critical design is really about asking: what do we do with this knowledge? How do we design differently if we know about all these possible negative outcomes of technological development? And in many ways, many movements under this bigger umbrella of critical design have been shaped by intersectional feminist critique, which helps us think more critically and in a more complex way about the users of technology, their different characteristics, and how those characteristics might lead to different forms of oppression, but also about the role of the designer, and how some people never really got to design, or how their work was never really rewarded by the industry.
Eleanor Drage (03:25)
I'm excited to hear your response to our big three Good Robot questions. So what is good technology? Is it even possible? And how can feminism help us get there?
Tomasz Hollanek (03:31)
Right, so I knew obviously you would be asking this question. I was sort of struggling to figure out how to respond briefly. And I think I kind of had to rephrase the question for myself and ask, “What does a good life with technology look like?” And that question, reformulated this way, immediately makes me think about the context in which we can imagine this good technology sort of manifesting. So it makes me think about the people and how they're interacting with the technology.
The idea of good technology becomes less abstract when we think about it in context. We also realize that what might be good for some people might be bad, or indeed evil, for others. So thinking about what good means in this context is always relational in some sense. And this move that I'm proposing with this reformulation, I guess, mimics something that design theorists did in the 80s and early 90s: they moved from a technology-centered approach to a human-centered approach. This started with Don Norman's work, for instance, where the idea was to design not by putting the technology at the center and asking what technology can do, but rather by asking how people interact with this technology and what they actually need.
Having said this, the human-centered approach has its own issues. The main one is that the idea of good associated with human-centered design really links goodness with what is easily adaptable, frictionless, smooth, and intuitive. Critical design makes us question this too, because what if friction, making the user pause, is also necessary? So in many ways, for me, good technology is critical. And this means both that it's designed critically and that it has to incite critique; it has to make the user think. That's why, sometimes, human-centered design and the idea of goodness implied in that approach is not the most effective one.
Eleanor Drage (05:32)
There's a whole movement of designers who want to make us more critical, to stop and think about the tools that we're interacting with. And this is really important because I don't know about you, but I use things sometimes completely unthinkingly. So, can you explain this movement and how it affects the people using the tech? So you and me.
Tomasz Hollanek (05:53)
Sure. In some sense, we can relate this way of thinking about design to early work by Anthony Dunne and Fiona Raby in London and their innovations in design theory, in design thinking, but also in design pedagogies. For Dunne and Raby, critical design is really not about functionality, but criticality. And indeed, in their work, they mention this idea of cognitive glitching as something desirable. So critical design is the kind of design that makes the user think about how things are and what is wrong with these established ways of designing, but also of interacting with technologies and design objects more broadly. I find this idea of cognitive glitching as something desirable, as a productive feature of a given technology, actually very relevant to what we are witnessing now with the rise of conversational AI, AI assistants, and AI companions. When we think about the impact of these technologies, research suggests that one of the issues we might see is that they might be too obedient, or our interactions with these assistants might be too smooth, and the expectations we develop as we interact with assistants or other forms of conversational AI might then influence how we interact with humans, because we might carry unrealistic expectations into those interactions. And there are all kinds of other issues, including manipulation; in many ways, AI assistants are meant to be immersive, even more immersive than other technologies. So bringing the idea of cognitive glitching into the picture here, and thinking about how friction can be productive for the design of these technologies, is something that I'm thinking about and working on now. I'm starting a product policy innovation group that will explore how friction can be desirable, by potentially thinking about new mechanisms for consent elicitation, for instance, or new ways of showing disclaimers to users. In other words, different mechanisms that disrupt the experience without frustrating the user, maybe taking them out of the experience in some sense. So one idea that was flagged up at a workshop that I organized with one of my Cambridge colleagues, Aisha Sobey, a few months ago, was the idea of recurring consent: maybe users should be asked to consent to interactions with these systems multiple times, not just once when they sign up for a given service.
Eleanor Drage (08:32)
Can I just briefly ask? So recently, somebody said to me that they are fed up with getting repeatedly asked to consent to something, and they find it so irritating. So how do you deal with that? Because ultimately, it's asking for more labor. Like you have to look through the small print again and see what's changed and consent again, or not consent again. What does this repeated consent look like?
Tomasz Hollanek (08:53)
I mean, I worked on this question with my students at Cambridge, and it's not easy. There is no easy answer here, but the goal really is to make sure that design does not become invisible to users. And so you're right: we learn not to think about clicking the accept, reject, or decline button. Whether you're naturally inclined to consent or not, we do it mindlessly, precisely because of how these consent elicitation mechanisms are designed. But there are ways of thinking about these mechanisms that could help users consent more critically. One idea that was flagged up by my students was to use the time when you're waiting for a response from a system, that brief moment when a system is generating a response, to show users some extra information about the given system, when they have to wait anyway. Maybe it's not something that should be text-based. Maybe there are different, more interactive forms of drawing the user's attention that can be both productive and not necessarily frustrating.
But it's an open-ended question, and that's why we are starting this working group, because it is really difficult to design in a way that doesn't frustrate the user but at the same time makes them think, especially bearing in mind that different users might have very different preferences for very different reasons, including cognitive diversity or neurodiversity. So, no easy answers here, but I think there are some ways in which critical design can inspire us to think about specific blueprints for this kind of interaction design.
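To make the recurring-consent and wait-time-disclaimer ideas above a little more concrete, here is a minimal sketch in Python of how a conversational AI wrapper might implement them. Everything here is a hypothetical illustration under simple assumptions (a text chat loop): ChatSession, generate_reply, CONSENT_INTERVAL, and the disclaimer texts are invented for this example and are not an existing API or the working group's actual design.

# Hypothetical sketch: recurring consent plus disclaimers shown while the
# user is already waiting for a reply. Names and thresholds are illustrative.
import random
from typing import Optional

DISCLAIMERS = [
    "Reminder: you are talking to an AI system, not a person.",
    "This system can make mistakes; verify important information.",
    "Your messages may be stored to improve the service (see the privacy policy).",
]

CONSENT_INTERVAL = 20  # re-elicit consent every 20 exchanges (arbitrary choice)


def generate_reply(prompt: str) -> str:
    """Stand-in for a call to some conversational model."""
    return f"(model reply to: {prompt!r})"


class ChatSession:
    def __init__(self) -> None:
        self.turns = 0
        self.consented = False

    def _ask_consent(self) -> bool:
        """Recurring consent: asked at the start and periodically afterwards."""
        answer = input("Do you consent to continue interacting with this AI system? [y/n] ")
        return answer.strip().lower() == "y"

    def send(self, prompt: str) -> Optional[str]:
        # Re-elicit consent on the first turn and every CONSENT_INTERVAL turns.
        if self.turns % CONSENT_INTERVAL == 0:
            self.consented = self._ask_consent()
        if not self.consented:
            return None

        # Use the moment the user is waiting anyway to surface a short,
        # rotating disclaimer instead of an extra blocking dialog.
        print(random.choice(DISCLAIMERS))

        self.turns += 1
        return generate_reply(prompt)


if __name__ == "__main__":
    session = ChatSession()
    reply = session.send("Hello!")
    if reply is not None:
        print(reply)

The one design choice worth noting is where the friction sits: the disclaimer appears during generation latency, the pause the user already experiences, which is the spirit of the students' suggestion Tomasz describes, while consent is re-elicited only at intervals rather than on every message.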
Kerry McInerney (10:39)
Thank you. I think that's really fascinating. And I think what you're describing in many ways comes back to such a core problem in AI ethics, but also technology ethics more broadly, which is this question of trade-offs and what we choose to value and prioritize. And I think what your work is doing in such a crucial way is pivoting towards saying that actually this level of friction, this kind of cognitive glitching, is so important that maybe we need to encourage people to embrace some of that labor and some of that friction, even if it can be, in a lived way, really frustrating. And I say that with a huge amount of sympathy for Eleanor's point; I think we've all experienced this with cybersecurity measures like two-factor authentication, when you're like, "I know this is so much more secure, but my goodness, is it annoying having to open something using that app?" So I always wonder how designers grapple with this question of trade-offs. But I actually want to ask you a little bit about the design side, because you've described the way that designers are trying to make users more critical of the technologies that they use. But the other aspect of critical design studies and theory is trying to make designers themselves more critical. I think a really encouraging thing is that we're moving beyond the paradigm where ethics and regulation are only the business of the lawyers and the philosophers, and the engineers and designers don't have to think about them. I think that public paradigm has really shifted, and there is, I hope, more of an expectation that designers, engineers, and computer scientists are bought into these ethical projects. But I'd love to hear from you: how do you think designers can become more critical in the way that they design?
Tomasz Hollanek (12:10)
Great question. I'll respond in a second, but just very quickly, to go back to the previous point about friction and reconciling the objectives of critique with the objectives of design: I would say that we need to learn to see design as a form of compromise, and some critical design positions come across as quite radical in that they reject industry standards, or indeed reject the idea of usability as something desirable. And indeed, critical design as a field has been criticized precisely for this rejection of industry standards, not only because it makes critical design as a way of thinking potentially not useful for practicing designers, but also because it turns critical design into something deeply elitist, something that you can only see or appreciate or encounter in a gallery, and the art space and the art world come with their own set of institutional hierarchies and issues. And this actually allows me to go back to your question, Kerry, which is: how do we make designers more critical, especially those who are not taught much about feminist critique or ethics more broadly?
There was an approach proposed by Philip Agre in the 1990s called critical technical practice. Agre himself was trained as an AI engineer and computer scientist, and he describes this process of essentially learning how to engage with humanities work not as something that you need to translate into technical terms and technical processes, but as something you should engage with in its own right. And the point of this engagement, according to him, was to make computer scientists aware of their own epistemological framework: essentially, to make them aware of what it is that they see as knowledge, what they see as worthy of attention when it comes to defining research objectives. So, in other words, for him, the point of reading or engaging with humanities research was to develop epistemic humility, a fancy term for something that we indeed try to teach computer scientists at Cambridge by making them engage with some of these critical texts directly.
Eleanor Drage (14:31)
What does it mean, epistemic humility? Or even actually engaging with critical texts? Describe these things for potentially non-academic listeners.
Tomasz Hollanek (14:33)
I guess it means… Yeah, sure. So epistemic humility, in layman's terms, would be this position where you recognize that there are some things that you will never know based on your own experience, your own set of tools for understanding the world, your own educational background. That means that to understand the world, but also to understand what good design is, you need to engage with others.
So this goes back to that question of relationality as something desirable that we started with. But then again, coming back to Kerry's question about how we make designers engage: at university, we can make our students read some critical texts and discuss them in class with us, but in industry that's not necessarily a desirable objective for a team that is actually working on developing a technology.
Naturally, the question of translation comes into the foreground here: how do we translate some of these more critical, perhaps radical or difficult concepts into something that is more easily digestible for a broader audience? There are so many tools out there that try to do this, to translate the theory of AI ethics or critical studies of technology into design practice. There are so many, in fact, that toolkit scoping work has become a sub-genre of AI ethics scholarship in its own right. What I did a few months ago, and I have a paper published on this, was to look into the landscape of AI ethics toolkits and try to understand, building on these other critiques, why these toolkits are not really working, why they are not really helping developers do better. Very briefly, the three main points of critique were these. First, many of these tools that are supposed to translate values such as transparency or justice into practice, into development, do not do so well because they represent a very specific mindset in the first place. They are not really inclusive enough, so they might be missing some values, such as environmental sustainability, which is very often overlooked in these conversations. The second problem with tools and toolkits that aim to translate the idea of ethics into development practice is that ethics very often becomes simplified or oversimplified; it is presented as a set of actions to be executed, turning the idea of ethics into a checklist. And finally, and very importantly for me, even if some more critical terms and methods are included in these tools or guidelines, participatory design for instance, they are very often decontextualized in ways that foreclose their transformative potential, and they might lead to ethics washing more than to actual, genuine change in how things are developed. So, in the paper, I also looked at a different set of tools, a set of tools that are informed by feminist, intersectional critique and critical design more broadly.
And very briefly, these alternative tools, you could say, are never really mentioned by practicing AI ethicists, whatever that means. What they do is really try to reintroduce friction to the conversation. How do they do that? Well, these tools don't aim to bring tools together; rather, they aim to bring people together. So they make it clear that if you want to think about ethics, you might need to engage with affected communities and civil society organizations. They gather people, not tools; that's the goal of these more critical toolkits. And finally, they help designers think about how these conversations could be staged in a critical, power-sensitive way, because, as we know, many of these conversations can also very quickly and easily turn into an extractive process. In a nutshell, designers can become critical by first developing epistemic humility, that is, this questioning position where you realize that you might not know things based on your own experience, and then by using critical design methods, including participatory design methods, to bring those who usually do not have a seat at the metaphorical design table into the conversation. And just to very quickly gesture to an actual project that I have been working on, for full disclosure with Eleanor: the High-Risk EU AI Act Toolkit, where we, on the one hand, help practitioners, providers of AI systems, comply with this new, emerging regulation in the European Union, but at the same time we scaffold this process without making ethics look simple. So we prompt the users of this toolkit not only to comply, but also to move beyond that and think a little more critically about what it is that they're doing.
Kerry McInerney (19:43)
That's really, really exciting and fantastic work, and we will link that in the show notes or the blog post that goes along with this episode on our website, www.thegoodrobot.co.uk, where you can also find all of the different books, texts, and articles that Tom has mentioned in this really rich episode. So if you're interested in learning more and diving into this more deeply, you know where to go. I wanted to broach another aspect of this translational piece. What you've just described is taking big theoretical or ethical concepts and trying to make them operationalizable for practitioners in the AI or technology design space. But I want to talk about a different kind of translational act, a really big buzzword right now in AI ethics: AI literacy. And I know you as someone who's been thinking a lot about what it means to build people's understanding and awareness of AI technologies and how they work. So I want to ask you: what is AI literacy? Why does it matter? But also, what are the strengths and the limitations of this paradigm when it comes to thinking about ethical AI use?
Tomasz Hollanek (20:45)
Right. Well, thanks, Kerry. I mentioned earlier this need to bring those marginalized or overlooked others to the metaphorical design table. In a way, the question of AI literacy is important because, for these others to participate meaningfully in these conversations, they need to have an understanding of what AI is, what it's doing, and what could go wrong if we were using these technologies to solve specific societal issues. And so, in some ways, you could see AI literacy as something that helps ensure that these conversations between specialists, designers, and non-specialists, people who rarely participate in design processes, are power-sensitive and more equal in some sense.
In one sense, AI literacy means the ability to build, basically to code, AI systems. That's one interpretation, the most basic one. AI literacy can also mean the ability to use AI systems effectively. But also, and I think this is the most important aspect of AI literacy understood a little more broadly, it's the ability to understand how AI systems work and what can go wrong, the ability to perceive the potential of a given technology, of course, but in particular its risks. And as you mentioned, Kerry, AI literacy has become central to the AI ethics debate. The AI Act mentioned earlier actually makes this process of ensuring that users understand AI technologies compulsory: Article 4 of the AI Act mentions AI literacy as something that providers of technologies need to ensure their users can develop before they start using a given technology. But that's obviously very vague.
And now, in terms of limitations, I think the best way to critique this paradigm is through a distinction that Audrey Tang, the former digital minister of Taiwan, makes between competence building and literacy. Of course, this is in many ways semantics, but what I love about this distinction is the tension it captures: literacy is something rather passive, where you're learning how things work.
Competence building, on the other hand, is all about making you engage with how things work and potentially helping you to participate, and maybe envision and indeed execute things differently. Having said all this, I think when we're thinking about AI literacy, we really need to be careful not to make people feel like they actually...
In many ways, this discourse around AI literacy has been driven by technology companies that basically make us feel like we need to learn how to use AI because otherwise we will be replaced by humans who use AI. This is why I think the aspect of AI literacy that is about learning to perceive the risks related to technologies, and the aspect that Audrey Tang mentions, competence building, are crucial here. So, in other words, what I'm saying is that AI literacy might be more about learning about our rights under the AI Act, for instance, and how we can seek redress, rather than about how to use AI in the workplace.
Eleanor Drage (24:05)
Another group of people that you're really concerned about, or working with, I should say, is journalists. Now, you open the newspaper, you listen to the news on the radio, and there's always something about AI. And that means that a lot more journalists have started branching out into reporting on technologies that they're not necessarily familiar with. And journalists come to you and to us at Cambridge all the time asking for support. So, how are you supporting journalists in reporting responsibly on AI? And actually, what does it mean to report responsibly on AI?
Tomasz Hollanek (24:42)
Thanks, Eleanor. Obviously this research stream that I'm working on relates to the question of AI literacy that Kerry asked about, because journalists quite obviously can, in a very direct way, influence the AI literacy of the general public. This is why it's so important that they themselves develop what we could call critical AI literacy: the ability to really meaningfully consider the risks and benefits of artificial intelligence, not only to use these tools effectively in, for instance, reporting. Before I criticize the way AI is covered, I should say that there is amazing work out there, and so many journalists are doing really important work highlighting some of the biggest issues with AI. One theme of my work has focused on the question of deadbots and griefbots, and the role that journalists have in highlighting some of these issues is absolutely fundamental. Many of them do an amazing job of keeping AI companies in check and making sure that they are held accountable. At the same time, because of this phenomenon that you mentioned, Eleanor, people who do not specialize in technology are now being tasked with reporting on AI, because AI is everywhere, in all domains. If you're a medical journalist, all of a sudden you have to report on AI. If you're an education-focused journalist, all of a sudden you have to report on AI as well. And because AI is changing so quickly, these people don't really have time to educate themselves enough about the risks and benefits of AI, or the ethics and policy of AI.
So, as you mentioned, I'm trying to work with journalists to help them, especially those groups that haven't really specialized in reporting on technology or AI, and to figure out how we can make sure that they can do it more responsibly without necessarily investing too much time in the process, because they do work under extreme pressure. I was very lucky to interact with all kinds of journalists from various types of organizations, and together we discussed the potential tips and standards that reporters could use in their work to do better. For instance, Melissa Heikkilä, now at the FT, gives this really good piece of advice to her colleagues: think about how you can decenter the technology in your reporting and bring in the human aspect. So what she suggests is that instead of saying AI did this or that, you replace "AI" with "computer"; if the sentence still makes sense, you can keep it, but if it doesn't, maybe you shouldn't use that metaphor. So I have been trying to support journalists, and not only me but the whole team at CFI: we organized several workshops with practitioners to understand their needs, some potential limitations to their learning process, and what some of the obstacles are. And we developed a toolkit for journalists that collects all these tips and resources, so that they can access them very easily and quickly.
Kerry McInerney (27:53)
And we will also definitely link that toolkit. So, as much as Tom is a critic of toolkits, he is also a spectacular designer of toolkits. And once again, that will be available on our website, The Good Robot. Tom, this has been a fascinating tour de force of all things critical design. Every time I talk to you at the office, I learn something new, and so it's a real delight to get to bring that to our listeners as well. So, yeah, thank you so much for coming on the show.
Tomasz Hollanek (27:58)
Yeah.
Kerry McInerney (28:23)
Again, we know that people are going to find everything you've shared incredibly fascinating.
Tomasz Hollanek (28:28)
Thanks so much, both. And yes, all the toolkits, but also critiques of the toolkits, will be available to listeners.
Edited by: Meibel Dabodabo


