In this episode we chat to Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and AI at the University of Edinburgh and Director of the Centre for Technomoral Futures. We talk about feminist care ethics; technologies, vices, and virtues; why Aristotle believed that the people who make technology should be excluded from citizenship; and why we still don't have the kinds of robots we imagined we'd have in the early 2000s. We also discuss Shannon's new book, The AI Mirror, which is now available for pre-order.
Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy. She is a former Visiting Researcher and AI Ethicist at Google.
READING LIST:
Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016).
The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).
TRANSCRIPT:
KERRY MCINERNEY:
Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list from every guest. We love hearing from listeners, so feel free to tweet or email us, and we'd also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode.
ELEANOR DRAGE:
In this episode we chat to Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and AI at the University of Edinburgh and Director of the Centre for Technomoral Futures. We talk about technologies, vices and virtues, why Aristotle believed that the people who make technology should be excluded from citizenship, and why we still don't have the kinds of robots we imagined we'd have in the early 2000s.
Before we begin, I want to tell you about another podcast that Kerry and I really like. It's called B The Way Forward, and it's hosted by the president of the AnitaB organization, Brenda Darden Wilkerson. This is an amazing organization in its own right, because it's helping to pave the way for women and non-binary individuals working in tech.
Some of their incredible guests include Janelle Monae, who I'm a massive fan of, and female finance powerhouses, like the founder and managing partner of Backstage Capital, Arlan Hamilton.
This isn't just another tech talk. This is your front-row seat into the creative minds that are shaping the future. If you're ready to drive change, AnitaB is inviting you to be part of the movement. So get plugged in to B The Way Forward wherever you stream your podcasts. We hope you enjoy the show.
KERRY MCINERNEY:
So just to kick us off, could you tell us a little bit about who you are, what you do, and what's brought you to thinking about feminism, gender, AI, and technology?
SHANNON VALLOR:
Hi. I'm really honored to be here. I'm excited to be a guest. I've heard so many great things about the podcast, so thanks for inviting me. I'm Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence at the University of Edinburgh, where I'm in the Department of Philosophy.
I also direct the Centre for Technomoral Futures, which is part of the Edinburgh Futures Institute and which is really devoted to the integration of technical and moral expertise, particularly with respect to issues concerning data driven technologies and the impact of data and AI on society.
My background as a philosopher of technology goes back pretty far. I really started in this field around 2006, 2007, when we were looking at the ethics of robotics. You may or may not know, but back in 2006 we were still in a pretty solid AI winter, as we call it.
There weren't that many people asking questions about AI, but robotics was really considered to be on a kind of upswing, where we were really thinking that we were going to see a lot of new kinds of social robots, and particularly care robots, being developed in hospital settings and elder care settings. A lot of people think that this idea is brand new, but it's something that people have been thinking about for some time now.
And so I began looking at the ethics of these kinds of technologies at the beginning of my career, along with the ethics of early social media like Facebook. And in both of these domains I was particularly interested in the impact of new technologies on our relationships, and particularly on the virtues or character traits and moral skills that we bring to our relationships.
And I was particularly drawn to questions about the role of care in contexts where robots might be present and asked to participate in social care. And of course that led me quite quickly to feminist care ethics. So even though I don't consider myself a feminist philosopher in the sense that feminist theory isn't my area of specialty, feminist care ethics has actually been one of the primary lenses that I've used in my work as a philosopher of technology and an ethicist of technology. I’m happy to say more about that, but that's really how I got into this field. So actually thinking about feminism, technology, and robots was where I started my philosophical career in this area.
And then it's broadened out to many more areas since.
ELEANOR DRAGE:
It's interesting because people often equate AI ethics with robot ethics. They think it's just about robots.
SHANNON VALLOR:
Yeah, very different. But lots of overlap, certainly. But now we're seeing, of course, that the most rapid advances in AI are happening with disembodied algorithms and models.
And even the things that are being brought into care settings are more like chatbots that are being used for therapeutic purposes, rightly or wrongly in many contexts. Originally, back in the early 00s, the idea was that what we would first do is develop physical robots that would help with things like lifting patients in beds, so that they could be more easily cared for by their loved ones, or to reduce the physical strain on caregivers in hospitals and nursing homes and so on. You would have robots that would bring people to the toilet, robots that would ensure that people could walk safely down a hallway without a fall.
And we still don't have those robots. What we have are robots that do, or claim to do, or are positioned to do, very different kinds of care than that.
ELEANOR DRAGE:
Maybe we can come back to how the things that we imagined would be created fastest actually end up not being what we have right now.
But first, can you tell us what is good technology? Is it even possible? And how can feminist work help us get there?
SHANNON VALLOR:
Yeah, good technology is absolutely possible. And one of the things that I think is really important to remember is that technology is not just computers and smartphones, right? It's not even just electronic or wired technology; it covers everything that has allowed the human community to flourish in a complex and often threatening environment as an interdependent and quite vulnerable animal, right? The human animal is not like many others in that we're extremely fragile when we're born. We are not ready for the world. And even at the very beginning there is a need for the human animal to be supported.
And so care is at the beginning, and the kind of foundation, of the human experience, because we literally don't live, we just die in the wilderness, if we're not cared for over a very long time. But technology is at the beginning of that as well, because from the very beginning humans had to develop technologies in order to enable them to care for their families long enough for their children to survive, for their communities to survive.
So technology is as fundamental to goodness as care is, in my view. But I think the question is, what do we envision technology as now, and how far has it drifted from that original meaning of technology as an instrument of care, right? I have a whole story about this, so I won't give you the whole story, but I think it's quite obvious from what we see in popular culture around technology that our image of technology has drifted quite far from its heart.
And so a lot of my work is about figuring out how to restore the meaning of technology, to this notion of technology as inseparable from care in the moral sense.
KERRY MCINERNEY:
Really fascinating. And it's also really inspiring, I think, to hear this idea of technology as being coeval not only with humanity, which is an idea that Eleanor and I look at a lot, this idea that we can't really think of what it means to be human outside of these kinds of technological processes and augmentations, but also technology itself as part of this idea of goodness.
So even beyond pleasure, which I think is often where our thinking stops when it comes to how technology can be good, it's about thinking about how technology can actually facilitate good forms of coexistence. But I also want to come back to this question of care that you've raised, and I love the idea that you, because you're such a giant in this field, actually started as a philosopher drawing on these ideas about care. I was going to make a promise to our listeners and say, see, look, if you start taking feminist approaches, you can become Shannon Vallor, and start a tiny wave of Shannon Vallor acolytes. I think the feminist ethics of care is a really interesting approach, because care is such a valuable and yet also such a contested and complex concept. On the one hand, we have so much phenomenal feminist literature which highlights both how care labor is highly devalued in the societies we live in and how care could provide the foundations for different kinds of societal arrangements, more gentle and caring societies. And on the other hand, we also have this growing, really interesting body of work in feminist and critical race theory that tries to complicate care and looks at how care is also sometimes implicated in violence, from humanitarian care through to really interesting work around ecologies of care and how conservation work often involves saving one species at the expense of another. And so what does care mean in that context?
So that was a very long way of saying, I'd be really interested to hear how your thinking about care and technology has developed or changed over the past, sort of, 10 or 20 years, when there's been such a transformation in AI technologies. Has that also changed your understanding of what it means for technologies to be approached through an ethic of care, or how care can be meaningfully integrated into technologies and differently valued within technology?
SHANNON VALLOR:
Yeah. I think there's so much to say here, but I'll just pull on a couple of threads. One is that, in terms of what's been happening over the past ten years, there's a silver lining, or there may be a silver lining, we'll see, to a lot of the damage that's been done in the current tech ecosystem, with its complete detachment from norms of care and responsibility.
And that is that I think it's become clear how unsustainable our current approach to the built world is. We already knew that through the lens of environmental ethics and understanding the impact of technology and the built world on climate and so forth. But I think we're also seeing, particularly around social media, some of the damage that it's done in terms of undermining some of the civic threads of care and mutual aid that we've relied on, while also allowing some people to rebuild new systems of social care and mutual aid through those tools. I think what it has helped us see is that there is a need for some sort of deep-seated rethinking of the technological enterprise.
The current status quo is not sustainable and can't just be patched around the edges, right? So I think when we started doing work in computer ethics, AI ethics, and robot ethics, there was still a great deal of optimism about the trajectory of innovation, and a lot of belief that we just needed to be ready to do some work around the edges of that system, to sand off the rough and sharp parts that might injure people, but that the system as a whole was working as it should. I think very few people now fail to see the full impact of what a lot of these technological platforms have done in terms of undermining our shared sense of reality, our sense of responsibility to one another. And certainly the platform companies haven't been alone, right? We could look at the Murdoch empire. We could look at any number of influences that have undermined some of the fabric of a flourishing society. But I think it's clear that if we continue as we have been going, society will not go much further without really rupturing in a serious way and becoming politically, economically, and environmentally unsustainable.
I think we're getting to the point where there's a recognition that we have to go back to the beginning in some way and think again about what technology means and what the relationship between technology and human flourishing is. So my hope is that the silver lining is, as it becomes more and more evident that the harms of the current ecosystem are likely outweighing the benefits, and certainly aren't being distributed in a way that justifies the enterprise as a whole, that we have to step back and say this isn't just a question of doing some ethics review work or putting a few extra regulations in place. Not that these things aren't needed, right? But there's a deeper problem with the way that we engineer the world and ourselves and our societies, and we have to go back and do some much deeper thinking about where things went wrong.
And I think there are insights here, particularly around the role of care and the gender dimension of care and its relationship to technology. So one of the things that I've been looking at is, if you go back to Aristotle: Aristotle, of course, is a philosopher that, as a virtue ethicist, I do quite a bit with, but also a philosopher who believed that women were incapable of exercising political and moral reason, incapable of functioning as responsible citizens and so forth.
And one of the other great errors of Aristotle is that he regards the mechanical arts and those who practice them as unfit for civic participation. So he argues that the practitioners of the mechanical arts should be excluded from citizenship in much the same way that women are.
Now, why is that? A lot of times you might think, 'Oh, Aristotle was a student of Plato, and Plato was against technology because it was of the physical world and not the world of eternal ideas, and technology deals with things that are made of matter and that change, and this distracts the soul and ties it to the body.'
This whole sort of Platonic metaphysics that is often thought to drive the philosophical prejudice against technology in the West goes back quite far. But really, if you look at Aristotle, and even if you read Plato in the Laws, where he says some similar things about this, it's clear that there's actually a very different kind of rationale here, and it's very much about the role of care.
So what Aristotle says in that same passage is that the reason that the practitioners of the mechanical arts can't be citizens is because they are the ‘providers of necessary services.’ So think about that, the providers of necessary services, these are the people who meet people's basic needs. What is technology originally?
It is a matter of meeting our basic needs, right? The original technologies were ways of mending, of feeding, of sheltering, of warming, right? Really fundamental stuff. And who provided much of that labor? And who developed many of those techniques and passed them on? And where were those techniques exercised?
Largely in the domestic sphere. Largely in the home, right? And so by the time that Aristotle is writing about the mechanical arts, he's identifying them as a body of techniques for doing something that is very closely tied to domestic care. And it's not a far leap, right, to see Aristotle then putting that wall up between the domain of domestic care, which includes the mechanical arts, and the domain of politics, the domain of power, the domain of virtue and decision making, right? And I think we need to tear that wall down, quite simply. Doing that allows technology and politics to be reintroduced to each other in a much healthier way. But it also allows us to break down the gendered devaluation of technology's purpose in providing care and enabling human flourishing.
ELEANOR DRAGE:
You said before that you're a virtue ethicist, but can you tell everyone what that means? And in fact, what is virtue ethics? It's an increasingly popular idea and your work is constantly cited by the Master's students in AI ethics that we teach at Cambridge. So can you give us a little bit of insight into that?
SHANNON VALLOR:
Sure. And there's a really interesting set of questions, I think, around the relationship between virtue ethics and care ethics as well, which is quite contested. But virtue ethics is an approach to ethics that, rather than looking at the specific consequences of particular actions and judging their rightness or wrongness that way, the way a utilitarian might, or looking at the universal rationality of certain kinds of principles, which is what we call deontology or rule-driven ethics, is based upon the particular moral skills and capabilities that we cultivate in ourselves and exercise together in the world. So virtues are excellences of character. They're things like courage, honesty, compassion, generosity, wisdom. There are intellectual virtues as well as moral virtues.
And we aren't born with them. One of the fundamental commitments of virtue theory is that virtues are things that we build up in ourselves through practice, that we model on those who already possess them and who give us a template to follow. But then we also refine those templates through the exercise of wisdom and through experience, so that we can develop and exercise these moral skills intelligently in the world, in ways that are sensitive to context and to the particular relationships and needs that we're embedded in. And so the idea is that the courage, for example, of a soldier in battle looks very different from the courage of a medical professional who is performing a painful medical procedure on a child, right? But they both have to marshal a great deal of courage to do what they're doing. So the idea is that virtues are general moral competences that adapt themselves intelligently, or that we adapt intelligently, to the moral challenges that meet us in the world.
Vices are the opposite of that, right? Vices are the character traits that we can easily build up that actually inhibit our ability to perform competently as moral agents and moral partners with others and that inhibit human flourishing. So that's virtue ethics in a nutshell, but there's obviously a great deal more to it.
And there's a lot of questions around whether care is a virtue. Or whether virtue ethics rather is a part of care ethics. Which is more fundamental? Are they complementary? Do they compete with one another? Do we need to choose one? There's lots of different views about the relationship between feminist care ethics and virtue ethics, which obviously comes from, at least in the Western philosophical tradition, a philosopher who is fairly inimical to feminist thought.
So there's a lot of interesting tensions to explore there.
KERRY MCINERNEY:
That's fascinating. And I also love the idea of our new series being 'philosophy in a nutshell', because that explanation was certainly super helpful for me. What does this mean in the context of AI?
Like, how do you, as a philosopher of technology, think about what virtues are in the context of our contemporary tech industry? And particularly in relation to a product, an idea, that's going through such a huge boom at the moment: AI.
SHANNON VALLOR:
Yeah. So I look at this basically from two directions.
So one is looking at what virtues we need in order to flourish in the 21st century, with the particular challenges facing us. Remember that virtues have to be adapted, right, to the circumstances and to the environment. So the virtues we need aren't the virtues that fourth-century BCE Greeks needed.
And so we have to figure out what are the virtues that will help us carry on well in the world today, with the technologies and the challenges that they bring, but also the opportunities that we have to use them to help us and to care for one another. Which virtues will help us do that, and how do we cultivate those particular virtues?
You might think about virtues like moral imagination or virtues like compassion, which can be particularly hard to exercise in digital and disembodied kinds of environments. And so how can we strengthen those virtues in contexts that are very heavily technologized?
So I'm very interested in that kind of question. I'm very interested particularly in the virtue of practical wisdom, which is the virtue that helps us adapt to circumstances that are new, where we don't have a familiar template to follow. Where doing what we were taught was good, might not serve us well.
And so practical wisdom is a kind of creative moral virtue that allows you to revise the moral scripts that you've inherited and adapt them better to the surprises that the world is delivering to your doorstep. And there is no time where that has been needed more than the 21st century.
Right now the world is rapidly destabilizing, both from an environmental perspective and from a political and economic perspective. And we are going to need to be very flexible and very responsive to those changes in our moral scripts. So I'm very interested in how those virtues can help us adapt to the changes that technology brings, but also the other changes that we're facing. At the same time, there's the flip side of this, which is that technologies also shape our virtues. They also shape our vices. They also shape our character as a whole, because virtues are developed by practices, by the things we do repeatedly in the world.
Most of the things we do repeatedly in the world are being changed by the technologies that are being designed and deployed. So that means that our own character is being remade, for better or worse, by the technologies that we're using. I became very interested in this years ago with Facebook, thinking about how relying heavily on Facebook for social connection (which is something young people used to do, by the way) would affect the different virtues that are necessary to flourish in a social environment. But today I'm very interested in AI and in the particular kinds of practices that AI encourages or fosters, or in other cases replaces for us, when it says, 'Well, you don't need to be doing this, because we'll automate this with AI.' What if the practice that gets automated with AI is something that was really essential for cultivating a particular virtue? Like care, right? So if we automate care for our children, for our parents, for our pets, what does that actually do to our ability to cultivate care as a moral capability? I think those are really interesting questions.
ELEANOR DRAGE:
And they're being taken up by students of AI ethics, hopefully across the globe. What we'd love to know from you as we end is: what's your new book about, and what do you feel compelled to tell the world at this time? Because there are so many things that we feel like we need to say to people, to reassure them, to give them a message about what's really going on.
So what's your take? What should we be telling people in the pub or in the bookstore?
SHANNON VALLOR:
Yeah, that's great. So my new book is called The AI Mirror and it will be out in the first half of next year from Oxford University Press. And as the title will tell you, it's about the metaphor of the mirror as a way of understanding many of the dominant forms of artificial intelligence that are marketed today. It's a metaphor for what they are and how they work. And in the book, I unpack that metaphor on multiple levels. But it's also about us as a human family and the way we understand ourselves.
So mirrors are one of the tools we use to help look at ourselves and understand our form. And I think one of the really interesting things about AI is it's becoming a mirror that we're encouraged to use, to understand what it is to be human and to understand ourselves and one another– to understand, for example, what we're most likely to do. AI mirrors tell us what we're likely to do next, what we will prefer, where we will succeed, where we won't, who we should date, what kinds of jobs we should be given. And so I'm very interested in understanding how AI is affecting human self-knowledge.
I'm particularly concerned about the fact that the distortions that AI can produce in our own self-knowledge may prevent us from actually addressing many of the greatest challenges that we're facing in this century, from climate to economic and political instability. So my book is about how to reclaim our own self-understanding. It's not about breaking the AI mirrors or suggesting that we don't need these tools as part of our self-understanding, but rather looking at how we can relate to AI mirrors in a much wiser way, in a way that helps us understand our true capabilities and our true potential more fully than we can without them. And I think today we unfortunately have a system that is doing precisely the opposite.
As Abeba Birhane has argued and many others have acknowledged, AI mirrors tend to function in a rather conservative way. They literally conserve the past and project it forward.
They are trained on the data of what we have already been. And if we rely on those mirrors to show us what we are and what we can be in the future, that's very dangerous in a moment where we actually need to become something new, because these tools can't show us what we can become. They can only show us what we have historically and predominantly tended to be within the penumbra of the data that has been collected, which we know does not represent the human family very well in the first place. So we have a profoundly distorting and also backward-looking orientation of these mirrors. At the moment what I'm writing about is how we can develop better kinds of mirrors, but also better, richer ways of understanding ourselves and our potential to remake the world in a way that is just, sustainable and compatible with human flourishing.
KERRY MCINERNEY:
Would I be right in guessing that this involves practical wisdom, which you raised earlier, in some way? This idea that you can reform yourself or change path, instead of being locked into those same kinds of patterns and futures projected by AI mirrors.
SHANNON VALLOR:
Absolutely. So there's lots of talk about virtues in the book, unsurprisingly, but also in the book, much more so than in my first book, I draw a lot on literature, science fiction and other kinds of creative sources of moral and political vision, because that's what these AI mirrors are really lacking.
So it's really important that we bring in these richer sources of moral imagination and political imagination, these richer sources of possibility that humans have always relied upon through the creative arts. And I want to bring some of those resources in as a way of showing that technologies like AI aren't the only ways, and not the primary or even best ways, to understand ourselves.
KERRY MCINERNEY:
That's fantastic. And I think Eleanor and I are fully on board with one important part of feminist praxis being this kind of political imagination, this ability to see and envision the world differently. And a plug for Eleanor's book, which is coming out very soon, I believe, which thinks specifically about European women's science fiction and other ideas of planetary humanism. Eleanor, you should explain your own book.
ELEANOR DRAGE:
I'm so bad at titles. I think it's called, um, An Experience of the Impossible: The Planetary Humanism of Women's Science Fiction. But it could be the other way around.
SHANNON VALLOR:
I love it.
ELEANOR DRAGE:
We had a team meeting this morning and I couldn't remember any of the other books that we were writing so I have to defer to Kerry at all times.
SHANNON VALLOR:
Congratulations. That's amazing.
ELEANOR DRAGE:
Thanks so much for coming on and I hope to see you again soon.
SHANNON VALLOR:
Thanks for a great conversation.
ELEANOR DRAGE:
This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor Drage.