
The Mythology of AI with Stephen Cave

Why do we call it “Artificial Intelligence”?


In this episode, Kerry McInerney speaks with Dr. Stephen Cave about the stories and assumptions that shaped the very idea of AI. Drawing from his book with Dr. Kanta Dihal on the origin myths of artificial intelligence, he explains how the term “AI,” chosen in 1956 over other possible names, resonates with thousands of years of mythology about creating artificial beings.


Their conversation explores how technology is never neutral but deeply shaped by cultural values and historical power structures. Stephen reflects on how the binary between the “artificial” and the “natural” was used to justify colonial expansion, and how ideas of “intelligence” were historically developed by eugenicists to rank and organize people into hierarchies of perceived fitness. From the history of standardized testing to contemporary AI development in Silicon Valley, the episode examines how these long-standing narratives continue to influence who builds technology today and whose voices remain marginalized.


Dr. Stephen Cave is Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. His research focuses on the ethical, social, and philosophical implications of artificial intelligence, drawing on philosophy, anthropology, and the history of technology. Through his work, he examines how cultural narratives, social values, and historical assumptions shape the development and governance of emerging technologies.


Reading List:


Transcript:

Kerry McInerney (00:53)

In this episode, I talk to Dr. Stephen Cave, the director of the Leverhulme Center for the Future of Intelligence at the University of Cambridge. Stephen has a new book out, so big congratulations, which he's co-written with Dr. Kanta Dihal. The book explores AI's origin myths (Imagining AI: How the World Sees Intelligent Machines). So in today's episode, we talk about what mythology even is, what it does in the world, and why AI mythologies matter. And we talk about two of the major origin myths that are foundational to the modern field of AI: the myth of artificiality and the myth of intelligence.


In a historical tour de force, Stephen explores how these ideas developed across Western history, time, and space, and explains why we need to think about these concepts as value-laden. He shows us how artificiality and intelligence were deeply imbued with racist and colonial ideologies at the time, and how they have enforced gender hierarchies that continue to shape the field of AI today. We hope you enjoy the show.


Well, brilliant. Thank you so much for joining me here today. Just to kick us off, can you tell us a little bit about who you are, what you do, and what's brought you to thinking about gender, feminism, and technology?


Stephen Cave (01:57)

Well, thank you for having me on the show; I am a big fan. So my name is Stephen Cave, and I'm a professor of philosophy at the University of Cambridge. But most importantly, for the last 10 years or so, I've been running something called the Leverhulme Center for the Future of Intelligence, where we think about the ethics and impact of AI. So our mission is to make the whole AI revolution go well. Well, what does that mean? It means thinking about justice, sustainability, prosociality, and so on. And that is what then brings me to the question of feminism and gender and technology, because developing AI in a way that is pro-justice means developing it in a way that doesn't exacerbate the great historical injustices of patriarchy, and using the tools of feminism to ensure that AI not only doesn't exacerbate those problems, but maybe even contributes to improving the lot of women around the world.


Kerry McInerney (02:54)

Brilliant, thank you so much. And for all our listeners, we'll be attaching, of course, as always, a reading list to this episode, where we'll be showcasing some of Stephen's fantastic work, but we'll also direct you towards the Leverhulme Center site so that you can see all the exciting research that's going on there at the moment. But because you've been working in this field for such a long time and have really established the Leverhulme Center as this exciting space for thinking about AI ethics, I'm particularly excited for your answer to the three good robot questions, which are: what is good technology, is it even possible, and how can feminism help us work towards it?


Stephen Cave (03:28)

Yes, good technology is possible. I think we mustn't be anti-tech. Technology is part of who we are. We couldn't survive as a species without technology. We have literally evolved through and with our technology. So it is fundamental to being human.


Now, technological change, which is accelerating, always induces anxieties, and rightly so, because it can, as I've already mentioned, exacerbate injustice. It has new affordances, new potentialities, which can be extremely disruptive. But good technology is possible. In fact, we wouldn't enjoy anything like the prosperity that we do without technology, and it is in days of doom and gloom and negative news cycles worth reflecting on the enormous prosperity that there is worldwide. And of course, it isn't enjoyed evenly. That's a huge problem that needs to be addressed. But with many more people on the planet now than ever before, many fewer in absolute numbers, not just as a proportion, are living in poverty. People are living longer. People have new opportunities for self-actualization. And that is because of technological advances, as well as, of course, the political structures that give people equal access to those kinds of technologies.


So, good technology is then technology that allows us to get on and do the things that we find purposeful, to connect with other people, to connect with nature, to create. It's technology that allows us to do that, whether by tackling ill health and disease, or creating functioning sewer systems and giving us access to fresh water, or just helping us better to manage our diaries and spend our time connecting with other people. So it's technology that doesn't exist for its own sake, that isn't in the foreground, that isn't dominating our lives, but is simply facilitating us in connecting with others and with nature and doing the things that give our lives purpose. And feminism is absolutely critical to that. I mean, when I talk about our lives, OK, I am a comfortable white man. Half the population of the planet is, of course, women, who have, in many cultures, been historically oppressed and haven't had the same access to opportunities as men have had. So thinking about making technology go well for the maximum number of people possible, ideally for all people, thinking about technology that allows people to live lives of meaning and purpose, means using the tools of feminism to ensure that everyone of all genders has equal access to those opportunities.


Kerry McInerney (06:02)

I really like the way you frame this as needing to balance the meaningful challenges we face around social inequality, particularly gender and racial inequality, when it comes to technology, while also avoiding this intense pessimism about all forms of technological advancement and progress and what that meaningfully gives to our lives. And yes, big hooray for sewer systems in general. We love not dealing with raw sewage or open sewage at any time. And actually, one of our previous guests, Helen Hester, with whom we did the live episode on whether technology can save us from housework, co-wrote a book called After Work with Nick Srnicek, which talks quite a lot about the development of new technological systems like sewer systems in cities like London, and how that really, really transformed people's lives. So if you have an interest in sewage, check that out. But I actually want to come now from their book to your book.


So the advent of this episode is in part because Stephen, along with Dr. Kanta Dihal, has just published a new book. So huge congratulations. Tell us a little bit more about this book. So we know that it explores AI's origin myths and why this matters. But what do you mean by mythology or myths throughout the book?


Stephen Cave (07:10)

Yes, thank you. Well, so, Kanta Dihal and I have been working on what you might call narratives around AI, or AI discourse, for much of the last decade. But increasingly, we've been framing that as a mythology of AI. And that is a conscious choice to use that term, because we feel it has a power that narratives and discourse don't. But we do use that term advisedly. So, what do we mean by mythology? Well, we do try to lay that out in the book. We do try to be very clear about what we mean. But essentially, think of a mythology as a complex and large set of stories, stories that aren't trying to do something directly descriptive. They're not a straightforward historical account of a people or a place. They're not a straightforward attempt to forecast what's going to happen. So they're inventive, imaginative stories that, even though they're not a straightforward attempt to describe the world, are nonetheless trying to explain the world in some way. So when we think of Greek myths or Indian mythology or Judeo-Christian mythology, these are stories that explain the origin of the universe, the origin of humanity, the origin of social orders; they offer answers to questions about what is right and what is wrong, what might happen if you behave in certain ways, and perhaps also about the end of the world, or what comes next, or what happens after we die, and so on. So they have a great deal of explanatory power. And within that explanatory power, of course, values are embedded; these are what philosophers of science call "value-laden" stories. That is, they express what the world should be like, not just what it is like. What is good and what is evil? What kind of powers, rights, and responsibilities do different people have, depending on their place in the world, and so on.


So when we say a mythology of AI, we are claiming that the stories we tell about AI have this kind of structure: that there are a lot of them, that they're complex, that they're not merely attempts to describe either the history of AI or its future, but rather that they're exploring many different possibilities, many different ways in which our future with intelligent machines might play out. And some of those can be quite contradictory, which is indeed symptomatic of a mythology. Greek myth doesn't just have one thing to say about heroism or life after death, for example, or the origin of humanity. Many interweaving stories express different sets of values and different perspectives. And the same is true of AI stories, some of which are very utopian, some very dystopian, and so on.


So it's a complex, interwoven mesh that, most importantly, does embed values. And although the mythology of AI has very long roots, we've been telling stories about intelligent machines for at least 3,000 years, it mostly evolved in the 19th and 20th centuries, and so it really expresses the ideologies that were prevalent in that time and the kind of value systems that the creators of those mythologies and technologies would have subscribed to. So we really need to analyze the mythology of AI to unpick those value-laden assumptions and to understand how they are driving the development of this technology to the advantage of some and the disadvantage of others.


Kerry McInerney (10:32)

That's really interesting. You're ascribing this role to mythology where, on the one hand, it helps us understand the world better, but on the other hand, it is also actively shaping the world around us. It has this kind of productive force that's doing things in the world and changing it, which is why these mythologies, and particularly a mythology of AI, matter so much. But I actually want to ask you then, what do you see these mythologies about AI doing in the world? Why do they matter so much? I think this has been a question that has animated so much of your work on AI and storytelling. And I remember being hugely excited when I first came across the AI narratives project, because being in the discipline of politics and IR (international relations) and doing work on science fiction, I felt like trying to tell people that these stories have a force, that they have a political power, was a little bit of a hard sell. People would say, "Well, what's really the point of looking at young adult dystopian fiction like The Hunger Games?" or "What's the point of looking at how housework is represented in sci-fi films?" Because ultimately, these are just stories in the narrative world. But I feel like what you're arguing in your work and in this book is that that's not quite the case, that the interplay with the real world is far more dynamic than that narrow perspective would suggest.


Stephen Cave (11:45)

That's a great question. These mythologies absolutely are active in the world. Perhaps one way to look at it is to see that we are fundamentally cultural beings. We exist in a world of culture and of stories. The material aspects of our lives would seem very plain if they weren't animated by the stories we tell, stories that create our motivations and our belief systems and shape how we interact with each other, and so on. So to analyze this cultural aspect of AI is to understand what is motivating us in creating it, what kind of dreams we're fulfilling, what kind of nightmares we're avoiding, and so on.


So now, to give some concrete examples: first, very many leading figures in the development of the technology itself will very openly admit that they were shaped as children by science fiction. Science fiction is, of course, one of the main components of the mythology of AI, and many leading figures in academia and big tech are very, very open that they wanted to build the world's first real Susan Calvin, the robo-psychologist from Asimov's stories, or to build the first HAL, the computer from 2001: A Space Odyssey. Yeah, don't ask me why anyone would want to do that. But if you walk into the Google office, for example, you'll see that lots of the rooms and other artifacts are named after these science fiction stories. So there is a vision of the future, developed in science fiction and in speculative nonfiction and other kinds of works about the future with AI, that they are trying to make come true. So there's that very conscious and explicit set of causal links, if you like. But more subtly, the pervasive association of AI, and computing and engineering more generally, with masculinity, which has a long history and is a big part of the mythology of AI, is undermining women's motivation to go into the field, and undermining their ability to get into the field and flourish within it if they do try. Because effectively, there's a culture of skepticism throughout AI about women's abilities. And this is not rooted in fact. This is absolutely rooted in the broader mythology. So these are just two examples of the very real-world consequences the mythology has.


Kerry McInerney (14:05)

Absolutely. And for the long-time Good Robot fan, we have a hot take episode with Eleanor and me on a study that Stephen, Kanta, Eleanor, and I did together on representations of AI scientists on screen, and on how those representations might be shaping women's perceptions of computer science and AI as fields. And I believe that episode is rather irreverently titled Why AI Scientists on Screen Suck, because, unfortunately, most of the scientists you will see on screen, we argue, embody these quite harmful, very masculinist stereotypes, whether they're lone geniuses or child prodigies who can then get away with doing and being anything, usually rude to people. So if you're interested, also check out the reading list for this episode; I'll link that paper there. So I want to deep dive a little more specifically into some of the origin myths you talk about in the book, and also to hear a bit more about what kinds of values they evoke, because, as you said, myths are deeply value-laden. So I want to start with maybe one of the most foundational myths, which is the myth of artificiality. So what is the myth of artificiality, and how does it shape the field of AI?


Stephen Cave (15:11)

Yeah, so in the book, which is a short book, we focus on what we call the origin myths of AI. And by that, we don't really mean the question of when people started building intelligent machines. We don't just mean that moment around 1955, 1956, when John McCarthy and others coined the term AI, though we do talk about that: what did they think they were doing, and, perhaps more importantly, what mythologies were they already channeling when they chose those terms? And it's important to understand they had options. It was not inevitable that what we now talk about as AI, which is all over the headlines, government briefing papers, and every dinner table conversation, would be called artificial intelligence. There were many, many other candidates at the time: cybernetics, most famously, but also information studies, neural processing, automata studies, advanced computing, information engineering. There were a lot of options. But artificial intelligence stuck in the public imagination. And I don't think that's a coincidence. It stuck because it was tapping into a broader mythology of immense power that already existed to a large extent in the 1950s, but has further developed since. There's lots to say about this broader mythology and different ways of approaching it, but in the book, we approach it by first looking at the idea of artificiality and then looking at the idea of intelligence. It's just a neat way of bringing together some of the overlapping influences on how we think about AI.


So, with that said, let's start with artificiality. Artificiality comes from the idea of the arts, from the Latin "ars", which means craft, the making of things. So it really is a direct equivalent of "techne", the Greek term for the same thing. So artifice, artificiality, is equivalent to technology in terms of how they've developed from Greco-Roman culture. Technology is a much more widespread term now to describe the stuff humans make, but that's actually relatively recent. Technology as a word was really only coined in English in the 19th century. MIT, the Massachusetts Institute of Technology, was the first institution ever to bear the term technology, and that was around 1860, I believe. So for most of the last 2,000 years in Western culture, in particular Anglophone Western culture, when people were talking about the stuff humans made, they used the term the arts, the mechanical arts, and so on.


So, in looking at the origin myths of artificiality, it's that long history we're looking at. And if you're looking at a long history of anything in Western thought, then really you're looking at Judeo-Christian influences, Greco-Roman influences, and their interplay. And the most obvious thing about the idea of the artificial is that it's juxtaposed with the natural. Artificial is the stuff humans make, and natural is everything else, the stuff we find lying about in forests and so on. But the idea of the artificial came to be the hook for a great many hopes over the last 2,000 years. If you think of the Christian story, which dominated ideological framing in Europe over those 2,000 years, there was obviously a strong sense that humanity had fallen: we were thrown out of the Garden of Eden, our life is one of suffering, and we need to get it over with, behave well, and then go to heaven. But there was always an undercurrent that was a bit more optimistic than that about actually doing something with our time on earth, about trying to create a decent life for ourselves, and the arts were always seen as the means for doing that, of course combined with the right kind of moral attitudes and piety and so on.


And although we see the scientific revolution as a turning point, and it was, it has much longer roots, of course. There were traditions keeping these scientific and technological methods alive throughout the Middle Ages. But when the scientific revolution came, the proselytizers, people like Sir Francis Bacon, were saying that we can use technology, the arts (he was talking in terms of the arts), to recreate a paradise on earth. So he was using a Christian vocabulary to say, "Yes, we were thrown out of Eden because we messed up, but we can get back. We can make our life like a paradise. And we can do it through science and the arts." So this gave a new legitimacy to the arts. And of course, it was a time when the power of technology was starting to manifest.


So we increasingly see all of these hopes attached to the idea of the artificial, hopes for transcending our current state of disease and famine and so on, for something much more utopian. And in this period, the 17th and 18th centuries, this very much gets picked up by those Europeans who were seeking a better life elsewhere. This was, of course, the period of European expansion, what was in reality colonialism and imperialism. And the hopes of what the colonists expected to find were very much shaped by this idea of using technology to create a new Eden, a second paradise, in these newly found lands, in particular, of course, the US, but elsewhere around the world too. Now, the story here takes a darker twist, because much as Europeans liked to portray themselves as going out into empty lands to put them to the plow for the first time, those lands weren't just sitting there: they were inhabited by other humans.


And Europeans going into these lands, to justify their actions of conquest, exploitation, and enslavement of others, needed some justification that would fit with their Christian ideology and with their view of themselves as good people doing the right thing, despite what they were in reality doing. And so they developed a narrative in which the idea of the arts comes to play a crucial role: the idea that because Europeans have superior technology, not only is this the means to conquer, it also becomes crucial to their right to conquer.


We, the Europeans said, are for the first time going to properly put this land to the plough, and we're going to properly cultivate it. We are going to civilize it. And civilizing, yes, it means bringing Christianity and a certain kind of moral framework, but most of all, it means bringing technology, or what was then called the arts. So we have the idea of the arts here fully enmeshed in these broader ideologies that were so dominant at the time, alongside the development of racism, which becomes crucial to justifying the imperial mission. And of course, as we've already mentioned, it's also a highly gendered concept. So one way of understanding this, which I find very helpful, is to draw on Val Plumwood's eco-feminist conception of linked binaries; others have explored these binaries from various critical perspectives, but Val Plumwood lays them out with particular clarity in her wonderful book Feminism and the Mastery of Nature. The artificial is opposed to the natural, but this is just one binary that is closely tied to a lot of others, including civilized and savage, male and female, reason and emotion, white and black. And these binaries are all heavily evaluative. The artificial and the natural aren't just a simple ontological distinction; it's a value judgment.


In all of these binaries, the term I mentioned first is, within this broader ideological framework, the superior one. Men have a right to rule over women, white people over black people, because civilization takes precedence over savagery, and the artificial should take precedence over the natural. The natural is associated with savagery and barbarism and death, and civilization, and the artificial, with bringing light and hope and a life of ease and prosperity, and so on. So when we talk about the myths of the artificial, what Kanta and I want to do is to lay bare this 2,000-year-plus legacy that surrounds the much more innocent-sounding binary of artificial and natural, a binary that actually embeds all of these others that have been so much a part of the systems of oppression that have shaped the world in the last few hundred years.


Kerry McInerney (23:29)

And I think that contextualization is so useful, because so much technofeminist work has really focused on trying to unpick this artificial/natural binary. Tracing not only how that binary is so deeply tied to other kinds of hierarchical binaries, as you've done, but also showing how even this very idea of artificiality works as a tool of domination, as a way of establishing racial hierarchies, is really valuable. And we have quite a few episodes out on the podcast on techno-orientalism, and I think one of the central ideas there is that you can be racialized as not being close enough to the artificial, as not being technologically proficient enough; and you can also be racialized as having too much of an affinity with the artificial, as being seen as too much like a thing. And so I think this is making a really important contribution to that way of thinking about how technology, even just conceptually, this capacity to make technology, this capacity to create, becomes imbued with all these gendered and racial ideas. And so I want to ask you about another origin myth, which I think again relates to this idea of capacity, and which concerns the second word in the phrase artificial intelligence: the origin myth of intelligence. So, how do you encounter intelligence in this book? How do you trace and map out this idea and relate it to the way that we think about intelligence in the contemporary field of AI today?


Stephen Cave (24:52)

Yes, thanks. So, of course, intelligence fits very neatly into this system of binaries: intelligence versus feeble-mindedness, stupidity, the moronic, and so on. And again, it's closely tied to some of the binaries I've already mentioned, like reason and emotion, and we see very much in the mythology of AI that intelligence is associated with a pure, cold rationality, a purging of that which is emotional. And of course, all of this is very gendered, very racialized. But the story of intelligence is a very interesting one. Unlike artificial, which has this very long history but in the last 150 years has really come to be replaced as a term by technology, which has a very similar meaning, the story of intelligence is rather the opposite: it was a word that was not prominent until about 150 years ago. And that's striking, given what an important term it is, not just now, because of the AI moment, but through all of my lifetime and before. Intelligence has played a key role in how people think about themselves and how they think society should be structured; it plays a key role in psychology and psychometrics, the measurement of intelligence, which is a huge industry, and so on. But actually, it wasn't until the latter part of the 19th century that it rose to prominence.


Before then, philosophers, like myself, were interested in ideas of the mind or the faculty of reason, and so on. And there's a very, very significant difference between those as objects of study and intelligence. It is this: everyone is thought to have a mind or a faculty of reason equally. Everyone has one, and they choose whether to exercise it or not. So, if you look at the great philosophers, Kant, Locke, Hobbes, etc., they're all talking as if we've all got this. They're debating about whether it comes with certain preconceived ideas or whether it's a blank slate, and why some people exercise it more than others, but the assumption is we've all got one. And the idea of intelligence is very different, because from the start, people are thought to have it to different degrees. And that turns out, in the story of intelligence, to be extremely consequential.


So to understand that pivot, I think the crucial moment is Charles Darwin's publication of On the Origin of Species. Now, evolution had been around as an idea for quite a long time already, but he won many converts to it. And suddenly, he established the importance of very subtle, fine gradations in ability within a species, and the idea that these are inherited: natural selection. All members of a species are born with subtle variations, and those that are best fitted to survive do so. Heredity becomes very important, and gradations become very important. Charles Darwin's cousin, Sir Francis Galton, picked up on this idea and applied it to our mental life. He starts asking a question that had already fascinated him, but for which Darwin gave a new framework: to what extent is intelligence hereditary? And to what extent does it differ between people? And so he set up what we would now consider the first real intelligence tests, though we wouldn't really recognize them as such; there were things like reaction time tests and other measures of that kind, but he thought they were tests of mental ability. So this is now the 1860s, 1870s. But he wasn't doing so only out of curiosity.


Francis Galton is also the person who coined the term "eugenics". And his main motivation was ensuring that the best races survived and had increasingly large numbers of children, and that those he considered to be the inferior races did not. Of course, we've already talked about the ideology of colonialism and the kind of mental gymnastics that white Europeans were doing at the time to justify their behavior elsewhere in the world. So this meant establishing that white Europeans, or whoever, were objectively superior, and therefore, in a kind of social Darwinist framework, deserved to inherit the world. And because it was a strange mix of descriptive and normative: yes, they would, it was inevitable, because they're superior, but they also should, because they're superior. So it's a mix. And so Francis Galton begins what is, in a real sense, a lifelong campaign for the importance of eugenics, with intelligence testing as the key to determining who is eugenically fit and who is not. And this idea is picked up around the world, but in particular in the US, where a group of very, very influential psychologists who are literally all card-carrying eugenicists, literally members of, and sometimes leaders and influential members of, eugenics associations, pick up on intelligence testing in the US, which of course has this very fraught racialized history.


And they start developing IQ tests and start promoting the idea of intelligence as important. And the real turning point is around the First World War, when this group of psychologists got tests deployed on army recruits. 1.75 million people are tested to see who is fit to serve and who isn't, who's fit to be an officer and who isn't. And on the back of this, a couple of books come out: Lewis Terman's book The Measurement of Intelligence (Lewis Terman, who was at Stanford, created the Stanford-Binet test, which is still used today), and Carl Brigham's book A Study of American Intelligence (Carl Brigham developed the SAT, which is, of course, also still used today). These are card-carrying eugenicists who then publish these very, very influential accounts of intelligence, which are really for the first time putting the importance of intelligence out into the world. And these are explicitly racialized analyses. Brigham's book is entirely framed in racial terms. And of course, they're asserting the superiority of white Europeans like themselves, in particular Northern Europeans, and they have a hierarchy of other races, with, inevitably, given the social structure of the US at the time, black people at the bottom. Their works are incredibly influential, obviously in people's perceptions of who is intelligent and who is not, but also with profound consequences: justifying racial segregation, for example, and justifying stricter immigration laws, which kept a lot of, for example, East European Jews from coming to the US at exactly the time pogroms were increasing in Europe and Nazism was on the rise. So these ideas are extremely influential.


Now, up until this 1955-56 moment, intelligence has become more and more central in US culture, and elsewhere in the world, but in particular in US culture, as the way of structuring society. The most intelligent should be at the top. That's the key to our success. They should be going to elite universities. They should be taking the government jobs. They should be running our companies. Then, correspondingly, certain groups lower down should be the workers. And then there are the others, the so-called “feeble-minded”, which is a very racialized category; Brigham and Terman, whom I've mentioned, have clear conceptions of which races fall mostly into the feeble-minded category. They should simply be sterilized for the good of society. The world would be a better place without them. So this is the context. The worst excesses, one might say, of eugenics were revealed by the Nazi atrocities, and the term fortunately became much less prominent after that. But the system of thinking, of course, continued well into the latter part of the century, and it still influences how we think today. And that is the backdrop when McCarthy coined the term artificial intelligence. He was plugging into a term that was incredibly fashionable at the time, and the primary technocratic tool for structuring society was totally racialized and gendered. And I think that has profound consequences for how we see AI today.


Kerry McInerney (32:14)

Absolutely. And I think this critical history of what intelligence even means, and of how we came to fetishize intelligence as this primary way of dividing people, really matters. And not only is that deeply racialized and deeply gendered, but also, I think, The Oxford Handbook of Eugenics talks about the way that class and disability became so central in shaping these eugenic ideas. And I think that absolutely matters when it comes to contemporary AI today, in terms of what we value and what we build for.


Time has really just flown by, but thank you so much. So, for our wonderful listeners, can you tell them where they can buy the book? And do go out and get a copy: it is a super pacey, fun, short read that captures in even more detail a lot of the brilliant ideas that Stephen's been exploring on the podcast.


Stephen Cave (33:00)

And indeed, we'll leave you on a cliffhanger, because the book has a lot more detail about how these mythologies have shaped how we think about AI, the hopes and fears we have for AI, and who gets to make AI. So do take a look. It's published by Cambridge University Press, and it can be bought in hard copy, but it is also open access, so it is free to download.


Kerry McInerney (33:23)

Yay, we love a free book! We will attach a link to that on the reading list on our website, so that you can go straight there after listening to this episode and find out more. But once again, Stephen, thank you so much for coming on.


Stephen Cave (33:34)

Thank you for having me. It was a pleasure.
