
The Good Robot Hot Take: The Future of Life Open Letter

Updated: May 31, 2023

Welcome to our new format: The Good Robot Hot Takes! In these fun, lively, conversational episodes, we (Eleanor and Kerry) discuss some of the biggest issues in tech, from ChatGPT and the sexy fembot problem in Hollywood film to why predictive policing is a scam and why gender recognition is garbage.


Reading List and Resources:



DAIR Response to the Future of Life Open Letter: https://www.dair-institute.org/blog/letter-statement-March2023



'What really made Geoffrey Hinton an AI doomer': https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/


Good Robot Interview with Josh and Derek on 'Calamity Theory':

https://podcasts.apple.com/gb/podcast/the-good-robot/id1570237963?i=1000590042879


Good Robot Interview with Meryl Alper on 'craptions': https://podcasts.apple.com/gb/podcast/the-good-robot/id1570237963?i=1000556281253



Transcript:


DEEPYCUB (Music)

Hot Takes with the Good Robot, Hot Takes with The Good Robot


ELEANOR DRAGE: 

Welcome to our first ever episode of The Good Robot Hot Takes. Every two weeks, Kerry and I will be giving our hot take on some of the biggest issues in tech, from ChatGPT and the sexy fembot problem to why predictive policing is a scam and why gender recognition is garbage. In our first ever Good Robot Hot Take, Kerry and I discuss the Future of Life Institute's open letter, which called for a six-month pause in developing large language models in the wake of the release of ChatGPT. We explore the problems with framing large language models as foundational and therefore inevitable, the dangers of AI 'race' rhetoric, why the long-term harms of AI are given more attention than what are termed short-term or immediate risks, and how race and gender shape the foundational premises of what is known as existential risk.


I also have a bit of a moan about an interview that I did on a BBC radio show called Moral Maze, where I explained that the real risks stem from the concentration of power in the hands of a few tech companies. We hope you enjoy the show.  


KERRY MCINERNEY:

Hi everyone, and welcome to our very first episode of The Good Robot: Hot Takes. So I'm Kerry McInerney. I'm a researcher at the University of Cambridge, and I'm the co-host of The Good Robot Podcast. And I'm here with Dr. Eleanor Drage, my work wife, a Senior Research Fellow here, and also co-host, producer, and editor of The Good Robot.


So for our new listeners a big welcome and for our longtime Good Robot fans, you might notice that today's format is a little bit different. So usually we interview someone super cool who we think is doing amazing feminist work in or about tech. Uh, so if you're new, please check out those episodes.


You can find us on Apple, Spotify, or wherever you get your podcasts really, but we are launching this new format, the Hot Takes, as a way to have quick and casual conversations about the latest topics in the news and the industry, um, particularly with AI, that's our specialty, but also thinking more broadly about technology and the role that it plays in our societies.


So if you're interested in a particular topic we discussed, we'll link some stuff you can read or listen to further on our website, www.thegoodrobot.co.uk, where you can also find a full transcript of the episode. So Eleanor, what are we talking about today?


ELEANOR DRAGE:

So we're talking today about this letter that pretty much everyone has heard about, unless you were living under a rock, that was published by the Future of Life Institute, and it was urging for a pause on developing systems that are more powerful than GPT-4, and everyone got involved, all the big tech bros like Elon Musk.


I was on Moral Maze on BBC 4... or was it 3!? talking about, um, being asked about whether we should all be shitting our pants, basically, about this open letter and about the kinds of systems that are being rolled out in pretty much every domain at the moment. And the reason why we're talking about it today on our podcast is because, well, for me, I was really concerned that when I was asked all these questions by extremely intelligent people on Moral Maze, they were not letting me talk about the short-term risks of AI.


They were just going on about whether we're all gonna die, and it's a really important thing to consider, right? These long-term risks of AI, the low probability, high impact risks. So we don't want to belittle existential risk at all. All we want to do is join the dots between short-term risks, like racism, like sexism, discrimination caused by the police using tools that they really shouldn't be using, and these low probability, really far off risks, or potentially closer risks. So we're really concerned that some conversations are happening everywhere and some conversations are now being belittled, which means that they're not being funded so much; like, um, problems to do with inequality and AI receive much less funding than existential risk problems.


So those are things that, that really concern us.


KERRY MCINERNEY:

Thank you so much. That's really helpful context. So just to kick us off, um, what exactly is a large language model? So this is the kind of AI system that is largely being talked about in the open letter.


So they use GPT-4 as an example, but there's a wide range of these models. Um, so what is a large language model, for people who might not be familiar with this, and why are people worried about them?


ELEANOR DRAGE:

Okay, so I'm gonna go with the easiest definition, which is a butchering of computer science. So if you think about when you type something into Google and it guesses what comes next, you know, celebrities play that game where they read off the end of the sentence, like, 'Is Jennifer Aniston pregnant?'


So the system will try and guess, you know, in a sentence, what is likely to come next. So that's sort of what LLMs do: these large language models, through a lot of data that's ingested, you know, there's lots of talk about how many billion parameters these systems have, try and guess what will come next in a sentence.


And this means that they can do all sorts of things based on this probabilistic modeling of the likelihood of a sequence of words occurring in a sentence.
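To make that 'guessing what comes next' idea a bit more concrete for readers of the transcript, here is a minimal, hypothetical sketch in Python of the same principle: a toy model that counts which word tends to follow which in a tiny corpus and then predicts the most probable continuation. Real large language models learn these probabilities with neural networks over billions of parameters rather than by simple counting; the corpus, function name, and numbers below are illustrative assumptions only, not anything from an actual system discussed in the episode.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus; a real large language model ingests
# billions of words and learns billions of parameters instead of
# keeping simple counts like this.
corpus = "is jennifer aniston pregnant is jennifer aniston married".split()

# Count how often each word follows each other word (a toy bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def guess_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    if not counts:
        return None, 0.0
    best, count = counts.most_common(1)[0]
    return best, count / sum(counts.values())

print(guess_next("jennifer"))  # ('aniston', 1.0)
print(guess_next("aniston"))   # ('pregnant', 0.5) -- 'married' is equally likely
```

A real model does the same kind of next-word prediction, just over far longer contexts and with learned weights rather than counted frequencies.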


KERRY MCINERNEY:

Yeah, that's super helpful. And you can see, suddenly, I think, if you have been plugged into any kind of news cycle over the last few months, you've probably seen a lot of excitement about the advances that have been made in large language model technology. So the most famous, of course, are the ChatGPTs, um, OpenAI's models that have been recently released.


Um, but there's also ones like Google's Bard, for example, that have made people really, really both excited and terrified about the possibilities these models offer. And it's important to recognize that these models do have, as Eleanor's just mentioned, a lot of really practical uses. So one example is automatic captioning, right?


That these models can lead to hugely improved accessibility for people who are hearing impaired because they lead to much better translations and transcriptions as opposed to what Meryl Alper has previously called on our podcast, craptions. And so I would really recommend you check out her episode if you're listening now.


But at the same time there's been, again, like, a lot of concerns. These have ranged from misinformation that might be propagated by these models, to plagiarism, to citationality: how do we know where this information's coming from? But also discrimination. So the way that these models are often trained on really biased datasets scraped from the internet that are really imbued with forms of gendered and racialized discrimination, among many other kinds of injustice.


ELEANOR DRAGE:

Totally, and we're gonna talk a lot about Emily Bender, Timnit Gebru, Meg Mitchell, and others in this podcast, because they were kind of the definitive spokespeople for the dangers of these large language models. There was a paper that they wrote called Stochastic Parrots, and it received a lot of attention, even though Meg Mitchell told us on an episode of The Good Robot that it wasn't actually that exciting.


They didn't say anything that hadn't been said before. So that's really interesting too. And she pointed out that it was about misogyny. It was about the fact that women weren't welcome in this extremely lucrative, extremely cutting-edge discipline within these big tech organizations and were being forced out.


So the people who are able to speak out about the risks of these systems are also the people who are valued, and the people who are being silenced, who are being fired, whose teams are being deleted from Microsoft, those people are not allowed to speak out about the risks. So I'd like to discuss today as well who can and who can't talk about these existential crises.


KERRY MCINERNEY:

Yeah, absolutely. And I definitely have, like, a lot of thoughts about the kinds of existential crises generated by things like large language models. But maybe we should actually start off then with the topical conversation today, the open letter by the Future of Life Institute. So Eleanor, could you just quickly describe for us: what was this letter saying, what was it about, and who wrote it?


ELEANOR DRAGE:

Thanks. Otherwise I'll just, like, garble on without having even introduced the theme. So this open letter was written by the Future of Life Institute. And they are a group of people who are worried about various issues, about long-termism, but they also think about what kind of long-term future they are looking for. Now, of course, they're not representative of the whole of humanity, and there's been these photos on Twitter, I dunno if you've seen them, of early Future of Life conferences where it's literally like 17 white guys on a stage. I really don't want to invisibilize the women that do work for them or are part of that community, I'm sure there are many, um, but its foundations were pretty masculine, and I think that plays a big role in the kinds of messages they're trying to communicate today.


KERRY MCINERNEY:

Yeah, absolutely. And I think, you know, what was really interesting about this pause letter is that there's a real failure in corporate responsibility, because surely if you really believe what you're doing is a risk, then A, you would've had meaningful ethical checks and balances installed in the production process, or B, and I'd say actually more importantly, you would've just taken a huge step back and said, is this even a good idea at all?


Should we be making these kinds of technologies? So the AI Now Institute report for this year, I'd really recommend checking that out and we'll link it on our webpage, argues that these large language models represent pathologies of scale. So they produce certain kinds of risks just due to their immense scale and their immense reach.


Um, but they also critique the way that, for example, institutes, and this includes academia, where we are based, as well as corporations, increasingly are referring to these kinds of models as foundation models. So suggesting that there's somehow, you know, a foundational scientific achievement and that we have to have them and they have to dictate our future in some way.


And so I think this is also why the pause letter frustrates me a little bit, because, you know, I think just saying that there's gonna be a pause also kind of suggests that these models are now an inevitable part of our future life. And like, look, I am not fully a techno-pessimist. I think there is a place for different kinds of technological developments, but I also think that the question of why we are doing this in the first place still needs to be at the center of our ethical reckoning with new technologies.


ELEANOR DRAGE:

Yeah, we are critical techno-realists. Well, I am, and Kerry also, uh, co-wrote that AI Now piece, so...


KERRY MCINERNEY:

A general sort of, you know, cheerleader slash reference person. But yeah, Amba [Kak] and Sarah [Myers West] who wrote the report are super fantastic, and I think, you know, their reflections on large language models, and specifically the way that this... the kind of fetishization of these new models is increasing this concentration of power in the tech industry that we're seeing in so many different areas, um, I think is just really accurate. It's really on the money, literally and metaphorically. So, you know, in the report they outlined the way that the uptake of these large language models leads to quite a big first mover advantage. So companies like OpenAI and Google are most likely gonna have, you know, such a strong advantage in this area that it's very unlikely that small companies are going to be able to compete with them, because these models require a huge amount of compute power, they require very specialized expertise, which is largely concentrated in, like, a small number of companies. Um, and you know, they're just very expensive to make. And so, you know, I think realistically we're going to see these kind of same few companies, the ones who are kind of calling for a pause, for example, who are also going to still be leading this arena once the so-called pause would be over.


ELEANOR DRAGE:

Totally, and I think maybe it's a good time to just go through, like, a couple of points from the letter. So what they're urging for is AI labs and independent experts to be able to develop and implement a set of shared safety protocols for advanced AI design that are rigorously audited and overseen by independent outside experts, which is kind of ironic because the people writing it are also part of big tech organizations that are really resistant to external audits. This is part of the problem that we have as researchers: most of these algorithms are proprietary, it's very difficult to have a look. Um, you know, OpenAI was supposed to be not-for-profit and then received a 10 billion dollar sum. So there's a tension here between what they're saying and what they're doing, and that's what we are trying to point out: do you mean it, or is this just a marketing stunt? Then, you know, as Kerry said, you are creating a problem and then purporting to solve it, which is analogous to a lot of tech invention, right? In my mum's day, you know, she always talked about how when you're creating a tech product, you find a need and provide a solution to resolve it. But now we are creating a need, creating a problem, just for the sake of resolving it, which is why you get a lot of, like, shoddy products that we're now addicted to.


The other thing that's kind of ironic is they're saying that AI research and development should be refocused on making today's powerful state-of-the-art systems more accurate, safe, interpretable, blah, blah, blah, aligned, trustworthy, loyal, all this stuff. But then, as we said before, they are not taking AI ethicists seriously. They are assuming that the stuff that's being done today to make systems transparent, through communicating to citizens the kinds of technologies that are being thrust on them and the option of opting out, that actually that is not related to this thing that they're asking of GPT-4 and systems that are more powerful.


So do you want to have these ethics principles incorporated in your systems or do you not? And, like, personally, I think, you know, if you wanna build something horrific, then whatever, it's up to you and the law will intervene. But if you're saying or pretending to future customers that your system's gonna be super ethical and brilliant, then that is extraordinarily unfair, because you are making it seem as though the corporations are the good guys and really it's the technology itself that is somehow vindictive and can kill humanity.


KERRY MCINERNEY:

I completely agree, and I think, you know, more broadly, just to kind of reflect on what you've raised there, I think it's a real testament to the failure of self-regulation in this field. I think particularly a few years ago there was a lot of hope that companies might be able to themselves implement, you know, wide-ranging kinds of ethical AI projects, that they might be able to create better research and production cultures and create better products as a result.


And I'd say that not at all to demean, you know, all the engineers and all the people who are working in tech to really meaningfully try to make a difference, to really meaningfully try to transform the way that we think about design and the way that we think about technology. Those people are there, they're doing incredible work.


We often have those people on our podcast, but I think, you know, and this really started, at least in the public eye, with the firing of Timnit and Meg, but has continued to go on with the letting go of numerous ethical AI teams. Particularly, I think, the role that Elon Musk has now played in reshaping kind of internet culture and his acquisition of Twitter, um, has really, I think, lessened certainly my trust, I think most people's trust, in the meaningful ability of companies to self-regulate and to bring about the ethical cultures that they're promising consumers. Um, and I also think that this is only likely to get worse, because we're already seeing this with OpenAI and Google, which is that these companies are also now getting locked into races with one another.


And also this gets tied up with, like, national politics or international politics as well, which is something we'll be talking about perhaps on another Hot Take. Um, but you know, I think that also really does not help the development of these ethical AI cultures. And so I'm personally feeling a little bit at a loss in terms of, you know, why people might still believe that self-regulation is really going to work.


ELEANOR DRAGE:

 So what is wrong with the race? Kerry, tell us.


KERRY MCINERNEY:

The race between different companies over these models? I see multiple problems with, like, the race rhetoric, and I think if you take a little Google around and you read any, um, news articles about sort of AI competition, you will see the language of a race being used. Um, I have, like, multiple, sort of, bones to pick with this analogy.


I think, like, firstly, it adds to the scaremongering and the hype around these models, right? Because it implicitly and often explicitly draws on other kinds of, like, historical races that have occurred. You'll often see the space race or nuclear arms races being referenced, um, in these articles. And I think it contributes to a general climate of fear.


Uh, and it also encourages, I think, um, a sort of conflictual model rather than a cooperative one. And this, I think, is actually quite counter to previous movements we've seen in the AI industry, for example, a few years ago when there was a big reliance on things like open source software. Um, and so I also think this race rhetoric language mischaracterizes AI as a weapon. AI, of course, can be weaponized and used in militarized contexts. It's very dangerous, it shouldn't be, I believe. Um, but AI itself is this multi-use, multi-domain technology. And so it doesn't really make sense to characterize it as a weapon. And then finally, and I'm drawing a lot here on, um, Seán Ó hÉigeartaigh and Stephen Cave's work, the race rhetoric can also potentially lead to people ignoring or overstepping ethical guidelines because they see ethics as slowing down the process. And we argue that going slow is actually a really important part of good tech development in general. Um, but in the eagerness to gain that first mover advantage and be the first person to have your product out to market, you know, that can end up turning customers into a kind of testing ground, where you say, okay, well, we are just gonna release this into the wild, hopefully it works, and we're going to get a lot of user feedback regardless, because we are the first exciting product out there. And that's really dangerous and it's just a really unethical way to make technologies.


ELEANOR DRAGE:

Totally. And actually Timnit Gebru pointed out the irony of OpenAI saying that they needed to work really fast to create a solution for existential risks, but that was through creating AI as fast as possible. So why do you have to build the thing in order to save everybody from it? That just makes no sense at all.


Um, AI ethics is slow. It's not a sexy discipline, you know, it's not seen as frontier. I mean, I think it's really sexy, but generally speaking, it's about, you know, slow incremental processes, doing something every day. Um, thinking about risks can be seen as killjoy work.


And we embrace that killjoy spirit, um, in a kind of Sara Ahmed way.


KERRY MCINERNEY:

 Yeah, absolutely. And I think there's something important in the messiness of that. Like, and again, like I actually don't love the kind of polarization people draw between like the AI ethics people on one hand who are concerned about like justice and then the AI safety people on the other hand who are concerned about like, AGI, like, I don't think that's actually the most helpful delineation to draw.


I think we have a lot of common ground. We're often working towards a lot of common things. We often will get on very well. Um, you know, I do agree that, you know, I think that one community has too much space and funding compared to other communities. But I don't want to suggest this idea of like, we can't get along if we think about harms and risks and different time scales.


But I also agree that, I think, you know, unfortunately the risks which look bigger and sexier and scarier get the attention, you know, um, and we particularly see that with any kind of risk that results in the whole of humanity getting wiped out. Also, anything that is really sensationalist, so it relates to sex. And on a side note, you know, Eleanor and I, when we first started this job, like, it was pretty much impossible to not get asked a question about sex robots, which is, like, fine, but whatever. Um, and so, you know, whereas I think, you know, when I think about the existential risks posed by large language models, I mean, I'm thinking about the environmental risks.


These technologies are hugely costly to the environment because of the amount of water needed to cool data centers, the compute power, the electricity, the carbon emissions. Like, this is, I think, the real existential risk posed by these technologies. And these aren't the ones that are making the airtime.


ELEANOR DRAGE:

Yeah, totally. And I think, as anyone who has ever worked in gender studies or gender studies related things will know, feminism and gender studies is not just about sex. It is about sex, but it also can be applied to lots of different domains. Like, I love Data Feminism by Catherine D'Ignazio and Lauren Klein. And they apply feminist principles from Donna Haraway and these amazing feminist thinkers to data issues, to data collection. So you can also apply these ideas to existential risk, and maybe actually now's a good time to talk about the wonderful book Calamity Theory, and we interviewed the authors on the podcast a while ago, right?


KERRY MCINERNEY:

 Uh, yes. So we've got an episode with Josh and Derek available in our history, and we'll also link that in the show notes slash on our webpage so that you can have a listen.


ELEANOR DRAGE:

So Calamity Theory is about how existential risk is defined in quite a limited way, right? So Nick Bostrom, the guy I was talking about earlier, he writes in Superintelligence that an existential risk is one where everyone dies. Nothing else is existential, except he also defines existential risk as the plateauing of intelligence.


So if humanity doesn't progress, capital-P Progress, Enlightenment-style, then we will decline, and that plateauing or that decline will lead to a sort of intellectual death. It's all really strange. And what Calamity Theory the book points out is that genocide is described as just a drop in the ocean in Bostrom's terms.


There's this really... it's a kind of affective politics of uncaring. It's so uncaring and, um, so violent in itself. There's a violence to that eradication of violence, or that, um, overlooking, of these terrible events that have been existential in the history of humanity.


KERRY MCINERNEY:

 Mm-hmm. Absolutely. And you know, we have an [upcoming!] episode with the incredible, um, indigenous sci-fi writer and scholar, um, Grace Dillon, and you know, a central theme of her edited anthology Walking the Clouds is this idea of indigenous folks, uh, in many places around the world having already experienced this kind of apocalypse.


And I think that is, you know, the real cruelty of Bostrom's work and labeling forms of genocide, forms of colonial extraction, and forms of dehumanization as just this drop in the ocean. I really struggle with that, and I think the kind of whole progress narrative behind this idea of, you know, the plateauing of intelligence, I think it's very hard not to see the parallels between that and forms of scientific racism, and particularly eugenic ideas and policies, um, that have really focused on the idea of trying to kind of always fully optimize the intelligence of certain subsections of the human race.


And I use the word eugenics very carefully, because, um, there has been a lot of pushback against people like Timnit Gebru for using the word eugenics to describe certain kinds of factions of long-termism. And I wanna be really clear, I am not individually accusing anyone who is interested in long-termism, who subscribes to certain long-termist approaches to AI and other kinds of world events, um, as, you know, necessarily believing in eugenic principles. I just wanna more broadly zoom out and say, I think we should be questioning some of the fundamental principles, though, when we say things like: what does it mean to try and always be increasing the intelligence of humanity and needing to be taking steps towards that, and saying it actually doesn't matter if a group of people experience genocide?


Like, to me, I think we should be really, really concerned.


ELEANOR DRAGE:

And he recently posted some emails that he wrote a long time ago, but he still wrote them, saying 'Blacks are more stupid than whites', um, and later used the N-word. So it's important to remember the foundations of these groups, because that carries the directions of their research, and also the ethical core, or lack thereof, um, of these organizations. Bostrom's racist comments weren't just incidental. They weren't just a side note. They're integral to this philosophy.


KERRY MCINERNEY:

Absolutely. And so I actually want to pivot, uh, onto a more recent event that has kind of followed this open letter, which is itself, you know, very highly contested, which is that Geoffrey Hinton, who is known as the godfather of AI and is a really, really significant figure in the development of neural networks, recently quit his job at Google and came out as, you know, as some people have put it, an AI doomer, basically, with, like, very, very serious ethical concerns about, um, the development of large language models. And he said, you know, earlier, so, like, around 2012, he wasn't worried that these models would be able to eclipse human intelligence in certain ways, and now he thinks they can.


Uh, he also does highlight, like, a large number of different ethical risks, including things, for example, like deepfakes and misinformation. And yet at the same time, I think for people like us, you know, I really respect him coming out to talk about AI ethics issues. I think raising those issues into the mainstream is incredibly important.


But on the other hand, it was a little bit frustrating, because it feels like a lot of people, often people who come from marginalized backgrounds, have been saying a lot of this stuff for a really long time. And again, people literally got fired for saying this kind of stuff. And now when Hinton says it, suddenly people stand up and pay attention. Um, did you have a similar reaction?


ELEANOR DRAGE:

 Yeah, and I think now is a good time to make a brief interjection and also say that even though Hinton is an amazing scientist and does know a lot about AI safety, a huge amount, um, as does Elon Musk, it's interesting that people are willing to buy into their ideas without also listening to people who've specialized in AI ethics throughout their entire careers.


So when I was talking on Moral Maze, they were also saying, yeah, but Musk says, and these are people who don't uncritically buy into Musk the whole time. So I'm just wondering why people now are uncritically listening to these people when this is not the main focus of their work.


Um, you know, with Hinton, I think you could kind of mainly say it's his retirement gig, and certainly it's not Musk's, um, main focus either.


KERRY MCINERNEY:

No, absolutely, and I think, you know, like, again, like, I'm really glad that Hinton is speaking out about this, but I think it also, you know, has to do with, like, what we count as being meaningful knowledge about technology. And again, Hinton is a leading expert in machine learning and particularly neural networks, um, but at the same time, I think it's very telling that people's other kinds of domain expertise, so what it means, for example, to be someone who has, like, studied the impacts of technology on a community over a long period of time, that that has not really been treated with the same level of respect, even though that requires a huge amount of expertise, discipline, and knowledge.


And so I think it's also, you know, what kinds of knowledge we see as technical and what kinds of knowledge we see as superfluous.


ELEANOR DRAGE:

Do you wanna then talk about the kind of anti open letter that was penned by all these amazing people like Jessica González and Timnit Gebru and Sasha Costanza-Chock?


KERRY MCINERNEY:

Yeah, of course. So I know there's a lot of letters going around, so this is the last letter of our Hot Take that I'll be mentioning, uh, which is in response to Hinton. Um, a large number of women and non-binary people from the global majority, uh, wrote a letter asking news organizations and journalists to look for different kinds of experts, because they said, you know, yes, we agree with a lot of the issues that Hinton has raised, but it's very frustrating that people are only giving these claims a lot of attention now that Hinton has said them, and we've been campaigning and we've been working on these issues for years and years and years; we are experts too, and it's really important that we are recognized and we are shown in the news to be experts.


And I think this is something that we have both struggled with. I'm sure it's something that people, you know, who are women, who are non-binary, who are people of color, and who hold many, many other kinds of marginalized identities or experience other forms of discrimination, um, experience all the time, which is that you're not really taken to be an expert in the thing that you really do know the most about in that particular time and place.


Um, and so this letter, I think, was a really important step in maybe trying to rewrite that narrative a little bit.


ELEANOR DRAGE:

And it's good that these letters are really short, so give them a read. We'll put them in the reading list. Thankfully they weren't all written by English literature majors, otherwise they would be, you know, these 10-page exposés of problems.


KERRY MCINERNEY:

Yes. They're very to the point. So now that we've kind of ruminated on some of our thoughts about these letters, um, Eleanor, what's your kind of takeaway point? Like, if someone skipped through this whole episode 'cause they couldn't stand us talking about random stuff, what do you hope that they would actually remember going away?


ELEANOR DRAGE:

Okay, so the scary thing is not necessarily the technology per se. It's the homogeneity of the people building these systems and of perspectives about what constitutes intelligence, it's the firing of the ethicists, it's the corporations more broadly.


Those are the things that are terrifying, and actually those are the things that we can intervene in.


KERRY MCINERNEY:

I mean, that's a great takeaway point, a really crucial one. I think my takeaway point would be, you know, uh, to reemphasize, I guess, what I said earlier, which is to say: while I don't think the excessive polarization of kind of critical and ethical approaches to AI is helpful, I think it's just really important to recognize how historical and current day inequalities get replicated in the conversations that we're able to have about AI, in which conversations get airtime, which risks are considered to be existential, and which existential risks actually get people frightened and running.


And again, the environmental problem here, I think, is key. And I also want people to take away, um, that, like Eleanor said, sometimes the problem is the technology, but sometimes the problem is that the power related to that technology is concentrated in a very, very, very small subsection of hands.


And that that is maybe what we need to be questioning more than anything else.


ELEANOR DRAGE:

The other thing also is the chumminess between OpenAI, the XRisk community, effective altruism, all these adjacent groups, and a lot of this is founded on these kinds of bromances. And even Sam Altman has talked about taking people out for lunch and dinner, and that's how, um, these communities have developed.


I don't know whether you want to say something on that kind of triangle between, you know, these different organizations.


KERRY MCINERNEY:

Yes. Yeah, no, I was gonna say that, you know, people like Elon Musk, who are huge technological players and have a massive stake in the success of the tech industry, are the people signing these letters. And you know, that's really ironic. Firstly, because does Elon Musk have the best record as an AI ethicist, given, for example, the way that Tesla has been implicated in various kinds of, like, AI-related harms?


You know, I would say probably no. But also secondly, I think that there is this queasy intimacy between, you know, these public ethics projects and big tech. And again, I'm not pretending to be perfect here. You know, Eleanor and I have both worked with tech companies, have both accepted small sums of money in various ways.


Like, I wanna be really transparent about that. Um, but at the same time, you know, I think there is, you know, a real concern, given that these companies are making claims to wanting to pursue these AI ethics projects, wanting to bring in AI ethicists, people like Eleanor and I, while at the same time they're firing the AI ethics teams, they're cutting them out of important conversations. And that makes me increasingly more skeptical about what it means to be able to do AI ethics work in corporate environments.


And to your point around kind of chumminess and the links between these different kinds of communities, again, like, you know, this is not a personal attack on anyone who is in the XRisk community, um, but I do think that it is dangerous that to some extent we are allowing the people who have the most power over these technologies to define what the risks of that tech are.


And these are people who often are not going to be experiencing, you know, the risks themselves. I think one of the signs of class privilege, of power and its concentration, is that the people who know the most about these technologies, have the most access to them, and have the most transparent understanding of them are deliberately creating a world in which they don't have to be subject to its risks, but they get its profit. And that scares me.


ELEANOR DRAGE:

That's highly irritating. And I also wanted to say, as you were saying before, existential risk is enormous. There's lots of different people doing it and lots of different organizations, and what's happening with the Future of Life Institute isn't indicative of existential risk everywhere. And we work closely with XRisk people at Cambridge, and Kerry's amazing at building these links between these different parts of our centers.


So we are really, really interested in collaborating, and a lot of the issues are overlapping. And also, XRisk at Cambridge contains things like volcanoes, so it's certainly not all about AI. So thank you so much for listening today. I love this Hot Takes episode. Yay, Hot Takes!


ELEANOR DRAGE:

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney, and edited by Eleanor Drage.
