
Josh Schuster and Derek Woods on Transhumanism and Existential Risk

Ever worried that AI will wipe out humanity? Ever dreamed of merging with AI? Well these are the primary concerns of transhumanism and existential risk, which you may not have heard of, but whose key followers include Elon Musk and Nick Bostrom, author of Superintelligence. But Joshua Schuster and Derek Woods have pointed out that there are serious problems with transhumanism’s dreams and fears, including its privileging of human intelligence above all other species, its assumption that genocides are less important than mass extinction events, and its inability to be historical when speculating about the future. They argue that if we really want to make the world and its technologies less risky, we should instead encourage cooperation, and participation in social and ecological issues.


Joshua Schuster is an Associate Professor at Western University and the co-author of Calamity Theory: Three Critiques of Existential Risk. His research focuses on American Literature, poetics, and environmental ideas. His first book, Ecology of Modernism: American Environments and Avant-Garde Poetics (U of Alabama P, 2015), focuses on modernist American literature and music in relation to environmental problems of the era between 1900 and 1950. He is currently working on a new book that discusses the literary, philosophical, and psychological implications of the extinction of animals. He teaches courses that cover a range of American writing, as well as courses on environmental literature, contemporary philosophy, and poetry.


Derek Woods is an Assistant Professor of Communication Studies & Media Arts at McMaster University and the co-author of Calamity Theory: Three Critiques of Existential Risk. He is a writer, scholar, and theorist who works in the fields of environmental media, science and technology studies, critical theory, and US and British literature. After completing his Ph.D. in English at Rice University, he held positions as Postdoctoral Fellow in the Society of Fellows at Dartmouth College and Assistant Professor of English at the University of British Columbia. His publications include articles in journals like 'diacritics,' 'New Literary History,' and 'Forest Ecology and Management,' and the book Calamity Theory: Three Critiques of Existential Risk, which he co-authored with Joshua Schuster (University of Minnesota Press, 2021). He has given lectures in the UK, Germany, France, Russia, and the United States.


READING LIST:


Calamity Theory: Three Critiques of Existential Risk


TRANSCRIPT:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Ever worried that AI will wipe out humanity? Ever dreamed of merging with AI? Well these are the primary concerns of transhumanism and existential risk, which you may not have heard of, but whose key followers include Elon Musk and Nick Bostrom, author of Superintelligence. But Joshua Schuster and Derek Woods have pointed out that there are serious problems with transhumanism’s dreams and fears, including its privileging of human intelligence above all other species, its assumption that genocides are less important than mass extinction events, and its inability to be historical when speculating about the future. They argue that if we really want to make the world and its technologies less risky, we should instead encourage cooperation, and participation in social and ecological issues.


KERRY MACKERETH:

Brilliant. So thank you both so much for joining us. We love a double episode so it's really exciting to get to chat to you both. So could each of you introduce who you are, and what you do, and tell us what brings you to the topic of thinking about technology and existential risk?

JOSH SCHUSTER:

Sure, so I am an Associate Professor of English at Western University, in Ontario, Canada. And I would describe myself as an Environmental Humanities scholar and thinker. And I've been working on the topic of extinction for both animals and humans for about a decade. And as I was sort of trying to read everything in the field that I could, I kind of stumbled onto the work on existential risk, and specifically the work of Nick Bostrom, a philosopher in that field, and found that he had a pretty remarkably developed understanding of extinction from a very different perspective than I had thought of. But immediately I saw how my own work and thinking was very different than his, and that there was a conversation there to be had.

DEREK WOODS:

And that was Joshua Schuster, my co-author - I'm Derek Woods. And Joshua really is the first author on this project and, because of his work on extinction, took the lead, but along the way we started having conversations about existential risk. And because a lot of my research kind of goes in this direction as well, we decided to write this book together. So I'm an Assistant Professor at McMaster University in Communication Studies. And in the past I've kind of worked my way through many disciplines: I started out in Forestry and Ecology, and then worked into English Literature and Philosophy. And now I'm, I guess, in communication and media. And so, you know, I'm interested in problems related to technology, high technology, politics and the sciences. And I guess the fields that I work in most directly now are the interdisciplinary fields of Science and Technology Studies and the Environmental Humanities, right. So that's a big part of what leads me to the topic, because in the Environmental Humanities we have to deal with, discuss and think through many, many different cultural layers of apocalypticism, right - of anxiety about extinction, fantasies of future ends of the world, and so on. So the field of existential risk is one version of that. And so this is kind of the story of how Josh and I both ended up writing a book about it together.

ELEANOR DRAGE:

We are The Good Robot. So we'll ask you our three billion-dollar questions: what is good technology? Is it even possible? And how can thinking about technology and its risks help us get there? Can you go first, Josh?

JOSH SCHUSTER:

Sure. I want to start by making a distinction between what I would call ecological content and ecological form - and I'll answer the question in a sec - because I think of ecological content as objects, infrastructure, self-driving electric cars, solar panels, artificial intelligence, robots, and also bicycles, parks. So that's ecological content. And I think of ecological form as how we relate to each other: how we use our imaginations, how we practise consultation, consent, non-coercion, how we encourage participation - all this I would call ecological form, sort of how we relate to that content. And I think it's applicable to robots and artificial intelligence and technology too, this sort of technological content and technological form. And in my mind, you know, artificial intelligence will go well if we focus as much on the form - if that artificial intelligence itself is sort of practising things like consent and participation, and encouraging things like imaginative critique, and these kinds of, again, what I would call formal approaches to our situation.

DEREK WOODS:

And for me, I guess, to answer the good technology question, I've been thinking hard about it, and I'm trying to think of something good. And I guess, you know, the part that I'm sure almost goes without saying, but doesn't in the tech world in many ways, is that good technology should promote equality. And ever since the rise of the Internet, it has by many measures gotten worse, right? The different kinds of inequality have gotten worse, so good technology should reduce even economic inequality - then it's sort of moving in the right direction. And the other thing, I think, is that in some ways good technology has to turn towards nature, broadly speaking, right - there are a lot of different ways to talk about that. But, you know, instead of abstracting away towards rationality, or sort of disembodied intelligence, or, say, trying to leave the earth behind for other planets and continue that basically colonial project, we should be looking to the life that's around us as a model for design, right, to kind of find intelligence there. And that's maybe dangerous because it can be romantic and say that nature is always good, which it isn't in any way. But I think I'm more willing to stay with that danger than to go in the direction of, again, abstracting intelligence away from the Earth - right, in some way many species have to participate in the project of good technology.

ELEANOR DRAGE:

Thank you so much, Derek. And can you continue then by telling us about what existential risk is, defining it for us? Who are the key players? And I think a lot of people who are listening will probably have heard of Bostrom and Superintelligence, but if they haven't, what's it about? And why should they be listening critically to these thinkers in AI?

DEREK WOODS:

Sure, I'm happy to - maybe after I say some things, Josh will fill in more details. But existential risk in many ways goes back to a paper Nick Bostrom published in 2003, I believe, which kind of laid out the stakes of the field and provided a typology of different extinction risks of the worst kind, right? So the whole human species is driven to extinction. And long story short, part of why the topic is of interest to your listeners is that for people like Bostrom in this field, the worst risk of all seemed to be the risk of AI, right, which, some would argue, has a one in six chance of driving humans extinct within the century. So those are kind of the stakes they're laying out for us. But you know, over time the field has expanded, many other people have gotten involved - we talk about the work of Toby Ord quite a bit as well - and institutions have been put together, institutions which in many ways are different, so there's the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, of course, and others as well. There's one in Boston that's sort of associated with MIT, there's one at Stanford. One of the things that's been most interesting for me about this field that studies future human extinction risks is, I guess, in particular its massive investment from tech billionaires. So there have been millions of dollars poured into it by figures like Elon Musk and Jaan Tallinn of Skype, right, and these folks also sit on the boards of directors at places like the Cambridge centre. So there's a real interest on the part of, say, Silicon Valley as a broad industrial / intellectual world - or, to put it less generously, on the part of the tech bro intellectuals, there's a deep interest in existential risk. So part of what draws me to it is also that, and the idea that by studying it we can learn something about, I guess I'll just call it, the ideology of Silicon Valley.

ELEANOR DRAGE:

Fantastic. Maybe Josh can help us with the next thing that I wanted to ask, which is, it seems as though, you know, when you look at Elon Musk and all these other investors that are concerned with existential risk, what they're mostly concerned with is the end, not necessarily of human life, but of intelligence - it's intelligence that they are really concerned with, human intelligence that is. So how do transhumanists define intelligent life? And how do they think intelligent life can disappear?

JOSH SCHUSTER:

Sure, yeah. So I'm just going to quote Bostrom’s initial definition from that early essay, where he defines existential risk as “one where an adverse outcome would either annihilate Earth originating intelligent life or permanently and drastically curtail its potential”. So we spent a lot of time in the book unpacking that initial definition. Specifically, he doesn't mention human life or animal life or anything ecological, but rather Earth-originating intelligent life as this sort of unit of value and the kind of most important objective for thinking about what a true existential risk is. And we found many problems with that. Firstly, it suggests - this is something he brings up in the essay - that, you know, there could be catastrophes that will inflict violence on humans and animals, but as long as they don't ultimately curtail the pursuit of Earth-originating intelligent life, then eventually things will work out the way they should, towards this transhumanist objective. Another problem we have with it is that he doesn't really define intelligence, or superintelligence, beyond a kind of very limited, rationalistic, scientific understanding of it. And we respond by saying, well, there are many different kinds of intelligence: there's emotional intelligence, artistic intelligence, social intelligence. And these are just as important in understanding our existential condition as this thinner view of what Bostrom thinks superintelligence involves.

KERRY MACKERETH:

Fantastic. That's so interesting. And Bostrom's idea of superintelligence obviously has been so influential for thinking about the future of humanity, for thinking about the future of artificial intelligence. And something that Bostrom claims is that humanity has to kind of leave our mortal fleshy selves and become this immortal, 2.0 version of ourselves merged with technology, or otherwise we'll have failed to achieve our potential as intelligent beings. And of course, other amazing guests on our podcast, people like N. Katherine Hayles, have really critiqued this kind of perspective, this kind of posthuman thinking. Feminist scholars of science and technology really emphasise the importance of embodiment as a human condition. But for someone like Bostrom, to be sort of just human, just as we are, it's not good enough. But there's also another interesting irony that you observe here, because achieving this human 2.0 will rely on massive resource extraction and material infrastructures that benefit the few. So for example, AI is hugely energy consuming, and climate change, we know, affects the world's poorest. So could you explain what the significance of these ironies is?

DEREK WOODS:

I think that, first of all, like you said, the tradition that N. Katherine Hayles' work represents is very important in the background of our field - our sort of assemblage of loosely organised fields - as a way of talking about embodiment, as a way of theorising the relationship between embodiment - having bodies as humans, and as humans that are always in relation with different kinds of non-human species - and also as bodies that have races, genders, and that are in different ways marked, right. So in a sense, intelligence isn't something, from our perspective, that can be extracted from the body, or in some sense elevated above what we are, our existential condition as animals that are always kind of becoming in relation to technology. So that tradition, which we can use Hayles to kind of mark, represents a lot of important feminist work about embodiment, and I think that's one of the things that's, well, basically completely absent from the discourse of existential risk, and yet it's been important to thinking about what intelligence is. So one of the main ironies that you mentioned is that it's a bit unclear what intelligence means compared to the human - so what is the relationship between intelligence and the human that we're talking about? If we look across the discourses of existential risk and look at the basic model, at times the precise definition from Bostrom that Josh mentioned is kind of front and centre, right? So we're worried about the extinction of Earth-originating intelligent life. But at other times we're talking about humans, and at other times we're talking about civilization. So what exactly is the relationship between what we are now - the thing that we would call human, which is in and of itself a bit of an unsteady definition - and intelligence, right? And what is it that we're trying to protect? Especially if we extend the timescales far enough, and we look out towards some kind of cosmic horizon, which is what existential risk people do. At which point, we have to ask what will the human become? And then I guess the second big irony related to this kind of transhumanism problem has to do with the relationship between the superintelligence that is the fear - the worst risk that might befall us and that might drive humans to extinction - and the goal of fulfilling our full potential and becoming transhuman, disembodied intelligences. Which in a way suggests that superintelligence is both the goal and the problem, both where we're trying to get to and the risk. So we're trying to get to a sort of good version of superintelligence, but we have to avoid letting superintelligence extinguish us, along with any number of others from a long list of extinction threats.

JOSH SCHUSTER:

But I'd just add that, you know, why can't we say superintelligence is, like, you know, caring for the planet, caring for each other, having a kind of ecological sensibility? And it's always the case that there's definitely a sort of cosmological or planetary imagination that is central to Bostrom, or Elon Musk, or many in this field. And, you know, you have to wonder - other life forms would probably ask the same question, but they might not be so obsessed with our technologies; they might be very interested to know, how do we take care of our own planet? And so, to me, I don't understand why superintelligence wouldn't feature that from the get go.

DEREK WOODS:

Yeah, absolutely. You know, there's an idea that's kind of helpful for me here, which was part of my education, because my dissertation supervisor, Cary Wolfe, writes about posthumanism, and he makes a distinction between posthumanism and transhumanism. Transhumanism is kind of the Ray Kurzweil, early Silicon Valley 'let's separate intelligence from the body' project - and ultimately that dream, which, as a sidebar, to me is a deeply Christian dream, right? This is the immortal soul, etc. But, you know, even leaving that aside, we have this kind of extreme humanism which wants to create a posthuman that is a more powerful version of certain aspects of the human - generally rationality - and to kind of maximise that, right, and multiply it many times. But then there's another posthumanism, which would say, look, the concept of the human is the problem. By thinking of the human as some kind of disembodied intelligence, we are taking ourselves out of the mesh of relations that really creates us, and that's the most interesting and important thing, politically and ethically, about living on a planet.

ELEANOR DRAGE:

Completely - it does seem like we're chasing our own tail here by trying to define or delineate the boundaries of the human, when actually we know from our very intimate relationships with technology, with the natural environment, with animals, that those boundaries are not that clear. Which leads to the question of why human extinction, why not animal extinction - you've partly talked about that just now. But you've quoted in your book indigenous scholars like Winona LaDuke, who explore animal extinctions and how animal extinctions intersect with human life. And I think a lot of people listening will be very aware that animal extinctions and the deterioration of the planet in general are a big problem for humankind too. So what are the biggest risks of taking human extinction way more seriously than animal extinction, and not seeing the two as interrelated, or not foregrounding that interrelatedness?


JOSH SCHUSTER:

So yeah, well, that Winona LaDuke quote is very important to us, her argument being that there really is no case of animal extinction that doesn't also intersect with a kind of human oppression, or instances of human violence towards each other. And specifically there's a relationship between indigenous communities and their oppression and the disappearance and ultimate extinction of animals. And so it's remarkable that it seems very few in the existential risk field mention animal extinction, even though that's obviously a huge resource for understanding the process of extinction and the consequences of the absence of life. But then there's the obvious other relationship, that as animals disappear, that increases other threats to human existence or intelligent existence. So they never were separate all along. And then I think another issue is related even to the question of technology, artificial intelligence and superintelligence, in which there's all this discussion about what's called 'value alignment', which trains machines to have the same values as humans would. And we mention in the book that humans have struggled to align values with each other, but also with other life forms, including animals. So there's really no consistent human-animal value alignment out there. And that should be a key site for understanding the value conflicts and problems that we would think artificial intelligence would have in connection with humans. So animals are present all along, and animal extinction issues should be front and centre in thinking about existential risk.

KERRY MACKERETH:

Absolutely, yes. And I think this also raises a kind of broader point around race and racialization and coloniality in the field of existential risk, just because, you know, I think, for myself, and probably a lot of other people who are adjacent to this field, it's one of the issues that we've just felt hasn't really been grappled with in any meaningful sense by the vast majority of key thinkers in this field, like Bostrom. So for example, even if we think of, you know, the horrific history of famine in the British Empire, the way that this mass suffering caused by the British Empire still isn't really understood as being an existential risk to those groups that experienced it. And so I think, you know, people like you and other scholars are bringing this much greater critical engagement with who is positioned as being at risk and who is positioned as being kind of the threat. Like my own work focuses a lot on anti-Asian racism and AI, and, you know, even if we look at the COVID-19 pandemic, if we look at discourses around AI, we see kind of China being positioned as a sort of external threat, and we see particular groups, such as, you know, white Silicon Valley males, being positioned as sort of the people who are at risk. And so that's kind of why I wanted to ask you a bit about how histories of racial violence are being positioned in the field of existential risk, and why we need to be framing these kinds of violence, like genocide, like attempted erasure, as existential risks, not just kind of local risks.

DEREK WOODS:

Here I think it's kind of important to make some distinctions at the beginning, because first of all, when we're talking about existential risk, we're talking about a fast-changing field. And our book focuses a lot on the early model, where we're trying to go back to the philosophical roots, where you find one version, one answer to this question. A lot of people who are now working under that umbrella, which has grown quite a bit, in part because of tech industry funding, are often working on something like policy-relevant risk analysis that deals with future threats and human suffering, but doesn't really work with this extreme model. And at the same time, there's a pretty big popular footprint around existential risk as well, which isn't so much focused on the wider umbrella but is focused on the original philosophical model, and that includes a major web presence, all kinds of, you know, much-viewed YouTube lectures, articles in everything from Wired to the New Yorker to The Wall Street Journal. So from centre-left to centre-right publications, a lot of attention has been given to this topic. And in that kind of attention, broadly speaking, there is no attentiveness, really, I think, to race and coloniality. And the big problem, I guess, is that in the course of defining an existential risk as the most extreme risk to Earth-originating intelligent life or to humanity, the original philosophical model we saw in folks like Bostrom and Ord tends to understand this extinction risk as something that doesn't include past genocides, or, say, the kind of massive waves of suffering and famines produced by the British Empire that you mentioned, Kerry. So those aren't seen as existential, basically - they're not part of the definition. They might be horrible in many ways, but - and some of the language people use around this is often a bit dismissive - they can only take kind of a backseat role in relation to this larger-scale cosmic trauma. They're described kind of as blips along the way to the broader evolution of human intelligence. And I think even for people who haven't been directly the victims of this kind of genocidal violence, that kind of language is pretty offensive.

ELEANOR DRAGE:

Yeah, it's kind of difficult to read in lots of ways. And often there is this language of nonchalance, where they use kind of distressing language and pass it off as not a big deal. And I wonder whether some of this analysis of the language they use is from you, Derek, as a Professor of English, among other things - I don't know who wrote this of the two of you! But you talk about how Bostrom uses language like 'bangs' to describe a sudden extinction, or 'crunches' to describe human civilization as stunted because transhumanity is never achieved, and also 'shrieks' and 'whimpers', which describe other kinds of risks. And these words come from TS Eliot's The Hollow Men, for any of you listening who might have recognised that, which is a story of lost souls. And the souls do actually, in the poem, realise their plight with some humility. And Eliot took some of that imagery from Dante's Inferno, which is itself about those who cannot be redeemed. The Hollow Men has been described as eerily preempting humanity's conflicts with each other that resulted in genocide, war and spiritual emptiness, and it still continues to do so even beyond Eliot's time. But Bostrom uses that language in a very different way to how Eliot used it, referring to a sad humanity that's either been wiped out by an extinction event of its own making, or hasn't managed to yet become immortal. So what's the significance of overlooking internal human divisions and hatred, which is what Eliot's poem was about, and instead focusing on human extinction, as Bostrom does?

JOSH SCHUSTER:

We find Bostrom's own language pretty jargon-heavy in strange ways. Why those terms from TS Eliot? How useful are they? We appreciate the sort of metaphorical creativity there, but it's hard to assess how helpful those terms are. One thing that Bostrom does is assume that if humans don't ever achieve transhumanism, that too is an existential failure or an existential disaster. And so there's really only one way, one narrative here: you have to achieve some form of utopian superintelligence, and anything less is already set up to be a kind of disaster in his mind. And we find that to be really astonishing, as a definition. On the other hand, I give Bostrom and some of the existential risk field credit for being creative and imaginative about speculative futures - I think it's always the right thing to do, this kind of utopian scenario questing, at the same time as you exercise your dystopian imagination as well. To me, those are the right coordinates for being, you know, alive today - I mean, every day seems to bring another sort of utopian or dystopian scenario. So I think this kind of philosophy is really helpful for refining that kind of thinking. But it hasn't cornered the market on either utopia or dystopia. These are huge fields, and we can learn from so many different people - certainly people across the sciences and humanities, from all over, as well as indigenous communities, communities that have a different relationship to utopia, and traditions like Afrofuturism that include a kind of concept of race and community as central. And these are other traditions that we bring a little bit into the conversation with the field of existential risk. But I think there's a lot more to talk about.

DEREK WOODS:

Yeah, absolutely. I'd just add - I was thinking about this listening to what Josh was saying - there's something strangely ahistorical about the idea of casting our speculative minds so far into the future that we're thinking about a deeply imaginary extinction event, and using that to kind of rewrite the significance of history, of the history we've experienced, especially the history of colonial genocide and the wars of the 20th century, right - and so to frame those as much less significant in relation to this larger-scale catastrophe that could befall us. You know, I think that it just speaks to the need, if we're doing futurism - and sure, we should talk about the future, we should read science fiction, we should speculate, and we should plan for risk - for this to always happen in a historical way, which seems sort of paradoxical. But I think to talk about the future well, you have to be doing history, you have to be looking into what's already happened in the past. Someone who makes a good point about this, and who's pretty widely known, at least in my area, is Kyle Powys Whyte, the Indigenous environmental scholar, whose argument, which is widely circulated for good reason, is that, you know, isn't it strange that settler people - and white Europeans more broadly - are now so worried about apocalypse, because indigenous people have been kind of living through an apocalypse, or at the end of their world, for centuries as a result of colonialism, right? So that's one good example of where, if we're going to talk about apocalypticism and looking to the future, we should be looking into that past and using it to nuance and politicise the definitions of something like existential risk. Yeah, and also to just expand - which is a big part of what our book is trying to do - to expand the meaning of 'existential' by bringing other traditions to the table.

KERRY MACKERETH:

Yeah, absolutely. I think this refracting between past and present is just so crucial when it comes to, as you say, thinking critically and politically about existential risk and what that means. And then finally, I want to bring us back to that question of thinking about the future, because our futures are often framed strangely in this field as both hyper-collective - like humanity as sort of an undistinguished mass - and yet strangely individual; it's this sort of liberal humanist kind of individual, often male, often white figure, that still seems to be at the heart of it. But we know that our futures are collective, and we know that we're bound to each other. And as feminist scholars, I think this sense of collectivity and solidarity is really important to us. And so I guess I want to ask you, why do solidarity and collectivity not really play much of a role in this work? And what kind of hopes do you have, as people who work in this field, for how ideas around solidarity and collectivism could maybe transform the field of existential risk going forward?

JOSH SCHUSTER:

Yeah, thanks. Great question. So I would first of all say it's not just solidarity across humans, but it would include animals, you know, the nonhuman world, the environmental world - there are so many opportunities for solidarity, so many ways of being connected. And so that's great: there are so many opportunities for participation. And this is something I tell my students and friends as well - the environmental movement is about all different forms of participation and connection, and that's a wonderful opportunity. And in the existential risk field, I was just astonished how little they mention things like consent and consultation and coordination, cooperation and participation - the sort of basic social glue that makes the world less risky. So if you want to reduce existential risk, you should encourage these kinds of social bonds. For example, I was listening to one lecture by a well-known scholar in existential risk, who was talking about the need to do geoengineering, and specifically that we might need to sort of make volcanoes erupt on command, because that would lower the atmospheric temperature, and that this should be something that should be studied and probably implemented eventually. And what I didn't hear is that, you know, most of these volcanoes are on islands that are populated by indigenous people - not only, but in most cases they are. But where's the emphasis on consultation, consent and participation of the people who live next to these volcanoes? And, you know, shouldn't their voices be the first ones to be guiding this kind of thinking? I didn't hear that. And I'll just add, though, that it's interesting - I find in existential risk they're very radical in terms of utopian imagination, but they never really, you know, make basic criticisms of things like capitalism, which tends to increase existential risk situations by putting everyone in a risky economic state. And they don't entertain other forms of political collectivity as all that compelling. Whereas it seems to us that we're in this moment where we have to ask: how do we think of all of ourselves on this earth, you know, sharing the planet? What sort of political structure will that mean? There'll be multiple forms of that; it won't just be the UN or something like that. But there's real promise there of achieving a new kind of collectivity. And in that field, it's not really present.

ELEANOR DRAGE:

Fantastic. Well, thank you. You're completely right, the slow violence of capitalism is not as sexy as these hard and fast existential crises. So yes, thanks for bringing that to everybody's attention. It was amazing to speak to you. And I hope that we'll get to see you again very soon.

DEREK WOODS:

Well, thanks so much for having us on. I've really enjoyed listening to different episodes of this podcast, so it's fun to actually get to talk to you.


ELEANOR DRAGE:

This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.


