Transhumanist Fantasies with Alexander Thomas
- The Good Robot Podcast
- May 13
- 24 min read
In this episode, Eleanor talks to Alexander Thomas, a filmmaker and academic who leads the BA in Media Production at the University of East London. They discuss his new book about transhumanism, a philosophical movement that aims to improve human capabilities through technology and whose followers include Jeff Bezos, Elon Musk, Larry Page, and also apparently the DJ Steve Aoki. Alex is himself one of the foremost commentators on transhumanism. He explores transhumanist fantasies about the future of the human, which are obsessed with the extremes of possibility: transhumanists think that AI will bring us either radical abundance or total extinction. Transhumanism, Alexander says in this episode, reduces life down to information processing and intelligence, which amounts to a kind of IQ fetishism.
Dr Alexander Thomas is a multi-award-winning film director and screenwriter. His academic research questions what it means to be human when cultural, cybernetic and biotechnological developments undermine the notion of the human as a cogent and eternal category. Alexander has directed four multi-award winning short films including Beverley which screened at over 100 international festivals and events, won 38 awards and was longlisted for an Oscar. Alexander is the host and producer of the A-Z of the Future Podcast which explores 26 key topics of our times to provide a better understanding of our future and includes interviews with global thought leaders. He co-runs the company IntoTheFuture, which develops creative projects designed to entertain, educate and inspire on future-related themes. He has contributed chapters to books, written numerous articles and reviews and has featured on Radio 4’s Thinking Allowed.
Reading List:
The Politics and Ethics of Transhumanism: Techno-Human Evolution and Advanced Capitalism by Alexander Thomas.
Calamity Theory: Three Critiques of Existential Risk by Joshua Schuster and Derek Woods.
Bacteria to AI: Human Futures with our Nonhuman Symbionts by N. Katherine Hayles.
Staying with the Trouble: Making Kin in the Chthulucene by Donna Haraway.
Posthuman Feminism by Rosi Braidotti.
Also check out our previous episodes with N. Katherine Hayles, Rosi Braidotti and with Josh Schuster and Derek Woods!
Transcript:
Kerry: Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: What is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list compiled by every guest. We love hearing from listeners, so feel free to tweet or email us. We’d also really appreciate you leaving us a review on your podcast app. Until then, sit back, relax, and enjoy the show.
Eleanor: In this episode, I talked to Alexander Thomas about transhumanism, a philosophical movement that aims to improve human capabilities through technology and whose followers include Jeff Bezos, Elon Musk, Larry Page, and also apparently the DJ Steve Aoki. Thomas is a filmmaker and academic who leads the BA in Media Production at the University of East London. He actually interviewed me recently for his upcoming documentary on transhumanism, so I'm thrilled to be able to speak to him today.
Alex is himself one of the foremost commentators on transhumanism. He explores transhumanist fantasies about the future of the human, which are obsessed with the extremes of possibility: transhumanists think that AI will bring us either radical abundance or total extinction. Transhumanism, Alexander says, reduces life down to information processing and intelligence, which amounts to a kind of IQ fetishism. Kerry was on leave, so I'm on my own for this one. I hope you enjoy the show.
Eleanor: Well, it's such a pleasure to have you on today. And can you kick us off by telling everybody who you are, what you do and what brings you to the topic of feminism, gender and technology?
Alex: Yeah, sure. So first of all, thanks so much for inviting me. It's a great podcast, so a huge privilege to speak to you. My name is Alexander Thomas. I'm a filmmaker and academic. I lead the BA in media production at the University of East London and also teach on the film courses there. What brought me to these questions of technology and transhumanism, which we're going to talk about today was my PhD, which was looking at the kind of political and ethical implications of transhumanist technologies emerging into the context of capitalism, essentially, advanced capitalism. So I was really thinking about both the way capitalism produces certain types of technologies and people, how that kind of informs our techno-human trajectory, but also how transhumanist aspirations and imaginations are kind of informed or reflective of that kind of capitalist thinking.
So after completing the PhD, I adapted the thesis into a book. It's called The Politics and Ethics of Transhumanism, Technohuman Evolution and Advanced Capitalism. Bit of a mouthful. And it's published by Bristol University Press, Open access, so it's free to download and read online. So check it out. Yeah.
Eleanor: And it's a fantastic book. I really enjoyed having a flick through, and it covers super important themes that I think are completely underexplored and are everywhere in the way that politicians and AI opinion leaders and big tech talk about technology today. So before we get into those big ideas,
Alex: Thank you.
Eleanor: Can you respond to our three good robot questions? So what is good technology? Is it even possible? And how can feminism help us get there?
Alex: So what I love about that question to start with is that it provokes a recognition that technology isn't neutral, which is very important. Technology and science are sometimes presented as pure or in some way detached from the rest of the human life world. Kind of science as a spectator on existence rather than an active participant in it. But technological developments of course are a human pursuit. They're enabled and dependent on a context,
a place, a time, a culture, prejudices, interests, and all the other contingencies that go into their making. So we always need to ask in whose interests are technologies being developed, with what purpose in mind and with what forms of world making in mind as well, I'd say. So in that regard, good technology is possible, but it is an ethical question, not just a technical one.
And of course, ethics are...a bit more difficult because they're situated, they're derived from experience and they're kind of therefore bound by living in a given context. So they come from many specific points of view. There's no transcendent basis or view from nowhere from which we can derive answers to all ethical questions. So in a way, the answer to what is good technology would be technology that is built with that understanding in mind, technology that enables pluralism and inclusivity, that makes way for different viewpoints, which I think feminism has been absolutely key in making us aware of, and also derived from very different experiences and life ways of people on the planet. I would say the modern world is dominated really by a focus on the ever greater expansion of human means, but without a similar concern for thinking about inclusive, pluralistic, ethical ends. It's largely the profit motive, I think, that ultimately determines ends in the modern world.
And given the radical technological developments we're beginning to see, that the kind of the banality of that profit motive as the architect of the future, I think is becoming increasingly obvious and problematic. So technology, just like human life is not something that can be solved despite the claims of some transhumanists. It is something that requires an ongoing ethical response. And I think good technology should be aimed at enabling these pluralistic responses, different forms of world making, not a constriction or a confinement or a demand for conformity to a kind of certain dominant or universal form of thinking or life ways.
Eleanor: That was the full package answer. That's it, there's nothing more to add. I was at the AI summit last week, and there were a lot of pre-conferences around it that were very much focused on just the technical responses to AI issues, and society and culture were very much sidelined. So this is the dominant way of thinking, and it totally makes sense because...
Alex: Yeah.
Eleanor: If you have a conference mostly full of computer scientists and practitioners, they will try to solve their way out of a problem. Unfortunately, life is very complicated and you can't always do that. But I think this is sort of the war of the disciplines to some extent. So you've said that there are many different kinds of transhumanism. And I think I'm very much guilty of lumping all transhumanists into one boat. So I'm interested in you telling us.
Alex: Completely.
Eleanor: What are these different strands of transhumanism, with their different obsessions with what you call the three supers: super longevity, super intelligence and super wellbeing?
Alex: Mm-hmm. Yeah, okay. So, I mean, essentially transhumanism is a philosophy, or maybe we should think of it as an ideology really, that claims we should radically enhance the human condition using applied techno science. So it's self-directed human evolution. And that can sound a bit abstract, so those three supers help us to think about, you know, the concrete things they're talking about. So transhumanists would like us to potentially massively increase human lifespans, maybe live forever, or at least until we choose to stop living ourselves.
They think maybe we could become much more intelligent, perhaps godlike in our reason, our capacities for reason. We could also improve on what it feels like to be human. We could become better than well, maybe radiate pure joy and potentially derive new capacities. We could have more choice and freedom over what our physical bodies look like. Why shouldn't we be able to echolocate like a bat or out-sprint a cheetah, for example?
So in a way, if the history of scientific and technological progress is our attempt to use nature to better serve our ends, transhumanism can be seen as the revision of human nature to better serve our fantasies. But as you say, there are different versions of what that should look like. So I think there's a few reasons why it's really, really important to...know and think about transhumanism right now and one is that some of what transhumanists are claiming is becoming manifestly true. So our relationship with technology is spiraling at a rapid rate, our entanglement with technology is becoming deeper, it's more complex like you've kind of intimated there, it's often more opaque as well, we don't necessarily notice it sometimes and so we need to think hard about the implications.
Now who does it really appeal to at the moment? And I would say, well, what do you get the man that has everything, including a God complex? And the answer to that would be eternal life and omnipotence. So at the moment, more than any other group, I think transhumanism appeals to the tech billionaires of Silicon Valley. And for them, it justifies all sorts of horrors and injustices of the modern world. And, you know, it excuses the problems that perhaps their own view of the world and what they're doing have helped to create. It gives them a narrative that says they're actually leading, they're on the verge of leading humanity to untold levels of value. So super intelligent AI, post-biological consciousness, living limitless time scales, colonizing space.
So it is the scale of that imagination and possibility of transhumanism that tends to dwarf modern problems and injustices. So that's who I think more than anybody else loves it nowadays. But it is worth noting, as you've said, that there are many disagreements between transhumanists and different views of what our kind of techno-human evolution could and maybe should look like. Transhumanists are definitely not completely uniform. There are libertarian transhumanists, there's actually socialist transhumanists, there's religious transhumanists, which seems like it would be a kind of, you know, contradiction in terms in some ways if you think of transhumanism as this kind of post-enlightenment idea.
And there are transhumanists who argue that we should remain a biological species, and there are others who think we should actually replace humans with a kind of mind children, they call them, intellectually superior digital beings, or that we might be able to transfer our consciousness and become the digital being ourselves in some way. So there are lots of different types of transhumanism, and despite it being, I think, predominantly white, male, and people who love gadgets and technology generally, it would be wrong to suggest they're all like that.
But for me, I think the Silicon Valley libertarian-inflected version of the philosophy is becoming by far the most relevant, and certainly I think the most dangerous form of the philosophy. In terms of who, you know, maybe contests the philosophy or argues against it, traditionally I think it has been a group that you could call the bioconservatives. And what they're really advocating is keeping the integrity of the modern human. And I think this is quite a misguided critique, because we've always actually developed alongside our technologies. We're always changing and becoming something new. Life itself is process.
So I think clinging on to some kind of imaginary essentialist quality of the human is quite misguided. And instead, I think we need to ask those ethical questions that we mentioned before, questions of power, ecology, complexity, pluralism, inclusiveness. And that is what we should use to critique these kinds of fantasies of modern transhumanists, you know, because they do tend to serve the narcissistic beliefs of economic and technological elites.
Eleanor: So narcissistic. I mean, the epitome of superficiality. And actually, it was interesting what you were saying about rethinking the human and technology, because I was writing this morning about biologists who understand the human not as a single entity, as a species distinct from other kinds of species, but as a holobiont. So as a symbiosis of bacteria and the human microbiome that
Alex: That's it.
Eleanor: That's what it really means to be human. We have to have these symbioses, otherwise we become extremely unhealthy or ill and die. There's this guy, David Pearce, who's a co-founder of the World Transhumanist Association, which is now called Humanity+, which is how most people might have heard about him. And he talks about the bug-ridden genetic code. It's part of this idea that, as you've said, comes from Silicon Valley: you can optimize computers, and now we want to optimize the self, that our purpose here on earth is to become perfect. And I often think of this in relation to the Vitruvian Man, so Da Vinci's very beautiful muscle man on a wheel, the image of human perfectibility. What's wrong with this bug-ridden genetic code, mind-computer metaphor?
Alex: Yeah, very, very good question. So I think it's important to think of transhumanism, not just as a story about technological progress, but very much a story about what it means to be human. So they conceive of being human as it's a kind of project of self-transcendence. It's the denial of limitations and never-ending progress to ever greater power and control over nature and the self.
So the mathematician John von Neumann famously said, all stable processes we shall predict and all unstable processes we shall control. And I think that idea has definitely inspired a great deal of transhumanist thought. And actually cybernetics was really influential in kind of allowing this type of thinking to come about, really, where the human mind is conceived, along with everything else, as just an information processor.
So if information is conceptualized as potentially separate from the material world in some way, then intelligence becomes this kind of magical force that allows us to process information more efficiently or more in line with whatever our desires are. So all of life can then become a question of just increasingly potent intelligence controlling information effectively. DNA becomes just a code of life and it becomes readable like a book and therefore editable.
So, and therefore it's, you know, tractable to our will, to human will, or more generally kind of intelligent intervention of all kinds. So the metaphor of life as a book, I think, characterizes life as knowable, especially to those scientists and technologists who understand and speak that language. And that kind of promotes a Promethean urge, I think,
that we can not just understand life but write it anew. So that's why it's so freeing for the transhumanist imaginary really. And transhumanist discourse is absolutely full of language talking about humans in machinic and computer-like terms. So humans are suboptimal systems and they're, as you say, bug-ridden. They're a bug-ridden code, you know? So these metaphors are really appealing to transhumanists, I think, because they enable fantasies about limitlessness of resources and time.
Everything can be quantifiable, readable, and therefore the complex, inter-relational kind of aspects of reality, which in truth defy reductionism when they're understood fully, these are simply removed from consideration. And in doing so, questions of meaning disappear too. If everything is just intelligence and information processing, even being human becomes something that can just be transferred to the digital realm. The questions of meaning disappear; you know, what does it mean to be human is just not a relevant question anymore. So of course, you know, in reality, the material world does matter. There is not an information-material binary separation. So life isn't just code, and you can't escape the ethicality and questions of meaning by pretending everything is a computer. But it is very appealing, especially if you happen to be a programmer or you're good at code, then it's a very appealing fantasy, I think.
Eleanor: Yeah, they're doing their damnedest to pull the wool over their eyes and escape from the fact that we are compost, as Donna Haraway puts it. When you talk about optimization, and this is the big critique of the transhumanist movement or movements, plural as you suggested, it seems that it's always just a hop and a skip away from eugenics and dysgenics, population hierarchies.
Alex: Yeah. Absolutely.
Eleanor: Its fundamentals really are ableism and colonialism and even genocidal ideas. Do you think that that's strictly true? Do you think that there can be a transhumanist movement that is devoid of those symptoms, or... actually, symptoms is completely the wrong word, those underlying causes?
Alex: I think that is a really important question. I mean, my take on that would be, I'm not sure that transhumanism is necessarily the best starting point for such a perspective, because it begins with the technological rather than the ethical. And it also starts with this impulse of the individual as well, which I think is problematic. So I think it's useful to imagine positive and optimistic techno-human futures.
But that would need to include thinking about ecology, systems, embeddedness, relationality, solidarity, and actually resisting the urge to try and conquer everything, but instead maybe to coexist and think about questions of flourishing together, for example. So I'm not sure such a perspective would ever be transhumanist per se, but it could very well leave open certain possibilities that many transhumanists envision. So I do think we need positive techno-optimistic imaginaries, but definitely not techno-triumphalism. And we need to think about positive, inspiring futures that could emerge from ethical conceptualizations of what it could and should mean to be human. And that can include technological possibilities for sure, I think. Personally, I think critical posthumanism offers some wonderful insights and ways to think about these questions of ongoing techno-human relations. Rosi Braidotti, who I believe was one of your guests, so you know, she says posthumanist thinkers are bonded by the compassionate acknowledgement of their interdependence with multiple human and non-human others. And that's a good way to think about things. She states that we are all in this together as the ethical formula par excellence. So I think that's a much better starting point for alternative thinking about the future.
In the penultimate chapter of my book, I do try and sketch a kind of ethical, meta-ethical framework maybe, for what it might mean to be human in the 21st century. And you know, I'm obviously calling for systemic alternatives to advanced capitalism aimed at doing less harm primarily, and with a focus on leaving space for pluralistic ways of human and non-human being, as Rosi Braidotti says there. But it doesn't simply reject kind of technological development in all its forms, nor does it say we should just stop techno-human relations. We've always evolved with technologies and obviously we'll continue to do so. But it does need a reassertion of values and, as you were saying earlier, you know, philosophy, the humanities, the arts, along with non-Western perspectives of human meaning, I think need to play a much more prevalent role in our culture.
And that might allow us to challenge the dominance and pervasive power of a few to accumulate endless capital and escape with all the spoils of technological development, while at the same time just neglecting responsibility for the social and environmental damage they're causing. So yeah, I think, you know, probably transhumanism is not the right framework to develop these alternatives, but there are certain aspects of transhumanism that are very interesting. And obviously one thing that transhumanists do that's really positive is really engage with the radical technological powers we are suddenly beginning to amass, even though they use it sometimes in hyperbolic ways. I think just that engagement with these questions is really useful and really important. But I do think we need different starting points.
Eleanor: That's very generous of you. You know we don't just do critique here, we can also be nice. What about reason and rationality? Because often transhumanism and existential risk leave you with a distaste for reason, which is unfair, because we have such a long, rich history of thinking about reason in philosophy. So can we redeem those two aspects of thinking about how we go about thinking?
Alex: Yeah, no, that's a really good question. I mean, I think Adorno is really useful for thinking through those things. I draw on Adorno in my ethical framework as well. Thinking about the need to kind of critique reason as you do it, the potential for all reason to dominate and so on. So I think that's really important. But I think it's also worth thinking about what reason currently does and it's linked to some of the things we mentioned again earlier - the idea that the human can be knowable, measurable, controllable, etc. and is an individual. And I think those kind of forms of reason are becoming increasingly problematic. And in that regard, actually transhumanism and capitalism, which is central to my book, it's those two things together. They've got a lot in common. They share a lot of kind of... They help reflect and bolster each other, I think. And they take kind of...complementary objectifying stances towards the human, I think. So for example, with capitalism, obviously for anything to be turned into capital, it needs a calculable value. It needs an exchange value. And that means that totally different things, incomparable things are forced into a kind of system of equivalence. So everything now exists in a manufactured reality with an imagined price tag and becomes an object in that way.
Eleanor: Mm.
Alex: And that's true of people too, we're objectified by the role we play in the system, but also our very being becomes objectified and made available, made as something that can be exploited by capital essentially. So our data, our genes, our desires, all of those things are abstracted into products and used for profit making. And that objectification is also obviously necessary for transhumanism because the human is the object, the raw material on which we get to kind of design this post-human future.
Capitalism as well is dependent on growth, which is also constantly measured. Yeah. So the possibility of endless growth on a finite planet is dependent on a notion of something like perpetual progress. And of course, progress, growth, conquering new frontiers, these forms of reason, I think, whether that's outward into space or inward into the secret codes of human being, that's all integral to transhumanist thought as well. And, you know, I mentioned the notion of the individual.
Capitalism conceptualizes humans as free, rational, autonomous individuals. That's where, again, reason comes in here. We all wield our reason, and we're all responsible for our own position in the market, and we are at liberty to choose what to buy and when. Transhumanism, again, echoes this system with a concept they call morphological freedom, which basically is the claim that each of us is at liberty to pursue our own version of enhancement.
So for capitalism and transhumanism, we're essentially entrepreneurs of the self, applying our own reason on ourselves. But that doesn't account for power relations, including those relating to markets and market power. The truth is we're not all free in such a system. And, you know, finally, another facet of capitalism and transhumanism together points towards a kind of dehumanization, which again comes from, I think, reason. It's, you know, reason leading to dehumanization, which is where Adorno's philosophy is useful, because it constantly acknowledges that potential within reason and then looks for a different way out. So capitalism as a system, as Saskia Sassen I think suggests, it kind of depends upon and creates concentrations of wealth and at the same time expulsions, kicking people out of that kind of bubble of capitalist relations. And in a transhumanist context, that could mean something much more extreme.
So transhumanist Steve Fuller, for example, calls for the construction of a new Republic of Humanity, he calls it, and that would be exclusively for entities that should have or should be regarded as having political rights. And animals could enter into that system and machines could, as could humans, but all of them, as well as being able to gain entry, can be expelled based on their use to the system.
So that expulsion opens you up to something he calls necronomics, or the economics of death, which aims to generate the most societal value from death making. So again, it's this kind of capitalist reason taken to extremes in the goal towards creating this transhumanist future. So put simply, in that context, you compete or die. It's a radicalization of the expulsions and concentrations that we've seen are inherent to capitalism really. And indeed other transhumanists have suggested that most humans will have no purpose in this future dominated by superintelligence, because our reason as individuals is not powerful enough, it's not worthwhile enough. Automation will be rife, we won't have a role to play. So, you know, they argue in that case what we need is to build virtual worlds or better drugs, for example, for the masses who would have no real role in this transhumanist future.
So we can see how reason is bound up with those questions of measurability that we spoke about before and turning everything into discrete and delineated entities rather than inter-relational entities. And we can see how it's bound up with the potential for very dehumanizing implications that are concerning. And I think transhumanism and capitalism share a lot of those same logics, which I think need resistance at this time.
Eleanor: Yeah, so as you've said, these people really do have a strong neo-Darwinist impetus, the survival of the fittest, the idea that we have to compete at all costs. If I had time to write another book, I'd write one on collaboration, the history of human and non-human collaboration versus competition. I think it's been done in part by some other people, but I want to connect it to AI and combat what you've just been talking about, this sort of horrible idea that we are constantly competing against each other at all costs. You've talked a little bit about the apocalypse and the apocalyptic narratives. There's obviously something very messianic about this idea that, you know, there will come a judgment day, an apocalypse, a singularity, and we have to be ready. And
I've always found it really fascinating how Judeo-Christian narratives are so central to the transhumanist ideology. They've called it the territory of the frontier, and this is reflected in AI terminology: we have frontier models and also the empire to end all empires. So it's very colonial, the way that they're talking about the likelihood of a singularity.
There's a link here to utilitarianism, and you've explored that a little bit through the desire to calculate, this impetus to plot out exactly how it's all going to happen, even though it's very hypothetical. And I'm not going to read out all the estimates on the likelihood of us surviving, because they're so silly, they're literally just throwing numbers into the air. But you cite in your book Lord Martin Rees's 50-50 estimate of human survival post-singularity as a kind of shrug, a flip of the coin. So can you just summarize for us the relationship between transhumanism and apocalypse?
Alex: Yeah, yeah, again, really good question. And again, for me, this gets to another reason I think it's really important to think about transhumanism right now. And that is that we are actually enmeshed in multiple ongoing crises. So, you know, the climate crisis is deepening at an alarming rate, and that's bound up with other spiraling environmental catastrophes. We've got entrenched political instability all over the world now. You know, democratic politics are becoming more and more endangered. We've got increasing levels of polarization and less confidence in a shared reality, and in part that's due to the very technologies we're talking about here, a kind of new complex media ecology if you like. And of course economic precarity is on the rise, levels of poverty are on the rise, the specter of nuclear war has returned, wars and genocides are increasingly prominent in the news cycle, and of course AI itself signals novel technological threats too.
So these factors are creating a sense of epochal anxiety really, and indeed apocalypse is the lived reality for many people around the globe right now, and its imminent threat is more and more relevant for more and more people. All of these crises might suggest to us that our current life ways are not a suitable way to ensure a continued existence on planet Earth. But transhumanism again is a story that comes along and just says keep going. Keep your foot on the accelerator. There's a progress explosion just around the corner. You can have radical abundance, immortality, digital consciousness, super intelligence, all there for the taking if we just keep on going. So that's a really, really useful story for some people who might not want to think more deeply about the roots of these multiple crises in which we're enmeshed.
So it's again no surprise that transhumanism is becoming a kind of salvific bedtime story for big tech billionaires, know, Sam Altman, Elon Musk, Jeff Bezos, all of them really. The language they use and the ideas they perpetuate are transhumanist narratives in this context of a kind of apocalypse for many people. But apocalyptic thinking is actually also extremely useful to those people because it offers extremes of possibility.
So on the one hand, we are about to explode into a radical kind of world of radical abundance and immortality. And on the other hand, if we get it wrong, we're all going to die. So questions of existential risk have become one of the main pillars of transhumanist thought. They're very keen to tell us, for example, that misaligned super intelligent AI would destroy planet Earth. So that way, these technologies they're developing offer not only the world's biggest stick, but also the world's biggest carrot. They've got both of them on hand there to hit us with.
So what we've seen in recent months, in fact, is this kind of oligarchic techno gang. They've aligned themselves, obviously, with Trump's MAGA project. And that's raising the specter of a new techno authoritarianism. And Gil Duran has dubbed this the nerd Reich, which I think is quite a funny framing.
Eleanor: Wow.
Alex: And there's this three particular strands of this political imagination, which I think are really worth thinking about, you know, because they're nourished by this kind of transhumanist ideology. And these can be summarized as hierarchy, exit and scale, which is, I think, quite a fitting acronym because it's he's which is, you know, maybe suggests the influence of patriarchal capitalism here. So I'll start with scale. So so Nick Bostrom, who's, you know, maybe the most influential transhumanist philosopher of the 21st century, he helped to create this kind transhumanist offshoot called long-termism and it's proved extremely appealing to Silicon Valley billionaires because it you know it places these elites of scientific and technological progress as the main characters in the most important moment in history. So Toby Ord who's a long-termist as well he wrote a book called The Precipice so again it's emphasizing that we are teetering on the edge of salvation or destruction and long-termism is all about the use of scale to create a disorienting ethical perspective.
So if we can imagine the future to be vast and glorious or limitless even in potential value, it can be made to outweigh any current injustices or social problems. And Bostrom claims that 10 to the 29 potential human lives are wasted every second that we are not colonizing the Virgo supercluster with computer generated minds of human equivalents. So these trillions and trillions and trillions of digital consciousnesses vastly outweigh the interests of a few billion humans alive today if you use that utilitarian ethical perspective of long-termism. So climate crises, genocides, wars, all minor episodes as long as some survive and pass on the baton of our technological expertise. And of course it is these techno barons of Silicon Valley who are these very important people holding on to that baton. So the vast and glorious future or extinction is in their hands in this framing.
So that's the you know that's how scale works in this system. At the same time you've got exit so the billionaire symbols I think of space rockets and underground bunkers those kind of talk to, they reveal the desire of escape, either as forms of extending projects of colonialism into space or to hide underground from the impacts of our destructive social systems, our exploitative social systems. So we can see the desire to claim all the spoils of capital accumulation and technological development without the need to deal with the social and environmental catastrophes left behind.
And transhumanism speaks to transcending limitations in the same way. It chimes with this politics of exit. The fantasy that some people can free themselves from certain or all constraints, whether that's aging, death, or more pressingly right now for these billionaires, the evasion of taxes basically. So again, we see the kind of merging of billionaire fantasies and transhumanist imaginaries. And as for hierarchy, if life is just information processing, then intelligence is the thing that enables us to process information more efficiently or more in line with our desires. So it becomes the most important virtue there is.
So the ill-defined concept of intelligence tends to be simplified in transhumanist discourse and it's characterized as the ability to solve complex goals. That's Max Tegmark's kind of, you know...framing of it. But that framing, of course, it avoids contextual and meta-level questions, really, about the meaning and purpose of life. It avoids deeper ethical questions about our relations to each other and the rest of nature. Instead, we get a kind of glorification of the kind of intelligence AI seems to display and the fantasy that life is simply information processing.
So in the modern techno-authoritarian culture of Silicon Valley, this celebration of intelligence is actually beginning to manifest in forms of IQ fetishism and you know obviously culminates in the return of the discourse of eugenics which is no surprise as you mentioned and and you know of course transhumanism has always held the threat that this would be so based on the idea because it's based on this concept of enhancement being better which implies hierarchy to begin with but the billionaires also want to believe that their power is natural and justifiable because they are simply better so the dehumanizing potentialities I mentioned with the, you know, with the humans being rejected from this Republic of Humanity of the future for Steve Fuller, they're justified through the notion of hierarchy, patterns essentially of colonial injustices of the past.
So unfortunately, I fear this kind of techno authoritarianism is far more likely to accelerate any apocalypse than save us from it. But I think in general, apocalyptic thinking is more effective at inspiring kind of nationalistic, resentful, zero sum and fear based politics than the kind of solidaristic and compassionate imaginations that I think we need. So you can see why it is part of this framing of transhumanism and especially the kind of techno-authoritarian transhumanism that we're starting to see.
Eleanor: Alex, thank you so much. You have given us plenty to roll our eyes about. And I highly recommend your book to all listeners. It's really comprehensive. It has all the best ideas and thoughts on this topic and in its broader implications. Thanks very much.
Alex: Hahaha. Thank you. Cheers, Eleanor. Thanks a lot.
Eleanor: This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor Drage.
Comments