
Symbiosis From Bacteria to AI with N. Katherine Hayles

In this episode, we talk to N. Katherine Hayles, who's the Distinguished Research Professor at the University of California, Los Angeles (UCLA) and the James B. Duke Professor Emerita from Duke University. Her prolific research focuses on the relationship between science, literature and technology in the 20th and 21st centuries. We explore her newest book, Bacteria to AI: Human Futures with Our Nonhuman Symbionts, and discuss how the biological concept of symbiosis can inform the relationships we have with AI; how a neural network experiences the world; and whether ChatGPT can be conscious.


N. Katherine Hayles is the Distinguished Research Professor at the University of California, Los Angeles, and the James B. Duke Professor Emerita from Duke University. Her research focuses on the relations of literature, science and technology in the 20th and 21st centuries. Her twelve print books include Postprint: Books and Becoming Computational (Columbia, 2021), Unthought: The Power of the Cognitive Nonconscious (Univ. of Chicago Press, 2017) and How We Think: Digital Media and Contemporary Technogenesis (Univ. of Chicago Press, 2012), in addition to over 100 peer-reviewed articles. Her books have won several prizes, including the René Wellek Prize for the best book in literary theory for How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, and the Susanne Langer Award for Writing Machines. She has been recognized by many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. She is a member of the American Academy of Arts and Sciences.


Reading List:


N Katherine Hayles, Bacteria to AI: Human Futures with Our Nonhuman Symbionts


Humberto Maturana and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living


Stuart Kauffman, At Home in the Universe


Kim Stanley Robinson, The Ministry for the Future


Federica Frabetti, Software Theory



Transcript:


Kerry: Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: What is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list compiled by every guest. We love hearing from listeners, so feel free to tweet or email us. We’d also really appreciate you leaving us a review on your podcast app. Until then, sit back, relax, and enjoy the show.


Eleanor: In this episode, we talk to N. Katherine Hayles, who's the Distinguished Research Professor at the University of California, Los Angeles and the James B. Duke Professor Emerita from Duke University. Her prolific research focuses on the relationship between science, literature and technology in the 20th and 21st centuries. And she's the author of classic works, including Unthought: The Power of the Cognitive Nonconscious, which is by far the best book out there on contemporary theories of mind, and How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, as well as the iconic My Mother Was a Computer.


In this episode, we talk about her latest book, Bacteria to AI. And if you can see me on YouTube now, you'll see that I'm holding up a very well-thumbed copy that's been in the bath with me, that I've scribbled on in the margins. We'll be interrogating Hayles about some of her ideas about the individual, how a neural network experiences the world and, of course, whether ChatGPT can be conscious. We hope you enjoy the show.


Kerry: Brilliant. Thank you so much, Kate, for joining us back on the podcast. For our most loyal listeners, you'll remember that Kate Hayles was one of our very first podcast guests when The Good Robot was in its infancy. And now that we are much older, we've recorded many more episodes. We are just beyond delighted to have Kate back to discuss her very exciting new book. So more on the book later. But first, since you are, I think, maybe our very first ever returning guest:


We want to ask you a twist on our good robot questions. So when you first came on, we asked you, what is good technology? Is it even possible? And how can feminism help us work towards it? And now we were wondering, how do you think about those three questions now? Has anything changed in the way that you think about good technology over the past couple of years?


N. Katherine Hayles: Well, I don't really remember what I said as your first guest, but good technology, good technology for me is technology that facilitates a more robust and empathic identification with other species here on the earth. And good technology helps guide us away from anthropocentrism and toward a more symbiotic view of our relationship with other species. I think there are tremendous future prospects in AI as well as serious risks, but AI could certainly qualify as a good technology, in fact, a complete game-changing technology, if it matures in a beneficial way.


Eleanor: Fantastic, thank you so much. Let's get stuck in then. You've already talked about symbiosis in your first minute speaking. What is symbiosis? We should probably ask you that to start.


N. Katherine Hayles: Thanks, Eleanor. That's a great question. So symbiosis is a term appropriated from biology that means two species living in close proximity to one another. So we can think about the cattle egret, which perches on the cattle's back eating the bugs. It helps the cow. It helps the egret.


Typically, symbiosis connotes a positive relationship where both species benefit. But as a technical term in biology, it also includes parasitism, where one species benefits at the other's expense. So that's symbiosis. The way that I use it in my book, Bacteria to AI, is as technosymbiosis.


So technosymbiosis is a symbiotic relationship between humans, non-humans, and technical devices such as AI. And it carries mostly a positive connotation, but just as in the technical term in biology, it includes the possibility of harm as well.


Eleanor: So we're here to talk about the amazing Bacteria to AI. I have been waiting for this book for a very long time. It's incredible. And what is interesting about it is that your position is quite clear that you are into symbiosis and we need to explore the relationships that we have with other creatures, whether they're computational or biological. But you're also, and I don't know whether this is fair to say, quite human centered. So you are also clear that there can be a human individual. And I want to ask you about that relationship then, because you say, and I'm going to quote you, the evidence for symbiosis is very strong, but isn't the evidence for individuals equally overwhelming?


I wonder if it's possible to hold both views in balance, the view that all animals are holobionts and the opposing view that they have the capacity to act as individuals distinct from the influence of their symbionts.


N. Katherine Hayles: Well, thank you. Thank you for that question. Obviously, my position is that it's not only possible, it's absolutely necessary to keep in mind both a symbiotic view and the realization that organisms do act as individuals. So in the Western tradition, there's never been any question that a human is an individual. And perhaps that's an over-emphasis that needs to be corrected. But it should not be corrected to the point where the boundaries of the individual disappear altogether. To me, that's just a counterfactual position that can't hold up for more than two seconds, if you think about it, because we make decisions all the time, which may be influenced in ways we do not recognize by our symbionts. There's some evidence, for example, that gut bacteria do affect how we make a decision. But nevertheless, the fact that we can act as individuals seems to me self-evident. And not only humans, porpoises act as individuals, weasels act as individuals, parrots act as individuals.


So there has to be some recognition that symbiosis, yes, is at work, but at the same time, organisms do make decisions about what to do, and the decisions that they make are universally in favor of continuing their existence. This is glossing over some complexities in the situation, such as sociobiology argued, you know, seagulls will favor the offspring once removed to a certain percentage and so on and so forth. But ignoring those complications for the moment, Darwin, of course, was correct that individuals strive to survive, individuals strive to reproduce.


We see that as a universal biological mandate. So if you were to take away the capacity to act as an individual, you would have to be fundamentally rewriting the entire script of evolution for which there is massive, massive evidence. So I don't think that we can discard the idea of the organism as an individual. It's absolutely essential to the whole idea that individuals do strive to continue their existence.


Eleanor: To me, your view on this is very connected to the way that you try to reclaim humanism, a bit like Paul Gilroy, the critical race theorist, who tries to redeem humanism through its abolitionist origins. You also say that there is something of liberal humanism that can be reclaimed. And I wonder whether we are just redefining the individual then as something that emerges through symbiosis. Individuality can be expressed in a way because of these multi-species interactions. Is there a point in this term individual? Do we still need it? Does it do anything?


Also, you talk about operational closure. What's that got to do with individuality?


N. Katherine Hayles: Okay, well, there's a bit of a segue here, so let me try to fill in for our listeners some of what I think you're alluding to. So is there still a place for the individual? Of course, I've just argued that not only is there a place for it, it's essential. It's not only essential to the whole story of evolution, it's essential to something like systems theory, which makes a clear distinction between a system and an environment.


So a system is an individual, a system acts like an individual. But now to go on to operational closure. So, Maturana and Varela in their highly influential book, Autopoiesis and Cognition, argued that no information from the outside can ever enter into an organism. Their argument was that all information is interpreted by an organism in terms of its own neurology, sensors, et cetera. So all an organism can ever understand is what it itself is equipped to interpret. Now this leads to a very convoluted way of thinking because you jettison causality here.


Whatever happens in the environment is never causal in their view. It's only a trigger for the conclusions that the organism draws itself. And to my mind, this complicates things unnecessarily so that you're sort of squeezed into a pretzel-like thinking. But when Niklas Luhmann built his systems theory, he built it on Maturana and Varela's autopoiesis and cognition book, but he made one fundamental change, and that is that he changed informational closure, which is what Maturana and Varela had argued for, to operational closure.


So from Luhmann's point of view, a system can absorb any amount of information from its environment, but to avoid being overwhelmed by the much greater complexity of the environment, the system operates on that information only in its own terms. So it processes that information according to its interior categories and rules. So what it does, according to Luhmann is to duplicate internally categories that proliferate and that match up in a kind of wrenched referentiality to what's happening outside, but it's in its operational closure because it responds by generating its own categories and subcategories and it processes the information in terms of its own categories. So you see here how he moved from informational closure to operational closure.


Operational closure seems to me a much more defensible and rational idea. And Luhmann had a lot of evidence for how systems operate this way. The system of law, for example, and all other kinds of social systems. So in medicine, the doctor takes a report from the patient. That's information from the environment. But then how does he process it? He processes it according to the categories of his own discipline: is this person anemic, does this person have a heart problem, etc., etc. And the doctor can only do that, that is, he can only process it in terms of his own categories, unless, a very rare occurrence, he begins to suspect there's a new category that may need to be constructed to somehow account for this anomalous phenomenon.


But as a general rule, sure, a system processes its information through the categories constructed in its own interior.


Kerry: That's really fascinating. I think it brings us on to another kind of major focal point of the book, I think, which is this idea of meaning. And something that you argue in the book is that prebiotic Earth had no meaning because life wasn't there to attribute meaning to it. But you then have this fantastic section later in the book where you explore the temporality and the evolution of minerals.


And you also then explore this idea that GPT-3, a large language model, has no way to create sort of real meaning. And so we actually want to dig into this a little bit more. And we were wondering, you know, why was it important for you to claim that there was no meaning pre-life? And if GPT-3 and other kinds of large language models aren't able to create real meaning, you know, what kind of meaning then do these large language models create?


N. Katherine Hayles: Well, thanks for that question, Kerry. So the chapter in the book on mineral evolution charts how mineralogy has undergone a revolution by moving from taxonomic categories for minerals based on their chemistry and their crystalline structure and so forth to a much more dynamic scenario that has minerals evolving.


Now within the field, that's a controversial move; many mineralogists have argued that mineral evolution is really just a perspective that one takes. So there's some argument within the field whether mineral evolution is a reality or simply a new way of looking at a series of facts. But putting that argument aside for the moment, the thing that interested me about mineral evolution was that it encapsulates a moment when the prebiotic earth encounters biota. So minerals evolve through chemical changes, and in general the direction of mineral evolution has been through more and more proliferation of mineral types.


After the earth was formed there were about 300 minerals. Then the effects of wind, tide, water, etc. increased that number to about 600. But the real explosion took place when biominerals came on the scene, the joining of biota with minerals. And that exploded the categories to over 2,000 and still counting.


So from my point of view, what this represents is a critical moment in the history of Earth when an abiotic situation meets biota. It's that junction that fascinates me. And the reason it fascinates me is because Kauffman and Giuseppe Longo and one of his students have argued that biological evolution takes place in a distinctly different temporal realm than the evolution of physical processes. So physical processes are abiotic processes. So a rock eroding is an example of a physical process.


Now what they argue is that physical processes, even complex ones that show inflection points going from one kind of regime to another, chaotic systems and so forth, can all be graphed into phase spaces.


So phase spaces are basically a way to identify the relevant variables within a physical system and graph them in a way that encapsulates all possible trajectories of that system. So a famous example of a chaotic phase space is the butterfly form that emerged from studying weather patterns.
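The "butterfly form" Hayles mentions is the Lorenz attractor, the phase-space portrait of a simplified weather model. As an illustrative aside (not from the book), a minimal sketch of tracing such a trajectory in Python, using the standard textbook parameter values; the point is that the whole system state can be plotted as a bounded path in a three-dimensional phase space:

```python
# Lorenz system: a phase space describes the full state (x, y, z) of a
# physical system; every trajectory settles onto the butterfly-shaped
# attractor and stays bounded, yet never exactly repeats.
def lorenz_trajectory(steps=10000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0  # arbitrary starting state
    points = []
    for _ in range(steps):
        # Simple Euler integration of the Lorenz equations
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

traj = lorenz_trajectory()
```

Plotting `traj` in three dimensions yields the familiar butterfly; the sensitivity to initial conditions that Hayles contrasts with biological unpredictability shows up if you nudge the starting state slightly and watch the two trajectories diverge.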


And that's been central to the whole science of chaos and complexity, as we know. So their argument is that biological evolution, in contrast to physical processes, can never be graphed into phase spaces. And that's because its trajectories leap from niche to niche. That's kind of Stuart Kauffman's idea that the way biological evolution occurs is through a series of 'adjacent possibles', as he calls them. And the thing about adjacent possibles that distinguishes them from physical processes is precisely the marvelous creativity of organisms as they fulfill their biological mandate to continue their existence. So Kauffman and Longo give the example of a lungfish, a fish that breathes both air and water. As they put it, some water got into lungs. Now you have a new biological niche that's possible. So you have this little pocket of water inside a lung. What could evolve in there?


Well, maybe a worm that lives in that environment, maybe some form of bacteria. So it presents a new ecological niche ready for exploitation by an evolving species. And of course, this is what has happened over and over and over in the history of evolution. An adjacent niche opens up, an organism mutates to occupy that space, and then we're off and running in a completely unpredictable direction. So what ultimately makes a biotic system different from an abiotic system is this element of unpredictability. And the unpredictability comes not for the kind of reason it does in chaos theory, that is, an uncertainty in initial conditions, but rather from intentions and selections made by a living organism. That's what makes it unpredictable in an entirely different way from unpredictability in chaos theory.


Eleanor: An incredible description. But then are computers unpredictable if biotic life, so living things, have that kind of inherent unpredictability about them? What makes software unpredictable?


N. Katherine Hayles: Well, what kind of software are you thinking of when you say it's unpredictable?


Eleanor: I have been working a lot with Federica Frabetti who's a philosopher of software, who talks about the inherent doubt and uncertainty of software and the fact that it always produces errors. It is never fully complete. It always will go wrong in some way. It never can be known, totally, there's always something of the unknown about what you're creating. And I love that idea there's some proximity there to how you were describing the human or biotic life in all its complexity and unpredictability.


N. Katherine Hayles: Well, I would be fascinated to read more of this person, Eleanor. Maybe you could send me a link after this interview is over. That sounds like a fascinating hypothesis.


The point is, the point I'd like to make is, that software is overwhelmingly predictable. Our entire complex society runs on the predictability of software. And if your philosopher wants to make a lot of the few instances in which it's unpredictable, more power to her. But the fact of the matter is that software is overwhelmingly predictable. If it weren't, we would be in serious trouble.


Now I appreciate that a very small margin of unpredictability may still have large philosophical implications, which I think is what you're getting at. But in point of fact, software is overwhelmingly predictable, with one exception. And that exception is neural net software, which is engineered precisely to be unpredictable, because the parameters of the neurons evolve independently of their initial programming according to the data that's ingested by the artificial intelligence. So it's engineered that way; basically, AI is an analog application running on top of a digital computer.


So it has many aspects of an analog system: continuously varying parameters, for example, which are then implemented in digital format. So it has taken a lesson from life, so to speak, appropriated some of the analog characteristics responsible for life's unpredictability, and engineered them into this application.


And it's the analog component here that makes AI capable of performing human or even more than human feats, such as production of language, for example. So I think it's great that engineers, software engineers have found a way to introduce the kind of inherent unpredictability that we see in the biota realm into a software program that enables AIs to do what they're able to do. Still, they run on a digital platform which accounts for their reliability to the extent they are reliable.


Eleanor: Well, let's get on to AI consciousness then, because this is, I think, the next step of our conversation. When I started to read the paragraph in your book that said, "we may start by asking whether GPT-3 has a mind", I literally rubbed my hands together in glee and went to get a cup of tea from downstairs.


I am always fascinated by your explorations of machine consciousness. They're the ones that I cite most often. I find them the most interesting of all theories of mind. So you say cognition is a spectrum of possibility rather than a binary choice. So it's not a question of whether cognition is there or not there. Cognition is so much more interesting and diverse than that. So can you tell us a little bit about the way you are defining machine consciousness or thinking about machine consciousness? And perhaps in relation to this beautiful description that I will just read, where you say that "learning in humans takes place accompanied by a rich panoply of sensory information".


What is that relationship then between cognition, consciousness, and sensory information?


N. Katherine Hayles: Yeah, and this can also allow us to return to the question of meaning, which I certainly wanted to say something about. So now we have this marvelous software that we call AI, which is capable of producing literary texts that are of great interest to me as a literary critic. How does it do this? Well, essentially, what it is doing is making billions and billions of correlations between the texts that it's read. And from these correlations it's able to draw analogies, and from the analogies inferences. So let me give you an example from Stephen Wolfram's book, What Is ChatGPT Doing ... and Why Does It Work?


So what Wolfram did was to take GPT-2, which was the smallest program that could still run on a desktop computer, and he began to map out what the individual neurons were actually doing. What he was able to show appears in a graph in his book, which I reproduce in my book. But essentially,


Let's take the terms king and queen. The software knows that king and queen have certain social roles and a certain hierarchy, and that that maps onto the binary man and woman. Now it therefore knows that the same kind of hierarchy that distinguishes king and queen distinguishes man and woman. In other words, a gender-based hierarchy which is then translated into a hierarchy of social terms. But it's not only social terms like this, it's colors, it's physical properties, over and over and over. What the neural nets do is to say A correlates with B, and then the relationship between A and B correlates with C and D. So it's constructing analogies.


Or maybe more precisely we could say it's constructing homologies, and out of these homologies it draws inferences. If man is superior to woman, and of course we want to put that all in quotation marks, if man is superior to woman then king is superior to queen, etc., etc. So it's making correlations and from those correlations drawing inferences.
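The correlational machinery Hayles describes can be seen in miniature with word embeddings. This is an illustrative sketch only: the four vectors below are made up by hand for this example, not taken from GPT-2 or Wolfram's book, but they show how an analogy like king : queen :: man : woman falls out of vector arithmetic, with king − man + woman landing nearest queen:

```python
import math

# Hand-made toy embeddings (hypothetical values, for illustration only).
# Dimension 0 loosely encodes "royalty", dimension 1 "gender".
vectors = {
    "king":  [0.9, 0.8],
    "queen": [0.9, -0.8],
    "man":   [0.1, 0.8],
    "woman": [0.1, -0.8],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman: swap the "gender" component, keep the "royalty" one
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# The nearest remaining word to the result is "queen"
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(target, vectors[w]))
print(best)  # → queen
```

Real embedding spaces have hundreds of dimensions rather than two, and the relationships are learned from text rather than hand-coded, but the principle is the same: the model captures the relation between A and B and reuses it between C and D.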


Does this mean it can create meaning? I want to argue strongly yes, that it can create meaning. I'm sorry, I'm recovering from a cold, so forgive me if I have to cough momentarily or blow my nose or whatever. But anyway, through these correlations it creates meaning by constructing a whole series of relationships. The people who argue it cannot create meaning tend to say meaning connects words with things in the world. If you cannot connect them with things in the world, then you cannot create meaning. In other words, if I use the term tree, but I've never seen a tree, I've never touched a tree, I've never sat beneath a tree, then I really have no idea what a tree is. But to me that argument is obviously flawed, because every literary critic knows that we obtain meaning not only through a relationship between a word and a thing, but also between words. In fact, most of the meanings we construct are between words, and only secondarily or way down the line to the thing itself. Democracy, freedom, on and on. Those are powerful ideas in which words relate to other words. So we might have some referent in mind when we say democracy, but it's an idea, basically. It's not a thing in the world. It's a form of social organization.


So, it's crazy to say that you can't create meaning if you don't know what the thing in the world is through firsthand experience. But I would also say that AIs do know a great deal about things in the world that they have gleaned through ingesting data and through the correlations that they make. Now they frequently fall down on very simple tasks, like: if you take three coins, a penny, a dime and a nickel, and stack them up, how high is the stack? Well, any kindergartner could do that, could make a little stack, put up a ruler and give us an answer. But an AI finds that an inexplicable puzzle, because it doesn't have the information to make the correlations of what that would actually require. So it lacks all this first-hand knowledge about the world that we acquire as creatures who live in a physical environment and move through that physical environment. It has never moved through a physical environment. So the sort of next obvious task would be to connect up an LLM with a robotic body, and there are some early experiments doing that.


But putting that aside for the moment, what kind of knowledge or what kind of meaning then can an LLM create? Well, it can create meanings insofar as those meanings relate to how different ideas, different words relate to each other. Now granted, there are serious constraints on that kind of meaning that come from its lack of embodied and embedded knowledge. But nevertheless, to say that it can't create meaning seems to me disputed every time you interact with one of these LLMs. Sure, you can trip them up. You can show their limitations. But at the same time, there is absolutely no doubt that when you ask ChatGPT a question, it gives you a nice little summary of points and a conclusion and so forth. Those have meaning. To say they don't have meaning seems to me just contradicted by one's obvious experience.


Kerry: And I really think that so much of what you're saying will resonate with ordinary users of large language models and these related technologies, because for so many of us, we can know on one level that what we're talking to is, you know, a machine, and yet these are incredibly important and kind of meaning-filled interactions. And I think why your work is so important is that it helps clarify for a lot of people why we have particular kinds of emotional responses to the conversations that we have with ChatGPT and other platforms, even as we know that they are functioning differently to the way that humans do in conversation.


I want to come to this question of the human that Eleanor brought up much earlier in the conversation. Maybe this would be a nice kind of direction for us to close on, which is, you mentioned kind of in the beginning in your description of symbiosis, why this was such an important concept to be borrowed from biology as a way of challenging anthropocentrism or a particular kind of ideology or political worldview that puts kind of the human at its center and also puts the human as sort of above all other kinds of species and maybe above the environment and the planet.


And that this is linked to kind of a whole lot of other pernicious ideologies and beliefs that empower humans to control or degrade the environment or other species in pursuit of our own benefit. And at the same time, as Eleanor mentioned, the distinctness of humans still seems really important to you. And so, why is human distinctness still a really important project? And I guess, how do you balance the pursuit of human distinctness while also challenging the kind of predominance of anthropocentrism in a lot of human institutions, worldviews and makings?


N. Katherine Hayles: Well, thanks, Kerry, that's a great question. So it seems to me silly to deny that humans have distinct capabilities that no other species has. No other species can write a symphony. No other species can construct a mathematical proof. No other species can build a telescope. No other species can send a rocket to the moon. So of course humans have distinct capabilities. So then the question becomes, how can we think about the uniqueness of the human species in such a way that it is not detrimental to respecting and empathizing with other kinds of cognizers on the planet? That's really the crux of the issue as far as I'm concerned.


And that's why I want to return to liberal humanism and begin to think about the ways in which its unabashed advocacy of the human can be reconfigured in ways that minimize its detrimental aspects and maximize its positive effects. And that's what the last chapter of the book is really trying to do: to try and thread that needle, to suggest a kind of template that shows the direction in which this sort of thinking might be able to move. And that last chapter focuses, as you know, on Kim Stanley Robinson's book, The Ministry for the Future, which presents itself as a kind of handbook of ways that we can think differently about our myriad environmental problems. And the book is unabashedly utopian.


For every problem, he has a solution, no matter how far-fetched it seems. He thinks the Hong Kong revolution of young people is going to succeed against the Chinese authorities, and on and on and on, which of course history has proven that it has not. But putting that aside, I think we need that kind of utopian optimism, because the alternative is to just throw our hands up and say, well, you know, we've screwed it up really badly, there's nothing we can do about it, so let's just go and party until the apocalypse. And that to me is not very productive. I think we need an optimistic view that is going to suggest yes, we can do something about these problems, here are some possibilities. So I am...


I am resolutely optimistic in that final chapter. I hope not simplistically so. There's an expression that has become so pervasive nowadays, at least in the States, and I'd be interested to hear if it is in the UK as well: it is what it is. Do you hear that a lot?


Eleanor: That resigned... comment.


N. Katherine Hayles: Yeah, it is what it is. Well, okay, you know, let's go to a bar.


Eleanor: Invented


N. Katherine Hayles: I understand why people say that, and I've probably said it myself. But nevertheless, we still have to struggle onward. And, you know, I say that as an American under the Trump administration, in the darkest days this country has probably ever seen.


So I'm dedicated to the idea that we have to hold on to the notion that we can and we will do better.


Eleanor: Well, on that joyfully optimistic note, and I love that you have faith in the human, I really like ants that carry 10 to 50 times their body weight and don't need antidepressants. But we can, you know, sort of agree to disagree on the capabilities of the human. Thank you so much for this incredible explanation of this amazing book. I cannot recommend it more. I read it with such joy. So thank you for giving this to us and to the world.


N. Katherine Hayles: Well, thank you and thank you so much for your generous invitation to be on this podcast with you and for your gracious, gracious endorsement. Thank you very much.


Eleanor: This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor Drage.

 
 
 