
Louise Amoore on Why Machine Learning is Political

In this episode, we talk to Louise Amoore, professor of political geography at Durham and expert in how machine learning algorithms are transforming the ethics and politics of contemporary society. Louise tells us how politics and society have shaped computer science practices. This means that when AI clusters data and creates features and attributes, and when its results are interpreted, it reflects a particular view of the world. In the same way, social views about what is normal and abnormal in the world are being expressed through computer science practices like deep learning. She emphasises that computer science can solve ethical problems with help from the humanities, which means that if you work with literature, languages, linguistics, geography, politics and sociology, you can help create AIs that model the world differently.


Louise Amoore is Professor of Political Geography and Deputy Head of Department. Her research and teaching focus on aspects of geopolitics, technology and security. She is particularly interested in how contemporary forms of data and algorithmic analysis are changing the pursuit of state security and the idea of society. Her most recent book, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, was published by Duke University Press in spring 2020. Among her other published works on technology, biometrics, security, and society, her book The Politics of Possibility: Risk and Security Beyond Probability (2013) examines the governance of low-probability, high-consequence events, and its far-reaching implications for society and democracy. Louise’s research has been funded by the Leverhulme Trust, ESRC, EPSRC, AHRC, and NWO. She is appointed to the UK independent body responsible for the ethics of biometric and data-driven technologies. Louise is co-editor of the journal Progress in Human Geography.


Reading List


Haraway, D. (1990) Simians, Cyborgs and Women: The Reinvention of Nature

Haraway, D. (2016) Staying with the Trouble: Making Kin in the Chthulucene.

Butler, J. (1990) Gender Trouble.

Harding, S. (2008) Sciences from Below: Feminisms, Postcolonialities, and Modernities.

Browne, S. (2015) Dark Matters: On the Surveillance of Blackness.

Wilcox, L. (2016) Bodies of Violence: Theorizing Embodied Subjects in International Relations.

Puar, J. (2007) Terrorist Assemblages: Homonationalism in Queer Times.

Benjamin, R. (2019) Race after Technology: Abolitionist Tools for the New Jim Code.

O'Neil, C. (2018) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

Amoore, L. (2021) "The Deep Border" https://doi.org/10.1016/j.polgeo.2021.102547

Hayles, N. K. (1999) How We Became Posthuman.

Daston, L. and Galison, P. (2007) Objectivity.

Crary, J. (2013) 24/7: Late Capitalism and the Ends of Sleep.

Derrida, J. (1978) Writing and Difference.

Fazi, M. B. (2018) Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics.

Halpern, O. (2014) Beautiful Data: A History of Vision and Reason Since 1945.

Parisi, L. (2019) 'The Alien Subject of AI'


Louise Amoore on Why Machine Learning is Political


KERRY MACKERETH:

Hi! We're Eleanor and Kerry, the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you want to learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, we talk to Louise Amoore, professor of political geography at Durham and expert in how machine learning algorithms are transforming the ethics and politics of contemporary society.


Louise tells us how politics and society have shaped computer science practices. This means that when AI clusters data and creates features and attributes, and when its results are interpreted, it reflects a particular view of the world. In the same way, social views about what is normal and abnormal in the world are being expressed through computer science practices like deep learning.


She emphasises that computer science can solve ethical problems with help from the humanities, which means that if you work with literature, languages, linguistics, geography, politics and sociology, you can help create AIs that model the world differently.


This is important, because when an AI makes a decision or a prediction, it often presents its output as definitive, as if it’s the only answer. But we need to consider how the branching pathways of its neural nets and the sequences of code could be put together differently, or could have used different data, and therefore could have resulted in other ways of interpreting the world. We need to make sure that AI represents and considers all of those possibilities.


In this episode, Louise uses some computer science vocabulary, so I thought it might be useful to give you a very high-level explanation of some of the terms. If you’re in the know about computer science, just skip forwards through this section.


The feedforward neural network: This is your basic neural network. Neural networks, by the way, are called neural because they imitate the structure of the human brain. Or at least they’re meant to, and there’s lots of debates about whether or not it’s a good idea to analogise brains and computers.


So let’s say you want your feedforward neural network to be able to identify sandwiches in pictures. You need to input lots of images, some with sandwiches in and some without, into the feedforward neural network, which will then train itself to recognise patterns in them. Then, when it’s fed a new batch of images, it should be able to take what it has learned from the last batch of data and identify different images of sandwiches. To do this, it needs to work out which pixels or bits of the image are the best indicators of what a sandwich is. This is tricky anyway as around the world people do sandwiches differently.


If you search for these networks online, you’ll see rows of circles, which are the layers of neurons, and then you’ll see arrows between them, which are the channels. There is an input layer, an output layer, and then multiple hidden layers in the middle. The data travels through the first layer of neurons and then across the hidden layers until it gets to the output layer, which tells you how likely it is that the image is a sandwich. And what exactly is this data? Well, it’s pixels from images of sandwiches.


The idea is to transfer only the information that is most useful in helping to predict whether the picture contains a sandwich across each layer of neurons. To do this, each channel is assigned a score of its importance. This is called a weighting.
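In code, that weighted transfer of information across a layer is just a weighted sum followed by an activation. The sketch below is purely illustrative - the pixel values, weights, and layer sizes are all invented for the sandwich example, not taken from any real model:

```python
import numpy as np

def layer_forward(x, weights, bias):
    """One feedforward layer: each output neuron takes a weighted sum
    of its inputs, then a sigmoid squashes the result into (0, 1)."""
    z = weights @ x + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

# A toy "image": four pixel intensities standing in for a sandwich photo.
pixels = np.array([0.9, 0.1, 0.8, 0.2])

# Each row holds one hidden neuron's weights: its score of how much
# each pixel matters to it. These numbers are made up for illustration.
weights = np.array([[0.5, -0.2, 0.3, 0.1],
                    [-0.4, 0.6, 0.2, -0.1]])
bias = np.array([0.0, 0.1])

hidden = layer_forward(pixels, weights, bias)
print(hidden)  # two activations, one per hidden neuron, each in (0, 1)
```

A larger weight means that pixel contributes more to the neuron's activation, which is exactly the "score of importance" described above.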


Once you get an output, you need to compare the prediction that the network makes about how likely it is that the image features a sandwich against the real answer. Yes, it’s 100% a sandwich - or maybe it’s less than 100% if some people disagree about whether or not it’s a sandwich. If the score is too low - say it rates the McDonald’s chicken sandwich as only 40% likely to be a sandwich - then you need to change the weightings in the network so that different parts of the image are more important for the network’s prediction.


Once you measure how much you need to change the weightings by, you need to send that information backwards through the network so that the weightings and the information transferred can change. This is called backwards propagation or back prop.


Forwards propagation is what happened before that, when data was carried forwards through the network in a way that tells the system what the most useful information is.


This process happens again and again and again with lots of data input into the network and it continues until the weights are assigned correctly and the network becomes excellent at identifying sandwiches, so that you can Google a sandwich and you don’t get pictures of logs or a mattress.
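Put together, that predict-compare-adjust loop looks something like the sketch below. Everything here is a toy: the "images" are random numbers, the "sandwich" rule is invented, and a real network would have hidden layers. But the shape of the loop - forward propagation, measure the error, propagate it back to update the weights - is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 20 "images" of 4 pixels each. Label 1 means "sandwich";
# we invent a rule where two particular pixels signal sandwich-ness.
X = rng.random((20, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(float)

weights = np.zeros(4)
bias = 0.0
learning_rate = 0.5  # how big each weight adjustment is

for _ in range(2000):
    # Forward propagation: predict a sandwich probability per image.
    p = 1.0 / (1.0 + np.exp(-(X @ weights + bias)))
    # Compare predictions against the real answers.
    error = p - y
    # Backward propagation: use the error to adjust every weight.
    weights -= learning_rate * (X.T @ error) / len(y)
    bias -= learning_rate * error.mean()

accuracy = ((p > 0.5) == y).mean()
print(accuracy)
```

After enough rounds of this, the weights settle on the pixels that actually predict "sandwich" - which is the point at which the network stops confusing sandwiches with logs and mattresses.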


A recurrent neural network is a network that helps predict a sequence of something. So for example, when you type something into Google, like “what do mermaids do”, it helps work out what you want to say next: 'What do mermaids do for fun, all day, in Sims 4'. These networks are at the heart of speech recognition, translation and more.
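The recurrent idea can be sketched in a few lines. The weights below are random and untrained, so this network predicts nothing useful - the point is only the mechanics: the same weights are reused at every step, and a hidden state carries a memory of the sequence so far.

```python
import numpy as np

rng = np.random.default_rng(1)

# Untrained, random weights, purely to show the structure.
W_input = rng.standard_normal((8, 3)) * 0.1   # input -> hidden state
W_hidden = rng.standard_normal((8, 8)) * 0.1  # hidden -> hidden: the recurrence

def rnn_step(x, h):
    """One step: mix the new input with the memory of everything before."""
    return np.tanh(W_input @ x + W_hidden @ h)

# Pretend each vector is a word embedding for one word of a search query.
query = [rng.random(3) for _ in range(5)]

h = np.zeros(8)  # empty memory before the first word
for word in query:
    h = rnn_step(word, h)

# h now summarises the whole sequence; a trained network would read
# its next-word prediction off this state.
print(h.shape)
```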


Louise also discusses transfer learning, which is an ML technique of transferring a model that has been trained to do one task for use on another task. This can be really great, but it also means that all the assumptions and prejudices that one model has in the way it sees the world will be passed on.
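A rough sketch of transfer learning in code (everything here is invented for illustration): the "pretrained" layer is frozen and reused, so whatever it learned - or assumed - on its first task is carried into the new one unchanged; only the final layer is retrained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a pretrained feature extractor: in real transfer
# learning these weights would come from training on an earlier task.
W_pretrained = rng.standard_normal((6, 4)) * 0.5

def features(x):
    # Frozen layer: reused as-is, assumptions and all.
    return np.maximum(0.0, W_pretrained @ x)  # ReLU activation

# A new task with its own toy data and labels.
X = rng.random((30, 4))
y = (X[:, 1] > 0.5).astype(float)

# Only this final layer's weights are trained for the new task.
F = np.array([features(x) for x in X])
w, b, lr = np.zeros(6), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    error = p - y
    w -= lr * (F.T @ error) / len(y)
    b -= lr * error.mean()

# W_pretrained was never updated: its view of the world, formed on the
# first task, is inherited wholesale by the second.
accuracy = ((p > 0.5) == y).mean()
print(accuracy)
```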


KERRY MACKERETH:

So thank you so much for joining us here today, it really is such an honour to be able to chat with you. So for our listeners, could you tell us who you are, what you do? And what brings you to thinking about gender ethics and technology?


LOUISE AMOORE:

Good morning Eleanor and Kerry, thank you so much for inviting me. It's such an honour to be on your wonderful podcast. And hello, everyone. I'm Louise Amoore. I'm a Professor of Political Geography at Durham University. But my background is actually really in Politics and International Studies. And going back even earlier than that, it's in literature and languages. So I have a kind of quite a diverse background, which I think means that I come to these kinds of questions of technology, gender, and ethics, drawing on so many different influences across science, social science, and humanities. And the question of gender has been something that's always present for me in terms of thinking about the areas that I work on, especially the work that I've done on kind of biometrics and embodiment and thinking about algorithms and the way that we live in a situated way with machine learning algorithms. So I guess for me, you know, your questions in the podcast around feminism and around what it is that feminist work has contributed to my own work, it has really been about dealing with the invisibilities and the displacements. So how we think about extending and amplifying the politics that are already at work within a particular spatial arrangement of technology. So in a sense that is in part about those subjugated knowledges, you know, those things that we would say they're under-attended to and we need to really think about these things. But I think it's also a question of revisiting and thinking again about science itself. And so in this sense I share I think, probably with many of your interviewees, a kind of a real debt that I owe to the work of people like Sandra Harding and Donna Haraway. And actually, Judith Butler's work has always been really significant for me in terms of the kind of political geography that I do. And, you know, why is that? 
Because I think this is not simply about a critique of science, it's about saying, Actually, all scientific knowledge itself is already partial and situated, let's think differently about what that scientific knowledge is and what it means. So, in my most recent book, Cloud Ethics, for example, this was really about saying, Well, where do we locate the politics of the algorithm? You know, is it outwith the algorithm in some sort of sense of a code of ethics or a code of conduct? Or ought we to think actually about the arrangements and the assumptions and the science and computer science ideas of the algorithm itself as already a question of politics? So I think, for me, that has been the curiosity. I mean, isn't that what always drives us, this kind of curiosity? Let's think about how we could think about this arrangement of propositions as already a question of politics. So, you know, and I'm sure all of us are very inspired by the work of a diverse range of scholars, from Jasbir K. Puar’s work on thinking about racialized geopolitics and geopolitical technologies, and Simone Browne and her work on surveillance, and your own Lauren Wilcox and her work on embodiment in relation to drones. You know, these are the places I ask my students to look for kind of thinking again about what political geography could mean, actually, also.


ELEANOR DRAGE:

I'm sure Lauren will be very pleased with a shout out! She's the Director of the Centre for Gender Studies at Cambridge for those of you listening. So we usually ask people these 3 billion dollar questions, then, on what is good technology? And of course, as you just said, that's definitely a political question. Is it also a partial question, you know, can you only get a partial answer from that? Will there be some kind of objective response? Or is that not possible? And then the two, the other two parts of that are, how can we get there? And how can feminism help us get there?


LOUISE AMOORE:

Yeah, so there we are, with The Good Robot and good technology. I mean, I actually think this is, it's a kind of exercise that's really interesting, you know, that we could almost try and think together also, maybe with the listeners, you know, what would it mean to try to identify or somehow categorise good technology? And, you know, we might try and do it, we might say, well, look, here is where this particular kind of neural network using deep learning, it's collaborating with geneticists in new ways, and it's helping to detect and to visualise forms of breast cancer tumour, for example, that have not already been detected and discussed by scientists. And we might say, Okay, look here, here is a good neural network, you know, we might do that through its application. But then we might see a very structurally similar convolutional neural network at work in an autonomous weapon. And we might say no, here that technology is bad. And, you know, for a really long time, my work has really been focused on the problem of that adjudication, you know, of saying here is the good and here is the bad. And part of that now, I think, is increasingly a practical problem. So because of the increased importance of transfer learning, one can never fully detect the lineage of, you know, how those final layers in transfer learning of a neural network were trained, and what might its genesis have been, you know, which other places might it have been deployed and trained in, and were they all good? So rather than try to kind of involve myself always in this adjudication of the good and the bad, I have always been trying to ask this question that came from a really long time ago, in an interview I had with a computer scientist who was working across domains. So they were working on gait recognition for biometrics, as well as and alongside medical diagnostic type applications. And their line - and I talk about it in my Cloud Ethics book - was, well, we need to know what good looks like.
So what we need in our anomaly detection is to know what good looks like, and once we've modelled what good looks like, then we can think about those things that could be threats and potential dangers. So I guess what I'm saying is that a good approach to that sense of thinking about the politics and the ethics of technology would in a way work with the impossibility of the adjudication of good and bad, like accepting that impossibility, partly because of the practicalities of transfer learning. And partly because my approach to those technologies is always to say, well, actually, there will always be things we don't know. And there will always be things that can't be fully disclosed somehow. But that actually is the point not only of thinking about technology, but also thinking about politics. Its intractability, and its impossibility ought to spur us to have these conversations about ethics. And they should not foreclose them, you know, they shouldn't be the place where we stop, I suppose, in thinking about them. So the question then of, well, how does feminism get there for us? Or like, how do I kind of think through those ideas? And you'll have seen probably in some of my work, that I find this question of accountability, and giving an account, to be something that I find … well, it just sort of nurtured my method, I suppose, as I'm looking at these new forms of machine learning on the project I'm working on at the moment. So what the distinction would be between making a technology accountable, trying to somehow, you know, explain it, understand it, and enforce some kind of responsibility for it. And where that form of account reaches its limits, I find work like Judith Butler's work around, well, the idea that one's account of oneself will always fail and fall short. Let's start there. So let's start with our kind of perennial condition of not being able to give a full account.
And so when we find the limit of holding the technology to account, we should think instead about, well, what kind of account does it give of itself? And actually, the more that I am kind of now in the new project, looking at different kinds of emergent deep learning models, the more I'm finding that actually, they are giving accounts of themselves all the time. I still think I was sort of right about that in Cloud Ethics, that they do give accounts, and that there are ways to push at the openings of those accounts, even where we can't have full accountability. So I think that would count for me as a kind of ethico-politics of artificial intelligence and machine learning, to demand that an account is given even where full accountability falls short.


KERRY MACKERETH:

That's really, really fascinating. And I think that the work you're doing around sort of what it means for an algorithm to be accountable, but also accounted for, is so important in this particular climate, where algorithms are increasingly seen as kind of the bearers of a kind of neutral truth. And there's so much fantastic scholarship on this for our listeners, from Cathy O'Neil through to Ruha Benjamin's work on the notion of how algorithms can propagate a sort of veneer of objectivity around older racist structures. I want to ask you a little bit about sort of the ethics of trying to pursue this kind of politics of accountability, because I think sometimes when we say, like, oh, we can never fully have good and bad as, like, discrete categories, we can end up with a kind of sort of techno-pessimism or a kind of fatalism around certain kinds of technological development. And I think what I really like about your work is, I think you're providing a kind of third way out of that. So I guess I want to ask you about that: do you feel a bit stuck sometimes in your work between the kind of, like, techno-optimism, move fast and break things, Silicon Valley approaches to technology, and then this sort of throw your hands up in the air, like, oh, all technology can be appropriated for, like, really bad ends? And so where do we go from there?


LOUISE AMOORE:

Yeah, I think I mean, that's the difficulty, isn't it? That's the difficulty of what it is that we're studying. And I'm really enjoying my current project, Algorithmic Societies, I have a great team of people working with me on this. And I think we're sort of beginning to try to devise a method of engaging computer science on its own terms. And so almost reading computer science accounts of the development of a particular model, and taking seriously some of their categories. So this could be, what do they mean by rules? What do they mean by the discovery of a rule? What does it mean to extract a feature? How are attributes understood? What is clustering? And to think about those things that are already present in computer science, and say, actually, these are things that as philosophers or social scientists, or as humanities scholars, we are accustomed to thinking about. How does a community of similarity emerge? And how do we think about that? So, you know, it's not only a question of the practice of computer science, we have those resources already available to us, which I think means that I'm not pessimistic. So you know, I'm deeply horrified by some deployments, many deployments of some of these technologies, especially where they are directly designed to foreclose alternative readings. So you know, so one example of that would be, you know, some of the more recent work that I've done on the deep border that this technique and technology for trying to automate the assessment of visa and immigration applications, that one of the most significant kind of foreclosures here is that an alternative future is not possible for that person, and an alternative understanding of what they might present in their application is not possible. So I think it's possible for us to signal the harms, and really focus on the harms. 
But to do so whilst not annexing this as a kind of horrifying, fully automated autonomous machine, but instead to think about it as something that is integral to the way that we have for hundreds of years thought about population, society and so on. So to think about the histories of statistics, the histories of ideas about the bell curve and norms, and how the idea of averages and norms acted upon particular people, you know, what are the harms to women, and to other minorities, of those things? These are long-standing questions in our fields. And so it's a question then, I think, of not, yeah, not annexing the thing that we're studying as something which we're fully standing outside of, but saying to ourselves, actually, it's already enmeshed with our ways of being. And it is quite fundamentally, I think, changing our ways of being. So we need to understand the full implications of that, which is not only what is the impact on politics or on society, but actually, how might some of the ways we think about fundamentals like democracy, like fairness, like justice, like equality, how are even those ‘already vulnerable’ categories being reinterpreted through the lens and the vocabularies of deep learning, which I really do think is happening, and I think is often troubling.


ELEANOR DRAGE:

I completely agree. And, you know, we can kind of hold up these principles of democracy and goodness to a standard where they're very, they seem very pure, very outside of the messiness that we're trying to grapple with. But they are always contaminated by these kinds of questions, by the technologies that, while we may disagree with them from the outside, you know - which is why people think that's what we do in the kind of humanities approach to AI ethics - we are still complicit in the technologies that we talk about, and I kind of wouldn't want it any other way, I suppose. I wanted to ask you again about the politics of machine learning. And I particularly love non-STEM definitions of these processes, or descriptions of them. And we know that - although I don't know technically - but I understand that when machine learning models are being taught to predict something, this involves continually adjusting the weights in the model until the system is deemed to be good at recognising something. So can you describe that weighting process and perhaps give an example if you can, and explain why the choice of weightings is also political?


LOUISE AMOORE:

That’s such a good question. Yes. And, you know, I suppose this question of the weightings, to me, is always both technical and political. Right. So the weights that we afford to something are also a question of those things, those voices that we don't hear, or those things that we don't make present. So in some ways, you know, a key part of the feminist methodology for me is this question of, well, actually, how historically have some things been given greater weight than others? So can we institute that kind of attentiveness to those things which are made visible and those things that are invisible by actually understanding that kind of technical process of weighting? So if we imagine a really basic kind of feedforward neural network, from the input layer, you know, through to the hidden layers and the output layer, in order for that algorithm to learn something, it has to adjust its weights based on the output. So this is what back propagation means, of course, it's about the convergence and divergence from a target output. And as each layer of neurons adjusts in minute ways the weights of those neurons, the model will diverge and converge around the target output. And you can actually, you know, in terms of method, I'm sure that, you know, you've both probably done this already in some of your other work, thinking about actually what are the practices of computer science and data science? And what are they actually doing in their labs? And I think that maybe sometimes the idea that a human adjusts it can be slightly … it's not fully the case now. So, you know, when we talk about an unsupervised neural network, we are talking about an automated update of weighting. And sometimes I find that frustrating, but we've had people ask questions about automation. So is it automated or not? Even if the final decision on the outcome is actually made by a human being and appears not to be automated, in many cases the adjustment of the weights is made by the model, right?
So it's the back propagation of the error. What's the gap between the desired target output and the output? So I think it opens up for me then a politics, because it's not only a question of a kind of technical choice between different emphasis or weight that is placed on a stream of data or on a large data set. But it's also I think a question of the weights being a kind of emphasis on the assumptions that the algorithm has. So I've heard, you know, data scientists working with their clients agreeing or disagreeing, or kind of contesting, well, what is it that you want from the model? And what do you want it to be able to do? And they'll often use language like, well, you can tune it, I've heard this so many times, you know, that they will say, well, once we've completed this, once it's acting in your domain, which could be in something like policing, or it could be in the medical sphere, or whatever the domain is, you can tune the model, they often say, to fit your own needs. And that means a continued adjustment of those weights. So that’s back to the adjudication problem. You know, we might say that this is a fair and just machine learning algorithm, but it is adapting and adjusting in line with its environment feedback all the time. So we can never fully say that it's been fixed somehow or that it's neutral. So for me this is political, because the weights mean that every branching pathway has an alternative pathway. So you know, if crucial to our political and ethical discussions are well, were there alternatives? Was there an opportunity for us to discuss in the UK whether there were alternative models of the pandemic that might have led to fewer deaths? We don't get to ask those questions when there are so many data-based models acting in the Cabinet Office and other places that say here is the problem of the pandemic, as an agent based model with machine learning we will manipulate the parameters in the weights and see what the outcome will be. 
So it forecloses the capacity to have a political discussion of alternatives, because it gives the appearance that you have a final optimised outcome, even though the adjustment of weights always implies there were alternatives. And I guess that if anything, that's my, yeah, that's my kind of key message in all of this, that we pay attention to what might look like purely technical processes, and to the kind of political consequences of some of those processes.


ELEANOR DRAGE:

I think that's so important for people to know, because when we're told on TV, you know, this is the science, this is what the maths says, people need to be able to look into that and say, well, you know, what does that mean? And actually science is just as ripe for different interpretations as the humanities. And even, you know, my mum, who studied maths and she's a scientist, and she knows that just implicitly when she’s watching, but I, you know, I don't feel the same way because my degree is in languages and literatures, and it's just so important that people recognise that that is happening. I wanted to ask you something slightly different about attention spans, because we know that screens and new technologies do something different to our attention spans, or require a different kind of paying attention. And you've said something fascinating about the different attention spans of, for example, different kinds of AI, so in hypertext, or in deep learning, and can you tell us more about that?


LOUISE AMOORE:

Well, I mean, I guess there are two really big influences for me here. And one would be Katherine Hayles' work, you know, long-standing work around differences in practices of reading and writing. And, you know, that's really important to me, I think, also to think about deep learning and machine learning, also as engaging all sorts of forms and multiplicities of reading and writing that we need to think about and understand. And Hayles has this really nice discussion in her Unthought book, you know, about the differences between kind of deep learning and kind of deep reading, and the question of what that means in terms of our attention. But the other is actually a history of art, you know, thinking about people like [art critic] Jonathan Crary, and their, you know, very detailed histories of attentiveness, in which the key point is, all forms of attention involve distraction. So you know, and this is long standing. So histories of perspective, histories of cinema, histories of scientific instruments that we see in work by people like Lorraine Daston and Peter Galison that I think are so important to me in terms of resources. So then it becomes a kind of, you know, Henri Bergson-inspired question of, well, yes, but how does the thing of interest become extracted and seized from its environment, which is so important to discussions of artificial intelligence. It's not only a question of sort of natural language processing, or natural language generation, or sentiment analysis, but actually, it's at the level of feature extraction. So when we're asking what form of attention we pay, we should also ask: what form of distraction does that involve?
And what are those things that we must then necessarily pay less attention to, which are involved, I think, also in these foreclosures. So maybe I should clarify a little bit that, for me, deep learning absolutely does not involve deep reading, or it becomes difficult to do deep reading in relation to deep learning. And potentially, I mean, this is certainly something to talk about, I can't really offer a definitive account of it. But there's an illusion of depth involved in deep learning that has a particular allure, I want to say, for, you know, those policymakers and other people who are kind of waiting in a hungry way for some of these models, because it gives this illusion of depth, but actually the depth itself is secured through increasing the number of layers available in the neural network. So depth is equivalent to increasing the capacity of the algorithm to decompose the problem. And we're really used to it, aren't we, in things like object recognition and image recognition, where effectively the image or the object is decomposed through data inputs that are usually about pixels and gradients. But that process of decomposition and recomposition that produces an output is similar to what I was saying about how we need to locate the kind of politics and ethics of that existing practice. What does it mean to break down an image, but also what does it mean to break down a political problem, or be able to break down an economic question and then decompose it? So to understand deep learning as part of a longer history of technologies of attention and distraction is something that I think is really important and potentially has some possibilities for us in thinking about what the humanities can contribute, you know, though these are things that we already have expertise in, we don't need to look outwith the discipline.


ELEANOR DRAGE:

Yeah, absolutely. And it's so important that we know what's going on, and that, like you say, we understand that deep learning is not really a deep analysis at all. We had Meg Mitchell on the podcast yesterday, and she was saying that it's really dangerous when we can't document the input of language models, particularly large language models; we really need to know what that input is and be able to analyse and document it. And that seems to me a similar point.


LOUISE AMOORE:

Yes, I think that's right. And there are people working, of course, in this kind of space, people like Beatrice Fazi, for example, and her work on contingent computation, which I find just hugely significant, what she's talking about in relation to contingency. But yes, there's the question of the input data, but also the multiple translations that happen through the model itself. So there's also the question of how the model itself yields data, for its own learning and for the learning of other algorithms. So thinking about almost a series of translations that take place in all forms of machine learning, and not only in language models. Yeah.


KERRY MACKERETH:

That's so fascinating. And I think it draws us back really nicely, again, to that question of how the humanities have these different forms of knowledge, which are so vital and valuable in this field. And I want to actually ask you about your literature background. Eleanor comes from a lit background, and I also came to this field, like a lot of people do, through science fiction. And so I want to ask you about the novel, because we often think about novels and stories as somehow the antithesis of machine learning, but you actually argue that their structure is comparable, in a way. And so for our listeners, could you explore that comparison, and why you see these two kinds of projects as being synergistic?


LOUISE AMOORE:

So this is in a sense back to the writing question, isn't it, and what's at stake in different forms of writing. And part of it is personal to me, I suppose, in that I have always found engaging with novels to be really productive in terms of my work. Going back 10 years to my Politics of Possibility book, there was a novel in every chapter, because the book was really about post-9/11 imaginaries, and thinking about what sorts of technologies and political economies emerge from that. Every chapter involved an engagement with a so-called post-9/11 novel. So why do I do that? I mean, that's one of those things, isn't it, where you're asking yourself the question of what it is about that. And I think, to me, it's not strictly about the subject matter. It's not "is the novel about technology?" or "does the novel address science and technology?" It's something closer to the form or the genre of the novel, and then the challenge of, well, what's the form and the genre of other forms of writing? So what are the forms and genres of computer science journal articles, which is what I'm working on with some of my colleagues at the moment in a paper on reading and computer science. So when we're talking about text, for me this is about a commitment to a certain philosophy, I suppose. I'm still an avid reader, always, of Jacques Derrida, and I remember my own undergraduate training in deconstruction. People say, "Oh, this is old fashioned", but this is not out of style.
But there's this question of what is and is not a text, and why would we not treat all of these things as texts, including the podcasts of some of these leading computer scientists, like Geoff Hinton and others. And for me, the productive heart of thinking about some of the novels that I engage in the Cloud Ethics book was this question of what it means to have what the novelist John Fowles calls the trace of the rejected alternative. So in the writing there are always also these branching pathways, and the reader doesn't always have access to what those are, but there are these gaps and spaces in the reading and the writing that allow the reader to enter that space. Very often with machine learning we see the opposite of that: we see a desire precisely to close the gap, so that the output will be read clearly by the person actioning it, whether it's in a military or public policy sphere, or whatever that output is and however it's actioned. So I want to reinvigorate this sense from the novel that there remain gaps, that there remain places where the reader has to enter the text in order to try to complete it somehow for themselves, and that it doesn't mean the same thing each time it's read and returned to. So it's almost like taking the openness of the genre and saying, why would we not locate that openness in other forms of writing that are driving towards closures, driving towards a kind of singularity of an output and a kind of clarity around that output? I'm not sure how clear that is, but it's a way of juxtaposing forms of writing that we think of as being anti-algorithmic, that's to say not codified in the same kind of way, and saying, well, okay, how might that allow us to think different thoughts about the forms of writing that are at work in a machine learning age, or in an algorithmic society?


KERRY MACKERETH:

I really, really love that answer, thank you so much for that. And I wonder what we would gain if we stopped thinking of algorithms as bearers of objective truth, or as heralds of the future, and instead started thinking about them as unreliable narrators who provide a very particular, warped vision of the world, sometimes for good or for ill.


LOUISE AMOORE:

There's good work on that, you know. Luciana Parisi has done some really good stuff around the alien. And Orit Halpern, I love her book, Beautiful Data; I can't think why everyone isn't reading it, and I think she's just about to finish a new book. Her account of cybernetics, and her location of histories of psychosis and so on in cybernetics, I find these really productive ways for us not to become caught and trapped in a particular way of thinking critique, I think, in terms of artificial intelligence.


KERRY MACKERETH:

We always publish reading lists with each of our episodes on our website, so we will put these wonderful books that Louise has mentioned there as well. But speaking of wonderful books, just to bring this interview to a close, the time always goes so fast. Finally, we'd really love to hear about the book project you're working on at the moment, about how algorithms are, in themselves, world-building in a way. So for our listeners, could you provide a quick overview of what you're working on, and what excites you about it, or how you're feeling about this project at the moment?


LOUISE AMOORE:

I think it's probably a very long way away from being a book, but it's also at its most exciting point. So I have a five-year European Research Council grant for studying algorithmic societies, and I have a really fantastic team, Alex Campolo, Ben Jacobsen, Ludovico Rella, and we're having some really exciting conversations about how we might think differently about the kind of ethico-politics of algorithms in machine learning. In some ways, you know, the pandemic has prevented some of the initial fieldwork that we were going to be doing, but that has also meant that we've spent so much more time reading, thinking about method, and thinking about how we might situate some of our questions as longer-standing questions about what kinds of world-making are involved in these kinds of technologies. So it's at a very, very early stage.


ELEANOR DRAGE:

Oh, well, we're so excited to read it. And it's just amazing to hear about your work in those early stages. We can't tell you how influential you've been to us, and it's just an extraordinary thing to be able to interview you. So thank you so much for coming on the podcast today, and we hope we'll meet you again very soon.


LOUISE AMOORE:

Thank you both so much. It's wonderful to have this opportunity to talk to you both and to contribute to your podcast, which my students always listen to. So thank you so much for everything that you're doing.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.




