Kerry Mackereth

The Good Robot LIVE! From Berlin

This special bonus episode was recorded at the AI Anarchies conference in Berlin, where we held a workshop exploring what good technology means to participants, and why thinking in terms of ‘good technology’ can actually limit us. Two amazing participants offered to be interviewed by us: Christina Lu, who at the time was a software engineer at DeepMind and is now a researcher on the Antikythera program, and Grace Turtle, a designer, artist, and researcher who uses experimentation and play, like tabletop games, LARPing, and simulation design, to encourage us to transition to more just and sustainable futures.


Image Credit: AI Anarchies Autumn School


KERRY MCINERNEY:

Amazing. So yeah, thanks so much for joining us for our first ever live podcast recording. So for our listeners, just to let you know, we've never done this before, so it might be really chaotic and weird, but we're really, really happy to be in conversation with so many fantastic people here at the AI Anarchies conference in Berlin. But just to kick us off, we're gonna get our very brave people who have volunteered to be live guests to introduce themselves. So Christina, can we start with you? Can you tell us a bit about who you are, what you do, and what's brought you here to the conference?


CHRISTINA LU:

Yeah, so my name is Christina Lu. I'm a software engineer at DeepMind, where a lot of AI research occurs. But at DeepMind I also do a good bit of socio-technical research, so thinking really critically about technology and bringing in fields of thought that aren't often in these spaces. So I've written papers about the cybernetic, autopoietic relationship that forms identity, and recently I've written a paper thinking about the algorithmic internet. So yeah, I'm at AI Anarchies today because I really wanted to get together with the community, people who I often don't talk about AI with, and trouble some of the existing preconceptions I might have.


KERRY MCINERNEY:

That's fantastic. It's so interesting. Okay. You said something which I didn't understand. So I am going to ask you to explain that. So you said the cyberpoetic…


CHRISTINA LU:

Cybernetic, autopoietic…


KERRY MCINERNEY:

Okay, what does that mean?


CHRISTINA LU:

Autopoietic is a word that came from molecular chemistry, if I'm not mistaken. It's basically about a system of self-reinforcing processes that constantly reproduces its own structure. And that's sort of how I think about identity, like human identity, interpersonally: sets of relations and processes that reproduce their own structure, but that are also open to dynamism and movement.


KERRY MCINERNEY:

That's really fascinating. Starting off nice and chunky, but we love it. And Grace, would you like to introduce yourself?


GRACE TURTLE:

I just want to say, I'm also into the idea of autopoiesis.


ELEANOR DRAGE:

Oh yeah go on!


GRACE TURTLE:

But um, so who am I? My name is Grace Turtle. I come from the world of design and foresight. I am one part of three of a studio called Becoming Studio, where we do design futuring. And I'm also a member of the DCODE network, which is a PhD programme that is rethinking design for inclusive futures. The whole idea behind DCODE is to rethink design in terms of artificial intelligence; more or less everything connects to artificial intelligence, but we look at that at the level of interaction, at the level of policy, and at the level of the algorithm, to give some examples. And within the DCODE network, I specifically look at predictive relations, so predictive technologies, specifically digital twins, and I do that from a queer perspective. So I am specifically looking at queering AI through the lens of logics, ethics, aesthetics, and imaginaries.


ELEANOR DRAGE:

Oh cool, what is a digital twin? And how can we queer it?


GRACE TURTLE:

Okay, so a digital twin is, simply put, a virtual copy of a physical entity or asset. Usually when I'm explaining what a digital twin is, in a conversation like this, I have a glass in front of me, so we'll start with a glass. A glass is a very simple object. It doesn't have a lot of materials, and it has a single purpose, more or less; you could, I guess, glitch the purpose of the cup. But it would exist as a 3D model, potentially before it is made, and so you'd be able to prototype that glass before it's produced. A digital twin essentially takes this physical asset (it doesn't always have to be physical) and creates a virtual model; you could think of it as the data double. It's meant to be real time, and essentially there is this information loop that flows between the physical asset and the virtual asset. The digital asset is intended to optimise the physical asset; it creates what-if scenarios. And it's all intended to manage, maintain, and optimise this physical thing. So the cup is a very simple example, but you could think of a rocket ship, you could think of a city, and then it becomes a little bit more complex.
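
For readers who want the loop Grace describes made concrete, here is a minimal sketch under our own assumptions: a toy asset with a single sensed value. The class and method names (PhysicalGlass, DigitalTwin, what_if, and so on) are hypothetical illustrations, not any real digital-twin platform's API.

```python
import random


class PhysicalGlass:
    """Stand-in for a sensored physical asset (the glass in Grace's example)."""

    def __init__(self):
        self.temperature_c = 20.0

    def read_sensors(self) -> dict:
        # A real asset would stream sensor data; here we just jitter a value.
        self.temperature_c += random.uniform(-1.0, 1.0)
        return {"temperature_c": self.temperature_c}

    def apply(self, adjustment: float) -> None:
        self.temperature_c += adjustment


class DigitalTwin:
    """The virtual 'data double': mirrors the asset and runs what-if scenarios."""

    def __init__(self, target_c: float):
        self.state: dict = {}
        self.target_c = target_c

    def sync(self, readings: dict) -> None:
        # The near-real-time half of the information loop: asset -> twin.
        self.state.update(readings)

    def what_if(self, adjustment: float) -> float:
        # Simulate an adjustment on the model without touching the asset.
        return self.state["temperature_c"] + adjustment

    def recommend(self) -> float:
        # Choose the candidate whose simulated outcome lands closest to the
        # target: the 'optimise the physical asset' half of the loop.
        candidates = [-0.5, 0.0, 0.5]
        return min(candidates, key=lambda a: abs(self.what_if(a) - self.target_c))


glass, twin = PhysicalGlass(), DigitalTwin(target_c=20.0)
for _ in range(10):  # information flows asset -> twin -> asset, repeatedly
    twin.sync(glass.read_sensors())
    glass.apply(twin.recommend())
    print(f"glass is now {glass.temperature_c:.2f} °C")
```

The point of the sketch is the shape of the loop: readings flow from asset to twin, simulated what-ifs stay inside the twin, and only the chosen adjustment flows back to the physical thing.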


ELEANOR DRAGE:

So it's used everywhere in engineering, and also poses its own ethical issues of copying and reproducing systems as digital twins.


GRACE TURTLE:

Yes.


ELEANOR DRAGE:

So how do you queer them?


GRACE TURTLE:

So how you would queer a digital twin: the way I think of it is, you know, starting at the level of the logic of the digital twin. I think it's good to think of where digital twins come from. They come from engineering; they come from the production of machines that can kill people. And so you want to manage risk with a digital twin. The way that I think about it is that there is desirable predictability that you want to manage, there is undesirable predictability that you want to manage, and there's undesirable unpredictability, which is a huge risk. And then there is this space of desirable unpredictability. This is a space that no one really thinks of, because it's a happy accident. It's like a pleasant surprise; you know, no one intentionally designs for that. And so, if you were to think about this as two axes, that is the quadrant that for me is most interesting, because that is the space of opening future potentialities, and that is the space for queering. I look at queering digital twins, again coming back to logic, from this lens of mutation, disidentification with the purpose in order to re-engineer meaning, and this space of possibility, or in-betweenness, which I think of as the borderland. And so how you queer a digital twin is you apply that logic to the structuring and intention of the digital twin. And with that as a thought exercise, you should be able to focus more on opening futures rather than closing futures, which is really the goal of digital twins today.
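
To help readers hold the two axes in mind, here is a toy sketch of the desirability-by-predictability framing; the function and its labels are our own paraphrase of Grace's taxonomy, not anything she presented.

```python
from itertools import product


def quadrant(desirable: bool, predictable: bool) -> str:
    """Label a scenario within the desirability-by-predictability framing."""
    if desirable and predictable:
        return "desirable predictability: to be managed"
    if predictable:
        return "undesirable predictability: to be managed"
    if not desirable:
        return "undesirable unpredictability: the huge risk"
    return "desirable unpredictability: the happy accident, the space for queering"


# Walk all four quadrants of the two axes.
for d, p in product((True, False), repeat=2):
    print(f"desirable={d}, predictable={p} -> {quadrant(d, p)}")
```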


KERRY MCINERNEY:

That's absolutely fascinating. And just the kinds of conversations we've been having here today and in the keynotes, yeah, just trying to think about all the different ways people are envisioning these different kinds of futures, has been phenomenal. So for our listeners who might not know what AI Anarchies is, and it's something we've been talking about a lot, could you both explain a little bit what you've been doing here over the past week? What kinds of themes or conversations have you been having? And yeah, what are you hoping to take away from this amazing event, which has been co-organised by Maya Indira Ganesh, one of our previous guests on the podcast and a member of the Centre for the Future of Intelligence. So Christina, do you want to kick us off?


CHRISTINA LU:

I think during this time at AI Anarchies, at least I personally, have grappled a lot with binaries, right. There's resistance to technology versus repair. There's, you know, trying to deal with the harms embedded in the technology that exists now, and imagining a possible future, perhaps under different circumstances. And I think one theme that has come up through this conference, or school for learning and unlearning, at least for me, is abandoning the notion of the binary and going towards this territory of a secret third thing: thinking of things as existing within a field of potential, with potential to draw you in different directions, thinking of technology as having different vectors it could move towards. And that allows flexibility and dynamism when thinking about AI and what is harmful, because we're not saying, ad hoc, oh, massive scale is bad. What we're saying is, maybe we should move in this direction for a bit and see what's going on.


ELEANOR DRAGE:

So you said before, to me that the secret third thing is not this thing or that thing, but something beyond, and that's as yet unthought. Do you have any concrete examples of that, of how we're kind of trapped between two possibilities, like reform and resistance, and we want to think of another option?


CHRISTINA LU:

Yeah, similar to what Grace was just saying earlier, a lot of what I'm interested in these days is: we have all of this technological infrastructure in place, so what are ways of hacking and leveraging this infrastructure, corrupting it for our own ends, contaminating it? Sorry, it's very xenofeminist. But yeah, I'm interested not in compatibility but in being incompatible, and seeding something new: leveraging the structure of the existing infrastructure, usurping it, contaminating it, to use it for something different. And I wouldn't say that is necessarily repair nor refusal. I think it's a bit more sneaky.


KERRY MCINERNEY:

We love something sneaky. Grace, what are your thoughts? You look unsure.


GRACE TURTLE:

I think I agree. I think someone brought up the other day, indirectly, this quote that is often quoted, about the master and the master's tools. How does this thing go?


CHRISTINA LU:

“You can’t destroy the master’s house with the master’s tools”?


GRACE TURTLE:

Yeah. I can't remember what comes after that.


ELEANOR DRAGE:

Doesn't matter. It's the thing that's extracted most from Audre Lorde! Maybe, maybe in context, it actually means something very different! But it's used as a provocation.


GRACE TURTLE:

When I think of this quote, you have to think that it's not about the tools or about dismantling the house; it's about remixing, it's about building a new house and doing multiple things at the same time. Thinking about this kind of dualism between reform and revolution is not so useful. You need both to reform and to have a revolution at the same time. Nothing is inherently good or bad. I don't know where we're going with that, but there are third things that you can do; you can also mess with the thing to make it better.


ELEANOR DRAGE:

Yeah, it's quite a bind. And every year I get Gender Studies students coming to me with their dissertations saying, “Can I knock down the master's house with the master's tools?” And I'm like, just focus on the dissertation for a bit first!


GRACE TURTLE:

I think we need to move away from this whole master's house and tools business. Even though I really like you, Audre Lorde, what an amazing human being, we need to move away from the master's house. Just ignore it.


KERRY MCINERNEY:

I think that's so interesting. And I also find it interesting, when we're thinking about repair versus resistance, that revolution is a kind of third category; so many ‘r’ words out there. And something we were discussing as a group was trying to think about, you know, what does revolution look like in this space? Because we so often talk about how do we reform, how do we resist, on quite an individual level: do we, you know, individually stop using platforms, for example? But as Jack Halberstam, one of our previous guests, talked about, what would it mean for us to all collectively refuse to take part in some of these tech systems? And is it even possible to opt out? But with that in mind, I want to take us to the core questions that we ask on this podcast, very much as a provocation: what is good technology? Is it even possible? And how can feminism help us work towards it? So Grace, let's jump to you. You've talked about your interest in these unpredictable spaces that can be really desirable in design, but we'd love to hear more broadly what you think about this idea of good technology.


GRACE TURTLE:

Yes. So before we started recording, I questioned why this question: why good technology? I think there is no ‘good’, and good is not necessarily something that we should be aiming for. I think we should be complicating our relationship with technology, and I think of that in my work in a few ways. Coming back to what we started talking about in terms of autopoiesis: what is the auto, what is the self, when it comes to technology? We need to complicate the relationship that we have with technology as a tool, because the tool is, again, very binary; it's very connected to Cartesian thought. It supposes that humans are somehow above these tools. And when we co-perform with technology as much as we do, and I think we've seen a lot of really fantastic examples here over the last week, we have a shared agency. Something that I look at a lot is this co-performativity and the shared agency that occur when we relate to technology. And again, we are not inherently good or bad, so how can technology be good or bad? So I'm more interested in forms of complication.


ELEANOR DRAGE:

And Cartesian dualism, which is what you were just talking about, comes from this guy René Descartes, who came up with the idea that the mind and the body are separate: binary opposites of each other. And that's a myth that lives on in AI today, with people thinking that artificial general intelligence will be this big brain detached from a body. It's not an embodied tool, there are no material aspects to it; it's this kind of floating mind, and you see the floating mind appear again and again in science fiction.


GRACE TURTLE:

Also, to add one more point there: I feel like when we think about AI specifically, we have to think of its origins, its genealogies, its epistemologies, and go even further back to its early imaginings. I particularly have been thinking a lot recently about the Turing test. And I feel like no one acknowledges very specifically that this was a gay man who modelled a system of machine intelligence passing as human intelligence, because he was a gay man having to pass as straight. And this test is so important for testing artificial intelligence, because it's trying to pass as human, and this is inherently wrong. It's like, again, you keep coming up against these binary ways of understanding intelligence, or artificial intelligence, that are problematic.


CHRISTINA LU:

Sorry, not to jump in, but I've ignored the question of good technology; I've been thinking deeply about how artificial intelligence as it exists today, in machine learning, is fundamentally incompatible with queerness, because it cannot hold this multiplicity, right? Both supervised and unsupervised learning exist to make categories in data, or rely on categories already existing in data. And the whole reason I was talking about autopoiesis in the beginning is because in this paper we were trying to diagnose how the way of thinking about identity in machine learning is so flawed, because people think of it as static categories. They think of them as discrete. And they also think of it as epistemological, as something fully knowable about one person, when I would say it's co-constructed among people, and with machines as well. And that is something difficult for me to grapple with; it makes me think about how machine learning as it exists today is incompatible with these alive, dynamic processes that make up who we are.


CHRISTINA LU:

Can it be done differently? Yeah, I mean, we thought of some examples in this paper, and I think it would require actually troubling the statistical foundations of machine learning in the first place. But one idea we had in the paper, I think mostly from my co-author Jackie Kay: machine learning depends on data sets, right, and it's often in these data sets that these categories, these partitions, first come to be. So one idea we had was, if you want to collect data on a group of humans, a group of people, why not have a relational data set, in which each person in the data set categorises everyone else according to whichever metric we've decided, and thus you're specifically encoding the subjectivity that exists in the group. You're specifically trying to encode whatever perspectives are contained. And I thought that was quite an interesting idea. But yeah, I think it definitely requires a rethinking of the logics that underpin AI right now. And like you said, I'm quite interested in machines that can allow us to be subversive and co-construct something together with them.
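
For readers curious what a relational data set might look like in practice, here is a minimal sketch under our own assumptions. The structure (a label per rater-subject pair, rather than one canonical label per person) paraphrases Christina's description; the participant names and placeholder labels are purely illustrative and not from the paper itself.

```python
from collections import defaultdict
from itertools import permutations

people = ["ana", "bo", "chris"]  # hypothetical participants

# The unit of annotation is the (rater, subject) pair: every participant
# labels every other participant along some agreed metric, rather than an
# external annotator assigning one canonical label per person.
labels = {
    (rater, subject): f"{rater}'s view of {subject}"  # placeholder annotations
    for rater, subject in permutations(people, 2)
}

# Regroup by subject: each person now carries a set of situated perspectives
# rather than a single fixed category.
views: defaultdict[str, dict] = defaultdict(dict)
for (rater, subject), label in labels.items():
    views[subject][rater] = label

for subject, perspectives in sorted(views.items()):
    print(subject, perspectives)
```

The design choice the sketch makes visible is that subjectivity is kept in the schema itself: a downstream model would have to consume many situated labels per person instead of a single ground-truth category.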


KERRY MCINERNEY:

That's really fascinating. I'm really intrigued by this idea of a relational data set. And certainly, this is something that Eleanor and I think about a lot: how not only are the kinds of categorisations that we use themselves really invested in these histories of racism and sexism, but also the process of labelling is often so erroneous, because of the non-consensual collection of data and then the mislabelling of people in a way that tries to associate external appearance with internal characteristics, which is itself such a scientifically racist logic. But we now actually want to bring you back to that question, which you've touched on a lot: what is good technology? Is it possible, and can feminism help us work towards it? We'd love to hear your thoughts on that.


CHRISTINA LU:

Yeah, so I'll answer the question now. So when I think about what makes good technology... we also didn't talk about the structure of this autumn school, but in the mornings we have had provocations from amazing academics like Sarah Sharma, Jackie Wang, and others. On the first day, Sarah Sharma came and talked a little bit about the necessity of scrutinising the logics that underpin why AI is built. And I think, when you ask me what good technology is, I ask: what are the logics that are implied when we build technology? Often it's, as Sarah talks about, the utility of others to us. Other logics that I think are troubling are efficiency, ease, massive scale, and so on. So I am interested in technology that instead seeds new models, infrastructures, and so on, that allow for, you know, chance encounters and inefficiency; here I am going on again. But yeah, technologies that go against these ideas of efficiency and productivity, and instead help sculpt us into new forms that allow for different social connections or different mediations.


ELEANOR DRAGE:

Yeah. And here you converge, Christina and Grace, because Grace was talking about meandering and getting lost and the wilderness. So there's something there, right, that's come out of this week, where you've all thought about how we can programme against efficiency somehow, and stay in a place that's very human; you know, it's very human to fail and have chance encounters, and that serendipity is what makes life really beautiful. So is that what you think? Are you attached to that kind of way of living, and is that what you want to replicate in AI?


GRACE TURTLE:

Replicate? I think so. I'm not sure what we would be replicating, because I'm not sure where it exists today, but I think it is definitely about introducing new friction to create something new, because we could say that our existing economic models, our capitalist, heteropatriarchal systems, forms of techno-determinism shaped by particular worldviews, are destroying us and everything around us. So I think it is really important that we introduce new logics, that we break old logics, to create new worlds, basically. So it's not resisting the technology as such, but it is rejecting and resisting particular ways the technology is inscribed, ways that reinscribe us back into this very toxic system that we exist within today, I would say.


KERRY MCINERNEY:

That's really fascinating. I feel like that really loops us back to what Christina was talking about at the beginning of this episode. And I think these themes of constant inscription, but also of trying to create these different messy, wild futures, are so key. But I really want to say thank you so much to both of you for spontaneously appearing on our podcast. Thank you to everyone who took part in this workshop; it has been such a delight to be in conversation with you all. And to our lovely listeners: we hope that you've enjoyed hearing more about the AI Anarchies workshop, and we look forward to talking again soon with both of you. So thank you so much.



