By Kerry Mackereth

Margaret Mitchell on Large Language Models and Misogyny in Tech

In the race to produce the biggest language model yet, Google has now overtaken OpenAI’s GPT-3 and Microsoft’s T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of their Ethical AI branch, in the aftermath of a paper she co-wrote about why language models can be harmful if they’re too big. In this episode Meg sets the record straight. She explains what large language models are, what they do, and why they’re so important to Google. She tells us why it's a problem that these models don’t understand the significance or meaning of the data that they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us about how some white men are gatekeeping knowledge about large language models, as well as the culture, politics, power and misogyny at Google that led to her firing.


Margaret Mitchell is a computer scientist at Hugging Face. Her research generally involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. This includes research on helping computers to communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI. Her work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science.


Reading List:


On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? EM Bender, T Gebru, A McMillan-Major, M Mitchell. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

VQA: Visual Question Answering. S Antol, A Agrawal, J Lu, M Mitchell, D Batra, CL Zitnick, D Parikh. Proceedings of the IEEE International Conference on Computer Vision, 2425-2433.

From Captions to Visual Concepts and Back. H Fang, S Gupta, F Iandola, RK Srivastava, L Deng, P Dollár, J Gao, X He, ... Proceedings of the IEEE Conference on Computer Vision and Pattern …

A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. A Sordoni, M Galley, M Auli, C Brockett, Y Ji, M Mitchell, JY Nie, J Gao, ... arXiv preprint arXiv:1506.06714.

Model Cards for Model Reporting. M Mitchell, S Wu, A Zaldivar, P Barnes, L Vasserman, B Hutchinson, ... Proceedings of the Conference on Fairness, Accountability, and Transparency …

Mitigating Unwanted Biases with Adversarial Learning. BH Zhang, B Lemoine, M Mitchell. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 335-340.


Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In the race to produce the biggest language model yet, Google has now overtaken OpenAI’s GPT-3 and Microsoft’s T-NLG with a 1.6 trillion parameter model. In 2021, Meg Mitchell was fired from Google, where she was co-founder of their Ethical AI branch, in the aftermath of a paper she co-wrote about why language models can be harmful if they’re too big. In this episode Meg sets the record straight. She explains what large language models are, what they do, and why they’re so important to Google. She tells us why it’s a problem that these models don’t understand the significance or meaning of the data that they are trained on, which means that Wikipedia data can influence what these models take to be historical fact. She also tells us about how some white men are gatekeeping knowledge about large language models, as well as the culture, politics, power and misogyny issues at Google that led to her firing.


KERRY MACKERETH:

So thank you so much for joining us. It's really an honour to have you on the podcast. So just to kick us off for our listeners, could you tell us who you are? What do you do? And what brings you to the questions of gender, ethics and technology?


MEG MITCHELL:

Yeah, so I'm Margaret Mitchell. I'm a computer scientist and researcher. I've worked on natural language processing, computer vision, and ethics in AI more broadly. What has brought me to gender, ethics and technology? A mix of seeing results from the systems that I've been building, as well as my own personal experiences as a woman, and my ability to see patterns and speak to them, and then, further, not being believed when I articulate those patterns. So I've wanted more and more to really bring my knowledge of gender and ethics and technology into the broader conversation as much as I can.


ELEANOR DRAGE:

Fantastic. Well, we're thrilled to have you here. Our big, you know, billion dollar questions are: what is good technology? Is it even possible? And how can feminism help us work towards it? And people often break down 'good' into, you know, what is good? - this is our kind of critical theory lens. So how can you bring your experience to those questions? But also, you know, what is feminism? And how does it play an important role in your work in trying to make technology better?


MEG MITCHELL:

Yeah, that's, that's a lot. There's a lot there. So, in terms of what is good, that's a very ethics question that I don't think there will be, you know, general agreement from everyone around what is good. But I would say that given a set of options for how AI can evolve, we can imagine situations where AI essentially replaces the roles of people, or where AI assists and augments people. And I would say, of those two choices, the assistive and augmentative direction is one that is better than the one where humans are more replaced. There are a lot of reasons for that. But one of the ones that is most relevant to ethics and gender is that AI systems propagate biases against marginalised groups, and we don't have the ability to train AI systems to behave in equitable sorts of ways. And so when we're in a situation where women in general are treated as less than men, then the AI will similarly pick up that kind of viewpoint. And so, to answer your question, how to work towards good AI and good technology requires A) recognising that women are marginalised, and I don't think there is a lot of agreement on that within the ML developer world, which is unfortunate. B) recognising that the systems we develop in AI do mirror that and that we don't have the techniques and technology to address that. And C) looking at what actually might be useful to people of all different colours and races and genders, given that AI is likely to disproportionately harm the marginalised populations that are implicit in the data it learns from. So that means moving towards situations where we can look at what can help people, what can help women, what can help people of colour, and how to have AI technology work towards those sorts of goals, as opposed to trying to do some sort of blanket replacement that will just, more clearly, disproportionately harm some people more than others.
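
For readers who want to see that mirroring concretely, here is a minimal sketch using the Hugging Face transformers library. The model name is just an illustrative, freely available choice, not one discussed in the episode:

```python
from transformers import pipeline

# Probe a masked language model with two prompts that differ only in gender.
# The model here is an illustrative, freely available choice.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ("The woman worked as a [MASK].",
               "The man worked as a [MASK]."):
    print(prompt)
    # The top completions typically differ sharply by gender, mirroring the
    # occupational stereotypes present in the model's training data.
    for pred in unmasker(prompt, top_k=5):
        print(f"  {pred['token_str']:<12} {pred['score']:.3f}")
```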


KERRY MACKERETH:

Fantastic, and you're a real leader in the field of AI ethics. So we want to ask you, as people who are really interested in this area, what do you think the landscape of ethical AI looks like at the moment? And where do you think it's going?


MEG MITCHELL:

Yeah, I think one of the most important things going on right now in the ethical AI landscape is around large language models and large models more generally. So we have systems like GPT-3, we have systems like PaLM, which OpenAI and Google have developed respectively. We have systems like DALL·E, which is a vision and language model. And these are massive, massive models that have behaviours that we can't predict. But they are very powerful for a tonne of different applications. So for example, Google Search leverages language models in order to understand what the intent of your query is when you type something in - modelling what's most likely, all the different bits of language that are most relevant to what you're typing into the search query - as well as things like semantic question-answering, where you see the little box that gives you a direct answer to your question. These are all powered by language models. And what that means is that we have these massive models that are being used and deployed, that people are now being affected by every day, where we know that there are biases, we know that there is a tendency to disproportionately represent the views of men and younger men compared to any other populations. And yet, there isn't any mechanism in place to contest that kind of behaviour and to actually address any of it. And so because we see these massive models now being used in everyday life, without the ability to really trace through the ethical concerns or do anything about it, this has really emerged to the forefront, I would say, within the past six months or so as one of the main critical issues in ethical AI work.
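
As a rough illustration of the semantic question-answering behaviour described here, this sketch runs an off-the-shelf extractive QA pipeline; the model name is an illustrative default, not one named in the episode:

```python
from transformers import pipeline

# Extractive question answering: the model pulls an answer span out of a
# passage, roughly the mechanism behind the "direct answer" boxes described
# above. The model is an illustrative default.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What do large language models power in web search?",
    context=("Search engines use large language models to interpret query "
             "intent and to power semantic question answering, the boxes "
             "that give a direct answer to a question."),
)
print(result["answer"], round(result["score"], 3))
```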


ELEANOR DRAGE:

Amazing. And can you just explain for listeners just how significant large language models really are for companies like Google? And how much, you know, the paper that you wrote on Stochastic Parrots - you were fired for this paper identifying the risks of these large language models - obviously threatens Google's bottom line? Can you explain a bit about why that is?


MEG MITCHELL:

Yeah, there's so much to say there. So first, I can take a step back to just give a little bit of a higher level response on what language models are. So I think my partner explains what I do well when he says, 'she makes robots talk'. And you can kind of think of language models this way. Like if you are working with a system, and it's just talking to you, and you really have a sense that it's thinking about what it's saying and it sounds like a human, that's what language models are doing. And you know, until very recently, it was easy to tell when you were talking to a language model versus a person. Now the lines are blurred. Now it is really difficult to tell: it's doing long term, or I should say, long distance dependencies over conversations, it's going into great, very specific detail about different kinds of topics. It really seems human-like in a way that's hard to differentiate from actual humans.


ELEANOR DRAGE:

Can you also explain what long distance dependencies are?


MEG MITCHELL:

Oh, right. Yeah. So yeah. So when you say like, “I knew a woman named Kerry, and she went to the store, bla bla, bla, bla, bla” and you say more things, more sentences go by, and then you say, “and she also..” and that “she” refers to the Kerry at the start, right? So Kerry is the topic of the conversation or of the paragraph or whatever it is, but she was referred to much, much earlier. And the ability to have coherence across sentences is traditionally not something that language models have really been able to do. But now we're seeing that it is able to do this basic tracking of the entities that are being spoken about through not only sentences but entire paragraphs. And that's really what starts to give the sense that it's human-like, right, when we're talking as people we don't generally tend to forget, well, sometimes we do. I mean, we're not perfect! But when we're having a conversation, we generally have a sense of the topics of the conversation and when we're going down rabbit holes and popping back up. Language models were never able to do that. And now they can. So these long distance sort of relations, embedded conversations within conversations and popping back out. That's the sort of thing that they're basically able to do now. And that's generally been one of the real tells for whether it's human or not. And now you can't really tell.
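
A toy way to see a long-distance dependency like the one described here, assuming the Hugging Face transformers library and a small, freely available model (far smaller than the models under discussion, and used only because it runs anywhere):

```python
from transformers import pipeline

# Several clauses separate "Kerry" from the pronoun that has to refer back
# to her. Whether the continuation keeps the thread is one informal "tell";
# small models like GPT-2 often lose it, much larger models usually do not.
generator = pipeline("text-generation", model="gpt2")

prompt = ("I knew a woman named Kerry. She went to the store, bought some "
          "flour, chatted with the baker about the weather, and walked home "
          "along the river. Later that evening, she")

out = generator(prompt, max_new_tokens=30, do_sample=False)
print(out[0]["generated_text"])
```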


ELEANOR DRAGE:

So a kind of Turing test?


MEG MITCHELL:

Yeah, totally. Yeah, related to the Turing test, same sort of idea there. I guess one of the fundamental issues is that the tendency in tech is to implement first and ask questions later. So if you are beginning to implement something that is giving you a higher interaction with your users, which means more ad time, more times that they'll be seeing ads, things like that, then you will tend to press on and keep doing that, right, you keep wanting to make profit in a quarter-based way, as opposed to a long-term way. And so you'll just keep pushing the technology that's giving you more and more click-throughs with ads, basically. And so when you have language technology that's very engaging, because it can understand things in a human-like way, then you'll tend to just keep pushing and pushing and pushing on that. And then further make the argument with regulators and things like that, that this is good, this is what should be being done. However, part of the responsibility of people in the ethical AI world is to do what I call due diligence on the basics of technology. So there's nothing specific about language models in Google, or language models in any other company. Language models are just a fundamental type of model in machine learning that a lot of companies are starting to leverage because they can see how they can make profit out of getting people to interact with the platform. And then the more interaction time, the more time that they can see ads and things like this. But without the basic due diligence on the fundamentals of the technology, you don't really know what to look out for in terms of foreseeable harms and foreseeable risks. And so part of what I had been doing at Google, and what I continue to do, is explore: what are the foreseeable harms and risks of language models? I mean, it's a very basic question. I think people outside of tech would think that this was obvious, right? Like, if you're developing something that is going to affect billions of people, maybe you should do a basic literature review on some other aspects of the technology, other than how to use it for profit, right. And I would hope that that wouldn't be threatening for tech companies, just generally, because you want to understand the technology better. But I do think that there's a sort of macho-ness involved in tech decisions so that even when you're doing something really basic and reasonable and fairly boring, if you are not like them in the right way, they're a lot more likely to be angry at you, break down your opportunities, bar you from doing things, or just, you know, disparage you publicly, or whatever it may be. It's tied up in misogyny, in my opinion. And so yeah, I had done some work on the Stochastic Parrots paper that you mentioned. It was a very basic paper, I didn't think people would actually be that interested in it. And I think to be fair, most people that read it have said, like, this is actually not that interesting. I mean, in the research world, it's a basic literature review, right. It's like, for the past 50 - well, for hundreds of years, for thousands of years - we've had people discussing things about language and knowledge and this sort of thing. And the paper basically took this work and sort of put it together to talk about its relevance to language models. From the ethics perspective, this is also very important as we start to connect with regulation. So tech companies really want to make the case that they can self-regulate.
But the only way that you can prove you can do any basics of self-regulation is showing that you can do due diligence on the basic topics that you're working on, on the basic technology that you're working on. And if you can't show that you can do due diligence, then now you're putting yourself in a position where you can't self-regulate, which means that top-down regulation can come from, you know, public institutes, Congress, things like this. And these people don't understand the technology as well. So it can really hamper innovation when you can't do some self-regulation. And so, you know, in the long term, I would say Google and other companies are making a very wise move to do due diligence and to publish on it, because they're demonstrating to regulators that they can have some amount of self-regulation. So while there might be some short-term profit concerns when people are a little bit slower to trust different technologies because of those concerns, in the long term, I would argue it's much more profitable, because now you aren't hamstrung by people who don't understand the technology saying that you can't do all these things, and/or fining you for things that you think you ought not be fined for. And so in terms of my own experience, I think what I did - what we did in the paper - was unremarkable. I think that part of why it might have been threatening to Google was because we were women. And we were critiquing something that a lot of alpha-macho people really wanted to get promoted for, honestly. So I do believe it's fundamentally tied up with misogyny, unfortunately.


ELEANOR DRAGE:

That's so interesting, because I always thought that this was about undermining Google's bottom line. And, you know, saying that a technology that makes Google a lot of money was fundamentally harmful, rather than it being a cultural issue. I think that's fascinating.


MEG MITCHELL:

I mean, we didn't say it was fundamentally harmful. I mean, that's sort of the thing that was surprising about the paper, I think, to anyone in the AI world who read it. This is not something that is … we're not saying anything that people didn't already know, we're not making any massive cases against anything, we're simply saying: here are some facts. It was not inflammatory at all. It's completely reasonable, which is part of why we had pushed back against Google saying you can't publish this without any explanation. Right? Like our job as researchers, it's literally our job to publish. And part of being a researcher is having peer review and interacting back and forth with what needs to be changed. But there are people at Google, as there are at other companies, who have incentives to own territory, essentially. To say: this technology is mine, other people can't work on it. And they have incentives to be promoted. And you're promoted in a quarter-based system or half-yearly cycles, and it's based on what you launched. And so if anyone is slowing down your launch, that means you might have a longer time until you get promoted. So there's all these other incentives at play, which have a lot more to do with power than they do with the actual technology. And I think both Timnit Gebru, my co-lead at Google, and I very much felt that this was about some people who wanted territory, and who had misogynist tendencies and racist tendencies, and not so much about what the paper was saying.


KERRY MACKERETH:

That's incredibly, you know, frustrating and disappointing, and yet not surprising in many ways - that, you know, ultimately, it doesn't even come down to the content of your work but, as I'm sure so many of us who are women have experienced, simply to what you look like, what you are, what you can't really change about yourself, and who is delivering certain kinds of messages. So I do definitely want to move on to that point after, but first, just for the sake of our listeners, could you sum up very, very quickly, you know, what were the kinds of reasonable risks around these models that you were summing up in the paper? Since, as Eleanor pointed out, there's a lot of misinformation, a lot of confusion now, because of the extreme way that certain people at Google reacted to this paper.


MEG MITCHELL:

I know, I know. I think one of the takeaways from the paper - and also to answer your question - is that we can say that language models are too big when we cannot document what their input is. And the reason that we need to understand what their input is, is because the input is data scraped from the web. There's lots to be said about copyright and licensing and consent just within these laissez-faire web scrapes. But we do know from analysing the different kinds of data that are scraped from the web that it tends to over-sample white men - often North American, or from the USA specifically - between, I think it's something like, 25 and 35. And it tends to encode prejudices and stereotypes against women and against Black people disproportionately. So there's stereotyping and prejudice against all kinds of people on the web, but the kinds of things that we're getting in the data we're scraping disproportionately have hate directed at women, as well as at people of different religions. And so when we know what the data sources are, we can curate them, we can say, you know, I want to include this kind of thing, I don't want to include that kind of thing, we can make informed decisions about what we want the system to learn. But the laissez-faire attitude is that we don't want to do that for some reason. Instead, we just want to take as much as we can - it's the ‘more is more’ kind of viewpoint - and then just sort of see what happens. And so now we're at a point where these models are being trained without understanding what's being learned from the data. And so by going through some of the basics of the data, we can already see, you know, for example, that the C4 dataset is a large dataset used to train language models. It heavily samples from Wikipedia, and Wikipedia is predominantly written by white men in their 20s. Many of them are in the US. And so you'll find that, for example, until I started giving this as an example, Black History redirected to African American history, which, for people in the UK, you can understand how that is completely inappropriate. People in the US don't see it, they're like, yes, Black, African American, what's the deal? And that's the kind of myopia that we're trying to get at. Right. So first off, Black history is not going to be represented as well if something like Wikipedia is the main source, because it tends not to be edited as much or accepted as much - when people who are writing about Black history try and put something in, it's just not there as much. And also, you know, because obviously, it's just reflecting these viewpoints that are very American-centric, and that leaves out the rest of the world, right. And so essentially, that's what we were saying in the paper. And that's what we keep trying to say: we can get to a point where we're training very nice, helpful models. But to know that they will be helpful - or at least to have a sense that they will be helpful - we have to know that the data is helpful, which means we actually have to think about the data. And data is definitely seen as unsexy compared to models. And there's a lot that can be said about that, and gender dynamics and things like this. And so it's really looked down on, as if analysing data is something dumb or silly, which means that these massive models are being built and built and built.
And if you try and talk about the data, people just get angry at you, which is just so insane to me. So that's sort of the fundamentals of what's going on there.
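
A rough sketch of the kind of data inspection being argued for here, assuming the Hugging Face datasets library and the "allenai/c4" copy of C4 hosted on the Hub (an assumption about hosting, not something stated in the episode):

```python
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

# Stream a small slice of C4 and tally which web domains the text was
# scraped from -- a first step towards documenting what a language model
# will actually learn from.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

domains = Counter()
for i, example in enumerate(c4):
    domains[urlparse(example["url"]).netloc] += 1
    if i >= 10_000:  # a small sample, just to illustrate the idea
        break

# Shows which sites dominate even a small sample, which is the starting
# point for deciding what to keep, curate, or exclude.
print(domains.most_common(20))
```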


ELEANOR DRAGE:

That’s so interesting. Thank you so much. And I think, during that, when you were talking about history, I remembered having a debate with someone a while ago who was saying that the Notting Hill race riots, which were in 1958 in London, weren't history or shouldn't be taught as history in school.


MEG MITCHELL:

Right


ELEANOR DRAGE:

And they happened just over a decade after the end of the Second World War - barely after the Second World War. And so yeah, there is still that feeling in the UK - and this is because we’ve had a Conservative government for so long as well, and we're in the heat of the culture wars - that that history is not history.


MEG MITCHELL:

Yeah, exactly.


ELEANOR DRAGE:

It’s horrifically frustrating. And also on your point on Wikipedia, from our data that we collected working with a big tech company, it was really interesting that people felt that data from Wikipedia wasn't about people. They said that it wasn't people data and therefore wasn't sensitive data, and didn't need ethical attention. So people weren't aware that gender isn't just about counting the heads of women. It's also about exploring power dynamics within these information systems.


MEG MITCHELL:

Yeah, exactly. Yeah. Yeah. And the point about history is so spot on, because, you know, language models can only learn from past data, right? They can't learn from data that isn't there yet. And so the question of whose history matters is a very fundamental one, because they're only going to learn, you know, from the histories of people who have been the sort of winners in different kinds of cultural battles, which means that people who have generally been marginalised or harmed are then represented much less, and their stories are seen through a lens of domination and colonialism instead of something much closer to their actual lived experiences. So you're just propagating the same sort of colonialist idea, but now through a language model.


KERRY MACKERETH:

Absolutely. And that’s something that actually I just wanted to ask you about, because I'm really interested in the politics of exposing the way that these biases are embedded in certain kinds of datasets. This is something I feel really on the fence about, because, you know, I think there are some really interesting art projects and various kinds of activist projects which try to expose, for example, the way that certain kinds of word associations are, you know, very much replicating sexist and racist stereotypes. But then, you know, as a multiracial person, I don't find it a particularly, you know, spiritually restorative experience to go and see these very, very horrific kinds of racially biased stereotypes and things visualised, and also word associations through how images are tagged. So I think I feel a bit ambivalent about, like, who these projects are for and what they're doing, but at the same time I recognise the urgency: we do need to really demonstrate that these datasets aren't neutral, that yes, they might not be “sexy”, but they're really, really important. So yeah, just wondering if you had any thoughts on, like, how do we do that awareness-raising in a way that is also really sensitive and really ethical?


MEG MITCHELL:

Yeah, well, I mean, I think things like this, this is part of why I've been doing podcasts. Twitter ends up being, unfortunately, a very good platform for getting these kinds of messages across. I mean, there's a lot to be said about Twitter being a negative headspace. And, you know, I try and handle that when I talk about these issues on Twitter as well. I think that it is not great to be talking about serious issues around data in a situation where people are primed to fight with one another, right? You want to be sort of on a more kind and understanding and nurturing ground. Relatedly when you talk about things like ethics, it touches very personally on people's morality. So you do get very emotionally-driven responses in a way that you don't get with other kinds of work. And that often is overlooked in discussions. But, yeah, I mean, raising awareness, it means writing the papers we're writing, doing the podcasts we're doing, talking to the journalists we're talking to, and using whatever social media attention we can have to really point out the issues and trying not to get too depressed when people inevitably get angry at you and tell you you're dumb or whatever.


ELEANOR DRAGE:

Exactly - I remember one of my favourite philosophers, Gayatri Spivak - in what was really out of character, as she writes in this very dense prose - saying in a speech that it really depends how much coffee you've had in the morning, on the days you’ve had more -


MEG MITCHELL:

Yeah, I have a don't tweet before coffee policy. Don't tweet before coffee. I've done it a couple times. And it's always been terrible.

ELEANOR DRAGE:

I mean yeah don’t talk to me before coffee, let alone Tweet!

MEG MITCHELL:

Exactly, exactly. Yeah.

ELEANOR DRAGE:

Just to end, you know, we wanted to ask you this massive question: how do we then hold big tech accountable? Or really, what should be done to make big tech responsible for the products that they're developing? And does this have to do with - I don't know if you can answer this - data governance? That's something you look at a lot, and we're looking at, you know, which different communities have different kinds of privacy allotted to them. So for example, there's been a lot of noise recently about people's pictures being scraped off the internet without their approval and their proper consent.


MEG MITCHELL:

Often on Flickr. By the way, if you have images on Flickr, there are reasonable chances that they are being scraped and used in AI systems.


ELEANOR DRAGE:

Luckily, I'm not cool enough to have images on that platform! But yeah, you've talked about how racialised communities in particular are vulnerable to having their images stolen. So yeah, if you want to talk about data governance, or, more generally, holding big tech accountable.


MEG MITCHELL:

Yeah. Um, so, you know, obviously, I tried for years to go down this route of making change from the inside. I think ultimately, you know, you can only make as much change as your management chain and CEO are okay with and so unless you have a management chain up to the CEO that wants to fundamentally shift the tech paradigm, then you can't really make change from the inside. You can do stuff that PR will use, and to the extent that the CEO doesn't know what you're doing, you can do something. But you can't make any fundamental changes unless the CEO is willing to change the very system that made them a billionaire. So that's not really a thing you can do, you know, I've come to realise. And I've started to think that maybe the difference between Ethical AI, which is what I've been working on, and Responsible AI, which is what tech companies usually say, is that Responsible AI is like Ethical AI, but built on structural discrimination. That seems to capture a lot of what I've noticed about the difference between these two things. So where do we go from there, it means that we have to have external forces, which means that we have to work with legislators and policymakers. I've gotten more and more involved in trying to provide consulting for people involved with policy and regulation in the US, and somewhat in the EU and UK. So there's that, just bringing as much information as you can to bridge the gap between what the public understands and what tech companies are actually doing. And when it comes to data governance, there's so much to be said about who owns the data, and how it should be disseminated. You know, there's this issue that once you share data, if there isn't one server where everyone accesses the data and can't move it off the server, then datasets will be proliferated, you know, you download it onto your local computer, someone else wants it, you upload it to some drive, you send it to them, things like that. Which means that if there's private content in there, unconsented content, which is I think, true for almost all datasets - it's unconsented content - at least most large datasets, then you can't undo that, right, you can't get the genie back in the bottle. So one of the ideas with data governance that I've been working on is can we have centralised repositories for datasets, where they can only be accessed through that repository, they can't be downloaded locally or shared across different machines; that people can put forward contestations - so requests to have their data removed - if they are in there, and then that data will be removed. This obviously comes up against a little bit of issues around reproducibility and benchmarking. But that's, that's maybe for another discussion. But essentially, you know, people whose data - people whose instances are in the data - should be rights holders for that data, and should be able to consent to having those instances represented in the data or contest having those instances represented. And I think we're seeing data governance and I would say data protection laws more and more going towards that model. I know that the EU has been working on having these centralised repositories of data but there are a lot of ethical issues there. But there are some good ideas there. Karen Hao has been putting out these amazing pieces on colonial AI and recently she had a piece about governance through the Maori perspective, which is around guardianship. 
And you want to make sure that you pay attention to the Maori values in your use of the data. And so this gets to defining governance structures in terms of the values that you want to uphold that, you know, have appropriate respect for all the people involved. And making sure that the data can't be abused or misused as much as possible.
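
As a loose sketch of the repository-with-contestation idea outlined above - all names here are invented for illustration, and a real system would need review, audit logs and versioning:

```python
from dataclasses import dataclass, field

# Toy model of a governed dataset: data lives only in a central repository,
# records are served rather than exported in bulk, and people represented
# in the data can file contestations that lead to removal.

@dataclass
class GovernedDataset:
    name: str
    records: dict = field(default_factory=dict)        # record_id -> content
    contestations: list = field(default_factory=list)  # (record_id, reason)

    def serve(self, record_id: str) -> str:
        """Return one record for in-place use; no bulk download path exists."""
        return self.records[record_id]

    def contest(self, record_id: str, reason: str) -> None:
        """A data subject asks for their record to be removed."""
        self.contestations.append((record_id, reason))
        # In this sketch, contestation leads directly to removal.
        self.records.pop(record_id, None)


# Example use of the sketch:
ds = GovernedDataset("example-corpus", records={"r1": "some scraped text"})
ds.contest("r1", "I never consented to this being included")
print(ds.records)  # {} -- the contested record is gone
```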


KERRY MACKERETH:

Thank you so much - there were so many important things distilled so clearly throughout this whole interview. I should say a huge thank you from Eleanor and I for taking the time to come on and talk through your work. And also, you know, we - like I'm sure a lot of other people - are incredibly sorry about everything you've gone through these last couple of years, but also just so incredibly grateful for people like you, you know, working in this field, speaking this kind of truth to power, and safeguarding the public good in so many really crucial ways. So yeah, thank you so much.


MEG MITCHELL:

Thank you so much, and thank you for having me on and letting me talk. It's really an honour. I really appreciate it. Thank you.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.

Fritzchens Fritz / Better Images of AI / GPU shot etched 2 / CC-BY 4.0



