
Alex Hanna on Vague AI Ethics Principles and why Automatic Gender Recognition is Nonsense

In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by Dr Timnit Gebru, her former boss at Google. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google’s attempt to be neutral is nonsense, how the word ‘good’ in ‘good tech’ allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.


Image: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0


Reading List


Auto-essentialization: Gender in automated facial analysis as extended colonial project

Morgan Klaus Scheuerman, Madeleine Pape, Alex Hanna

Big Data & Society


Data and its (dis)contents: A survey of dataset development and use in machine learning research

Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, Alex Hanna


On the Genealogy of Machine Learning Datasets: A Critical History of ImageNet

Emily Denton, Alex Hanna, Razvan Amironesei, Andrew Smart, Hilary Nicole


Timnit Gebru's Exit from Google Exposes a Crisis in AI

Alex Hanna and Meredith Whittaker

WIRED


Transcript


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, we talk to Dr Alex Hanna, Director of Research at the Distributed AI Research Institute, which was founded and is directed by Dr Timnit Gebru, her former boss at Google. Previously a sociologist working on ethical AI at Google and now a superstar in her own right, she tells us why Google’s attempt to be neutral is nonsense, how the word ‘good’ in ‘good tech’ allows people to dodge getting political when orienting technology towards justice, and why technology may not actually take on the biases of its individual creators but probably will take on the biases of its organisation.


ELEANOR DRAGE:

Hi, thank you so much for being here today with us, we’re so excited to have you on the show. Can you start by telling us a bit about yourself and what brings you to the topic of gender, ethics and technology?


ALEX HANNA:

Yeah, my name is Alex Hanna. I am the Director of Research at the Distributed AI Research Institute (DAIR). We're a nonprofit research institute that focuses on centering the perspectives of marginalised communities in the development of technology and AI. What brought me to gender, ethics and technology is that I am someone working at the intersection of society and technology, especially focusing on questions of how the technologies we use can exacerbate race, gender and class inequality. And in talking about this, it's really helpful to do so through a feminist lens, with the tools that it affords us.


ELEANOR DRAGE:

Yeah, absolutely. People often underestimate, I think, just how complex and contested feminism is. Here in the UK we unfortunately have several transphobic and trans-exclusionary radical feminists who take up a lot of airspace, which is abhorrent and distracts from the really terrific and inclusive feminist work happening across the UK in various domains. There’s also a lot of work that still needs to be done over here in getting to grips with how race and colonialism continue to shape feminist work. So can you tell us about your relationship to feminism and how you use it when working in the tech space?


ALEX HANNA:

Yeah, I mean, it’s very scary in the UK, the turn that feminism has taken, and over here in the States it seems so far away, but we have many of those same kinds of trans-exclusionary feminisms. And so, yeah, the way that I think feminism is helpful in many regards is thinking from the perspective of using tools of inclusion, of understanding one's own standpoint and positionality, and how that affects how one is doing science, how one is doing research. That's very much the way that feminism enters into my own work, and really building off much of the feminist research, especially the work from Black feminists, that has gone into shaping the way that DAIR views technology as well.


ELEANOR DRAGE:

OK, so that’s one great example of how feminism informs the development of what we might call a good technology, a technology that’s done a lot of good, certainly. So can you help us then answer our three-billion-dollar questions: what is good tech? Is it even possible? And how does feminism help us work towards it?


ALEX HANNA:

Yeah, I mean, what a question. $3 billion is right. This is an interesting way of framing it, what is good technology, because there's a long history, at least from where I come from in sociology, of thinking through technology. Some apolitical technologists like to say, well, technology is neither good nor bad, it's what you do with it, and that's not right. It's definitely not right. Because technology can be developed in ways that are fundamentally evil, fundamentally amoral. There's no way, for instance, to have a neutral nuclear bomb, or a neutral, you know, semiotic weapon. These are out and out weapons, right? And so there is the way that values and design get embedded into technologies. Technology carries the ramifications of who builds it, for what intent, and how institutions and organisations manage it, right. And so, what is good technology? It's almost easier to say what good technology is not. And I'd say it is certainly not thinking you're going to do something for a group of people, or that a piece of technology is going to solve a problem for people, as if dropping in a piece of technology is going to solve what may have been a decades- or centuries-old problem. Good technology is also not taking one thing from one field, putting different window dressings on it, and then applying it to another field. So that's also certainly not a good technology. So I actually militate against giving a definition of what good technology is; I think we know sort of what the shape of it could be.
Technology that involves a lot of consultation with the people who are going to be using it and supporting it is pretty important, probably one of the most important things: ensuring that something that is used actually has applicability to the group of people who are going to be its data subjects or its users, or even people who are not users but are still affected by its use. That seems to be more morally good. So I think it's possible, but we really need to start from a set of principles that prioritises groups who are going to be affected by the technology. Now, how does feminism really get us there? I mean, I think it depends. It certainly depends on which sort of feminism you're taking, right? Thinking about this from a Black feminist tradition, in the tradition of Patricia Hill Collins and Kimberlé Crenshaw, the way in which one's positionality is really important to thinking about how something may be used, how one fits within the matrix of domination, and how people in different subject positions may be affected by a particular technology, is incredibly important and probably the central question on how technology is being used. And thinking about it, not even starting from the position of technology: at DAIR we have a saying that AI is not inevitable.
And that's very intentional, because a lot of AI development is happening with actors, typically large institutions like industry labs, or government labs, or these small boutique AI firms, who are starting from the position of AI - the people they hire into positions are people who have expertise in ML engineering or product management or XYZ - and if you're starting from the perspective of the needs of a community, and what it means to care for, especially, the most marginalised in the community, then it doesn't make sense to start from the technology; it makes sense to start from the needs and how to care for the most vulnerable of a particular community. So that is how I think feminism guides our work and guides what it would mean to develop technology in a morally acceptable fashion.


ELEANOR DRAGE:

Okay, so feminism asks us to situate ourselves, and it advocates this ethics of care, right, where technology doesn't just do nothing, it has to do something, it has to work in somebody's favour. It always works in someone's favour, but whose favour is it working in? So then, do you think the principles you're thinking with have any place in the AI ethics frameworks that are now dominating the AI ethics landscape? We have the EU AI Act, UNESCO has a big framework. Everyone has one: PwC, Rolls-Royce, you name it. So where do these feminist principles sit in relation to these frameworks? Is there any hope for them? Or are they always going to be a kind of pointless window dressing? Do you think they can be impactful?


ALEX HANNA:

I mean, these frameworks are proliferating just intensely, right. And, you know, I haven't even fully read the EU AI Act or the UNESCO recommendations, but I've read many of the other ones that came out prior, around 2020, including the Montreal Declaration, the Toronto Declaration, and the Google AI principles, which I was intimately familiar with because of my work at Google. And the thing that I really don't see a lot of these principles addressing is, I think, that they start from a very Universalist place. And they start from a very ideal place, in the sense of ideal theory. I'm not going to talk too much about that, because I'm going to quickly get out of my depth. But if you're starting from a place of universalism and generalities, and you're the UN or a large body like the EU, then one is going to have to do that, because you're trying to make it applicable to a large, broad set of situations. But the thing that I think is really offered by feminism is that it does allow us to call attention to particularities, and understanding that particularities are really where the rubber meets the road. So, for instance, Google has six or seven incredibly vague principles, right. One of them is don't perpetuate bias, or unnecessary bias; one of them is uphold the principles of scientific excellence. Another one is don't do things involved in surveillance that would contravene international norms. And so these may sound good in theory, right? But what do all these things mean?
So if you're going to break down Google's AI principles - and I'm going to look these up, because I can't recall them off the top of my head, and I think getting the language right is important here.


ELEANOR DRAGE:

While you're looking it up: Kerry and I work on this industry partnership with a big tech company, and we've therefore had the pleasure of going round and asking engineers, what do you think of this language? What do you think of this idea of not perpetuating bias? What does it mean to not perpetuate bias? What is bias? Which systems are at risk of becoming biased and which are not? And nobody agrees, nobody knows where bias emerges; the answers that people gave were not shared. So one answer was shared by one or two people, but not more, which shows that there is no real shared understanding of these things within an organisation. But what people do understand is: which group does this technology benefit? Which kinds of customers are the most likely to be negatively impacted? Or which customers are the most likely to get a good deal out of the system? There are ways of rephrasing it, and I think the language of, you know, ‘don't perpetuate bias’ is particularly meaningless - and I'm interested in what your experience of this is - because bias is obviously a mathematical term. And so it means something to computer scientists: it means that it can be net neutral or even net positive, and a system needs bias to function, to make decisions. So, I don't know whether you've found the principles yet; if you find them, then tell us what they are and what a better alternative would be.


ALEX HANNA:

Yeah, I mean, this is absolutely correct. And I saw the same thing at Google, talking to people who did stuff around education about the principles within Google; even their understanding of those principles was very lumpy and poorly understood. And, you know, even the term bias: bias is a mathematical term, and it also has a psychological connotation, right? Especially in the US and in US business contexts, bias has become an individualist stand-in for any kind of structural ‘ism’, whether that's sexism, racism, homophobia, ableism, transphobia, and so on. This is very much the language of corporate America, rooting out this individualist bias. And that's the way these principles are often written, sort of thinking about an AI technology as an individual agent. So, in finding the Google AI principles: ‘be socially beneficial’ - what does that mean? Socially beneficial is quite the dodge. And it's the same way in which the word good can be a dodge. Ben Green has this paper about the idea of ‘AI for good’, or technology for good, being a complete dodge, because there's no definition of good, or of how that's political. The second principle is to avoid creating or reinforcing unfair bias, and we just had this conversation about bias. But even within the sub-definitions on the site, they don't really mean or say anything. In this way it sort of reflects the symmetry, at least in US anti-discrimination law, where you basically cannot have bias in either direction, right: you can talk about bias, but you don't talk about power. And so, in talking about bias, if you had a system that disadvantaged men, it would count as just as unjust as one that disadvantaged women. And so these are terms that go really undefined and unspecified.
And then they kind of go on and on like this: being built for safety, being accountable to people. These things are very vague, and there's really no way in which there's an operationalization there. So, back to the original question: is there a way that feminism might set out a set of principles alongside these kinds of AI principles or technology principles? I think the task of feminism is to do something quite a bit different, right? It's to problematize these in a way that acknowledges that it isn't a kind of universalism, that there are particularities we need to be attentive to, and that we need to be attentive to power in ameliorating these inequalities in power.


ELEANOR DRAGE:

Yeah, absolutely. I think it was Ria Kalluri who said in that Nature article, don't talk about bias, let's talk about power and shifting power. It's a much more evocative term, and I think people know what they mean by it. And I was thinking recently about different languages: I speak a bit of French, and when I talk to my French friends about de-biasing and AI, bias is not really a word that they use that often; you have to repeat it, and then they say, okay, okay, I get it. Whereas if you say power, “pouvoir”, they know exactly what that means. So I think that that can be a really effective way of pushing people in the right direction. And you talked about this idea of neutrality, Google trying to hold this value of neutrality: don't be biased, be neutral. Don't be bad, be neutral. So why does that not work? And how does that relate to what you've said is systemic and structural racism at Google, which is not necessarily just about people, it's also structural. So I'd like you to explain to people what you mean by that, and what it means for a tech company to have a whiteness problem - what do you mean by whiteness? And how does that relate to race neutrality, this other form of neutrality that they purport to uphold?


ALEX HANNA:

Yeah, so these are all very connected when you talk about neutrality, or tech being neutral. First off, tech isn't going to be neutral; it's going to involve whatever kinds of dispositions or organisational ways in which these are reflected. So I don't like the way that people often say tech is going to reflect the biases of its individual creators, because that may or may not be the case; what's probably more important to talk about is how a piece of technology is going to reflect what an organisation is meant to do. So if you're an individual engineer, you might be quite anti-racist, or you might be a feminist, and you might hold a set of values. But if the organisation that you work for is oriented towards profit maximisation or, you know, dropping bombs somewhere over there, then that technology is going to look a lot more like the priorities of that organisation. And so in that piece that I wrote that you're referring to, about how tech has a whiteness problem, one of the main things I wanted to get across is this idea of the organisation as our unit of analysis. And this comes from a number of sociologists of race, including Melissa Wooten and Victor Ray, in the way that they talked about how certain kinds of practices embed whiteness. And if we think about whiteness, we're thinking about white supremacy as the belief, as well as the organisational practices, that maintain the people of the white race, or the ethnicities that compose white races, as superior to people of other races.
Our organisations maintain that in their practices of hiring, software development and advancement, and so from that perspective, when we say tech has a whiteness problem, it's taken doubly so. And this neutrality thing and this whiteness thing are two sides of the same coin. Sociologists of race like Eduardo Bonilla-Silva have written about colorblind racism, this idea that in this new post-racial era after Obama in the US, no one sees race; we just judge people on the content of their character. Of course, it didn't take too long after Obama for Trump to come to power and really reveal the ludicrousness of such a statement. And if you're thinking about something being neutral, or thinking about racism occurring without individual racists, then that is mirroring the same things about neutrality: we're not talking about race in our software, this works for everybody - but in that way a kind of default whiteness becomes instantiated throughout. And so if people are saying tech is neutral, I think we should be pretty suspicious, because these are often the same kinds of proponents, or rather people, who are implicitly maintaining white supremacist structures in the workplace, especially the tech workplace.


ELEANOR DRAGE:

Yeah, that's really interesting. And that idea of racism without racists, of colorblind racism, is something that we see a lot in AI: in new technologies, you always go back to old, debunked science. So, for example, in recruitment AI, which Kerry and I do a lot of work on, you have, I think, actually pretty well-meaning people who run some of these AI software companies, trying to make hiring more inclusive and increase diversity on behalf of corporations. So it's like, we can do this tricky work, this really hard work for you. We can take away your hard work and make it easy, because AI is colorblind; but in its colour blindness, it also makes your company more diverse. So it's a really weird relationship between ‘I don't see race, but yet I do see it, because I'm making it more diverse’. So can you tell us how this relates to automatic gender recognition? You've written another extremely cool paper - I’m very envious of your really cool papers - with Madeleine Pape and Morgan Scheuerman on automatic gender recognition, and we also spoke to Os Keyes in a previous episode, who's amazing and who we love a lot, who says that gender recognition is, “bullshit”. So how does it relate to racial and colonial control? Because I think people will find that quite surprising. And why is it important to situate new technologies in relation to imperial, colonial and racial histories?


ALEX HANNA:

Yeah, I mean, great question. Oh, hello, here’s my cat.


ELEANOR DRAGE:

Oh the cat oh my god!


ALEX HANNA:

Yeah, so this paper, which Morgan really led on, came out of a lot of his work annotating and looking at datasets that code for race and gender. And part of the argument in this paper is that we label this process of trying to see someone's essence as auto-essentialization, which we define as a way of trying to find the essence of somebody via a virtual substrate. And this bears a family resemblance to Simone Browne’s concept of digital epidermalization, in which she's taking this concept from Frantz Fanon, this idea of epidermalization, where someone is reading race onto you - usually, reading blackness onto the Black African individual. And so Browne takes this as a kind of digital reading of race. In this way it's related to claiming to see the true nature of one's gender. And so we tie this back to colonialist understandings of gender, and the way that in European conquests many third-gender, non-gender, or agender people were erased or coerced into a European gender binary, and how many of the tools of gender classification replicate that, whether that's measurements taken of individuals to misrecognise them, to masculinize them, or to cast them as inferior. Maddie is fantastic; she's a feminist Science and Technology Studies scholar, and a lot of her work has focused on how individuals, especially in sports, are cast as being more masculine. A fact about Maddie is that she used to be an Olympic runner, and she was competing at the same time that Caster Semenya was getting her start in sport. If you’re not familiar with the case of Caster Semenya, she’s a very dark-skinned woman who tested for a higher testosterone level than other female runners.
And so all kinds of awful things were done to Caster; they made her go through this very embarrassing sex testing, and were, in effect, masculinising her in a very violent way. And so we're seeing these sorts of things, the way that Black bodies, especially Black female bodies, become masculinised. And in the paper we draw examples from the way in which Joy Buolamwini, who had done one of the first studies on AI bias and facial recognition, had a spoken word piece describing how many very famous Black women would be misrecognised as male or masculine by these automated gender classification systems. So automated gender recognition is - I agree with Os Keyes - “bullshit”, and it's also racist, perpetuating these colonialist readings of the face and of non-white individuals.


ELEANOR DRAGE:

Yeah, I think it's so important to remember that even though people say “it's just a tool”, that can't be true, partly because these histories - of what is, in that case, really a kind of torture of people, drawn out over many years - come to life in these new technologies. And we needn't be so surprised, I think; we just need to be more attuned to it. This needs to be at the forefront of discussions and innovation, and no longer something that seems shocking - oh, wait, how could I possibly think of that, that seems so far away - because those histories live on; really, history is the wrong word for them. I wanted to ask you more on that, but as we're running out of time, I want to give you a massive congratulations on your appointment as Director of Research at DAIR. Could you tell us more about DAIR and what projects you're planning on taking on? This is an incredibly exciting place to work, so tell us more.


ALEX HANNA:

Yeah, absolutely. And thank you so much for your congratulations; I'm very, very, very excited, especially to be working with my former boss, Timnit Gebru, again. DAIR really emerged from a desire to ground AI research, and really any discussion of technology, in a perspective that would integrate and highlight the needs of people who are being harmed by technology, or who are in vulnerable positions via where they are within the matrix of domination, whether that's the global South or a diaspora in a Western country. And so we start from a set of principles: 1) thinking about community, not exploitation; 2) really understanding what it would take to have a community-grounded approach to research, rather than one in which people parachute in, extract what they need, and then jump back out; 3) comprehensive and principled processes, ways in which we compensate people for their time, their labour and their expertise; 4) pragmatic research, things that we can use; and 5) a good balance for the researchers who are involved. There are some AI labs where they've said, our researchers work 90 hours a week, and well, that's not a way to actually build up people; it's a way to really destroy people, right. And so we start from those principles. As for the work we've been doing lately, we're focusing on really getting started, and some of the things that we hope to do by the end of our first year are: 1) really have a good team in place that has the expertise that we want - that's a mix of people who have community-based research backgrounds as well as sufficient AI expertise.
2) We also want to have a set of processes in place that really respect the communities we're engaged with, and a set of programmes that we can establish to support people who are doing on-the-ground work, whether that's advocacy or research work. And then, in terms of substantive things that we're looking to do or continue: we have continued work focusing on datasets more broadly, thinking about how we ensure that datasets are being developed with respect not only for data subjects, the people in those datasets, but also for the people who are downstream of those particular datasets. Timnit and her collaborators published the datasheets paper more formally recently, and I've been working with a legal scholar on understanding the legal dimensions of different large-scale datasets and mechanisms for accountability. We're also looking at particular types of data that can be used to support particular communities. In particular, there's research from our fellow … Sofala that's looked at neighbourhood change in post-apartheid South Africa. In that context, her work has really focused on using computer vision models to identify where desegregation is happening in South Africa, where it's stalled, and how that work can support decision makers and policy makers within South Africa. We're also focusing on a project that is trying to understand how social media is being used in ways harmful to particular anti-government individuals in conflict areas. If this were happening in the US, it would be mostly in English, but these are places like … in Morocco, places that don't have really good language support. And so we're orienting and thinking about a project structured around supporting communities who are maybe under attack not only from government but also from, you know, other ethnic groups or ethnic minorities.
So, yeah, there are a lot of things on the horizon.


ELEANOR DRAGE:

That’s an extraordinary amount of projects.


ALEX HANNA:

It’s a lot of projects. At the same time, these are definitely long-term considerations, and really establishing a strong organisation that lives up to our values - I think that's number one on what we're trying to do.


ELEANOR DRAGE:

It's really exciting to watch an organisation that is distributed - hence the name DAIR, the Distributed AI Research Institute - trying to build the kind of community that you didn't get in your previous employment. Everyone is watching, and it's really exciting to see what you all do. So if you're listening, head over and check them out. Thank you so much for speaking with us today. I'm so sad that this has come to an end so quickly, and we really hope to speak to you again soon.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.



