In this episode we speak to Abeba Birhane, senior fellow at Mozilla, about how cognition extends beyond the brain, why we need to turn questions like ‘why aren't there enough Black women in computing’ on their head and actually transform computing cultures, and why human behaviour is a complex adaptive system that can’t always be modelled computationally. We hope you enjoy the show.
Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). She recently finished her PhD, where she explored the challenges and pitfalls of automating human behaviour through critical examination of existing computational models and audits of large-scale datasets. She is currently a Senior Fellow in Trustworthy AI at Mozilla Foundation. She is an interdisciplinary researcher whose research interests sit at the intersection of cognitive science, AI, complexity science, and theories of decoloniality. She is also an Adjunct Lecturer/Assistant Professor at the School of Computer Science at University College Dublin, Ireland.
Image Credits: David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0
Reading List:
Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., Bao, M. (2022). The values encoded in machine learning research. FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency.
Birhane, A., Ruane, E., Laurent, T., Brown, M. S., Flowers, J., Ventresque, A., Dancy, C. L. (2022). The forgotten margins of AI ethics. FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency.
Birhane, A. (2021). Algorithmic injustice: a relational ethics approach. Patterns.
Birhane, A. (2021). The impossibility of automating ambiguity. Artificial Life.
Birhane, A., Guest, O. (2021). Towards decolonising computational sciences. Women, Gender & Research. https://doi.org/10.7146/kkf.v29i2.124899
Birhane, A. (2019). The algorithmic colonization of Africa. Real Life.
Transcript:
KERRY MCINERNEY:
Hi! I’m Dr Kerry Mackereth. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we’d also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode!
ELEANOR DRAGE:
In this episode we speak to Abeba Birhane, senior fellow at Mozilla, about how cognition extends beyond the brain, why we need to turn questions like ‘why aren't there enough Black women in computing’ on their head and actually transform computing cultures, and why human behaviour is a complex adaptive system that can’t always be modelled computationally. We hope you enjoy the show.
KERRY MCINERNEY:
Great. So, thank you so much for joining us here today. Just to kick us off, could you tell us a bit about who you are? What do you do and what's brought you to thinking about feminism, decolonization and technology?
ABEBA BIRHANE:
Hi, thank you for having me. It's great to be here. My name is Abeba Birhane and I'm currently a senior fellow at Mozilla in trustworthy AI. And what brought me to the topics of feminism and decolonization is a bit of a long story, because by training I'm a cognitive scientist, my PhD is in cognitive science. Even within cognitive science, I come from a somewhat niche area called embodied cognitive science and, you know, systems science, enactive cognitive science. So these approaches to cognitive science are, you know, somewhat niche, because they push back against the traditional understanding of cognition, which is somewhat reductive, somewhat simplistic, where traditionally you would focus, you know, on the brain or on the individual person to figure out, you know, what cognition is, or to build a model of aspects of cognition. Whereas with these new approaches, the idea is, you know, we're more than our brains, cognition extends beyond the brain. So your interaction with others, your environment, your, you know, your whole ecology determines elements of your cognition and impacts who you are. So the idea is to see your historical background, to see your culture, to see the context that you are in, to see your history as part of what makes you who you are. The idea with these embodied cognitive approaches is to incorporate these, if you are modelling cognition, for example, as important elements that impact who you are. So it's been great kind of having a somewhat holistic approach to cognition, but what you find often missing from even these socially-oriented approaches to cognition is, you know, critical thinking, such as, you know, that there are inequalities within society, that some people have privilege just, you know, based on the skin colour that they are born in, or that, you know, there are uneven power dynamics everywhere you look in society. These are elements that are missing from the cognitive sciences in general, not just embodied cognitive science. So that's why I turned my attention towards feminist and decolonial studies, because these fields do a great job explaining and elaborating, you know, existing and past inequalities and injustice in society. And we have to acknowledge these injustices, we have to incorporate these facts into our theories and into our modelling. So that's why I now find myself, you know, reading and focusing on feminism and decolonial theories, to kind of enrich the already rich theories and approaches in cognitive science.
ELEANOR DRAGE:
We’re very excited to have you answer our three good robot questions: what is good technology, is it even possible, and how can we work towards it?
ABEBA BIRHANE:
Great questions. What is good technology? I guess the answer to that will vary depending on who you ask. If you ask, you know, 100 different people, you will get 100 different answers. So I'll answer it my way, of course. To me, good technology is one that acknowledges that any technology or any automation automatically automates and exacerbates the social inequality and power dynamics that I've just been talking about. So good technology starts with that acknowledgement and works towards, first of all, not automating and not exacerbating those current and past inequalities. But also, good technology works towards the objective of not maximising profit, or not necessarily maximising efficiency, but maximising justice, maximising equity. Currently you find so much technology that's not really needed, that shouldn't really exist, or that exists to make some people richer - for example, much of facial recognition technology, or emotion recognition, or gender prediction, or anything like that. That is the kind of technology that has no use, that can't do much good, where the negative aspects of it outweigh any positive benefits it might have. So good technology, as I said, is not something that automates inequalities, but one where the objective is helping the most disenfranchised of society, because when we create technology there is an uneven distribution of benefits and harms: people in power often get the most benefits, while the people at the bottom suffer the most harms. So good tech must offset that. Good tech might do what Ria Kalluri calls shifting power from the most to the least powerful, so something that gives ordinary people, or disenfranchised people, more agency and more power. But also, good technology is somewhat anti-capitalist, even though that's difficult to imagine at the moment. But we have examples - if you take, for example, the Māori community of New Zealand, they have shown that, you know, you can build technology that is anti-capitalist. They have collected their own voice data, they cleaned the data, they built their own various speech technologies, machine translation, where the primary beneficiaries are the Māori community itself, where there are no financial interests involved; it's just there to serve the community. So these are things I consider good technology. And can we have good technology? Yes, as I said, as the Māori have demonstrated. So how do we work towards it? Again, I lean on the scholar Ria Kalluri: at the moment it might be difficult to envision, you know, revolutionary or liberatory technologies, but we can envision, say, for example, what technology that liberates the most oppressed might look like, imagine what kind of technology that could be, you know, in 20 years or 30 years, and then work our way backwards from it. So it's possible to envision and work towards it.
KERRY MCINERNEY:
Fantastic, and you do so much amazing work kind of trying to push a lot of these technologies towards justice. So myself and Eleanor, and so many people in this community, really, really admire you for everything that you do. And one thing that you've done is that you've been a really vocal critic of the way that certain kinds of pseudoscience are manifesting themselves in AI technologies. So could you tell us a bit more about what pseudoscience is, how it manifests in AI, particularly in the field of computer vision and facial recognition, and why it's so dangerous?
ABEBA BIRHANE:
Well, thank you for your kind words. Yes, it's true, we have so much pseudoscience and unfortunately it's on the rise. And, again, computer vision is probably the worst, you know, one of the main fields that's advancing pseudoscience in AI. So let me go back to what I said about systems science and embodied cognitive science at the start of the podcast, and let me reiterate the fact that, you know, human cognition, human intelligence, human emotion, social interaction - all these phenomena are what the embodied cognitive science field, or systems thinking, would call complex adaptive systems. So essentially, human behaviour is a complex adaptive system, which, in short, means it is what they call non-compressible, which means that you cannot compress it with data or with models, because these systems, such as human behaviour or social systems, are non-predictable and non-determinable. Because there are infinite ways of behaving, there are infinite ways of being, essentially human behaviour is not predictable, is not something that can be fully captured with data, and not something that can be fully automated. This means that, as we are building models that try to, you know, capture or predict human behaviour, the best we can do is capture a snapshot of a moving target, maybe capture a little bit of, you know, human behaviour. That's the best we can do, that's the best outcome out of, you know, modelling. But when it comes to things such as emotion or intelligence, what we find is that these are not phenomena where you can have clear data, where you can clearly define and say, you know, this is what I'm modelling. These are things that cannot be read - for example, emotion is not something that can be read off images of faces or facial expressions or any bodily output. These are internal behaviours that cannot be captured by external expressions. So any attempt to gather data about emotions, for example, is just futile, because these emotions are not, as I just reiterated, something that can be captured with data, that can be modelled. These are theories that were debunked in the 1950s, 60s, 70s, where people tried again and again and showed that, you know, any internal human behaviour is not something that can be clearly understood and theorised about based on outer expressions. This has been debunked as pseudoscience, as I said, in the past, but now, because computer vision is one of the computer science fields where we are seeing so much advancement in technology, what we now have is this debunked pseudoscience being brought back through various computer vision techniques. So we have forgotten that a lot of what we are trying to do, such as, you know, predicting emotion or predicting gender, is actually theoretically debunked. But now, because we have state of the art machine learning models and huge volumes of data, we don't even go back to revisit and remember the debates and discussions that have happened in the past. We just automate, we just, like, resuscitate and repeat pseudoscience, now with state of the art machine learning models and huge volumes of data. So we're basically building models that are, you know, impossible.
But because, you know, we have shiny models, we don't go back and question the theory, we don't go back and ask: is the thing that we are trying to predict actually predictable? Is it something that can be captured by data? We don't do that. And one of the problems within computer vision, and actually generally in machine learning, is the assumption that if you have huge amounts of data, then eventually you will get accuracy, or you will get at whatever you are trying to get at. This is a huge fallacy, of course.
ELEANOR DRAGE:
Yeah, absolutely. And you put that really clearly - the way that our amnesia for the things that we know to be untrue, that we know to be debunked, means they are always rearing their heads in new technologies. Rosi Braidotti was also talking about that in another of our podcasts. My next question is about what you've written about the need to decolonize the computational sciences, including machine learning, focusing on the experiences of women of colour, particularly Black women, and how universities and other research institutions commit harm against them through patterns of exclusion, hostility and tokenization. So how can research institutions meaningfully change their cultures, so that women of colour, and specifically Black women, can equally partake in computational research - whatever ‘equally’ means there, maybe you can critique that as well.
ABEBA BIRHANE:
Yeah, so this research comes more from personal experience - that paper was written from personal frustration, rather than, you know, as one of my, I don't know, research topics. Because not only me, but many women of colour around me, within my circle, you know, again and again face systemic exclusion, face double standards. For example, you have to be, you know, twice as good to succeed in the computational sciences, because your work is judged by a much higher standard. Say, for example, I write something and I put it up, I open it to the public: my work gets much more scrutiny than, you know, say, that of my non-Black male or female colleagues - and it's not just me, it's the case for many Black women. There’s also much less sympathy for our experiences and for these double standards that we go through. There is also a tendency to kind of push these interactions - the racist and white supremacist and sexist ecology - aside as something, you know, that's outside the merits of science, that the computational sciences need not concern themselves with, where the general tendency is that it's your work that speaks, so it doesn't matter what gender or race you are. But the fact is that it does. As I say, if you don't look like, you know, the stereotypical intellectual, or if you don't look like someone, you know, from the department, then, implicitly or explicitly, people are going to constantly question your intelligence, your capacity, your capability. But these are things that, as I said, the field, or people in power, want to see as outside the merits of the field. So all this points to the fact that it is not just individual behaviours that need to change. For example, you know, training towards diversity, you know, changing individual minds, might contribute a little bit, but it's the ecology, an ecological change, that we need. It's acknowledging the fact that these fields from the get go have been exclusionary, that we need to change the structure. For example, in computer science we ask, you know, why are girls not coding? Or why are girls not into computer science? So it's about flipping that question and asking: what are computer science departments doing to accommodate the needs of, you know, say, Black girls? So it's about raising these questions. It's about changing the ecology. It's about changing the representation. And it's about not just, like, you know, bringing Black women into the field, but also making sure that we build the infrastructure in order for them to stay, because it's easy to get in, but if the infrastructure, if the ecology is constantly … if it's something that constantly pushes you out, if it's something that constantly questions your capability, if it's something, you know, that's constantly making you feel like you don't belong, you are unlikely to stay. So it's about acknowledging and recognising these factors, and changing the ecology, you know, in a way that brings in Black women, but also creating an ecology that nourishes and welcomes the contributions of Black women.
KERRY MCINERNEY:
Absolutely. And I really, you know, love the way that you talk about it as flipping these questions, because so often, for Black women, for women of colour, the question is almost framed as: how are you lacking, so that you're not succeeding in this field? As opposed to: you know, what is wrong with your department, that they can't hire and retain the amazing Black women and women of colour who work in this field? Like, that's garbage. And that is very much, you know, a ‘you’ problem, I think. Work like yours is really crucial, certainly, and I think the fact is that it's still often just not even seen as being a problem. You know, I've tried to talk to colleagues about the fact that, you know, why don't you hire Asian women as professors, as lecturers, you know, into any kind of permanent position? And it's just not even something people see as being an issue. And so I think just the simple conversations of saying, hey, we have to flip those questions, we have to be able to have these honest conversations, is the only thing that's going to change the ecosystem, really. I also want to ask you - oh, sorry.
ABEBA BIRHANE:
I agree, I agree.
KERRY MCINERNEY:
I also wanted to ask you about your work on colonisation and AI, because we've had some other fantastic podcast guests, like Karen Hao and Michael Kwet, who've been looking at how the widespread deployment of AI might be reproducing or replicating older patterns of colonialism and domination. And you've written about the algorithmic colonisation of the African continent. So could you explain to our listeners what algorithmic colonialism is, and how it differs from previous forms of colonial activity?
ABEBA BIRHANE:
Yeah, great question. So in that paper I begin by outlining the similarities between traditional colonialism and digital colonialism. With traditional colonialism, it's much more obvious and easy to see: it's the physical, you know, invasion of nations, and then taking the resources, and then, you know, taking away the autonomy of the people, making them lose their language, making them speak English, and then eventually, you know, ruling according to the invaders' culture, language and customs. Whereas with digital colonialism, it's much more nuanced. What you find is that technology plays a central role in invading nations. So it's state of the art technology that's being imported - in the paper I focus on African nations - where it's brought in as, you know, beneficial technology or technological advancement or, again, state of the art AI. But what you find is that what's imported is, you know, the customs, the norms, the values and the objectives of mainly Europeans and Americans, mainly the US, and then you are normalising those standards, those norms, those cultures within African nations as a way to be, as a way of doing things. That's the much more nuanced version, but you also find almost a physical invasion as well. Currently I have a brilliant person I'm mentoring - she's researching undersea cables in Africa - and her research is opening up my eyes, because, you know, there are only a handful, actually just a few, big corporations, such as, you know, Facebook and Google and others, that are laying these physical undersea cables for internet connection in Africa. And what you find is that there is no regulation; it's almost like the wild west, where these corporations go in, lay the cables, control the infrastructure, control everything, control the norms. Africans have the option of contesting, but because we don't have the resources, it's also difficult to reject, you know, to kind of oppose these infrastructures, these developments as well. So what you find is that we really are just, you know, at the mercy of these big corporations. So again, going back to my original point: previously, you know, it was invasion through physical force, but now you find it's invasion through technology, it's invasion through these, you know, physical undersea cables, for example. So yes, you see the resemblance, you see the similarity there.
ELEANOR DRAGE:
Yes, so interesting. And we also had Michael Kwet talk about these silent forms of colonialism that go unnoticed, and why people aren't paying attention to those kinds of things. It's really fascinating, and great to have some really specific examples of the way that that's happening. Because I think that people are kind of aware about what's going on, and that it's not really legit, but to know exactly what kinds of technologies, what kinds of pipelines, what kinds of network infrastructure, and how that is spreading around, is really useful. I wanted to ask about this kind of problem-solution model to solving problems that is the dominant form in engineering, whereas in the humanities we're kind of all over the place. So we don’t have one way of problem solving, but I can see how, when there’s a problem, you want to find a solution and solve it.
But the only way to grapple with algorithmic injustice, you say, is through a relational approach. So can you explain what that is, and why it differs from this problem-solution model?
ABEBA BIRHANE:
Yeah, sure. It's difficult to say anything for certain about a field, because usually when you go into fields - say, whether it's computer science, you know, cognitive science, neuroscience or anything - when you delve into them, you find that there is much more multiplicity and divergence of values, and you find that it's difficult to say one thing about the field that everybody, you know, would agree on. But you can almost be sure that with computer science training, from the get go, it's a field that is geared towards the mentality that you formulate something into a problem so that you can find a solution for it. This is the standard. And I mean, this has been great, because a lot of engineering, a lot of technological advancement, has come out of neatly defining problems, so that you can focus on that specific problem and then find a solution for it. But this has also been a problem, because certain things, especially when you are dealing with the human condition, cannot be neatly formulated into a problem, and some things should not be formulated into a problem, period. For example, you know, with the rise of frictionless technology, you see friction being formulated as a problem, when in fact friction is part of the human condition. We need friction in our daily interaction to make sense of the world: it's through friction we realise our, you know, common ground, we realise our differences, and we realise, you know, our own values. So again, friction is really, you know, part of the human condition, not something that should be formulated into a problem, not a problem at all, and not something that should be solved. Because without friction, you know, you just stop making sense, you just lose your sense of being. So what I'm trying to say is that certain things do not lend themselves to problem formulation, and some things, such as friction, or even human cognition, should not be framed as a problem that needs to be solved, because they are not problems, because they are just part of human cognition, or part of the human condition. And going back to your question about relational ethics: again, part of the problem with approaching ethics in a problem-solution framework is that, you know, a lot of the ethical dilemmas, a lot of the ethical issues that you come across on a daily basis, or even in research, are not something that can be neatly summarised into a problem either. It's not something you can formulate into a problem; it's something that requires understanding, something that requires, you know, constant discussion, something that you can grapple with through discussion and in-depth understanding. As I said at the beginning, it's, you know, societal injustices, or historical inequalities, or power asymmetries - and the first step is understanding and acknowledging these, and then you can talk about changing, again, the ecology. Not just, you know, finding a quick solution or a quick fix, or finding a neat solution, but rather bringing about … changing attitudes, you know, or educational methods, or various ways of changing the ecology.
Because, again, formulating ethical questions as problem-solution is a really reductive and narrow way of thinking. You might be focusing on, say, for example, I don't know, irregularities in datasets. Things like this, maybe you can narrow down and, you know, go through various techniques and maybe “fix” - I say fixing in quotation marks. Maybe those kinds of really refined questions might find a solution through the problem-solution framework. But a lot of the issues we find within AI ethics, such as, you know, the exacerbation of inequalities through datasets or models - these are not things that can find a quick solution, because they require not just changing the datasets, not just changing the model, the parameters or the weights, but something that requires ecological change, something that requires change within society. So instead of formulating them into a question, which finalises them and brings them to a conclusion, we need to leave them open, so that there is continual discussion and continual conversation about these topics, because at the end of the day they are also moving targets. You cannot nail down an ethical question and define it once and for all, because society is not like that, because society is constantly changing, because society is a moving target. So these questions will always remain a moving target. That's why they need to remain open, where there is continual discussion. And, yeah, I hope that answers your question.
KERRY MCINERNEY:
So brilliantly. And I think, you know, what comes out so beautifully across our whole discussion with you today are these themes of the embrace of complexity, the need for friction and tension, and how these things enrich human life - they don't make it worse, or they make it more difficult in some ways, but it's a productive difficulty. And I do think that this resonates with so many different feminisms: not only the emphasis on tensions, but also the emphasis on slowness, and the need to kind of engage with these huge ecologies of change, and do that in a systematic and thoughtful way that's not going to be this quick fix. So thank you so much for everything you've explored with us today. It's such a pleasure to talk to you. And yeah, we hope to be able to talk with you again soon.
ABEBA BIRHANE:
My pleasure. My pleasure. Thank you so much.
ELEANOR DRAGE:
This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.