In this episode we talk to Sareeta Amrute, Affiliate Associate Professor at the University of Washington, who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers' protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies get out of being responsible for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are when power in tech companies lies primarily with product teams.
Sareeta Amrute is an anthropologist who studies race, labor, and class in global tech economies. She is currently investigating sensation and social movements in the Indian diaspora in a book project called Securing Dissent: Activism and Cryptography in the Indian Diaspora. She has received a fellowship from the Russell Sage Foundation to support this scholarship. Her recent book, Encoding Race, Encoding Class: Indian IT Workers in Berlin, is an account of the relationship between cognitive labor and embodiment, told through the stories of programmers from India who move within migration regimes and short-term coding projects in corporate settings.
Reading List:
Amrute, S. "A New AI Lexicon: Dissent" https://medium.com/a-new-ai-lexicon/a-new-ai-lexicon-dissent-2b7861cad5ff
Amrute, S. "A New AI Lexicon: Pleasures" https://medium.com/a-new-ai-lexicon/a-new-ai-lexicon-pleasures-1de4bd8a115
Amrute, S. (2016) Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham: Duke University Press.
Sundaram, R. (2010) Pirate Modernity: Delhi's Media Urbanism. Routledge.
Kimmerer, R. W. (2020) Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge and the Teachings of Plants.
Nemer, D. (2022) Technology of the Oppressed: Inequity and the Digital Mundane in Favelas of Brazil.
TRANSCRIPT:
KERRY MACKERETH:
Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.
ELEANOR DRAGE:
Today, we're talking to Sareeta Amrute, Affiliate Associate Professor at the University of Washington, who studies race, labour, and class in global tech economies. Sareeta discusses what happened when Rihanna and Greta Thunberg got involved in the Indian farmers' protests; how race has wound up in algorithms as an indicator of what products you might want to buy; how companies get out of being responsible for white supremacist material sold across their platforms; why all people who make technology have an ethic, though they might not know it; and what the effects are when power in tech companies lies primarily with product teams.
KERRY MACKERETH:
Thank you so much for joining us here today. It's such an honour to be able to chat to you on The Good Robot podcast. So just to kick us off, can you tell us a little bit about who you are, what you do, and what's brought you to thinking about gender, feminism, race and technology?
SAREETA AMRUTE:
Thanks so much, Kerry. It's an absolute pleasure to be here. My name is Sareeta Amrute, I'm an anthropologist, I was trained as an anthropologist. I'm currently speaking to you from New York City. That's the traditional home of the Lenape and the Canarsie people. And I always like to begin by honouring their sovereignty on this land and the histories of enslaved people, formerly enslaved people, immigrants, and indigenous cultures that make the Island of Manhattan such an amazing space to be speaking to you from. So my work is largely about the way that the IT industry as a global capitalist phenomenon takes up and remakes people and places. And so my angle on this question always comes from this key question of how large abstract things like technologies and technology industries are embodied in particular spaces, times and the physical bodies of people who make the technology. So that's my angle, and I get that very strongly from feminist writings on embodiment. I am especially inspired by science and technology studies feminisms on the one hand. So I really like to think about what it means to make a piece of code as a material thing. And I also like to think about who gets to make pieces of code and in what way. And on the other hand, I'm very much inspired by postcolonial feminism, especially Black feminist thinking, Dalit feminisms and indigenous feminisms. I kind of group those together as sort of decolonial feminist thought, which always centres this question of whose voice, whose perspective, is not in the room, in any room, even in rooms that say they are feminist. This idea that we always need to be open to critique from below or from the outside is, I think, very, very important in my thought process, and in my practice.
ELEANOR DRAGE:
Thank you, that was really beautiful. And I always think about Gayatri Spivak, for those of you who don't know, she's another very famous philosopher, who talks about “learning to learn from below”, which is a beautiful and extremely difficult thing to do for many of these major institutions in the West. On that note, another incredibly difficult thing to do is rethink what it means to create good technology. And it's something that people always come on and break down: good, into its different component features. So what can you then add to our debate about what is good technology, our billion-dollar questions? So, what's good technology? Is it even possible? And how can feminism help us work towards it?
SAREETA AMRUTE:
Yeah, I love this question, because it seems like a hard one, Eleanor, but for me it has a really simple answer. So for me, good technology is a technology that enables sustaining social, biological and economic relations. So in that sense, there are lots of things that can be good technologies, it really is a question of how they enable those sustaining relationships. So certain forms of agriculture are good technologies. And there's this really classic example from the philosophy of science: a water pitcher is a really good technology because it gathers people together. It provides them with something that they need to live, which is water, but it's a pitcher. So it's also a kind of social technology, because it gathers together and pours out and it's meant to be shared. And of course, that's an easy one. I mean, mostly on this show, we're not talking about water pitchers, we're talking about algorithms, or automated systems. And so what's really interesting to me about this question of whether a technology that sustains social, biological and economic relations, whether that's possible, I think there's really two ways to answer that. And both of them kind of get away from evaluating a technology qua technology as good or bad, and think instead about that question of relationality. What kinds of relations does a technology entail? So one way to think about it is how people reuse and remake stuff that starts out as something that isn't very sustaining. There are lots of examples around the world of this, but one of my favourites is from David Nemer's work on Brazil, in which he talks about how residents in the favelas in Brazil actually think about the cell phone as a technology that's necessary to them, but doesn't quite work in a way that sustains them. And they spend a lot of time taking them apart and remaking them. And another one I really like is from a much older work by Ravi Sundaram, it's called Pirate Modernity,
where he just takes this idea of piracy away from the questions of copyright and rights holding and into how people remake the technical world around them in a way that sustains them. And then just to bring in another: Robin Wall Kimmerer's work on ecology, moving from models of scarcity to abundance in the way that we think about the ecological, our non-human counterparts in the world, how do we sustain those relations? Those are all forms of good technology, in my opinion, so not only is it possible, but it already exists. The thing is, we don't often look to those communities in order to understand what's already happening all around us. We always think of communities of favela-dwellers or the urban poor in cities like Delhi, or indigenous communities and their relationship to the land, we always think of them as victims or passive recipients or, you know, sometimes we think of them as people without technology. But in fact, if you just poke that assumption a little bit, you'll see a constant repurposing of worlds to make them sustainable. I think the second way to think about that question of is it possible is to think about these questions that, to me, are very much feminist questions, getting to this point about how feminism can get us there: to think about who benefits from a certain arrangement of technologies and societies. To think about who is not yet included in the circle of who benefits. And to push on that question to try to bring in different histories, alternative histories, to produce different arrangements of technologies, to expand that circle of who's benefiting from them. And I mean that not only in terms of humans, we also have to think about our non-human counterparts, how they can be included in some of these technological arrangements. And I think what is similar to both of these approaches to good technology is that it gets us out of imagining how a thing could be built as purely good or purely evil. Let's take an example of something that we probably think is pretty much a bad idea. And it's been in the news a lot, which are algorithms to review resumes. Okay, we know now that not only is the use of these algorithmic technologies widespread, but that they're extremely biased, because they're relying on past data of people who performed well in the job. Okay, that probably seems like a terrible idea. And for now, we should probably just put a pause on those systems, we should stop using them. That doesn't necessarily mean that there's no place for automation in thinking through job candidacies, but you would have to build the tech, you'd have to build that algorithm, in a completely different way, with completely different values. In fact, you could build one that would surface for you people whose names have accents on them, for instance, since that's one of the things that kind of immediately disqualifies you at this point. You could build a tech that surfaces them and asks the person reviewing applications to have a closer look at that particular cluster of people. But that would require a serious rewrite of the technology itself.
KERRY MACKERETH:
That's really fascinating. Thank you. And I really love all the richness of the different perspectives you brought to that answer. And what you're saying at the beginning really reminds me of what another of our wonderful guests, Maya Ganesh, talked about: how good technology can be hacked, it can be opened, it can be enjoyed and remade, and it can be pleasurable for the people who engage with it, kind of going beyond the initial moment of, she talks about that iPhone unboxing, right, going beyond the iPhone unboxing to really dig into the hardware itself, dig into the software, and remake that to work for a much wider range of people. I want to turn now to talk a little bit more about what you were discussing when you said what brought you to the topics of gender, feminism, race and technology, which is your ethnographic work, which focuses in part on the experiences of South Asian diasporic workers in the IT industry, but also on South Asian diasporic online activism. And so something that you've mentioned is that members of the South Asian diaspora utilise technology in ways that interestingly complicate the notion of good technology. So you highlight, for example, how technologies that allow people to communicate across distance are really important for connecting people across the diaspora together, but they also open people up to risks of surveillance. So could you tell me a bit more about your work on how technology is used for activism, the impact this has, and the risks of using technologies in this way?
SAREETA AMRUTE:
Yeah, I love this question. And Eleanor, before we started, you were talking about the diasporic East Asian Twitter space and how it's a kind of really valuable source of information. So in this current book I'm working on, I'm thinking about how, you know, there was this earlier moment when we thought about the relationship between social media and social movements as extremely deterministic, as if there could be such a thing as a Twitter revolution, organised solely on Twitter, and fomented by a social media company. I think we've moved way past that point. And now we're at a point where we understand that social media platforms are an unsaid but essential component of all activism today, but in fact, they're one of many different strands. And there's a kind of really complex interdigitation of these platforms and what activists do on the ground. So I've been exploring that topic in the case of India and the South Asian diaspora more broadly. And I'm looking at this complex space in which what happens in India affects what's happening in the diaspora, and very much vice versa, we really do live in this very interconnected space. There's lots of examples that I'm thinking through. But one that's so very clear is the example of the farmers' protests. So as your listeners may know, there was a series of laws passed, farm laws, which produced a huge protest movement in India itself that was very much physical. It required that people occupy spaces in the capital city of Delhi, there were tractor marches and so on. But then, that same protest culture was picked up all around the world. So there were tractor convoys in Canada, for instance, there were car convoys in Seattle where I spend a lot of my time. And a lot of the coordination and media attention to the issue of the farmers' protest happened through social media channels. And those channels were both a vital lifeline to get media attention to what was happening in India, but were very dangerous too. So to just give you this very, very famous example, some of you might remember that there were a series of celebrities who were tweeting about the farmers' protests, one of whom was Rihanna. And then also Greta Thunberg, the climate activist. Now, when Greta Thunberg tweeted about the protests, she also tweeted a link to an activism toolkit that had been composed by activists in Vancouver, British Columbia, and also in India. Unfortunately, that activist toolkit wasn't sufficiently secured. And the IP addresses of people active in India were part of that Google document, and Google, the company, shared that information with the Indian government, who then promptly arrested those activists. So that's a real example of the importance of online communication, but also the way in which it opens people up to a lot of vulnerabilities. And I think this is a problem that people on the ground are sort of trying to work through in real time. And one of the biggest issues there is that when you're trying to respond to something that feels like a crisis, there often isn't the time to slow down and make sure everyone's safety is at the heart of the practice. But I think, speaking of good technologies, and how we can get there, one of the biggest lessons I've learned through my work is that it's very much worth it to move at the speed of trust, as some of my interlocutors say.
So even in the midst of organising and campaign building, making sure you're working with the affordances of the technologies you need to use in a way that puts the protection of your members first, and puts that over what's often the reigning ethic of all these technologies, which is an ethic of sharing, right? The protocols are automatically set to share as much as possible. You really have to work hard to militate against that. So that's some way of hacking what is not a good technology and making it good, making it fit for your purposes in the world.
KERRY MACKERETH:
Well, I really love that idea of moving at the speed of trust, because I feel like these platforms, like you said, they're designed to share, but also the temporality is so instantaneous. You think of Google Docs, and you think of Twitter, and something like Google Docs, of course, has a great community feel about it, being able to contribute and participate in knowledge sharing, but then also how and with whom that knowledge gets shared, you know, quickly spirals out of your control. So I love the idea of kind of going back to those community networks, or going back to forms of networking and communication-sharing with one another that move that bit more slowly, that move at the speed of relationships, rather than at the speed of something already algorithmically defined. And so, speaking of algorithmic forms of sorting, I actually also want to ask you about some other research you've done, again linking to your work on South Asian diasporic workers in the tech industry, which argues that race itself functions as an algorithm that's used to sort and classify populations. And I think this is a really important and really provocative idea, because I think while the AI ethics community talks a lot about how technologies themselves might reproduce racism, it doesn't necessarily think about the classification processes in the first place, which are then enabling these technologies to have these negative effects. So could you explain for our listeners what you mean by race as an algorithm, and why you think this framing is really important?
SAREETA AMRUTE:
Yeah, I think there's been a kind of large literature on thinking about race itself as a technology over time. So if we think about the development of race as a category, over the long arc of the, let's say, 18th and 19th centuries, it is pretty clear that the genetic idea of heritable traits that are associated with characteristics emerges from a particular kind of science. It emerges from eugenicist genetics, in combination with colonial techniques of population control, that's where it really comes from. And that race science gets used to justify the existence of slavery, criminalization, various forms of colonialism. So race over time becomes a technology through which populations get categorised, sorted and classified as having particular traits that make them fit for particular kinds of labour, or even kinds of being in the world, that make them bioavailable, you could say it that way. And so I really became interested in this question of what is happening with this category of race in the current moment when we, of course, are still very much inheriting these older notions, these 19th century notions of genetic determinism. But overlaying on top of those is an algorithmic imaginary, which I think really is very different. So how do we think about how that algorithmic imaginary is affecting race as a category? Now, I should say that I am not arguing anything about the facticity of race as a category; I'm talking about how it's made productive in particular moments of time. So in the current moment, if we think about what algorithms do, they sort and they classify, but they're not particularly interested in finding out an absolute truth about a subject, it is really about making correlations between a certain set of behaviours, and a certain set of predicted outcomes. So in that sense, when I say that race is algorithmic, I am trying to suggest that race is undergoing in some ways this transformation. So in other words, it's not that important to figure out what the truth is of, let's say, Asianness, but it's more important to make a correlation between being Asian and having certain kinds of proclivities, which then can be used to make predictions about what someone who seems to have those proclivities might buy, very consumer oriented, or what kind of worker they might be. And so some of the effects of this kind of algorithmization of race, or an algorithmic imaginary applied to race, are that some of the worst acts of racial thinking - racism, white supremacist attacks - that we've seen lately, they sort of exist at the end of the long tail of a set of predictions, and no one is really responsible for them. They exist out there, in their long tail, they might even produce a marketplace. So we know that Amazon sells white supremacist Nazi material because there's a niche market for that material. That is quite different. This is a kind of new variation on the theme of racist technology, because in fact, it allows for all those variations to exist, it doesn't have to settle on any one of them as the truth about race, they can all exist out there with their own demographics and their own patterns of behaviour that can be met by a market.
Now, I'll say that since I did that work, and I wrote that piece about racist algorithms, I think there has been a little bit of a shift in the degree to which people are willing to accept those long-tail variations. I think a lot of that was caused in the US by the Movement for Black Lives, which very much insisted that what seems as if it's simply a variation in practice is actually abhorrent, and we will not accept it anymore. So I think some of the excesses of algorithmic thinking around race have been curtailed. But it's still very much part of the way that race is treated within tech companies. It's just something that exists, often in this discourse around personalization: if it can be marketed to, it should be, as long as that marketing is within the legal limits of what's allowable in a particular regime. And in that way, I think companies really alibi themselves out of having to be responsible for some of the things that are unleashed across their platforms, because it's seen as external to the operation of the firm, even though the firm itself is producing and catering to this kind of thinking all the time.
ELEANOR DRAGE:
Yeah, I mean, the consumer marketplace can really tell you all you need to know about how race and even personality emerge online. I think that’s the case with the new One Million Impressions Database, which was created by a group of researchers who wanted to show how we judge a person’s personality based on looking at their face. I can’t understand why they felt that this needed to be demonstrated using AI. I guess AI is a popular way of making an old idea appear new or a bad idea relevant. These researchers manipulated pictures of people to make them appear more trustworthy by changing what people look like. Obviously the system judges people on their gender, race, age, and weight. So the system imitated a bigoted young white male with a low BMI. The researchers said that they intended for it to be useful for science but I’m not sure what science it’s useful for. But anyway, the point is I couldn’t see the point of it, and then realised, oh, it’s probably being used for commercial purposes. The system’s way of manipulating images could be used by companies wanting faces for their websites that attract customers. And those ideas about which faces are appealing reinscribe associations between race, gender, weight, age and personality.
SAREETA AMRUTE:
Maybe sell stuff, maybe the idea is that it could be used to pick up verbal cues and emotional cues of people who are being evaluated for things. I think that's another huge market. There's a huge, huge market in emotion recognition algorithms that claim to do things like, based on the quality of voice of a prisoner in a phone call, be able to determine recidivism rates. I mean, we are in a world now where claims that we might have thought had been utterly debunked are definitely returning again. And I think it's related to the fact that we have sort of thrown the doors open on what can and cannot be correlated. This is all about correlation. Causation is a little bit off the table. If something can be correlated, then that's sort of enough, because it means that it can be deployed. And it's only much later downstream, when we see what the pattern behind that correlation actually is, that we then realise retrospectively, oh, those correlations exist because of a very long history of power, and the way power operates on different bodies.
ELEANOR DRAGE:
I guess that kind of takes us on to AI ethics, which has been heavily critiqued: these big frameworks that we have, from UNESCO, the EU AI Act. And for many people, it's quite obvious that these frameworks don't really have much impact on the ground. But in the AI community, I think we still hope that there will be some kind of impact from these really big, expensive, time-consuming frameworks and regulatory principles. Now, you've pointed to the fact that they must be more contextual, which is a kind of problem when, you know, it's UNESCO or the EU, and these are big agencies, and they're trying to tackle many different kinds of systems. And you've said that ethical practices need to be grounded in the differences of particular bodies and particular situations. So what does that mean? And how can this make ethics frameworks more effective?
SAREETA AMRUTE:
Yeah, this is a really sticky question. So I'll start off by saying that I think even technologists have ethics; those ethics may be unspoken, they may be automatic, semi-conscious, embedded in the way that they approach making things, but they're there. And so I think one thing that's a real task for us, that hasn't been done adequately, is to really understand what the practical everyday ethics of technologists are, and then to understand what else is going on at the margins of those worlds. All tech companies are actually, to some degree, diverse. They are. They may be dominated by white men who graduated from a handful of universities across the UK and the US. But those aren't the only people who are in these worlds. And so I think one of our primary tasks is to notice what else is going on in all of these spaces, whether it's the EU, the UN, a big tech firm, a small tech firm: who's on the margins of these spaces, and what are their practices? Those could be practices around divisions of labour in a shop: who has to do the back-end stuff, who gets to do the front end? Who's doing content moderation? Who's doing catering? Who gets to sit in the C-suite? They can also be practices around what are the other things that people are invested in? Why do they come to work every day? And I think we have to build incrementally from there what we want our ethics to look like. I've really advocated thinking of ethics as a kind of form of attunement to those sorts of differences and worlds, and then using those attunements to try to build. I'm not sure what the role of policy frameworks is, I'm going to be completely honest. I think that we can't do without them, because that's the way that governmentality actually works. If we want to hold power brokers accountable, we have to have a policy to point to. They serve a sort of indexical purpose. And in that way, I think they're there to be pointed at, but the real work of change is going to happen through one-on-one conversations, building power through groups within and beyond organisations, and kind of slowly building the pressure from there. So I think the policy has a role to play, but I think it's a codification of what's happening in practice. And it's useful because it can be pointed to in order to hold groups accountable. But it's not sufficient, in the sense that the work doesn't stop once the policy has been written. We know this, there are so many examples. But just to give you an extremely, extremely concrete example of this, both Twitter and Facebook at different points added caste as a category in their hate speech moderation policy. Okay, that was a huge win. That took a lot of activism, a lot of closed-door conversations to make that happen. But it was not until very recently, I think this year or last year, that Twitter added caste as one of the choices in their user interface drop-down menu if you're trying to report hate speech. So there was a huge gap. And I'm not going to remember the exact dates, but I think more than a 10-year gap between the passage of a policy including caste as a hate speech category for Twitter, and then actually being able to report caste-based hate speech on the platform if you are experiencing it. So it took sort of two rounds of intense intervention with the company to get us there. And the question should be obvious: why, why did it take so long? But that's kind of the nature of change, a policy gets written, and then it gets implemented much later.
And it was really important that you can check that box on the UI, because that's the only way that the data about the amount of hate speech directed against Dalits, oppressed-caste people, that is actually circulating can be tracked by the company itself, which could lead to further policy changes down the line. And it's also just a question of human dignity. If you're experiencing caste-based hate speech, and you pull down the drop-down menu, and you've been told that Twitter has a policy against it, and you simply cannot report it, that's actually a terrible feeling. It makes a person really feel as if they're not seen or recognised, as if they don't even matter in any way.
ELEANOR DRAGE:
I'm guessing that's why it took so long for Twitter to have that on the drop-down menu. Because otherwise, they'll be getting an influx of people making these complaints, and that will add to the data about racial hatred on the site?
SAREETA AMRUTE:
Maybe, or it's just one of those things where someone makes a policy at Twitter, and the policy team in effect is actually not that powerful within the organisation, the real power might be with the product teams. And so once they've done the policy, they can't really follow through to make sure the policies are then reflected across the company. So that is an AI ethics problem, right? Because it's about the values that the company holds to.
ELEANOR DRAGE:
Yeah, the power definitely is with the product teams. My first jobs in tech were doing non-product stuff. And it was really obvious that the product team was at the very heart of the company, and they were the people doing the exciting work, creating the thing that we were selling, and everybody else was peripheral to that. And you know, it is what it is, but that really speaks to my experience. So we also know that you're working on a really exciting syllabus project called AI From the Majority World. What's it trying to do? And why is it really important that we begin there?
SAREETA AMRUTE:
Thank you for asking about that. So the term Majority World comes from a Bangladeshi photographer, writer and activist named Shahidul Alam, who really proposes this phrase over similar phrases like the developing world, or even the Global South, because he wants to emphasise that all those terms treat people living outside of Europe and the US as peripheral and passive. And I think we really need to start there, because the idea of the Majority World tells us that these worlds are going to be complex, they're going to have power brokers in them, as well as people who are relatively voiceless and everybody in between. And for me, in order to really get at these questions of AI and ethics, especially from a feminist perspective, we have to resist a frame of victimhood. We have to understand these complex relations and hold people in these worlds to account, not just between what we think of as the West and the Rest, but also within the Majority World as well. So the syllabus project is a project I'm working on with STS scholar Ranjit Singh and designer Rigoberto Lara Guzmán, who are just phenomenal collaborators. And we're trying to build a really robust set of readings and approaches to thinking about the Majority World in AI, and trying to centre the experience of people living outside of Europe and the US, and also to centre their worldviews, how they think about these problems and their solutions. So it's quite big. It's not really meant to be taught as a free-standing course. But the idea is that people can interact with it and engage with it in many different ways and pull from it what they need. And it also will have a Zotero library attached to it that is open, so people can add their own citations to it, and I'm really excited. I think it should be out by the end of this summer and I'm looking forward to sharing it with everybody.
KERRY MACKERETH:
Wow, that's so exciting, such an amazing and important project. And actually, Ranjit was on this podcast, he was one of our first guests and we're probably not meant to have favourite episodes, but it is one of my favourite episodes, because his work is just so fascinating. Thank you so much for coming on the show. It's really been such a pleasure getting to talk to you and yes, we're hoping to be able to chat to you again soon.
SAREETA AMRUTE:
Thank you, it was a wonderful conversation. I appreciate it.
ELEANOR DRAGE:
This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.
Image: Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / CC-BY 4.0