In this episode we chat to Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she’s based in Hong Kong as a reporter covering China, tech and society for the Wall Street Journal; before this, she conducted a number of high-profile investigations for MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they’re competing with Big Tech’s bottom line; why companies engaging user attitudes isn’t enough, since we can’t really ever ‘opt out’ of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry.
Karen Hao is an award-winning journalist now covering China tech & society at the Wall Street Journal. Prior to that, she was a senior editor at MIT Technology Review, where she wrote about the latest research and social impacts of artificial intelligence. She is also a Harvard Technology and Public Purpose fellow and was an MIT Knight Science Journalism fellow.
Her work won an ASME Next Award in 2022 for “outstanding achievement for magazine journalists under the age of 30.” Her former weekly newsletter, The Algorithm, was named one of the best newsletters on the internet by The Webby Awards, and an AI podcast she co-produced called In Machines We Trust won two Front Page Awards. In 2020 and 2021, her pieces on the forced dismissal of Google’s ethical AI co-lead Timnit Gebru and Facebook’s addiction to and funding of misinformation were cited by Congress. In 2018, her “What is AI?” flowchart was featured in a museum exhibit in Vienna. She has guest lectured at MIT, Harvard, Columbia, Cornell, NYU, and Notre Dame. Her work is taught in universities around the world.
Reading List:
What is AI? https://www.technologyreview.com/2018/11/10/139137/is-this-ai-we-drew-you-a-flowchart-to-work-it-out/
How Facebook Got Addicted to Spreading Misinformation https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/
AI Colonialism series: https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/
Transcript:
KERRY MACKERETH:
Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.
KERRY MACKERETH:
Today, I’m speaking with Karen Hao, a prominent tech journalist who focuses on the intersections of AI, data, politics and society. Right now she’s based in Hong Kong as a reporter covering China, tech and society for the Wall Street Journal; before this, she conducted a number of high-profile investigations for MIT Technology Review. In our interview we chat about her series on AI colonialism and how tech companies reproduce older colonial patterns of violence and extraction; why both insiders and outside specialists in AI ethics struggle to make AI more ethical when they’re competing with Big Tech’s bottom line; why companies engaging user attitudes isn’t enough, since we can’t really ever ‘opt out’ of certain products and systems; and her hopes for changing up the stories we tell about the Chinese tech industry. We hope you enjoy the show!
KERRY MACKERETH:
Great. So thank you so much for joining us here today. So just to kick us off, could you tell us a bit about who you are, what you do, and what's brought you to thinking about the topics of gender, race, and colonialism and technology?
KAREN HAO:
Yeah, I'm Karen Hao, I am a China, tech and society reporter based in Hong Kong for The Wall Street Journal. But this is a new job. So previously, I was at MIT Technology Review for three and a half years as an AI reporter and editor. And I was covering the impact of AI on society. And through the course of my coverage, when I was looking at just how AI was really transforming the way that we interact with one another, the way that society operates, and all of the sort of foibles and not so great things that have happened along the way, it made me start thinking more deeply about what exactly is going wrong with the way that this technology is being built. And through that kind of diagnostic process landed in thinking more about colonialism and the colonial undertones around AI development, and also the ways that we could get out of that cycle, which tie very much I think into, like, feminist theory and how that has been like a really big driver in shaping technology development, to be more equal, to be less colonial, to actually be beneficial for a greater and more inclusive population.
KERRY MACKERETH:
That's fantastic, and we definitely want to ask you more about that. But first, we want to pose our kind of billion dollar questions to you, which are, what is good technology? Do you think it's even possible? And how do you think that feminism helps us work towards it?
KAREN HAO:
You know, I think I feel like I'm gonna answer the good technology question with, like, the definition of these terms, but I really do think it's just about technology that is in service of people. Which is kind of, yeah, like, I feel like that's a little bit of a cop out answer, because, you know, like, duh. But I do think that it is as fundamental as that, and that, when I write about technology critically, what I'm really critiquing is when technology is actually hurting or harming people. So the reverse of that, what I think is good technology, is technology that is actually helping enhance people's lives, helping them reach their full potential, respecting their culture, and respecting the diversity in society. All of those elements, I think, kind of fold into this bigger umbrella question of what is good technology.
I do think it's possible, I think it is very hard because so much of technology is built on this culture of, it's built on this culture of scale. And when you start thinking about how to build a one size fits all solution for a broad range of people, that is inevitably when you end up tailoring the technology to one group of people, the dominant group, and then, like, in the best case scenario end up sidelining, in the worst case scenario end up harming, the minority or minoritized groups in society. And so I think it's possible in that you don't actually have to, like, there's nothing about technology that says that you can only ever build technologies with scale at its root, like you can build technologies that aren't, you know, aggressively meant to try to be this one size fits all solution, you can be much more personalised, much more tailored to smaller communities, smaller cultural groups. And that's when I think you start getting closer to building technology that actually works for people because you have much tighter feedback loops, you're designing for a much more specific set of constraints. But the reason, yeah, the reason why it's so challenging right now in the way that we conceive of and develop technology today is because technology development has become so intricately inseparable from this idea that it has to be scalable across billions of people around the world. And that's kind of, I think, one of the root challenges we face.
KERRY MACKERETH:
That's really fascinating. And I feel like there's a bit of a weird tension in so much of, like, contemporary AI development and advertising, in that on the one hand there's this idea of, like, oh, you can scale it up anywhere, and at the same time these products are claiming to, like, hyper-personalise and to sort of tailor everything just for you as the consumer. And I find those two narratives very odd.
KAREN HAO:
Yeah, I mean, you're completely right. And that, like so much of AI is marketed as this personalization tool, but ultimately, what is driving the AI engine in so many areas, softwares, is an averaging of a population to then try to assess what an individual might like, or might want, or might use, whatever it is. So ultimately, that personalization is actually just based on a population average. And that's why when you're building technology across so many different types of people, types of cultures, you end up just finding something that doesn't work for like half the people, because the averages are for the mainstream, like the average is only aligned with the mainstream, with the dominant group.
KERRY MACKERETH:
Absolutely. And for our lovely listeners, we have a fantastic podcast episode with Jason Edward Lewis, who also talks about these issues to do with scale, and the importance of trying to develop technologies which are really responsive and grounded in what local communities want and need. And so I'd highly recommend that you check that out. And I think this actually brings us really nicely to some of your recent writing for MIT Tech Review. Although again, congratulations on your new job. That's very exciting. So you recently wrote and published a series on digital colonialism. So could you first tell us what is digital colonialism? Why did you write the series? And what has the response to the series been so far?
KAREN HAO:
Yeah, I think so. I would say, rather than define digital colonialism, I would more specifically want to define AI colonialism because I think digital colonialism is a much broader umbrella. But what AI colonialism to me is, it's basically an argument that global AI development today is really repeating the patterns of colonial history. And what I mean by that is that back when we think about like European colonialism, it was about these colonial empires that were going to other places to take lands that were not theirs, resources that were not theirs, subject populations that were different from them based on these really racist ideologies, all for the service of building up their own capital, building up their own wealth and improving their own society at the detriment or the disenfranchisement of like whole other swathes of peoples and cultures. And what we're seeing with AI development today is essentially a repetition of a lot of those themes, even if it's not as egregious or as violent as past colonialism. But the empires today are, like, massive tech companies like Google, Facebook, Microsoft, that are the only entities basically that exist in today's society that are capable of developing AI technologies based on deep learning specifically, which is the type of AI that uses massive amounts of data, uses massive amounts of computational power, and therefore massive amounts of electricity and money to create. And these empires, these new AI empires, are going to populations to claim data that is not theirs, which is essentially the resources that were not theirs. So they're extracting those resources that are not theirs. And then they're using it to create algorithms that are then subjecting those populations again to their ideology, the company's ideology of what AI should be, how AI should be used, how it should affect these people, to the point where – and there's such a huge power gap, in the same way that empires and colonised peoples have this huge power gap – you have these entire countries, these entire communities within these countries that have no seat at the table, no say for how this technology should be developed, because they simply don't have the resources to play this game. Not to say that they don't have any agency, but it's the same trajectory that we saw in the past, where there was a very small handful of actors that got to make the rules of the game, that got to perpetuate these like really racist ideologies, that got to extract and reap the rewards of that extraction. And we're seeing all the same things paralleled with AI companies.
KERRY MACKERETH:
Absolutely. Thank you so much for that clarification, I think it's gonna be really helpful for everyone who's listening. And so what has the response to the series been so far?
KAREN HAO:
I think the response has been good in that I think it really resonates with a lot of people, both within the AI research community – there's definitely a growing number of scholars there that have been talking about this, and so much of my series was built on their work – so I think it really resonated with them. Because one of the challenges that I think AI researchers have is that they don't necessarily have the tools or the platform to find these, like, real world concrete examples or characters, and that's sort of the journalist toolset. So I was trying to find, like, people, communities, case studies to supplement the theoretical frameworks that they've already created within the academic community. And I think it's also really resonated with people globally outside of the US that have sort of felt these things, looking at the way that the US and US companies have so dominated the global AI development trajectory. And I think it's made them think more about how they might be able to change that from within their own communities, within their own contexts. The series has reached a very diverse readership, and different people have responded to it in different ways depending on where they're coming from.
KERRY MACKERETH:
And so in the series, you explore, you know, a wide range of examples of AI colonialism, but also resistance to AI colonialism. So this includes the rise of digital apartheid in South Africa, the economic exploitation of data labelers in Venezuela, resistance practices by gig workers in Jakarta, and Indigenous data sovereignty practices in Aotearoa. So what kind of drew you to these four examples? Were there any examples that you're really interested in looking at that kind of fell off the radar or you weren't able to look at? And also what distinct insights do you think that the four you ended up exploring offer you about the nature of contemporary AI colonialism?
KAREN HAO:
Yeah, so I think, what's interesting is, when I conceived of the idea for this series, it was actually based on an example that I was hearing that ultimately didn't end up being in the series at all. It was roughly two years ago, I was at, like, a really, really small workshop or conference type thing. And someone stopped me to tell me about this phenomenon that they were seeing in their work that they were really concerned about. So they worked in AI in the healthcare space, or like in the NGO slash healthcare space, on the African continent, and they were seeing this phenomenon where all of these, like, UK and US based companies were coming into Africa as, like, AI healthcare companies, and genuinely wanted to do good by improving access to health care through these AI technologies. But his concern was that in the process, they were also doing quite a lot of harm, because there was no, like, legal infrastructure in many of the countries that they were operating in to actually support the users of these technologies if something went wrong. And there was also no, like, legal framework to dictate how these companies should operate in terms of the data that they were collecting, how they should store that data, protect that data, all of those things. So there was this, like, tension of, like, all these companies are coming in – and it already felt very colonial to me at the time that, you know, the US and UK are coming into Africa, they are finding this data, and they benefit from it, as, you know, private companies that need the data to develop their technologies. And that need test cases essentially to prove that their technologies work in order to then market it to more developed economies. Like, all of those dynamics just felt very fraught to me and I was like, I am sure if this is happening in the healthcare industry it must be happening elsewhere. And that was, like, the very initial conception of the series, where I was like, I'm going to divide this up into, like, different industries that deal with sensitive data and see what I can find. So I was looking at healthcare, I was looking at education, I was looking at facial recognition. And a whole bunch that I'm sort of now forgetting. But as I started looking more into this general theme, I was trying to find, like, what is the existing scholarship on this stuff, and actually, one of my colleagues at the time that I had sort of enlisted to help me do this research, she was the one that then was like, wait a minute, I think this is what people, when people say data colonialism, this is what they're talking about. So then once we had that keyword, it sort of, like, opened up the floodgates of all of the literature that had already been written about data colonialism. And when I started digging more into it, it made me think I should refashion the series not as a story for each vertical, but as, like, a narrative arc that kind of walks the reader through the – process is like the wrong word, but I can't think of a better one – the process of colonialism. So, like, the first story, I was like, I want this to be about extraction, data extraction, and, like, racial control. The second one, I want it to be about exploitation. And the third one about resistance. And the last one about liberation. Like, how do I bring readers through this journey of, like, what is it like?
What are the examples that we can point to, to prove this argument that AI colonialism is indeed happening? And how do we then exit out of this? As you were saying, like, the series tries to address both. So then once I had that arc kind of locked down, then it was about essentially continuing to do research to figure out what stories out there actually fall into these categories, that would be like a really great entry point into exploring this broader theme of resistance, or this broader theme of liberation. And it really was, it was just, like, reading and talking with a lot of researchers about what they were hearing and seeing, what initiatives or stories they, and also journalists, had seen within their contexts around this, like, colonial and decolonial theme. And over time, I was able to narrow it down to the four stories that ultimately ended up in the series. But yeah, it was sort of like the slow mulling and consolidation of, like, two years of just background research before it actually kind of concretized into these four discrete stories.
KERRY MACKERETH:
Oh, it's really special to hear about the whole process behind, I think, the stories and the articles that people write, because, you know, I think behind everything that we see in an article are, you know, hundreds of other decisions about what goes in, what goes out. And also for our listeners, we love to attach a kind of relevant reading list, either of works by our guests or curated by our guests, on our website. And so if you're interested in reading more about AI colonialism and data colonialism, we'll of course attach Karen's articles, but also a range of other books and articles and resources for thinking about this topic.
So I wanted to change tack slightly and ask about some of your other amazing investigative work into Facebook, and why you think that Facebook's responsible AI team is an insufficient solution to the huge social and political problems caused by companies and platforms like Facebook, or Meta now. So what do you think is kind of the problem with a responsible AI team trying to meaningfully prevent or mitigate these forms of misinformation?
KAREN HAO:
Yeah, I think, to answer this question, I want to actually bring in another company, which is Google; the sort of falling apart of Google's ethical AI team and my piece came out around the same time, and to talk about both of them, I think, brings some interesting insights. So with Google's ethical AI team, you had this instance where you had a co-lead, Timnit Gebru, who was very much not of the system, like she was brought in very much as, like, an outsider who was really deeply an expert in ethical AI issues, that was then brought into this, like, Big Tech environment to try and be a stopgap for Google's AI projects. And what you had there was, she did her job so well that Google got really pissed and fired her. And that was an instance where she wasn't willing to play the game, she wasn't willing to be part of the system. And that was supposed to be the job. But Google had wanted someone who was willing to play the game and was willing to actually hold back their criticisms and their critiques when it was beneficial to Google. In Facebook's instance, you had the exact opposite happen, where I was profiling this guy, Joaquin Quiñonero Candela, who was very much of the system: he grew up in the tech world, his career was deeply rooted in this practical AI work, first at Microsoft, then at Facebook, and he'd actually led Facebook's transformation into an AI company. So he was sort of like a well respected, well regarded leader within the company that then chose to move on to this responsible AI work. And you would think that if an outsider coming in doesn't work and is ejected by the system, then maybe an insider trying to then work on turning the ship around might be like a better methodology. And again, it didn't work at all, because ultimately, both because of his own, I think, blind spots or his own gaps in understanding how to work on AI ethics issues, as well as the company's infrastructure and the incentives that sort of kept him in check, he ended up not being able to do what he thought he was doing, which was to refashion Facebook, redirect Facebook in a more productive direction. So in both cases, ultimately, the flaw was that these people are beholden to these companies. And these companies are beholden to their profit lines. And if you ultimately orient AI ethics work around that, then it's not going to work, because so much of AI ethics does actually go against profit motives at a company. And if your ethical AI team is not allowed to actually go against that, then it is definitely bound to fail.
KERRY MACKERETH: That's so interesting. And I think, you know, it's another reason why I think I'm a little bit sceptical sometimes of one of the ways in which ethics I think is packaged and kind of like sold to people is this idea of like, oh, consumers are more interested in ethical products now, and you need to avoid the like negative fallout from creating, like unethical products. And like, of course, these things are true and important, but ultimately, you know, there has to be a bigger question of like, what incentivizes and drives a company to make its products more ethical, you know, and it can't really be the profit line, because there have (sic) to be some kind of sacrifice somewhere.
KAREN HAO:
Yeah, and the thing is, like, one of my huge pet peeves is when companies say, Oh, we focus on the user, the user, whatever – companies like Facebook and Google, ultimately, it's not just the users that they're impacting. They're impacting everyone. It doesn't matter if you use Facebook or you don't use Facebook, you are somehow being impacted by their algorithms. And that is the huge gap when Facebook's Responsible AI team kept being like, Oh, well, you know, we're very user centric and we spent a lot of time talking with our users. Like, that is too narrow of a scope to be thinking about ethics issues; you need way more stakeholders to engage with, and to engage with you, than just the people who choose to opt into your products. Because, you know, I basically don't use Facebook anymore, and I don't live in the US anymore, but I'm still a US citizen. And American politics very much still affects me, and the way that Facebook spreads misinformation and impacts American democracy affects me regardless of whether or not I choose to be on the platform on a day to day basis. So, yeah, that's the problem. Like if you are oriented towards your profit line, of course you're only going to think about your users, because those are the people that are giving you more profit.
KERRY MACKERETH:
Yeah, and I think this is just such a crucial point, and one that we don't really talk about enough. So thank you so much for bringing it up. Because, you know, people talk about opting out and refusal, but as you've said, you can't completely opt out of these systems: you can not be actively using or getting the benefits from something in any capacity and yet still be suffering from its ill effects. Like, you know, I'm thinking, for example, of going to protests and how, you know, people were saying, like, Oh, well, if the people who are protesting don't want to be identified with facial recognition technology, why are they posting photos of themselves protesting on Instagram or on Facebook? And you're thinking, well, the problem is that I could choose never to take a photograph of myself at a protest and put it online, but someone else can so easily take a group shot or a photo and put it online, and then suddenly that's so out of your control. I think, you know, that's how these networks now operate, it's so decentralised.
KAREN HAO:
Exactly, it's the same exact thing. I think that's a great parallel. Like, yeah, ever since I started covering AI ethics, I get really paranoid about people taking photos of me in public. So I'm constantly trying to opt out of the fact that technologies now can identify me anywhere I go, if someone just, like, clandestinely, or not even maliciously, just takes a photo and I don't notice. It's the same thing with these companies: you can try your best and do what you do to completely reorient your life around avoiding the effects of the systems, but it's just not possible. And therefore, these companies need to be thinking more broadly than just the users of their platforms.
KERRY MACKERETH:
Exactly. You know, and I do always say to my friends, this job makes me very paranoid, working on kind of tech ethical issues, I don't know if you have the same experience (laughter). And it also makes me the biggest hypocrite, because again, you know, I say, oh, you know, these systems, etc, etc. Then I go on Netflix and say, Oh, thank you for (laughter) recommending food shows to me. Yeah, you know, is it cake? Or is it not? That's exactly what I want at the end of the day.
KAREN HAO:
Yeah, that's exactly what happens. I mean, like, for so long I used Facebook, and I used Instagram and WhatsApp and whatever, in my personal life. Now I'm, like, trying as much as possible to only use it in my professional life. And even that is frustrating, because ultimately, what I really want to do is just, like, delete everything altogether, but I'm still very much attached for work purposes. And it does feel very hypocritical to keep harping on these companies, when they are the things that then connect me to sources that help me do my job and all of these things. But it does speak to the power of these companies: like, even when, you know, you desperately want to disconnect from them, there are such strong economic reasons, and strong other reasons, to be on these platforms. So it's just even more incumbent on these platforms to make sure that their services do not wreak so much havoc on the people who use and don't use them.
KERRY MACKERETH:
Exactly. You know, it's such a sort of liberal individualist choice model as well. So it doesn't matter how much I opt out of these platforms, my mum will always post photos of me on Facebook, whether I like it or not (laughter), so they're out there already. Just to wrap us up, I want to say again, another congrats on this new post that you've started reporting on China, tech, and society for the Wall Street Journal. And so we're very thrilled for you, but Eleanor and I are also thrilled that you're the one who's going to be covering this important topic for The Wall Street Journal, since reporting around China and tech development often falls into quite scare-mongering tropes, and reflects strands of techno-Orientalism, which is something we discussed in some of our other episodes of the podcast, such as the episodes with Michelle N. Huang and Anne Cheng. So to finish, we just want to ask you, what are your kind of hopes and priorities around your new role?
KAREN HAO:
My number one hope, and this sounds very basic, but I just really want, I personally want to understand what exactly is happening in technology in China, in the tech industry in China. Because when you, when I covered AI for three and a half years, China was always the elephant in the room. Everyone wanted to know what China was doing, and no one knew what China was doing. And so the effect was that people, because it's like such a big deal to talk about the global AI industry and not mention China, people would mention it but then, like, fall into these, as you said, these tropes, where there's not any substance behind what they're saying, but, like, no one knows any better. And so we're just, like, repeating, Oh, yeah, we think this is happening. And it just, like, drove me crazy that I also could not say anything of substance to be like, this is what is actually happening. So that was, like, the biggest reason why I was so determined to make this pivot over, because I really want to finally be knowledgeable myself, and then hopefully through that, I can also inform the rest of the American and UK public about what is happening so that they can make more informed decisions. But I think the secondary goal is that so much – because there's so, the visibility is so low into what's happening – so much of the coverage is not focused on people. It's so focused on government and so focused on, like, companies, but you don't actually really get a sense anymore of, like, the actual human stories, the human impact, of, like, the personalities that develop these technologies, the people that are affected by these technologies. In the US and UK, because there are so many more journalists that are examining the tech industries there, you have so much more humanization of the industry and the impacts, and we just don't really have that in China. And so that was, like, a huge goal for me: to figure out what is actually going on and to, like, humanise it so that the narrative naturally complexifies.
KERRY MACKERETH:
Yeah. And I really love that idea of this sort of natural complexification of that narrative. But yes, thank you so much for taking the time to chat to us. We know you're extremely busy, but it's been such a delight to be able to talk to you. So thank you again.
KAREN HAO:
Thank you so much Kerry for having me.
ELEANOR DRAGE:
This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.
Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0