
Emily M. Bender and Alex Hanna on Why You Shouldn't Believe the AI Hype

In this episode, we talk to Emily M. Bender and Alex Hanna, AI ethics legends and now the co-hosts of Mystery AI Hype Theatre 3000, a new podcast where they dispel the hype storm around AI. Emily is a professor of linguistics at the University of Washington and the co-author of that stochastic parrots paper that you may have heard of, because two very important people in the Google AI ethics team allegedly got fired over it: Timnit Gebru and Meg Mitchell. And Alex Hanna is the director of research at the Distributed AI Research Institute, known by its acronym DAIR, which is now run by Timnit. In this episode, they argue that we should stop using the term AI altogether, and that the world might be better without text-to-image systems like DALL·E and Midjourney. They tell us how the AI hype agents are getting high on their own supply, and give some advice for young people going into tech careers.


Professor Emily M. Bender is a Professor in the Department of Linguistics and an Adjunct Professor at the Information School and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is the Faculty Director of the Professional Master's in Computational Linguistics and the Director of the Computational Linguistics Laboratory. Her research interests include technology for endangered language documentation, computational semantics, and methodologies for supporting consideration of the impacts of language technology in NLP research, development, and education. She is a member of the Value Sensitive Design Lab, the Tech Policy Lab, and RAISE. Her public scholarship is centered around supporting public understanding of language technology.


Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. She also works in the area of social movements, focusing on the dynamics of anti-racist campus protest in the US and Canada. She holds a BS in Computer Science and Mathematics and a BA in Sociology from Purdue University, and an MS and a PhD in Sociology from the University of Wisconsin-Madison.


READING LIST:


Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922


Scheuerman, M. K., Hanna, A., & Denton, E. (2021). Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-37.


Paullada, A., Raji, I. D., Bender, E. M., Denton, E., & Hanna, A. (2021). Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns (New York, N.Y.), 2(11), 100336.


Scheuerman, M. K., Pape, M., & Hanna, A. (2021). Auto-essentialization: Gender in automated facial analysis as extended colonial project. Big Data & Society, 8(2), 205395172110537.


KERRY MCINERNEY:

Hi! I'm Dr Kerry McInerney. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we'd also so appreciate you leaving us a review on the podcast app. Until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, we talked to Emily M. Bender and Alex Hanna, AI ethics legends and now the co-hosts of Mystery AI Hype Theatre 3000, a new podcast where they dispel the hype storm around AI. Emily is a professor of linguistics at the University of Washington and the co-author of that stochastic parrots paper that you may have heard of, because two very important people in the Google AI ethics team allegedly got fired over it: Timnit Gebru and Meg Mitchell. And Alex Hanna is the director of research at the Distributed AI Research Institute, known by its acronym DAIR, which is now run by Timnit. In this episode, they argue that we should stop using the term AI altogether, and that the world might be better without text-to-image systems like DALL·E and Midjourney. They tell us how the AI hype agents are getting high on their own supply, and give some advice for young people going into tech careers. We hope you enjoy the show.


KERRY MCINERNEY:

Brilliant. Thank you so much for joining us. Just to kick us off, could you tell us a little bit about who you are, what you do, and what's brought you to thinking about feminism, gender and technology? And Emily, maybe you can start.


EMILY M. BENDER:

Yeah. Hi, so excited to be here, and I'm excited to do the podcast swap. That's really fun. My name is Emily M. Bender. I'm a professor of linguistics at the University of Washington, where I run our professional master's program in computational linguistics. And basically my path into this was that a very wise member of our advisory board, Dr. Leslie Carmichael, in 2016 said, hey, you should probably have something on ethics in your curriculum. And so I went looking for somebody to come teach us, and of course, everybody's busy. And so I realized that if it was going to happen, I needed to do it myself. So I organized our first graduate seminar on what was then called ethics and natural language processing. I now call it societal impacts of language technology. And that is how I started thinking about this, together with a wonderful group of students, and I have had the privilege of continuing to do so since then.


ALEX HANNA:

Hi, I'm Alex Hanna. I'm director of research at the Distributed AI Research Institute. My path into thinking about societal impacts of AI was more like: I am a sociologist by training, and I was actually doing some language modeling on my own for my dissertation, especially around protest event data, and realizing more and more the kind of uses of these data for militarized surveillance, especially of protesters. And so I really had a kind of crisis of confidence about what the uses of this technology would be.


ELEANOR DRAGE:

That's why you've got your amazing podcast that we'll talk about later. But first, what is good technology? Is it even possible? And how can feminism and other pro-justice ways of thinking help us get there? Shall we maybe go first with Alex? And also, can you tell us, if you have one in mind, what is a good technology you can think of off the top of your head? Earlier I was talking about whisks, and Laura Forlano talks about blood sugar monitors, when they're working well and have the cap on. Are there any kind of bits and pieces you can think of as good, as well as a bigger answer on good technology more broadly?


ALEX HANNA:

Yeah. The thing is, this is funny, and I should have re-listened to the first time I was on your podcast so I did not repeat myself, but if I do repeat myself, at least that means I am consistent. I think thinking about technology with kind of feminist premises is helpful. One thing it's helpful to do is to historicize: we think of technology as shiny computers and whatnot, but technology can be our pencils and our papers and whatnot. And going back and thinking about ... and there's another cat.

Euler's here. So what are the kind of factors that have led to the development of certain kinds of technologies, right? Certain kinds of technologies arise out of the process of capitalist production.


We might think of them as innocuous or harmless now, but they may have had some pretty awful origins, right? And there are some technologies, I think of very recently, in which the conditions of their creation would align with a set of feminist principles. One example that I love to use is some of the work that's done by Te Hiku Media and the work of Indigenous te reo Māori folks in developing language technologies that work for their communities. One of the folks there that I've been able to speak with in person is Keone, and he's described it as: we're not that interested in publishing papers, or necessarily going to OpenAI and saying, integrate this in your product. It's more like, how can we make sure that the community has ownership of these technologies? And they have machine translation and automatic speech recognition of stuff in that language, but it's also that language and English.


Because people tend to switch between the two, especially in automated speech recognition. How is that being deployed for that community? And how is the data, and the way that it's gathered, being done in concert with community principles? So I love to use them as an example.


EMILY M. BENDER:

I'm fighting with Euler for access to the mic here. Hopefully the purring is not too loud. So, I listened to many episodes of your podcast and heard lots of common threads in people's answers and thought, can I say anything new? And I think what I'm going to say is consistent with what Alex has said and with what many of your other amazing guests have said, but I don't think it makes sense to talk about technology as good or bad. And I don't mean by that that technology is neutral, because it absolutely isn't. What I mean is that I don't think it makes sense to evaluate technology outside of its context of development and use, which is what Alex was talking about with historicizing things. And so we can talk about, in context, was a specific technology developed in a way that was beneficial, and to whom? And how were the processes around it designed to identify and mitigate harms, and things along those lines? And I think that makes for a better conversation, because there's no such thing where somebody could say, okay, I have determined once and for all that this technology is a good technology, therefore you are free to use it however you like without care and worry, without needing to think about, am I making good choices here? Because it's always embedded. In terms of a good technology in that sense, something that is situated in its context of use, I think I want to give a shout out to metadata and knowledge organization systems from old-school library science.


And I think that is super important technology, because so-called information that is dissociated from its provenance becomes useless. And the more we are swimming in a soup of that kind of dissociated information, the harder it's going to be to have functioning systems of many other kinds, including public health and democracy, et cetera. So shout out to library science and the hard work of figuring out how to... frankly, I find metadata standards to be among the most boring things in the world to think about, but they are hugely important and I'm really grateful that people work on them.


KERRY MCINERNEY:

Amazing, those are wonderful examples. And also your cat is precious and loves attention so much. Hi.


ELEANOR DRAGE:

Can I just say, Kerry, you sent me a message saying, lol, your eyes lit up when she said library science, you big nerd.


KERRY MCINERNEY:

It's a really healthy working relationship.


ELEANOR DRAGE:

I used to love that. And I think we should make AI boring again. So this is really great.


KERRY MCINERNEY:

We were working on bringing some of the wonderful insights from the Good Robot podcast into English A-level language syllabuses in English schools. And Eleanor was just so passionate about these syllabuses and was talking with all the other grammar people who are also deeply passionate about this. And I really felt like when you walk into the wrong room at a convention, and you're just like, you all seem wonderful, but these are not my people.


Anyway, we wanted to talk specifically about your wonderful podcast. Mystery Hype Theater 3000. No, I got some of those in the wrong order.


EMILY M. BENDER:

Right words, wrong order. Mystery AI Hype Theater 3000.


KERRY MCINERNEY:

Perfect. And so we just want to kick off by asking, what is AI hype? Why did you choose to focus your podcast on it? And why does it matter?


EMILY M. BENDER:

Alex, you've got a great definition of hype. So you should do that part.


ALEX HANNA:

Hype, and tech hype itself, is a kind of excitement around a certain set of tools, and it's oriented around a fear of missing out: if you don't get on this, you're going to be left in the dust, and everyone's going to modernize ahead of you. And where I added to it, my definition is a definition by comparison, which is that hype is different from bullshit. Here I'm riffing on Harry Frankfurt's definition of bullshit: the bullshitter intends to deceive, they really don't have any investment in the truth value of the statement, whereas the hype artist is more interested in, how do you heighten this kind of technology? How do you heighten excitement around it? Mostly for sales. So it's distinct from that. And then bullshit and hype are also different from snake oil, which is the term used by Arvind Narayanan and Sayash Kapoor in their work, which is pulling the wool over your eyes. Whereas I think the hype artist does get high on their own supply. They have a certain kind of investment in it and they're willing to really promote it at any cost.


EMILY M. BENDER:

And then as for the podcast, it's not like we decided to do a podcast and then chose to focus it on AI hype. It's an accidental podcast. We had both been involved in a lot of online, usually textual, discourse tearing down AI hype in tweet threads, in blog posts, sometimes in the mainstream media.


And one day I came across this 60-minute-read blog post by Blaise Aguera y Arcas at Google that was just so full of AI hype that I felt like it needed tearing down, but there's no way I was going to have the energy to do it word by word, textually. And so I got onto our group chat and I said, hey, this thing needs to be given the Mystery Science Theater 3000 treatment. And Mystery Science Theater 3000 is this TV show where the heart of it is, never mind the backstory, that somebody is forced to watch bad science fiction movies and survives it by doing running commentary over the movie, which of course is scripted, and it's hilarious.


And it makes these bad movies really fun to watch. I say that with quite a lot of confidence; I knew the concept, but I actually hadn't watched the show until I watched one ahead of ours. So I knew what I was doing, but Alex is a big fan. She was like, I'm in. And Alex knows something about Twitch streaming. And so we decided to do just a one-off Twitch stream tearing down this blog post, and one hour, which turned into 40 minutes because of technical difficulties, wasn't enough.


And then we did two more to get through that blog post. And by then we'd started something, so we kept going with the Twitch stream, and then ultimately managed to bring our wonderful producer, Christie Taylor, on board and turn it into a podcast. So, accidental podcast. I would like your listeners to know that they should not expect high production values in those early episodes.


ALEX HANNA:

Please bear with us.


ELEANOR DRAGE:

It's fine. If they're used to our podcast, it's pretty okay, whatever you're doing.


EMILY M. BENDER:

Yeah, we were pretty, pretty rocky at the beginning when we were producer-less. The reason for it was basically, to a certain extent, as an outlet for us to express our frustration.

And one of the things that I've really loved about it is that it started building a community, and it's people who often feel isolated when everyone around them is caught up in the hype and the FOMO, as Alex was pointing out, and they're looking at one of these things going, but that makes no sense. And they feel like they're the only one in their circle who thinks that.


And so part of the purpose for me now of the podcast is to build that community, so people know they're not alone and they're not crazy for thinking that all of this stuff is nonsense. And here's some specific details about why.


ELEANOR DRAGE:

And for listeners, if we're not needed anymore in AI ethics, Kerry and I can do ASMR videos on Twitch, which is actually the career that I always wanted. What would you say are the two technologies that don't deserve your hype? Maybe let's switch things around. Emily, do you want to go first? Which two technologies, or elements of technologies, do not deserve our hype?


EMILY M. BENDER:

All right, so number one is large language models, specifically large language models used to spit out synthetic text. Language modeling is a useful technology: it's part of speech-to-text systems, it's part of spelling correction systems, it's part of machine translation systems. But all of that is using the technology to choose a likely string out of a list of candidates produced by some other process that was ultimately starting with a person speaking or writing.


The LLMs as they're used now are spitting out text that is nobody's ideas, not anybody's meaning. Nobody said that. And that is useless. And it is way overhyped. So that was easy. Number one. And I think number two is the collection of technologies that promise you convenience based on surveillance.


So we can get you whatever you need right away, just make sure we always know where you are. Or we can keep you safe, just make sure you've got this camera on your front step and share the data with the police. That collection of technologies, I think, is way overhyped and shouldn't be used at all, but instead it's being sold as this like positive, happy thing.


ELEANOR DRAGE:

We love a moratorium on The Good Robot. Here's hoping. Alex?


ALEX HANNA:

Yeah. If we're going to talk about large language technologies, I also think text-to-image technologies are really just absolutely horrible, for many reasons. And I think people are very gassed up about what they generate.


But there's just so many harms that emerge from them, in the process of ingestion and in the images that have been hoovered up to produce them, especially from artists and creators. I did this great event with Karla Ortiz at the San Francisco Public Library; she's one of the plaintiffs on a case that's going against the creator of Stable Diffusion, on the way that these technologies have really deprived many artists, visual artists, of a livelihood. There are the data sets as well that are used to create these things. So 404 Media, in collaboration with some Stanford folks, published a report about there being CSAM, child sexual abuse material, in the data set. And prior to that, Abeba Birhane and Vinay Uday Prabhu also published on how there's a bunch of non-consensual porn and awful stuff in those data sets.


So those data sets, just from the jump, are pretty awful. Then they get used to encroach on particular markets, to take the jobs from existing visual artists and designers. Recently, Nicki Minaj came under criticism because, I think to promote her new album, some of the promo images were AI generated, and everybody's like, this isn't entertaining; this says you're not willing to pay real designers for your art. And on the other side of the output, non-consensual deepfakes and non-consensual porn. So from top to bottom, text-to-image generation is something that's hyped up but just has monumental harms embedded in it. So that's my obvious answer. Narrowing in on other overhyped technologies, gosh, there's so many to choose from.


'Cause I'm thinking about some work from a fellow at DAIR, Adrienne Williams. She was a charter school teacher briefly and then became an Amazon delivery driver, so two sites of massive surveillance. But much of the technology that gets deployed in schools is, basically, ed tech as hyper-surveillance. And especially as public schools get defunded in the US and other places, these tools get stood up in the majority world as alternatives to having teachers and people who are in contact with kids; they get stood up as replacements. There are the well-known ones like Google Classroom, but there's a bevy of smaller ed tech organizations that are predicated on student surveillance, on students having to put in a bunch of details about their mental health and their home situations. Students are not receiving adequate social support in schools, and yet these things are being offered as AI replacements for a whole safety net of what schools have historically provided.


KERRY MCINERNEY:

Yeah, you're right, it's really frightening. And I think it also raises this question of what education is for. I remember talking to someone who works in critical studies of education, and they were talking about their feeling that this trope of personalized AI learning was really ignoring what the broader societal function of our education systems might be, rather than being about just trying to shift a person knowledge-wise from A to B.


For anyone watching on our YouTube, we have another precious cat appearance from Alex's cat; for anyone listening, you're missing out on a lot of wonderful cats right now. You're also missing out on Eleanor sitting in a big red chair that makes her look like she's on the Graham Norton Show and about to get springboarded out of it if she doesn't tell an interesting enough story.


With all those kinds of different harms in mind what would you want changed? Let's say you could wave a magic wand and it would change something about the way that we talk about AI, the way that it's hyped up and shared on social media and the news, just from person to person, even. What would you want changed?


EMILY M. BENDER:

Can I go first?


KERRY MCINERNEY:

Yeah. Yeah.


EMILY M. BENDER:

I would like the term artificial intelligence to just drop out of the lexicon. And then when people are talking about probabilistic automation, or synthetic media creation, or some kind of classification system, they have to say specifically what it is that they built and what's being automated. I think the discourse would get so much clearer.


ALEX HANNA:

Yeah, absolutely in agreement there. I think one of the biggest surprises for me in computer science was the way that computer scientists talked about data, the way that data seemed found or scraped and not the result of so many different processes.

There's a whole book, edited by Lisa Gitelman, Raw Data Is an Oxymoron. But the way that data itself is approached needs to really have some critical consciousness around it.

If we could combine Emily's and mine, maybe it would be replacing every instance of artificial intelligence with logistic regression built on stolen data, or linear algebra built on your stolen tweets, or something. I don't know, some way to de-naturalize it.


ELEANOR DRAGE:

Thank you. That was delightfully controversial and exactly what we expected. So to end I have two quick questions. One is, what do we do about big tech? Should it be broken up? Should it be left as it is? And then the other one is what do you advise young people to do? Do they go work there? Should they not?


ALEX HANNA:

What to do about Big Tech. Oof, geez. It's hard, right? Because I think the longer and longer we go on, we see that the state responses to Big Tech are inadequate. We see this in the UK with Rishi Sunak and the AI Safety Summit, with Joe Biden in the US, and in Senator Chuck Schumer's meetings with different people.


And we did an episode on this, where we spoke with Justin Hendrix about the different kind of AI insight summits that he's done, but it seems like the state is really hands-off. They really want to be in partnership with big tech organizations.


There have been some pockets that have been nice to see. We were talking in our group chat just about the FTC in the US and some of the enforcement actions with Lina Khan at the helm, which have been addressing Amazon pretty directly, as well as OpenAI.


But I also think that there's a need for more mass mobilization around it. There have been some vocal elements; for instance, the No Tech for Apartheid campaign has targeted Google and Amazon for their abetment of the genocide happening in Gaza right now.


And the way that Project Nimbus and the kind of technologies provided by these organizations are supporting that. That's a pretty pointed element of it, but it might act as, to borrow a term from Rediet Abebe, a synecdoche; one of the things that she talks about is the idea of computing as synecdoche, so that it could be an entrée into different issues that you're focusing on. And this is the flip of that: the genocide can be a synecdoche for the power of technology and the power of big tech, and how can we rein it in and regulate it.

And so that's what I think should be done with Big Tech: we need a movement, a way of bringing some kind of governance to it. What do we say to the young people who want to go work at Google? I don't begrudge anybody who wants to go and work at Google.

Even though Google just had a huge round of layoffs, where they laid off, it seems, from the VP level down to contract workers, although I imagine contract workers got the shorter end of that stick. I don't begrudge people who go and work in big tech.


There's a lot of material reasons why folks go and work at big tech, especially if you come from a marginalized group, especially if you grew up working class, or at the intersection of being poor and Black or brown or an immigrant. I don't begrudge folks for doing that, but you should also know that there are clear limits to that.


There's very little possibility of change from the inside. That's not a thing that really happens, especially as an individual contributor. People stay at their jobs in that position because they have become disciplined enough to play the institution's game. So you can go there, but have no illusions about what you can do internally.


EMILY M. BENDER:

Those are great answers. Actually, I want to start with the second question so that I can remember what the first one was. Starting with that: I, too, don't begrudge anybody who wants to get access to the high-paid jobs that are still there, for now. As Alex noted, there were recently more layoffs in big tech. And I would especially love to see some of that centralization of wealth, through salaries, going out to communities that have historically had less of it. I am in favor of that. But I do think it's worth going in with your eyes open and not starry-eyed. And they will try to make it seem like paradise, right?


The orientation for new workers and all the swag and all this stuff. Be aware of that, but also build your community, which is another way of saying solidarity is key here. One thing that we do in our master's program, for a couple of years running now, is that in our lab group we have an informal discussion about whistleblowing, because the idea is that if somebody is going to be a whistleblower, which is one of the ways that sometimes change happens, that is much, much more possible if you go into it prepared. And so we ask students to think ahead of time about what their own personal red lines are. Before you're in the situation, what are some things where you would say, okay, this is it, I refuse, I need to either get out or possibly become a whistleblower?


That way you don't creep past that red line and only notice it when you've passed it. We recommend the Tech Worker Handbook, so that people know their rights and know what the legal structures are locally. And again, solidarity organizing: you've got to find like-minded people that you can talk with outside of the company-supported software, right?


Yeah, not on your corporate device, not in the corporate Slack, but through your own personal connections, so you can say, hey, this feels funny to me, do you agree? And that community, which is so often behind the scenes, is so important: certainly important to be able to make positive change, and also just important to be able to resist negative things.


And I'm reminded right now of a beautiful essay I was just reading by Amy J. Ko, who just got an award, from the ACM I think, I think she was made a distinguished member, and she wrote this really thoughtful essay on Medium, both about her own difficulty accepting recognition, but also this view towards how we recognize the collective action that's required: when we celebrate the achievements of individuals, there's always a community behind them.


And how do we celebrate the achievements of communities? And I think that's a nice sort of flip side of thinking about the importance of building community. So if you're going to go into big tech, if you are going to channel some of that wealth into your community, great. Also build the worker community around you that you need so that you can be safe, that you can stay true to yourself, that you can make the decisions you want to make.

Having said all that, I still don't remember what the first question is. Oh, what do we do about Big Tech? I knew there was another part of it. One of the things that's been exciting in my life for the past year is that I've had a chance to start talking with policymakers, which is better than shouting into the void on Twitter.


I don't know that I'm necessarily having an impact, but I really appreciate the chance to be in the room. And one of the things that I'm seeing is that the most effective work is done by policymakers like Lina Khan at the FTC, who understand their job to be protecting the rights of individuals and communities rather than supporting business. Government should be supporting business in the sense of keeping a stable regulatory regime so that business can be done, but beyond that, the top priority needs to be the rights of individuals and communities. And I think that if we look at regulation through that lens, we make much better decisions.


And so I'm out there, encouraging that when I get these chances.


ELEANOR DRAGE:

Thank you so much. I would just love to say that we've been so inspired watching you both work, and also the way that you have interacted with each other and built that solidarity. It's been incredibly important and meaningful to us. And also, we do have two really good friends at DeepMind who we love a lot, and we do benefit occasionally from their incredible free lunch. Thank you, both of you. And we hope to talk to you again very soon.


EMILY M. BENDER:

Thank you so much. This has been a delightful conversation and I love your podcast.


ALEX HANNA:

Yes, thank you so much for having me on again. I'm happy that both of my cats were able to have an appearance, and please go to DeepMind and mooch off them as much as you can.


EMILY M. BENDER:

And here's Euler saying goodbye one last time.


KERRY MCINERNEY:

Oh my goodness.

