Kerry Mackereth

Hot Take: Does AI Know How You Feel?

In this episode, we chat about coming back from summer break, and discuss a research paper recently published by Kerry and the AI ethicist and researcher Os Keyes called "The Infopolitics of Feeling: How race and disability are configured in Emotion Recognition Technology". We discuss why AI tools that promise to be able to read our emotions from our faces are scientifically and politically suspect. We then explore the ableist foundations of what used to be the most famous Emotion AI firm in the world: Affectiva. Kerry also explains how the Stop Asian Hate and Black Lives Matter protests of 2020 inspired this research project, and why she thinks that emotion recognition technologies have no place in our societies.


Reading List:


McInerney, K., & Keyes, O. (2024). The Infopolitics of feeling: How race and disability are configured in Emotion Recognition Technology. New Media & Society, 0(0). https://doi.org/10.1177/14614448241235914


Yao, X. (2021). Disaffected: The Cultural Politics of Unfeeling in Nineteenth-Century America. Durham: Duke University Press.


Keyes, O. (2024). "Automating Autism." In Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines. Oxford: Oxford University Press.


Kim, E. (2016). Curative Violence: Rehabilitating Disability, Gender, and Sexuality in Modern Korea. Durham: Duke University Press.


Schuller, K. (2018). The Biopolitics of Feeling: Race, Sex, and Science in the Nineteenth Century. Durham: Duke University Press.


TRANSCRIPT:


DEEPYCUB:

Hot takes with the Good Robot. Hot takes with the Good Robot. Hot takes.


KERRY:

In this episode, we chat about coming back from summer break, and discuss a research paper recently published by Kerry and the AI ethicist and researcher Os Keyes. It's called "The Infopolitics of Feeling: How race and disability are configured in Emotion Recognition Technology". We discuss why AI tools that promise to be able to read our emotions from our faces are scientifically and politically suspect. We then explore the ableist foundations of what used to be the most famous Emotion AI firm in the world: Affectiva. Kerry also explains how the Stop Asian Hate and Black Lives Matter protests of 2020 inspired this research project, and why she thinks that emotion recognition technologies have no place in our societies. We hope you enjoy the show.


KERRY:

Brilliant. We are back from our summer vacation. Hello and welcome back to our regular listeners, and welcome to any new listeners if you're here. So I'm Dr. Kerry McInerney, one of the co-hosts of The Good Robot Podcast, and I'm here with the wonderful Dr. Eleanor Drage. How's your summer been?


ELEANOR:

It's been good. I am writing a book, and so I've mostly been either doing that or stressing about not doing that. Oh, by the way, they say that Kant never left his home, which is why his philosophy is not much good. But also he didn't like going on holiday. The only thing Kant and I have in common is that we'd rather stay at home and then go for a walk around the block than actually move about.


KERRY:

But didn't Kant also throw awful parties? Wasn't this one of his trademarks? He had this huge social guide on how people should and shouldn't behave at parties. He was just seemingly the least fun person in the world.


ELEANOR:

Yeah, I'm also dour at parties. So we have that in common as well. Kant was just a real killjoy.


KERRY:

Kant's not a killjoy in the feminist sense of asking how we disrupt the operations of the ordinary, this kind of state or mirage of happiness that really disenfranchises people who are not happy because they're systematically oppressed.


He's just, I hate joy, screw joy, all we need is morals and rules and prescriptions. So anyway, but yes, my summer was a weird one. My husband got shingles. Do you know what shingles is? It's like adult chicken pox, but don't get it, basically. It sucks. So we watched six seasons of the BBC's hit show Line of Duty.


And then I unwillingly watched a whole season or two of Lost, the sci-fi TV show from the early 2000s that I had never seen. And it stressed me out so much. I was like, I can't exist in this state of suspended anxiety as I get more and more stressed on this island. So that was my summer.


ELEANOR:

Fantastic. I also think Lost is really anxiety-inducing. I watched it because I wrote a paper about it, under a duvet in the middle of the day, because I just couldn't face watching it at night. So yeah, I respect Will and his TV choices. What I've been watching is, what's it called, Palm Royale with Kristen Wiig. It's really fluffy. I had a difficult couple of weeks and that was just a real lifesaver. It's got Ricky Martin in it. It was delightful. So I am very excited to be asking you some questions about the really cool research that you've done.


For a long time we've been thinking about emotion recognition, this weird phrenological way of trying to use AI as something that can observe emotions as well as a person can. So can you just tell us a little bit about it, how you got interested in it, and maybe the collaboration as well with Os, who's a terrific writer and researcher.


KERRY:

I first started thinking about emotion recognition technology, or this wave of AI technologies which claim to be able to sense human emotion, back in 2020.


It was for two reasons. The first is that I do think there was a particular hype wave at that time around these emotion-reading technologies, in the same way that right now there's been this huge wave of hype around large language models and then around other kinds of generative AI, like image generation.


At that time, I think there was a particular moment where there was just so much interest in whether or not these technologies could actually, accurately discern what we were feeling at any given time, and what they would do for a more emotional or humanizing experience as a user of technology. That's something I definitely want to come back to. But the other reason why I was really interested in these technologies was because it was also just following the wake of the Black Lives Matter protests in response to the murder of George Floyd, and also in response to the Stop Asian Hate movement and the rise of anti-Asian racism during COVID. And so something that I was increasingly interested in was the relationship between racism and emotion, because at that time, both in person and online, debate about race and racism was incredibly emotionally fraught. And so we started seeing a lot of things being posted at that time about how people were feeling about race, how they felt about racism and about experiencing it.


Or, particularly for white folks who maybe hadn't really thought about this a lot, having to deal with it for the first time. So it was just, emotionally, this incredibly heady atmosphere. And so this research is, I think, a way of trying to grapple a little bit with that heady atmosphere and saying, okay, clearly emotion is not this kind of neutral, value-free experience.


Clearly we can see all these different ways in which like emotion itself is really tied to race and to power. And that must be the same for these technologies.


ELEANOR:

I've been reading a lot recently about how value-laden objectivity is. We think that to be objective is the opposite of being subjective, and that objectivity means no prejudice, no emotion. And feminist philosophy has done a lot to say there is no objective way of looking at emotion, because indeed your emotions, your feelings, are very much part of the act of looking at something. Why is it that in this present moment, people are seeking again to look at emotion in this way?


KERRY:

I think this history of objectivity is really pertinent here, because one of the big selling points of these technologies is the idea that a machine can read a human's facial expressions, and their emotional expression in particular, more accurately than the human eye can. And we talked about this a lot in our previous hot take on the use of AI for hiring, and how video AI software is sold on the idea that these tools are able to observe things that we can't observe, and so they can hire more fairly and more accurately. A lot of these tools originally had emotion recognition capabilities built into them, where they'd say, oh, we can tell if your candidate is genuinely excited, for example, to be applying for this job, if they really want it. Which, first, is super noxious, because we all apply for jobs to get money, because that's how we survive in a capitalist world. But second, I think it also attributes far more observational power to these tools than they have. And so one major critique that's been made of emotion recognition technologies is exactly, as you said, this idea that there are actually not these base, standard emotions that are expressed consistently across space and time, and particularly across contexts.


So a lot of these tools are based on Paul Ekman's theory of six basic emotions and on the claim that machines can read and understand them. I think we all know that the way that we feel is not always how we facially express these things. So this idea that if we're feeling happy or we're feeling sad, we're always going to look like Paul Ekman's exact idea of what a happy or sad person looks like is pretty patently false.


So as an example of this, I have a very dear friend who loves going to gigs and concerts. I love going with him. But he always just stands there like this, with his arms crossed, and doesn't smile at all. And he pre-warned me about this before I went, and we always go to Lorde because I love her.


We both love her. And before we went to Lorde's Melodrama tour, he was like, oh yeah, I don't really smile during concerts, but I am enjoying myself. And I'm glad he pre-warned me, because otherwise I would have been like, does he hate this? But some of us are very emotionally expressive according to these norms.


Some of us aren't, and that's okay. But yeah, I do think it says a lot about this desire not only to have an objective read on someone, but also to categorize people and their expressions into this pre-codified set of emotions, because our computer vision technologies don't do well with ambiguity.


They can't make these subtle inferences. They need very clear categories to work. And that's just not how humans work.


If I can add to that really quickly as well, one of the things that Os and I do in this piece of research is we try to historicize this taxonomy of emotion and specifically why it developed and its effects.


Because something we see, very importantly, in the field of AI is a way of trying to look back and say, okay, how were ideas like intelligence and rationality created and used to racialize people and to impose these colonial models of control? To say that the white person is hyper-intelligent and rational and can make these objective decisions, whereas people of color can't do that.


That's a really important history, but I think there's a second history that maybe is less looked at or that people are less attuned to, which is how we were also developing, at that time, this emotional repertoire, or this way of saying there are right ways of feeling and expressing emotion. And if you don't do that, then that's a sign of your own lack of modernity.


And so in the paper we draw very heavily on the work of the historian of science Kyla Schuller, who wrote a fantastic book called The Biopolitics of Feeling, where she looks at what she calls the impressibility sciences, this range of scientific activity which is all about seeing how the body senses and feels.


And it was deliberately used to try and place people on this evolutionary spectrum, to say that people of color don't feel and express emotion the way that white people do, and that this is a sign that they're less civilized. So one example she gives of this is specifically how Chinese people were portrayed as being insensate in different ways.


Or to say that they don't feel physically the way that white people do, and that makes them really good laborers under capitalism. And so obviously you can see there's a big political rationale for this, because this is also the same time that we're seeing the mass expansion of the indentured labor of Chinese and Indian workers.


So they had a big stake in saying, these are people who are just born to work, and born to work under really awful conditions. I think it's another sign of, a, how all these scientific fields are very much not neutral, they're highly political, but also, b, that emotion itself, often portrayed as this really happy-clappy thing, is again very much part of these scientific regimes of control.


ELEANOR:

Absolutely. And obviously there's this huge disability element to this as well. My best friend has bipolar, and she's always on holidays, and we went traveling together for a month, and I'm pretty sure she didn't like any of it, and then at the end she was like, this is the best holiday I've ever had. And we were like, really? Because we didn't sense that at all from her. All of us experience this. I can tell when I come back from a social event and I'm tired from smiling or whatever, and I'm tired because of the smile I'm putting on so that I can exchange this code, this way of communicating with others, where I'm saying, I am enjoying being with you. But if we could express that in a way that didn't feel so physically exhausting, that would be much better. And I think this really contradicts all that Ekman was trying to say about what it means to be human.


KERRY:

Yeah, exactly. And I think you raised a really important point there, which is, a lot of our social capital, the way that we move through the world relies on us being able to read and understand these social codes.


And there are a lot of people who, for various reasons, really struggle to do that. And this very much intersects with disability, as you say, and with race, with gender, with class. Because certain people, particularly marginalized people, particularly women, particularly people of color, are also expected to do a lot of this emotional labor of keeping those situations safe and happy. So we talked a little bit about the feminist killjoy: one of these killjoy techniques that people like Sara Ahmed or Xine Yao talk about is people just refusing to do that work, or saying, actually, I shouldn't have to make you feel happy for me to stay safe, which is often what that equation boils down to. And I'm sure that's something we've both experienced when moving through the world: okay, I have to put on the smile to ensure that I stay safe, everyone stays happy, and we're able to continue on and live alongside one another.


ELEANOR:

I have an anecdote about a researcher that I used to work with who I loved a lot.


She was brilliant, a really funny, humorous person, but she disagreed with having to smile in photographs that were sent back to our funder, which was the European Commission. And it meant that there were like 15 of us smiling and then one person just not smiling, and it made everyone feel incredibly uncomfortable.


It just shows what happens if you don't conform to the way that people expect you to respond emotionally, or don't show emotion in the way people want. Really, what the European Commission wanted to do with that picture was put it in a booklet or something, and now that photo was unusable.


I think that's just absolutely fascinating, that discomfort. People were like, just smile, what's wrong with you? Just smile, just this time. People had that really visceral reaction to someone choosing not to give them what they wanted for the purpose of publicity.


And by the way, there were so many things wrong with the project. And I think that's the stand she was taking: there were so many things that hadn't been resolved, so many funding issues, administrative issues, the poor treatment of some people. I think it was her way of saying, this is not all perfect and I cannot smile to justify it.


KERRY:

And I think that's really important. I think discomfort is the exact right word. And this is something Os and I talk a lot about in the piece of research itself: we're looking at how a lot of these emotion recognition tools are sold not only as providing this more objective eye on a situation, saying we can read emotions better than you, but also on the idea that, as a person, you must want to feel certain ways and you must want your technology to express and read emotions in certain ways.


And that if you don't, then there's something actually wrong with you. And so this is a classic example of what we see a lot in the tech industry, which is us having to change and fit our behavior, to mold ourselves to how a technology is designed, often around a white, male and neurotypical person, rather than saying, actually, what happens if we design our technologies so they accommodate all the different ways people do or do not express emotion? Or let's step back and say, actually, do we want this in the first place?


Because, and this is again another Sara Ahmed-style insight, when something causes discomfort, like your colleague refusing to smile, that person gets seen as the cause of the problem. The problem is that your colleague is not smiling, not that things have actually gone really wrong in the project and she's unhappy about them.


And we see the same kind of thing in the construction of certain emotion recognition technologies, where people say it's not that the technology doesn't work, it's just that you don't smile in the right way, or, in the most racist iterations of it, your face is just not right and it can't see you, which was a massive problem in machine vision, particularly at this time, when a lot of these technologies were coming out.


Which is ludicrous, because, again, people's faces just look like that. And it also plays into this much longer racist, ableist history of saying, you have to express your emotions according to the way that white or neurotypical people experience emotion, or we're going to have to do something about it.


You're going to have to change that behavior.


ELEANOR:

So tell us about Affectiva.


KERRY:

Yes. So Affectiva was the firm that we focused on for this paper. There are a lot of people in this space, but we chose Affectiva because it was one of the most famous, and because it actually came out of, in part, Rosalind Picard's Affective Computing group at MIT, which in the 1990s was really spearheading a lot of this research around technology and emotion.


And so in this paper we mainly focused on the autobiography of one of the founders and the company's CEO, Rana el Kaliouby. I think the company has actually been sold in the last few years. But at the time in 2020, when we started this research, Affectiva was the big name in emotion AI, and Rana el Kaliouby was, again, a really big name in this field, and her personal story was really compelling. She came from Egypt, she'd gone to do a PhD at Cambridge, where she fell into this area of thinking about emotion and technology, traveled to MIT, and became an American citizen. And what we argue is that her story is, in one way, very empowering, because computer science is notoriously a very white and male-dominated field.


But on the other hand, when you read into the way that she developed these tools and the way that she thinks about emotion and technology, there's something quite ugly or noxious in that. And we particularly focus on the way in her autobiography that she imagines two groups of people. So the first is how she thinks about autistic people, and the second is how she thinks about Chinese people and her experiences in testing these technologies on Chinese people.


And that was partly just due to Os' personal interests. Os has written a lot on autism. They've written a piece called Automating Autism, which is fantastic. And we'll link it in the reading guide to this episode where they talk about how autism is imagined amongst AI communities.


And then my area is in part Asian American studies, and so of course I was interested when she had this whole specific section in her autobiography around the challenges they had marketing these technologies in China. But there was a second reason why we focused on these two groups, and that's because both of them have been really stereotyped or put down in certain ways specifically relating to emotion. So in the case of autistic people, this idea of, oh, you don't read social situations well, you don't know how to express yourself according to these neurotypical norms. And in the case of Chinese people, this trope of the so-called inscrutable Oriental, or this idea that Chinese people, that we don't show emotion the way that white people do. And advocates from both communities argue that either these stereotypes are completely false, or they involve a huge amount of stereotyping around the idea of what good emotional expression looks like. So we wanted to interrogate them, and Affectiva's approach to them, in the paper.


ELEANOR:

Why has her story not been received with as much criticism as you might have imagined?


KERRY:

So this is tricky. Part of it is, again, that I think people, for obvious reasons, do not want to shoot down, in the case of Affectiva, two female founders, Rosalind Picard and Rana el Kaliouby, who are probably doing great computer science and who are very well respected in this community.


And also, again, they're women in a field where there are very few women. el Kaliouby does talk about all the prejudice and the difficult experiences she had, particularly as a hijab-wearing young Egyptian woman moving through this field. But at the same time, I think throughout the autobiography, Rana el Kaliouby is very much employing this rhetoric of


liberal individualism, of becoming American, becoming free. She's really into this particular idea of what being American means, which I think is very interesting. It's very much told as this self-realization story. And on the other hand, I do think a lot of it comes down to the way that the book treats people with autism, or autistic people, which is pretty awful. It's a really hard read.


I wouldn't recommend, to be honest, buying or reading this autobiography.


ELEANOR:

So she chooses Chinese people, and when we read that we were like, oh, here we go, choosing Chinese people as the archetypal others. So can you tell us a little bit about how you responded to the way that she treats Chinese people in relation to emotion? And also about Os' work on autism.


KERRY:

Yeah. So if we actually trace it back a bit further, I'd say the core rationale behind Affectiva's whole project, which el Kaliouby is incredibly transparent about throughout the autobiography, and is quite proud of, is this idea that she was inspired to start making affect recognition technology because of stories she'd heard about people who had autism. And she was arguing, 'I heard or understood from what people told me that autistic people don't feel emotion.' And so she invokes a lot of these really harmful stereotypes that we hear about both autistic people and, in certain circumstances, about Chinese people: that they're mathematical and they're cold and they're computational and they don't feel emotions.


And so from the very, very start, Affectiva is positioned as a curative thing, saying, what happens if we can use technology to try and quote unquote fix autistic people? And so Os draws on the work of Eunjung Kim to frame this as a kind of curative violence. So from its very beginning, Affectiva takes the position that there's something


wrong with autistic people and that technology can intervene here. And I would argue this is wrong on a lot of bases. The first, again, is this idea that there's something inherently wrong about being autistic. It's very clear throughout the book, and this is something that Os pointed out, that no autistic person actually speaks in any form in this book.


It's always very secondhand, through researchers or family or friends. Second, the tools are based on the work of Simon Baron-Cohen, who is not only the cousin of Sacha Baron Cohen, the actor, but a professor at Cambridge who is very famous for his research on autism. But he's also heavily critiqued by a lot of autistic advocates for the way that he frames autism as being, quote unquote, the extreme male brain.


So he takes this very gendered, very ableist approach, saying that if women are more emotional and relational and men are more rational and computational, then autism is an anomaly where we've gone too far the male way. And again, this is a widely critiqued position, particularly among gender studies scholars, for the way it involves this very intense gender stereotyping.


But what's interesting is that el Kaliouby actually trains her initial model off Simon Baron-Cohen's dataset of emotions. So from the very beginning we see it being built on this idea that not only is this curative in some way, but, even on the actual technical side, it's being built from a database that has its roots in an approach to autism that is very much not appreciated among advocates within that community.


I think basically from there, it's like really difficult to think about these emotion technologies as being anything but trying to force people to feel in a certain way.


A point that Os and I wanted to make was that it's not that Affectiva treats autistic people and Chinese people in the same way, but there are interesting stereotypes about emotion coming through. So in one chapter of the book, and it's a much smaller focus than the focus on autistic people, el Kaliouby says they were really panicking because they tried these technologies in China and they didn't work there, for the very obvious reason that, again, a lot of these tools are not going to work in every context and you can't expect to just roll them out and have them be fine.


And so then el Kaliouby argues that, basically, oh, the reason they weren't working is because Chinese people were putting on a face, or they weren't reacting in the way we expected them to when there were other people in the room; so to get them to react the way we wanted, we had to pull everyone out of the room. She links it to different social norms around emotional expression. And yes, there are different social norms around emotional expression, and that is not really a problem, but this links to this much longer history of stereotyping: again, the inscrutable Oriental, or this idea that Chinese people always have a face on that other people can't read.


It's also linked to these ideas of communitarianism, these ideas of not being an individual. And so in the context of el Kaliouby's story about herself becoming this fully fledged individual American citizen, it reads a little bit jarringly.


ELEANOR:

And what would you like to see from emotion recognition technology going forward? Or do you not want it to exist at all? What's your hot take?


KERRY:

So my hot take is: I'm not going to get sponsored by an emotion recognition company anytime soon. Yeah, I'm incredibly skeptical about whether or not any of these face-reading technologies around emotion should exist at all.


I just personally have never seen a good use case for any of these. I don't think they work. Groups like AI Now, for example, have said that these are, frankly, just pseudoscience, and there have been a lot of calls for red lines around this particular application of technology on the basis that there isn't enough scientific evidence to suggest that they work.


And so ERT, or emotion recognition technology, is actually the prime example I give when people ask, is there anything you just want completely banned? And I would say that, because there's no scientific basis, but also because I just don't think there's a strong enough case for investment. If you're thinking about how to spend your money, and if there are any venture capitalists listening, which I doubt, this just isn't an area that we should be investing in. It's important that we're directing our money and our time and our resources to technologies that we actually think are good and do good in the world. And there's really only been one application that has taken off that isn't the very dubious use of these technologies for border control and policing, and that's marketing, which is what Affectiva ended up selling into.


They dropped the whole 'fixing people with autism' line very quickly and immediately turned themselves into a marketing firm once they realized that was the way to be profitable. So we're not even really seeing good applications of these technologies.


ELEANOR:

There you go, venture capitalists: invest in oncology treatment and the ML applications there.


Pack up and go home with your emotion recognition. Thank you so much, Kerry. And it's an amazing read. So congratulations.


KERRY:

Thank you. And I will link it as well on the website. So if you'd like to read it, it is open access, which in academic terms means you do not get stuck behind a very expensive paywall.


So you should be able to read it, and yeah, definitely let us know: have you ever seen a good use of this technology? If so, I stand to be corrected. Let us know in the comments. And also, we really love to hear from listeners, so if there's anything you want to hear our hot take on, please do reach out to us via our website, www.thegoodrobot.co.uk. We're also on Twitter, Instagram, TikTok, a lot of different platforms. So check us out.


DEEPYCUB:

Hot takes with the Good Robot.
