
Hot Take: Can AI De-Bias Hiring?

Welcome to our third episode of the Good Robot Hot Takes. Every two weeks Kerry and Eleanor will be giving their hot take on some of the biggest issues in tech. If you're a graduate or a jobseeker, this is the episode for you, because this week we talk about AI that's being used for recruitment. That's right: AI is being used to assess your performance in an interview. In fact, companies are claiming that their tools can read your personality by looking at your face, and that this can strip away a candidate's race and gender. We hope you enjoy the show.



READING LIST:


Drage, E. and Mackereth, K. (2022). Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference". Philosophy & Technology. https://link.springer.com/article/10.1007/s13347-022-00543-


Bourabain, D. (2020). Everyday sexism and racism in the ivory tower: The experiences of early career researchers on the intersection of gender and ethnicity in the academic workplace. Gender, Work & Organization. https://doi.org/10.1111/gwao.12549


O'Neil, C. (2016). Weapons of Math Destruction. New York: Crown.


Ahmed, S. (2012). On being included: Racism and diversity in institutional life. Durham: Duke University Press.


Walcott, R. (2019). The End of Diversity. Public Culture, 31(2), 393-408.


hooks, bell (2014). Black Looks: Race and Representation. New York: Routledge.


TRANSCRIPT:

ELEANOR DRAGE:

Welcome to our third episode of the Good Robot Hot Takes. Every two weeks, Kerry and I are gonna be giving our hot take on some of the biggest issues in tech. This week we're talking about Recruitment AI. If you are a graduate or a job seeker, you may have encountered AI used to do a video interview.


This means that AI is being used to assess your performance. But more specifically, a lot of companies claim that AI can strip your gender and your race from your candidate profile during the course of one of these AI-powered video interviews. Do you think that this is legit, or a bad use of technology?


Hmm. Well, listen to the episode and you can decide for yourself. We hope you enjoy the show.


KERRY MCINERNEY:

Hi everyone. Welcome to our latest Good Robot Hot Takes. Thanks so much for all the love you've been showing our Hot Takes format over the past few weeks. So today I'm finally here in person with Dr. Eleanor Drage, which is exciting cause it's very rare. We both travel quite a lot for work, but it's very nice to get to finally do this together.


So today we're gonna be talking about something which I'm sure has impacted all of you, and certainly has impacted us, which is the domain of hiring and recruitment, but specifically how new technologies, and particularly AI-enabled ones, are getting employed in this domain. But just to kick us off, Eleanor, what is the worst job you've ever had?


ELEANOR DRAGE:

I've had a lot of bad jobs because academia wasn't my first job. I've had some kind of gross jobs. Like I was working at the Olympics, putting my hand down these massive vats and unclogging drains. And I had some pretty bad jobs working in tech, in incubators with small companies that were really going nowhere.


I thought I wanted to be a lawyer. My parents were really pushy, and then I had to bail on law school like a day before. They were really upset about that. So I feel like my career has been really messy and not very organized, and I've done lots of stuff I've really, really hated, like law internships.


KERRY MCINERNEY:

I think that's really helpful for people to hear though.


Cause I feel like, I know I grew up thinking, oh, there's one thing you're gonna wanna be: you wanna be a firefighter when you're four and you're gonna be a firefighter. So I think it's nice sometimes to hear how people fumbled their way around lots of different things and then ended up here.


ELEANOR DRAGE:

Yeah.


I was giving a talk recently, and they said, oh look, this is for people who have really aced the beginning of their career. And I was like, I have not aced the beginning of my career. It's taken me like nine years to find something that I remotely like. What about you? What's the worst job you've ever had? Don't say this one.


KERRY MCINERNEY:

Can you imagine? I'm like, you know what? Hanging out with this stinks... [laughs] Um, I'm trying to think. I've been pretty lucky with a lot of my jobs. I've had some very bad job interviews actually, like jobs I didn't get. There's probably a reason why I did not get those jobs.


So I can think of two off the top of my head: one which is entirely my fault, and one which I would say isn't my fault, and links a little bit more to what we are talking about today. So the worst job interview I had, which is entirely my fault: I went to a dinner which was being held by a big firm, which is what they do sometimes to court graduates, cause they want you to come and apply to their firm.


And I had kind of figured out a bit of the way through the dinner, I was like, look, I don't think this is a good fit for me. The corporate vibe doesn't seem right. And I was just in a stage where I was like, I just wanna go, I wanna leave this dinner. But I was like, oh, you know what, I'm eating here.


I hate food waste. So when the dessert came, I'm not proud to say, 21-year-old Kerry was very bold, not like me, I'm very neurotic and anxious. So I put the whole dessert in my handbag, said goodbye to them, and then just left, bye, with my biscotti. And they were like, what are you doing? So I obviously did not get offered a job there, but that was fine.


We mutually parted ways and I got a biscotti and it was very delicious. So that went very badly. But the other one that went badly, which was less of an issue on my part, but which I think speaks to some of the broader things we're talking about today: I had a fully video interview with a law firm, where I had to answer some questions just speaking into a camera. I had been given them as prompts, and I found it so difficult.


I felt like I was just speaking into the void. I didn't have good answers to the questions. I had nothing to bounce off. You know how in an interview you can tell if you're really barking up the wrong tree. And it wasn't dehumanizing exactly, but I felt quite demoralized by the end of that interview.


And obviously I did not get called back, and good on everyone else who got a callback for that. It's fine. I love my job where I am now. I wanted to share that experience because I'm sure a lot of other people have also had that experience of being video interviewed or processed by AI and maybe not feeling like it was the best experience.


ELEANOR DRAGE:

There's something really important about being treated as a person, feeling okay with who you are, feeling seen in your job interview, right? So that's what we're gonna talk about today.


KERRY MCINERNEY:

Yeah, absolutely. So, Eleanor, do you wanna just kick us off and maybe tell us a little bit about AI in the hiring space for people who don't know that much about this?


So maybe like how these tools are getting used and why AI powered tools are increasingly popular?


ELEANOR DRAGE:

So we were interviewing people at a big technology company the size of Facebook, so about a hundred thousand people. And we were talking to people throughout the organization about the kinds of technologies they use, whether they're using AI or not, and how that relates to diversity and inclusion.


So what we're trying to do is connect ethical technology with attempts to make the organization more inclusive. And along the way, we talked to lots of people doing law, doing recruitment. And recruitment is one of those industries that hasn't innovated in a really long time, right? It's still very traditional. And now HR wants to have a seat in the boardroom, and they're trying to justify how they're modernizing, that kind of thing.


Meanwhile, hiring tools don't have to go through any regulation in order to be brought to market, though there are a few exceptions. Yeah.


KERRY MCINERNEY:

Illinois has an act about video interview tools, the Artificial Intelligence Video Interview Act, and, I think I might be wrong, but New York's bringing in a new local law specifically also around video interviews.


But again, that's only one quite small use of AI in this space. And even though things like the EU AI Act classify recruitment tools as high-risk, it's still a little bit unclear how that'll get enforced.


ELEANOR DRAGE:

So even though it's exciting to see some movement in this space, it's still pretty unregulated. That means the technologies being used to interview you, if you're a graduate, a candidate, a job seeker, are likely not to have been through proper regulation, because even though the EU AI Act is going to regulate hiring tools, since it classifies them as high-risk, that hasn't been brought into effect yet.


Now, please do write to us if you've had any experience of AI hiring tools.


But can you use AI to de-bias hiring? Kerry, what do you think?


KERRY MCINERNEY:

I mean, we're the chronic skeptics over here. But I think this is a really interesting part of these tools, right? A lot of us, I think, have had negative experiences either with recruitment or being in a job and feeling like, oh, actually people don't treat me like they treat other employees, or they treat me like I'm less than them, or there's just different standards for some employees compared to others. I had a wonderful friend, Dounia, who was a visiting scholar here at Cambridge, and she specifically used to look at women in academia, and specifically women of color, and the way that they often felt really micromanaged and really policed by their bosses.


And this really affected the way that they went through these institutions, to the point where a lot of them didn't wanna work in person, or they didn't want to be very engaged with the institutions, because they just felt they were treated so differently. And again, I think that resonates with a lot of us.


I know certainly that's something you've experienced as a woman in tech and in academia, and something I've experienced here in academia. And so there's something that sounds really promising about the idea of a tool that can hire fairly, that isn't gonna see some of these things about us that maybe a human recruiter would see and immediately think, oh, she's not gonna be good at that.


ELEANOR DRAGE:


This is an example of AI being used to solve deep-seated, difficult problems. It's really hard to make organizations more diverse, and so understandably people are looking to AI to solve that problem, right?


They're thinking, okay, maybe this tool that claims to be unbiased, or to not see race and gender, can solve this problem for us. And in a minute we'll explain how this is an example of techno-solutionism, which is this great term that describes how technology can be seen as the answer, as inevitable, as a way of bypassing all the difficult things it takes to make an organization more diverse: like childcare, like improving the company's or the organization's culture so people aren't discriminatory, like equal pay. There are lots of different challenging things that take a lot of investment, and it's understandable that people look to AI to bypass all of those problems.


KERRY MCINERNEY:

Exactly.


And so how are these tools actually claiming to do this? So we look at a particular subset of tools; again, we look at video AI hiring tools. And these tools claim that they don't see things like race and gender and other protected characteristics.


And in doing so, they create this level playing field. They strip back candidates to these neutral data points, like their attributes, or, in the case of the tools we looked at, their personalities. And then they assess people on that basis alone, which, again, sounds like a good idea sometimes. But at the same time, for me personally, I just don't think this is (a) possible, or (b) necessarily what people want. So to start with possible: when we are existing in institutions that are fundamentally structured by systems of power, like gender and race, to somehow say that you can remove that person's complex life experiences, and the complex experiences they'll have in the job, just by seeing them as a neutral data point, I think it's just impossible.


But also secondly, I don't think we all want to be neutral data points. My experiences as a woman of color are hugely a part of my work and who I am.


ELEANOR DRAGE:

Let's have some examples of these tools. What kind of tools did we look at? Cuz we don't look at all hiring tools, right?


KERRY MCINERNEY:

Mm-hmm. Yeah. So we looked specifically at three different hiring firms. So we looked at HireVue, which is a longtime player in this market. I think they were established in 2004, and they've been going for a long time, very popular. They did hit a hiccup in 2021, after the results of an audit by Cathy O'Neil, who wrote Weapons of Math Destruction.


Great book, definitely check it out; it'll be on our reading list. After that audit, they had to stop using their AI-powered video function because it was shown to be unreliable, or pseudoscientific, more on that later. But HireVue was one of the firms we looked at because, despite that hiccup, it really was and is one of the big players in this space.


And we also looked at a couple of others, one called myInterview, another called Retorio, all working in that video AI space. But you might hear us sprinkle in a few other tools, things like Censia, for example, which similarly tries to create more equitable hiring. It's important to note, though, that there's a really, really big range of different kinds of AI-powered HR tools being developed for the workplace.


Something, for example, we're not really gonna touch on today, but that's really important, is workplace surveillance. The AI Now report has a really good section on this; I'm gonna link that in our transcript and in the show notes if you want to know more about that. We are very much looking at this kind of first stage, the first hurdle, so to speak, and why we think some of these AI tools at that first hurdle can't really do what they say.


ELEANOR DRAGE:

Yeah, exactly. So we're not saying that AI tools that do hiring are all biased. We were just looking at this claim that they can de-bias hiring by stripping away race and gender, by only looking at personality. And we specifically look at video tools that use the Big Five, which is a kind of psychometric testing schema. So what are the Big Five?


Can I remember?


KERRY MCINERNEY:

I believe it's openness, conscientiousness, extroversion, agreeableness, and neuroticism, which spell OCEAN. And the idea is that all of us rank somewhere on those traits and that they're a good indicator of what your personality is.


ELEANOR DRAGE:

Yeah. And we don't wanna necessarily say that this doesn't work.


Although I feel like, for me, someone who is occasionally extremely introverted as well as being quite extroverted, I'm not sure how well I correspond to the Big Five. It very much depends on the people, how I'm feeling, the type of day. But what we are trying to look at is whether an AI can look at your face and allocate you a score for extroversion and conscientiousness and that kind of thing. And the way that these tools work is that they get people to annotate the data. They get a bunch of people to look at these images and videos, but sometimes stills, so just pictures of candidates, and ascribe them a personality score using the Big Five.


And then the AI is trained on that dataset, and the AI system bases predictions about a candidate's personality on the training data. So it's not that it can detect personality; it just uses its training dataset to predict what score a human annotator would likely have given that candidate for personality.
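To make that annotate-then-predict pattern concrete, here is a minimal Python sketch. Everything in it, the feature vectors, the data, the model choice, is hypothetical and of our own making, not any vendor's actual code; the point it illustrates is that the model can only echo what the annotators scored, biases included.

```python
# Minimal sketch of the annotate-then-predict pattern described above.
# All names and data are hypothetical; real vendors' pipelines are proprietary.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Step 1: "annotation" -- humans look at candidate images and assign
# Big Five (OCEAN) scores. Here we fake 500 annotated images, each
# reduced to a 128-dimensional face-feature vector.
features = rng.normal(size=(500, 128))               # stand-in for face embeddings
annotator_scores = rng.uniform(1, 5, size=(500, 5))  # O, C, E, A, N labels

# Step 2: train one regressor per trait on the annotators' labels.
models = [Ridge().fit(features, annotator_scores[:, i]) for i in range(5)]

# Step 3: "predict personality" for a new candidate. The output is not
# detected personality -- it is an estimate of what the annotators would
# have scored this face, with all their judgements baked in.
new_candidate = rng.normal(size=(1, 128))
prediction = [m.predict(new_candidate)[0] for m in models]
print(dict(zip("OCEAN", prediction)))
```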


KERRY MCINERNEY:

Which is just nuts because it's like, look at me like with my big ass nose.


Does that make me an extrovert or what? It's just so bizarre as a way of judging people. Yeah, and I think what's particularly disturbing, and ironic, is that this is meant to make hiring fairer. And yet if a human recruiter looked you in the eye and was like, I just think that person looked really neurotic, you'd be like, I'm gonna report you to HR; it would just be unacceptable.


ELEANOR DRAGE:

Yeah. And one of the interesting things about the study is that we looked at the things that you might be thinking about as we're talking about this: the old days where they showed pictures of criminals and thought, okay, if you have this kind of forehead, you are likely to be a murderer.


And actually, I might share the screen of that slide. Yeah.


What is going on here?


KERRY MCINERNEY:

Yeah, of course. So one kind of wide-scale claim about AI tools is that they can see things that humans can't, right? And I'm not trying to say that computers don't have certain kinds of computational skills that are more advanced than humans'. But I think there's this mirage around machine vision in particular: that somehow AI, because it has access to a huge amount of data, can discern characteristics about people, including, supposedly, our personality, from our faces, even if a human recruiter couldn't do that. But actually what these tools are doing is reproducing these histories of racial pseudoscience. And we see that in really widely critiqued projects, like algorithms that claim to deduce your sexuality from your face, or criminality from your face.


But I think we also see it very much in hiring. So Katherine Blackford, who's here on the left, was a pioneer in the field of human resources, and in the early 20th century she made this brochure or pamphlet which argued that you could tell if someone was gonna be a good candidate or a good worker based on their face.


And this sounds, again, extremely weird, extremely inappropriate, very much of the resurgence of physiognomy and phrenology that characterized the early 20th century in the US, and yet we're still effectively relying on the same idea today in these AI-powered tools.


ELEANOR DRAGE:

Yeah, so you've got this scary selection of pictures on the right.


'If you're a good judge of character, I can make you a better one,' says Katherine Blackford in the Science of Character Analysis. I know these days we're a little bit more specific about what we consider science. In fact, the word science has always been a sign of where power lies.


And in this case you have this white woman explaining that different kinds of noses and profiles mean that people have different personality traits. Now, we're really skeptical about the idea of a face-value interpretation of anything, of character; don't judge a book by its cover, et cetera, et cetera.


But this was science back in the day.


KERRY MCINERNEY:

Yeah. And I think what's really disturbing about these kinds of tools is that they can re-legitimize forms of pseudoscience that have long been discredited. One of the big reasons why there was such a pushback against phrenology and physiognomy, despite them being really popular in the early 20th century, was their use by Nazis to profile Jewish people and other kinds of minoritized people in that context, and to commit huge amounts of violence and genocide against those groups. And so there's a really violent history here that I think some of these tools are not grappling with. Even if they are somehow really well intentioned, even if they are claiming that they're going to be able to somehow magically de-bias hiring, what they're ultimately doing is putting forward a set of false logics that draw on these much deeper, darker histories of racism. And they also don't even work. Let's just start with that, actually, which is: I don't think you should be allowed to sell complete BS.


Do you remember this Kylie Jenner lip kit? Whenever that was.


ELEANOR DRAGE:

Yeah. Yeah.


KERRY MCINERNEY:

The lip liners, lipsticks.


ELEANOR DRAGE:

Are they good? Have you tried?


KERRY MCINERNEY:

I've never... I've tried them in Boots, where I swatch them and then sadly don't buy them, cause they're really expensive.


But what really pisses me off is that Kylie Jenner for ages sold these lip kits being like, this is what gave me my big lips. The lips very obviously were created with lip filler, right? And now I don't think they would deny that, but at the time they were marketed and sold as, this is how you can look like me.


And I just don't think you should be allowed to do that, right? I think that was misleading advertising by a long way. And it also probably influenced heaps of young women and girls to think that all they need to look like that is a product. When we think about hiring technologies, there's just such a fundamental mismatch between what HR professionals think they're buying and what companies are saying their products can do.


And I think that that itself also creates very, very harmful cultures.


ELEANOR DRAGE:

Yeah. It's about companies being honest about what they're selling. Is this proven science, as in, have there been papers written about it? Is there a kind of consensus in the field that these tools work? And if the answer is no, then they just really need to make that clear to people buying these tools.


KERRY MCINERNEY:

Yeah, exactly. And it doesn't make sense, really, that we are buying these tools when we are like the beta generation; we're the ones who are the testing ground for a lot of these new tools.


And whenever something goes wrong, it's like, whoops. But just to show how ridiculous and how spurious I think a lot of these tools are, and the claims they make about our faces: Eleanor, you actually worked on a really interesting model, or project, based on that very idea, right? Trying to show the fallacious logics behind these tools.


ELEANOR DRAGE:

I love the word fallacious. It's great, isn't it? Yes. So I worked with some second-year computer scientists at Cambridge, and I wanted them to replicate one of these tools and see whether it did what it said on the tin. So they took one of these tools that was open source and that claimed to be able to detect personality from still images.


And you can see the blue line is the score for the original picture that was taken of me, and the green line is the score I got for personality after I changed the contrast and the brightness of the image. So that's a weird correlation between the image and the score that you get for personality.


And it just showed us that, unfortunately, these tools don't yet do what they say on the tin.


If you want to use this for yourself, we'll link it in the show notes and you can see it there.
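If you're curious what that perturbation test looks like in practice, here's a rough, self-contained Python sketch of the idea: score the same image before and after a brightness or contrast change and compare. The `score_personality` function here is a hypothetical stand-in we wrote so the sketch runs end to end, not the actual model the students probed; swap in whatever open-source model you want to test.

```python
# Sketch of the perturbation test described above: if a model's
# "personality" scores move when only brightness/contrast change,
# it is reading the image, not the person.
import numpy as np
from PIL import Image, ImageEnhance

# Stand-in for a candidate photo so the sketch runs without a file.
rng = np.random.default_rng(0)
original = Image.fromarray(rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8))

def score_personality(image: Image.Image) -> dict:
    # Hypothetical stand-in for the model under test. It derives "scores"
    # from mean pixel brightness -- exactly the failure mode the real
    # experiment exposed. Replace this with the actual model's scorer.
    mean = np.asarray(image.convert("L")).mean() / 255
    return {trait: round(2 + 3 * mean, 2) for trait in "OCEAN"}

brighter = ImageEnhance.Brightness(original).enhance(1.4)  # +40% brightness
contrasty = ImageEnhance.Contrast(original).enhance(1.4)   # +40% contrast

for name, img in [("original", original), ("brighter", brighter),
                  ("more contrast", contrasty)]:
    print(f"{name:13s}", score_personality(img))
# If the rows differ, the tool is scoring pixels, not personality.
```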


KERRY MCINERNEY:

Yeah, and I'd really recommend giving it a try, because again, this sounds very airy-fairy or ridiculous, like, oh my goodness, why would someone make that tool? Haha.


But the point is, we are talking about this because these tools are really widely deployed. HireVue was a hugely popular tool until 2021, and it was being used by so many different big firms for their entry-level hiring. On that point, I also think that it's really important to acknowledge that the people who are most likely to be processed by these tools are going to be people in entry-level positions.


Yeah, it's gonna be people coming in at the bottom of the chain. Often it's things like fast food chains, for example, mass-processing people with AI-powered tools. The people getting processed by AI are not gonna be the CEOs; they're not gonna be the people who hold a lot of power.


It's gonna be people trying to come into the workforce for the first time being judged according to the whims of these algorithms and these machines.


ELEANOR DRAGE:

Yeah. And so this means that if you are a graduate, you are in a particularly vulnerable position. So what we are calling for is for graduates to know their rights in relation to these tools, and to know that they won't be discriminated against if they ask HR to give them a human option.


And that, on the other hand, also means getting companies to guarantee that if you don't want to go through one of these hiring tools during your recruitment process, they won't discriminate against you.


KERRY MCINERNEY:

Yeah. So something that we're really working on, then, is thinking about what a meaningful opt-out option looks like, and this is something that the Illinois Act, for example, calls for. It says candidates need to have a meaningful opt-out, and like you said, they need to feel that they're not gonna get discriminated against. On the one hand, I think it's particularly important if, for example, you're disabled or you're neurodiverse, or if there are many other reasons why you don't wanna be processed by AI. On the other hand, I also think it's important to know that we all have that right. And if you simply do not trust the AI tool that you are being processed with, you don't have to do it. And that really matters.


ELEANOR DRAGE:

Ironically, universities are now training candidates to beat the tools, to game the system.


And of course, if some universities are offering that but not all universities are offering that, it means that some people are in a better position to be able to handle these video interviews. You can imagine that universities with more resources, and candidates with more resources, more time, computers at home, can spend more time training themselves, and this means that it's even more unfair.


KERRY MCINERNEY:

Exactly. Yeah. And I also think that the whole thing about being trained to perform well in a video interview completely throws overboard the claims these video companies make about their tools being able to see your personality, because you shouldn't be able to make your personality that much better, whatever they're judging 'better' to be.


Cause I still don't know how open they want you to be or whatever.


But the point that you can somehow change those core scores about yourself just by taking a few lessons on how to smile a bit more, or by having better lighting in your house: that's wild to me. Yeah.


And also, on a side note, I don't even think OCEAN's a very good measure for employability. I was talking to my sister-in-law, who comes from a psychology background, she's now a therapist, and I gave a lecture for her class at the University of Miami. And I remember her being a little bit confused, because she was like, oh, why do these tools use OCEAN? Because as someone from a psychology background, that's not what we would use to try and assess someone's suitability for a job. It's not an adequate metric. And I think this is the issue that we see with a lot of these algorithmic tools in a lot of domains: they just take a model and automate it.


They're not necessarily taking a model that works and automating it, or even the most up-to-date, high-end model; they're just taking something that's easy to quantify and turning it into a tool.


ELEANOR DRAGE:

Yeah. There's no evidence to suggest that hiring tools can do this in a bespoke way with each company.


It's difficult to imagine that the tool would change significantly when it was working with company A or company B to produce a different idea of the ideal candidate. But this idea of the ideal candidate, or culture fit, has long been debunked, right? I know one person who works in something like shipping, and they take candidates to the pub and see if they can also ski and stuff, this kind of hideous culture fit. And we know that it's a euphemism for racial, class, and gender discrimination. I certainly wouldn't want to be a culture fit in that kind of company. It sounds awful.


So if you think about how these tools work, how they collect the keywords of candidates when they speak and see whether those keywords map against the keywords that have been spoken by whoever is their ideal candidate: then you just have people who talk the same, who use the same language, who are expressing themselves in the same way. And that's a very superficial way of judging whether someone can do the job or not, and might not actually be an indication of how valuable they are, or of what they can bring that may be different or unique to the job.
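To see why keyword matching rewards people who talk the same, here's a tiny illustrative sketch. The scoring rule is our own hypothetical simplification, not any vendor's actual algorithm: two answers with comparable substance score very differently depending on whether they echo the "ideal candidate's" vocabulary.

```python
# Toy illustration of keyword matching: candidates are scored by how many
# "ideal candidate" keywords they echo. A hypothetical simplification.
ideal_keywords = {"synergy", "stakeholders", "deliverables", "agile", "leverage"}

def keyword_score(answer: str) -> float:
    # Fraction of the ideal candidate's keywords that appear in the answer.
    words = set(answer.lower().replace(",", " ").replace(".", " ").split())
    return len(words & ideal_keywords) / len(ideal_keywords)

corporate = "I leverage agile deliverables and align stakeholders for synergy."
plain = "I get teams talking early so projects finish on time."

print(keyword_score(corporate))  # 1.0 -- talks like the "ideal candidate"
print(keyword_score(plain))      # 0.0 -- same competence, different vocabulary
```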


KERRY MCINERNEY:

Exactly. Yeah. And I think that it's so apparent to us that, culturally, there are such different ways that we express ourselves. There are such different norms about how we talk, what's considered appropriate and what's not considered appropriate, right? My family is, very broadly speaking, Cantonese.


It's partly why these clocks freak me out. I'm a very bad Chinese person, very diasporic, but it's very bad luck, I feel, to have this many clocks staring at you, telling you your time's running out.


I feel like Chinese families are very interventionist; they will tell you a lot of things that other family members probably would think are very rude.


And this is the same with job candidates, right? In an interview, there are some things which I'm sure are very acceptable to say in one place but aren't in another. And yet what happens when these tools are scaled up across borders and you have a company like HireVue processing people in New Zealand or Australia or elsewhere, right?


Like, I'm really skeptical that those tools are gonna be able to pick up on those cultural nuances, or even just on how people vocalize things. Like in New Zealand, I can't remember the technical name for it, but you end every sentence like this, so it goes up at the end, every sentence, whether or not it's a question. And it's something I had to train myself out of.


Cause people in the UK were always like, in that really snotty way, is that a question or a statement? And I'd be like, it's a statement. But even just things like that: would the AI think, oh, she's really uncertain of herself, or is she questioning things?


ELEANOR DRAGE:

Yeah, we just don't know because again, there has been no science.


KERRY MCINERNEY:

Where is the science? Yeah. But that's also the scary thing: they say that these tools are really science-backed, and they make these white papers, and they make a lot of stuff that looks very sciencey, and yet they don't really have meaningful facts and figures to back anything up.


ELEANOR DRAGE:

Yeah. White paper sounds very official, like a government white paper, or very sciencey. A white paper is usually a public statement about science, about in-depth research, and in fact it's just a marketing brochure expressed in the language of science, which is definitely misleading. Definitely one for you, the Advertising Standards Authority, if you're listening; we really think that the ASA, as they're known, should be equipped to crack down on this kind of absurd way that AI tools are now marketing themselves. So wait, let's go back to gender studies and critical race theory.


Who are the philosophers that inspire us when we come to thinking about what personality is, what race and gender are, and how they figure in the hiring process, why they can't just be stripped away or removed?


KERRY MCINERNEY:

Hmm. That is a great question. I think some of the people who were really influential for me in this project were definitely people like Sara Ahmed, people who are really critical about the idea of diversity and how institutions weaponize it, because again, I think a lot of these tools are being deployed to meet very well-meant diversity, equality, and inclusion goals.


At the same time, though, I think that this focus on hiring in, say, a lot of people of color, without actually listening to the people of color in your organization or understanding why they don't stay, to me is really pointless. It just sets you up for more failure. And I'm not saying that all departments who use hiring tools are doing that, but this brings us to that techno-solutionism point that you mentioned, which is simply that it's like trying to slap a bandaid over what's a much deeper wound; unless you address poor experiences at a place, you're not gonna start to bring about the kind of healing you need to see. What have you been chewing over from gender studies that's brought you to think about hiring in a particular way?


ELEANOR DRAGE:

The amazing bell hooks, who is a very famous Black American scholar, talks about ideologies of colorblindness. Now, that may seem a little complicated, but we've all heard someone say, I don't see color, and that's the idea that these tools are trying to bring to life: just close your eyes, and the problem will be solved.


But the problem isn't solved by closing your eyes, because people don't want to be unseen. They don't want elements of themselves to be scraped away. We know it's not possible to just not see color. I love my mom, but I have heard her say that.


KERRY MCINERNEY:

we all have a relative, and you're like, oh.


ELEANOR DRAGE:

So what we want these tools to do is just to align with tried and tested diversity and inclusion strategies, and really good hiring, where, if you have an assessment day, the people who are running the assessment day are representative of the kinds of people that you are looking to employ. So: representation, being seen for who you are, knowing that someone's personality has a lot to do with their race, their class, their experience of disability, all these things that come together to make them unique and valuable. So we want these tools to, in some way, love people for who they are and see them generously. And if you are an HR manager, you will know this. We trust your expertise.


We know that HR is very difficult, that people gain all this expertise over time. And so when you are considering whether or not to purchase one of these tools, think: does it align with what I know, with my hard-earned experience? And try to overlook the fact that AI is now just everywhere in the news, really hyped, seeming to be able to solve all problems. We know that you know better.


KERRY MCINERNEY:

Exactly. And that's something that we actually found when we talked to a lot of HR managers at that firm: a lot of them were really concerned about these tools because the tools couldn't be generous. So they would say, for example, I can tell if a candidate's about to get there and they just need a little bit more time.


For example, that's what I found really distressing about that video interview I talked about at the beginning of this episode: because I had 60 seconds on the clock, I had to finish in 60 seconds. You could see the timer going down, and it panicked me, and I don't think I said what I would've liked to say. Whereas I think maybe if I had had a human interviewer, they would've just sat back a little bit and they could have gotten a really good answer. And it's the same, I think, with another HR manager we talked to, who said, I've had some people get rejected by AI-powered tools who had maybe slightly, quote unquote, atypical CVs. So they might have, say, been in an institution like the Navy before, or something like that.


Or they might have had children quite young and then gone back to school as a mature student. And they felt like the AI was asking them questions like, what do you do after school? Or, what do you do on the weekend? Things that just weren't appropriate for their stage of life at all.


And it wasn't able to appreciate those kinds of differences in the paths that people had taken. And so, like you were saying earlier, it locks us into these very predisposed ideas of who the ideal candidate is: sometimes in very obvious, discriminatory ways, 'Okay, do you ski?', and on the other hand in some of these more insidious ways of saying you must have taken a certain kind of career pathway.


ELEANOR DRAGE:

That's really interesting. I didn't know lots of those things. I was told about a good new teaching method, I know nothing about teaching, but teachers out there: if you're asking a question, you wait until people have their hands up, but then you wait a little bit longer for those extra hands to go up.


And I just wonder whether an AI tool could do that. They're not created by these cutting-edge teachers, with these cutting-edge ideas. And I'd really like to see some more collaborations between hiring tools and experts in inclusive teaching.


That would be really nice to see. And we're not judging the people that are trying to build these tools, because I've been listening to them on podcasts for a long time, and I know that lots of them are really well intentioned, but obviously being well intentioned is not enough. You have to be building things that align with critical expertise, with the latest research, with real science, with recruitment professionals who really know what they're doing.


So this goes for all people who build AI tools. Just cuz you have an idea doesn't mean it's worth pursuing. Please go to the experts, to the people that have thought for a long time about what diversity is, what race is, what gender is, because those things are real academic disciplines. We've worked in this for a really long time.


I'm still learning. But it'd be really nice to see more of these collaborations.


KERRY MCINERNEY:

Exactly. And again, I wanna reiterate that point, Eleanor, that this is not about judging people. This is not about saying, oh, you're a bad person cause you've made a tool that we don't think works.


But what we do wanna say at the same time is that creating AI snake oil, creating products that do not work, that over-promising, hurts people. And I think it's equally important that we make that clear. And I think that's why we're so passionate about this topic: not because we want people to go home at the end of the day feeling bad about themselves,


but because we think there are serious ramifications for individuals, and also more broadly for our societies, if we uncritically accept some of the things that these tools say they can do. And it really sets us backwards, I think, in a lot of ways.


ELEANOR DRAGE:

Absolutely. So, if you wanna hear more, if you work in recruitment, if you're a candidate, anyone: reach out, please.


Our email addresses are everywhere, otherwise Twitter, et cetera. And we'd be delighted to talk with you.


KERRY MCINERNEY:

Yes. And you can definitely find us as well at the Good Robot website, which is www. the dot the good robot.co.uk.


ELEANOR DRAGE:

Sorry, what's our website address?!


KERRY MCINERNEY:

I really struggle to say our website. Um, yes.


So you'll find us if you google the Good Robot podcast, and you can also submit forms through there. You can subscribe to our mailing list. We are finally actually sending emails from the mailing list; thank you to our very patient subscribers. And we love to hear from our listeners, so also let us know what you would like our next hot take to discuss.



