
Melissa Heikkilä on Why the Stories We Tell About AI Matter

This week we chat to Melissa Heikkilä, a senior tech reporter at MIT Technology Review, about ChatGPT, image generation, porn, and the stories we tell about AI. Melissa Heikkilä is a senior reporter at MIT Technology Review, where she covers artificial intelligence and how it is changing our society. Previously she wrote about AI policy and politics at POLITICO. She has also worked at The Economist and used to be a news anchor. Forbes named her one of its 30 Under 30 in European media in 2020.





KERRY MCINERNEY:

Hi! I'm Dr Kerry McInerney. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we'd also so appreciate you leaving us a review on the podcast app. Until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we're talking to Melissa Heikkilä, a senior tech reporter at MIT Tech Review, about image generation, porn, ChatGPT, and the stories that we tell about AI. We hope you enjoy the show.


KERRY MCINERNEY:

Brilliant! Well, thank you so much. Just to kick us off, could you tell us a little bit about who you are, what you do, and what's brought you to journalism and to writing about good or ethical technology?


MELISSA HEIKKILÄ:

My name is Melissa Heikkilä and I'm a senior reporter at MIT Technology Review, where I write about AI and how AI affects us as individuals and as societies. I think I've been one of those freaks who's always known that they want to be a journalist: ever since I knew what a journalist was, I wanted to be one. For the first part of my career I was desperately trying to find a beat, trying to find a focus, and then I sort of accidentally stumbled into tech. Or rather, I've always been really interested in tech. One of my first big professional stories was about an Instagram influencer back in 2013, when that was still a new thing, but I only really got onto the tech beat around 2016 or 2017, I guess, and I've been writing about it ever since. Before joining Tech Review I was at The Economist, where I wrote some tech stories, and at POLITICO in Brussels, where I covered tech policy. That was a time when the EU in particular was really starting to roll out the world's first tech regulations, and the AI Act was still very much a new thing, so I really got to see how a piece of regulation gets started, you know, from the intense lobbying and the speculation to the first draft and then the negotiations. So yeah, that's how I got into tech reporting.

And then covering feminism, gender and technology is something that's always been super close to me. I grew up in Finland, but I'm half Chinese, so growing up in a country which is extremely homogeneous I was always kind of othered, and I've been the only non-white person in basically every newsroom until fairly recently. So these are just topics I have lots of personal experience of and, I guess, know a lot about, and I really feel like I can bring value and give a voice to people who don't necessarily have that experience represented in mainstream media. And I think AI is a great lens into that as well, because everything is AI: you can talk about super nerdy technical things, or you can talk about how it affects humans and people and how bias creeps into these systems.


ELEANOR DRAGE:

How amazing that you can stumble into something and then be one of the best in the field. I mean, that's extraordinary! So, with your overview of tech from a journalist's perspective, I'm really interested in your take on our big three Good Robot questions: what is good technology? Is it even possible? And how can feminism help us work towards it? What would you say to that as a journalist?


MELISSA HEIKKILÄ:

Yeah, well, looking at the past year in tech, or in AI, we've really seen the explosion of ChatGPT and this kind of language technology that millions of people have been able to try for the first time themselves, and we've been thinking a lot about how this technology is going to change our lives, our societies, blah blah blah. But actually, the more I think about it, the more I think that good technology is something that assists us in our day-to-day lives, that supports us in our decision making and our creativity, that helps us be more productive. I don't know if we should be, or even can be, more productive than we already are, but you know, something that really augments us rather than replaces us. And I think about the good technology that I use, like transcription software or online translators. We kind of forget that that's AI, but it is. Ten years ago that technology was completely impossible, or at least a long way away, and now we just use it every day and forget it's AI. So I'm hoping we'll land somewhere like that with this new technology, that it'll just be something that supports my decision making, instead of AI language models becoming extremely powerful tools for disinformation or other terrible uses. So I'm optimistic that AI will become very boring. And I think feminism is super, super important in that development. If we can take into account intersectional feminism in particular, taking into account genders and races and classes and how this technology affects them, and how we reduce harms to these groups, and be very inclusive in technology from the very beginning, then I think we have a chance of this technology becoming a really powerful, helpful tool instead of a shit show.


KERRY MCINERNEY:

I feel like that's what we all aspire to: please, please let this become a boring and mundane thing rather than this massively overhyped set of tools and technologies, an absolute disaster. And I think that's one of the most challenging things, for you as a journalist and for us as researchers, people who are trying to shape the stories and narratives we tell about AI: how do we walk the line between being adequate in our coverage of the dangers of these technologies, while also not just trying to freak people out to the point where they feel really disempowered and unable to engage with these concepts anymore? That's partly why we started this podcast: we wanted to walk a line between the Silicon Valley techno-optimism and its drive towards relentless productivity, which, as Eleanor has rightly pointed out to me numerous times, raises the question of why the base assumption is that productivity is always the best thing, versus this extreme techno-pessimism that makes you think, well, I might as well accept every cookie that exists, because there's no getting out of the remit of big tech. So how do you find this? What's your life like as a tech journalist? Are there particular angles or stories that you feel you're often being pushed to explore, and what kinds of stories and angles are you trying to platform?


MELISSA HEIKKILÄ:

I'm not gonna lie, this past year has been mad. Even a year ago, AI was this super niche, nerdy topic; people would kind of raise an eyebrow, like, oh, okay, whatever. And now everyone wants to talk about AI and ChatGPT: I'll be in a cafe or on a ski lift and someone's talking about how they use ChatGPT to write copy for their business. It's kind of overwhelming, and I'm kind of fed up with AI, but it's also been quite challenging, because now everyone wants to talk and write about it, and you're like, okay, well, how do I add some value amid the noise? How do I tell stories that haven't been told before, and tell smart stories that aren't just corporate hype? So it is a constant challenge. But what I really appreciate about MIT Tech Review is that we aren't told 'do this'; they really trust our expertise and give us lots of freedom to pursue the kinds of stories we want. So even though for the past six months everyone's been talking about generative AI, I'm kind of done with that and trying to find ways beyond it... there's so much AI that's applied in our lives every day. I want to think about surveillance technology: what's happening there? Computer vision is still a massive problem, but everyone seems to have forgotten about it. Or how the public sector uses AI; that's something I would love to pursue in the future. But also, now that we have generative AI and companies are rolling it out into products, what kinds of effects is that having on people? So looking at the consequences of all this. I mean, there's just a lot to cover.


ELEANOR DRAGE:

Yeah, I agree, the second order effects are more interesting than just the first order stuff. And we get asked a lot of questions about ChatGPT and generative AI that we're often not well equipped to respond to. It's a question of, you know, if people don't really know what questions to ask, how do you give them a good response, or give them information that's interesting and actually tells them something about the situation and what's going on in the world?


MELISSA HEIKKILÄ:

Hmm, and I think a lot of our job is also to be a sort of educator. MIT Technology Review has this sort of credibility, and we want to be the place where you can find a trustworthy, reliable answer to a tricky technical question. That's why it's so important always to say, okay, this is a good thing, but we know it has these flaws, or even just to highlight the technical flaws in these language models that people don't seem to be talking about. Like, I wrote a story a couple of weeks ago about security vulnerabilities in language models. It turns out it's super, super easy for outsiders to hack into them and take control of your language model, and then, if you're using that in a browser, through Bing or whatever, they can become extremely powerful scamming and hacking and phishing tools, which is quite a scary thought considering that millions of people are now using them and we don't have any real tools to prevent these kinds of risks.


KERRY MCINERNEY:

Absolutely. And on the one hand it feels like these kinds of ethical issues have been massively brought into the spotlight, and certainly Eleanor and I can completely resonate with being almost very tired of talking about ChatGPT, or having what felt like a very niche AI ethics issue around large language models suddenly become a kind of dinner-table talking point. Although your story about the ski lift did make me giggle a little bit, because it's like you're the living version of that Gwyneth Paltrow meme from her trial, where she's lamenting her lost half-day of skiing... like, what's the third order effect of ChatGPT? But something I love about what you say about MIT Tech Review is that you're not only educating the general reader but also, I think, showing journalists what kinds of responsible stories we can be telling about these technologies. So what do you advise other journalists who are maybe starting to report on AI for the first time? A lot of journalists I know are in this position because of ChatGPT. What kinds of practices do you think are really important, and are there any things you think journalists should avoid when they're talking about AI and related data-driven technologies?


MELISSA HEIKKILÄ:

Yeah, I think it's great that so many newsrooms are picking this up and paying attention to this technology, because that's one of the most powerful tools we have to hold these tech companies to account, given that we just don't have regulation right now. But it does frustrate me a lot that many people haven't had the opportunity, or don't even have the time, to think about these things in depth, and so stories often get rushed. It's quite easy to take the ready-made narrative from a tech company and buy into the hype that these technologies can do more than they actually can. They're actually quite simple, stupid systems, even though they look very fancy and have all these bells and whistles. But one thing that drives me absolutely mad is when people anthropomorphise AI systems and say, you know, this technology can see, or say, or, like, tell me to leave my wife. That drives me mad, because that's basically giving these tech companies free PR. You're like, oh, this AI is so smart that it can tell me what to do about my love life, and then you don't go into, okay, how does this language model actually work? Why is it behaving this way? Why is it doing that? It can't actually tell you to do any of those things; it's just generating text. So: using precise language.


ELEANOR DRAGE:

Which I think is super, super important! And not over-hyping technologies and their capabilities. One super annoying thing that you often see is, like, oh, this AI can predict criminality, or here's where crime is going to happen. It just can't do that; we haven't developed technology that can predict the future, so that frustrates me. We found the same thing when there was reporting on our paper about how hiring technologies can't remove bias. A lot of the reporting actually was really good, but then we had one newspaper say 'woke technology doesn't work', and we're like, oh God. But you know, we appreciate any headline at all, so beggars can't be choosers. In your view, what were the biggest tech stories of last year, and what did you want readers to understand about those stories?


MELISSA HEIKKILÄ:

I mean, the rise of ChatGPT has been a huge story. Everyone has tried it at this point, and I hope people have read at least one story that goes into how these language models work and what they actually can and cannot do, so that they have a realistic understanding of what they're getting themselves into when they interact with these models, especially now that we're starting to see some data protection regulators investigating privacy problems. You probably shouldn't be telling these computer programs your deepest, darkest secrets, your social security number, your postcode, and whatever else. So yeah, so people have some sort of grasp of the technology they're using. Another big story, I think, has been generative image models, which were a big thing last summer. It feels like a long time ago now, but it's about what goes into the data sets of these models and the copyright issues, because these AI models scrape the internet, and that includes all sorts of personal data, but also lots of copyrighted images. We're now seeing lawsuits from artists and image companies against tech companies for using their copyrighted content, so I think that's a big thing.


ELEANOR DRAGE:

Maybe we can skip to asking about Lensa, because that's one generative image program. Can you explain it to our listeners? For context, I actually had a male friend of mine say, send me 20 pictures of you, and I'm like, why? And then he sent me all these pictures of himself as an astronaut that this program had given him, and I was quite surprised to see what happened to me. I guess, you know, I sent him quite a few pictures, the pictures of me that I would send to people, looking nice, you know, with some cleavage or my hair done, but I was quite surprised and shocked by the results. So can you explain?


MELISSA HEIKKILÄ:

Yeah, yeah, so last year this app called Lensa went completely viral. It's an app that lets anyone create AI-generated avatars based on actual selfies of yourself, and lots of people were having fun playing with it: you could get a really hot photo of yourself as an astronaut or whatever. But when I uploaded my images into it, and, as I mentioned earlier, I'm half Asian, all my images were highly sexualized, like super pornified versions of myself. I had full frontals, and, you know, no nipples, but massive boobs. I looked like a generic Asian anime character. And I think that was a great example of how bias creeps into these AI systems, because they scrape all these images from the web, and if you think of image models, a lot of the free content with a text and image pair is porn. If you look into the data set that went into building the model that fuels Lensa and you search 'Asian', it's just porn, which is so sad, incredibly sad. So it was quite disappointing: I wanted a really cool avatar that I could use on my socials, and instead I got these sexy chicks that didn't even look like me. But then, funnily enough, that was when I generated images of myself as a woman. When I generated images of myself as a man, I got great images, images where I actually look like myself, and I'm wearing clothes. I mean, I'm still, you know, a chef or a doctor, which are also kind of Asian stereotypes, but whatever, I'll let that pass. And I looked confident and assertive, and I wasn't modelled on a porn image.


ELEANOR DRAGE:

But when you put your face through the same program and you say that you're a man, the images are more to your likeness?


MELISSA HEIKKILÄ:

Yes, yeah. Crazy, I know. I guess it's how the developers have programmed it, right? In the female filter you have, like, 'fairy tale' or 'sexy' or whatever, and in the male one it's just 'professional' or 'smart' or whatever.


ELEANOR DRAGE:

It's crazy! It's like a ten-years-ago problem. Sorry, I can't believe you're allowed to bring that to market today!


MELISSA HEIKKILÄ:

I know, but we haven't learned anything. I often think about the rise of social media ten or so years ago and try to find parallels to today, and I feel like we literally haven't learned anything. All we have is the GDPR, and that's it.


KERRY MCINERNEY:

That's so disappointing, and I am really shocked as well, because I did use Lensa, like many other people in this space, and also got sucked in by the digital hype. I'm also half Chinese, and I also got some pretty terrible results. I got nothing that looked like me; I got a lot of anime characters, and yeah, I felt very catfished, because whenever I saw other people who used it, they got these fairy princesses who actually at least kind of looked like them, whereas I just got these very generic, multiracial-looking people with a lot of cleavage. So I was just like, well, what am I meant to do with this? But like you said, I feel like it's so easy for people to say, oh, this is a data set problem, or this is a stereotyping problem. I want to take that step back and ask: why do we not have appropriate procedures for bringing these kinds of products to market? And why are we allowing people who are already very, very vulnerable to these forms of being misread, and I think growing more so with the rise of various kinds of TikTok filters and more and more forms of digital facial transformation, to become more and more vulnerable to seeing themselves look very, very different online, and then having to cope with how that affects their own self-perception and their relations with other people? And these images are probably online forever, and then they get scraped into other, bigger models, and that ends up being the image of humans we see through technology, which is a really, really creepy and sad and disappointing thought.


Um, no, it's really fascinating. I was talking with someone who's also a journalist, who used to be in the fashion and beauty industry, and who is now writing a fantastic book called Pixel Flesh, which is all about how digital technologies are affecting ideas of beauty. She was talking about how filters are increasingly replacing, for example, beauty influencers' use of makeup, with influencers saying, well, actually, I don't even use makeup anymore, I just use a filter, and us becoming increasingly unable to discern what is the work of beauty products and what is the work of AI. I also want to ask you about a different AI image generator called Midjourney, because I remember finding it really fascinating to hear that Midjourney had blocked words like placenta, fallopian tubes, mammary glands, sperm, uterine, urethra, cervix, hymen and vulva, among others, in the attempt to block the creation of pornographic content. So could you tell us a little bit about this story you reported on, and what it showed about the challenges of trying to filter content and trying to create better or more appropriate content?


MELISSA HEIKKILÄ:

Yeah, later I heard that the word 'stepmom' is also banned... which is terrifying, slash, I don't think I wanted to know why it was banned. But it goes to show how hard it is to do content moderation. Midjourney says it doesn't train on porn, right, but it's just impossible to control how people use this. And maybe there's a case for having two different versions of these models: one where you can generate biologically accurate images, and then the free one, which is maybe more restricted. But it is kind of sad that the gender biases go all the way down to our internal organs, because you just can't stop people from using these technologies to create weird stuff or disturbing stuff.

This story was actually brought to my attention by a biologist and her friend, who were playing around with this technology. I think it was International Women in Science Day, and one of them wanted to create an image of the placenta, because she studies the placenta, and she just couldn't do it, because the word was banned. Then we started digging into it and looking at what other words were banned. I mean, they're banned because people are weird and try to use these things to create pornographic or inappropriate or gory content. It's quite hard for me to imagine placenta porn, but I'm sure that's someone's jam. It goes to show how hard it is to filter these things, right? These models are built on vast data sets that are scraped from the internet, and the more times something appears in the data set, the stronger that connection becomes in the AI. And it's really sad to see that the data that has gone into these models carries this extreme bias: for a term like 'mammary glands', instead of a biological image of mammary glands you'd probably get boobs, right? But it's also really hard to just stop people from generating weird or inappropriate or unwanted content. So it's a really, really tricky content moderation question, and we don't really understand how image-making AIs come to the conclusions they do; we don't really understand these systems. Tech companies like Midjourney have good intentions, right? They don't want inappropriate content, so they've just hot-fixed it by banning certain words. And yeah, it's really hard. I don't know what the solution is. Is it trying to come up with image generation systems with, say, a pro version for scientists or researchers or educators, where you might have more scientific images in the data set, so you could actually create fallopian tubes or the placenta, and then a free version? I don't know. But it's one of those fascinating things: a bias that just exists, that we don't have any fixes for, and it's depressing.


KERRY MCINERNEY:

Yeah, I mean, that's why I thought it was such an important piece of research, because there's such an emphasis, and Eleanor and I see this when we work with industry, on sort of bias-fixing: okay, how can we strip out forms of discrimination from a system, how can we immediately make this a more fair and more equitable data set or piece of technology? And something that we work really hard to do is to say, well, yes, it's really important to be trying to address and quantify and mitigate some of the forms of bias that you're seeing emerging in your technologies, but when you're doing that, you're also treating things like gender or race as quantifiable, easily identifiable characteristics rather than as much wider systems of power. And I know as a journalist you must face a lot of challenges around how you communicate both the wider ethical issues around these technologies and the quite dense technical inner workings of the systems themselves. So I'd love to hear a little bit more about how you walk that line, and how you communicate, for example, technical approaches to addressing bias in AI systems for a lay audience?


MELISSA HEIKKILÄ:

By keeping the language super, super simple. There's so much jargon, as you know: instead of using words like 'hallucinations', you say they just make stuff up. So really simplify the language, and, I guess, try to keep it high-level enough while still being very precise. And, I guess, understand it yourself: if I fully understand how something works, then it's easy to explain and translate. I try my best.


KERRY MCINERNEY:

No, absolutely. And I do think this kind of translation work must be a huge amount of labour on your part, in terms of having to be able to speak to so many different stakeholder groups, and I think it's something MIT Tech Review does really well. I'm thinking of your work, and of Karen Hao, who we have had on the podcast as well, so for our lovely listeners, please also check out Karen's episode, because she's also fantastic, and like Melissa she looks a lot at what Eleanor has called second order effects, how these technologies are actively being used and how they impact people's lives in very tangible ways. But I mostly just want to say thank you so much for coming on the podcast. It was really, really wonderful to get to hear about you and your work, and I've followed and really enjoyed your reporting for a long time, so it's really nice to get the chance to chat.


MELISSA HEIKKILÄ:

Oh, thank you so much for having me. I'm a big fan of the podcast, so it's a real privilege to be here.


ELEANOR DRAGE:

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney, and edited by Eleanor Drage.


Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0
