
Giada Pistilli on good corporations, AI ethics and value pluralism

In this episode, we talk to Giada Pistilli, Principal Ethicist at Hugging Face, the company that Meg Mitchell joined following her departure from Google. Giada is also completing her PhD in the philosophy and ethics of applied conversational AI at Sorbonne University. We talk about value pluralism and AI, which means building AI according to the values of different groups of people. We also explore what it means for an AI company to take AI ethics really seriously, as well as the state of feminism in France right now.


Giada Pistilli is a PhD candidate at the Sorbonne and the Principal Ethicist at Hugging Face. She is a researcher in philosophy, specializing in ethics applied to conversational artificial intelligence. Her research mainly focuses on comparative ethical frameworks, value theory, and ethics applied to machine learning (natural language processing and large language models). Giada is a Research Affiliate at the Machine Intelligence and Normative Theory lab and co-chair of the Ethical and Legal Scholarship working group of the BigScience open science project, which developed and deployed the multilingual large language model BLOOM.






Transcript:


KERRY MCINERNEY:

Hi! I’m Dr Kerry McInerney. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we'd also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode!


ELEANOR DRAGE:

In this episode, we talked to Giada Pistilli, Principal Ethicist at Hugging Face, which is the company that Meg Mitchell joined, following her departure from Google. Giada is also completing her PhD in philosophy and ethics of applied conversational AI at Sorbonne University. We talk about value pluralism and AI, which means building AI according to the values of different groups of people. We also explore what it means for an AI company to actually take AI ethics really seriously as well as the state of feminism in France right now. We hope you enjoy the show.


KERRY MCINERNEY:

Thanks so much for joining us here today on what is a very gloomy day in London and Cambridge. It looks like possibly a gloomy day on your side as well, but seeing you is wonderful; it's brightening up our day already. So just to kick us off, can you tell us a bit about who you are, what you do, and what's brought you to thinking about gender, feminism, and technology?


GIADA PISTILLI:

Yeah, thank you so much for having me. I'm so excited to be here today. So, I'm Giada Pistilli. I'm a researcher in philosophy, I'm wrapping up my PhD in the philosophy and ethics of applied conversational AI at Sorbonne University, and I'm Principal Ethicist at Hugging Face, an AI platform. I work at the intersection of policy, legal, and ethics.


And I mainly do research in AI ethics. We are really focused on everything that's linked to natural language processing: conversational AI, chatbots, virtual assistants. So I've been thinking a lot about this intersection between race, gender, and technology. Of course, we're all aware of the bias problems that exist in technology today, especially when it's linked to AI, because we all know that when we train an AI model, we're kind of taking a snapshot of reality. So one of the questions I always ask myself is: do we have to represent society as it is today, with all its problems linked to race and gender misrepresentation? And there's also the problem of overrepresentation and underrepresentation, because those models are statistical models.


So it's really interesting to ask ourselves the question: do we have to represent society as it is today, or do we want to represent the society we wish for in the near future, in five years, ten years? I don't think it's a question with a settled answer, because of course we all come from different sets of values.


So I also really focus on value theory in my research. And I think it's interesting to find a good balance in what we want to represent, and I'm really focused on use-case applications. Every time I talk about ethics, I say, and this is kind of my theory and hypothesis, that we have different timeframes in which we can apply ethics to AI.


We have this huge pipeline of AI development, and we can apply ethics at the very beginning, at the very bottom: the development phase. That's where we talk about bias, discrimination, overrepresentation, underrepresentation, and it's also linked to language, of course.


Since I work on conversational AI, this is one of my main focuses, and it's especially true when we talk about minority languages, underrepresented languages. We all know there are really interesting collectives and associations, such as Masakhane, or others protecting the Māori language.


So it's really interesting to focus on that. And then we have the very end of the pipeline, which concerns the deployment phase. I think we need two different analyses, especially philosophical and ethical analyses: one focused on the development phase, thinking about datasets, et cetera, and architecture, of course, too. And then we have the use-case applications, where we're mainly focused on accessibility, transparency, explainability, and really the users. And those users, of course, are humans, so it's really important to refocus on the humans.


ELEANOR DRAGE:

Fantastic, thank you. You are Principal Ethicist at one of the most exciting ethical AI companies around at the moment, and we've been following Hugging Face since Margaret left Google. To me it's like one of these organizations, like DAIR, which is Timnit Gebru's organization, that sprang up from people who had had a really tough time in big tech and were either dismissed on really unfair grounds or bullied out. So, what attracted you to Hugging Face? And can you tell us a little bit more about the company, what it stands for, and how it was conceived?


GIADA PISTILLI:

Absolutely. I think you're a hundred percent right. We have lots of people who came from big tech and wanted to work at, I would say, more of a human-level company. As we've been saying, AI is this kind of really crazy universe, a really crazy technology that grows really fast.


So I think it's also important to refocus on the humans, as we were saying, and I think that's what Hugging Face does best: refocusing around humans, the people who work there and our communities, since we're community-driven. That's really what attracted me. As I was saying, I'm really interested in the users.


So it's really important to talk to the people who use, develop, and deploy these kinds of technologies. And one other really important thing is that we're also value-driven, and that's also what really attracted me. So how did it happen? Actually, I was working on BigScience, this open science project, and it was really overwhelming at the beginning because we were over a thousand researchers who wanted to work on underrepresented languages.


We really wanted to be a project focused on multilingualism, with a really grounded data governance approach. Through this first experience with Hugging Face, I had the chance to meet Meg and Yacine and other incredible people who work there.


And I led the drafting of the ethical charter of the project. Then, about six months later, they asked me if I wanted to consider job opportunities at Hugging Face. I was really excited, because it's kind of like a dream team. Of course, I don't have like fifteen years of experience working in tech,


I only have, I would say, five or six years. But it's really rare, I would say, for a company to deeply care about the people who work there and to give particular attention to diversity and inclusion. We have people dedicated to working on that, and this is linked also to race, gender, et cetera.


So it was super important for me to work in a company where they think about those questions and really try to inform everything they do with the values that we share, while knowing that, of course, we all have different sets of values. I think what they liked about me, and what I liked about them, is this focus on value pluralism: the respect for, and acknowledgment of, the fact that no one set of values is superior to the others, that it's not one who's right and one who's wrong. We all belong to this really big community, in which the ML and AI community is also involved. So it's this kind of sharing, giving, and taking that really attracted me.


ELEANOR DRAGE:

Fantastic. Thank you. Well, it's so great to see some of these companies emerge, and I just hope that lots of other really ethical, really exciting, pro-justice AI companies spring to life in the coming years. So can you tell us then, as someone who sits in an organization that actually makes AI: what is good technology? Is it even possible? And how do your feminist values help you work towards it?


GIADA PISTILLI:

I think that's the most important question, I guess, especially when we're doing ethics. Coming back to my analysis, where we have this huge pipeline, this kind of value chain of AI development and deployment, I think it's really important to consider both phases and to have a really focused analysis,


an ethical analysis that applies to both of them. At least from my perspective, the gap we have today, also in the broader AI ethics community, is that we usually tend to think about those two analyses separately, and sometimes we instead just focus on the development phase.


And so of course we deep-dive into considerations about bias, inclusion, accessibility, but just from a dataset and training-data point of view, sometimes kind of forgetting what those technologies are for.


So I think it's really important to refocus on asking the right questions. From there, at least, my approach is not only a risk-based approach, where we try to anticipate all the risks associated with a specific technology, but also trying to lead by example. When deploying an AI model or system, it's really important to give good examples: I've deployed this really interesting machine translation model; here's how you can use it, and here's when you can actually use it.


So it's really important to say: those are the limits, those are the intended uses, and that's how you can actually make good use of it. I think good technology, or good AI, needs to be linked to a good use; it can't just be isolated in its own kind of universe where it doesn't get in touch with any people.


KERRY MCINERNEY:

That's so fascinating, and I really love the way you frame it: yes, it's important for us to think about the risks of technology, but that's very much the bare minimum, and there's so much more that companies can be doing to model the development of good technologies that themselves generate good practices.


And I also really loved hearing your reflections on Hugging Face. In one of our previous episodes, with Sarah Franklin (for our listeners, definitely go check that out), Sarah talks about the way that good technology starts with good and ethical research cultures, and that if you don't have those good cultures, you don't necessarily have a space where people are going to ask those critical, probing, constructive questions that really need to be asked at all the different stages from development to deployment.


But I wanted to ask you in a little more detail about what this looks like for you on the ground, day to day, because you are in a really interesting linchpin position where you are bringing together ethical, legal, and technical expertise and having to find ways of bridging these three areas.


So what's that like? How do you manage to balance all those different, sometimes competing, concerns?


GIADA PISTILLI:

Yeah, well, first things first: I'm not alone in doing that. I have an amazing team who really deserve to be in the spotlight as well, and I think it really takes a village to do great work. It couldn't be done without the help of the research scientists who work with me, the tech counsel who also work with me, the policy directors, everyone. All my colleagues are really fundamental to this work. We recently published a piece of joint work we did together, using the BigScience project as a use-case example, since that's where we really started working all together.


And I completely agree: when thinking about good technology, everything starts with what can be called an ethical framework, which is also my specialty, I would say, for now. What we wanted to show is that it's really important to think about ethical frameworks, but not to leave them standing alone. Of course they can be interesting on their own; they frame things, and they're always a starting point for reflecting ahead about the risks, as we were saying, but also about what we want to project, what we want this technology to be, and how we want it to be used. So grounding everything in values is, I think, one of the most important things to do at the very beginning.


And then from there, the same values can inform other tools. There can be legal tools, and there can also be technical tools. I briefly mentioned model cards, which are of course part of this kind of analysis framework that we wanted to draft. And then there are also legal tools; I don't know if you're familiar with RAIL licenses, for instance. It means Responsible AI Licensing, and those are supposed to be open science licenses, but with an annex where you can have use restrictions. Those same use restrictions are usually informed by the values that have been put forward by an ethical charter. So there's this huge and really interesting movement where those values flow through different disciplines, especially in the AI governance framework, where everything could be defined as being stronger together: every time we want to put forward an AI system or an architecture, it's interesting to start by setting up the values that are going to inform, in this kind of circle, the legal tools and also the technical tools. And in this really big movement, it's interesting to see how those same values are starting to be operationalized, because of course one of the main criticisms of ethical frameworks is that sometimes they're too general, too stuck in theory.
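For readers who want to see what this operationalization looks like in practice, here is a minimal sketch, using the huggingface_hub library, of how a model card and its license metadata can be read programmatically. BLOOM is used as the example because its card carries the BigScience RAIL license; the exact values shown in the comments are illustrative and may change over time.

```python
# A minimal sketch: inspecting a model card and its RAIL license metadata
# with the huggingface_hub library (pip install huggingface_hub).
from huggingface_hub import ModelCard

# Fetch the model card for BLOOM from the Hugging Face Hub.
card = ModelCard.load("bigscience/bloom")

# The card's YAML header carries machine-readable metadata, including
# the license identifier that points to the use restrictions.
print(card.data.license)    # e.g. "bigscience-bloom-rail-1.0"
print(card.data.language)   # languages the model covers

# The card body is Markdown describing intended uses and limitations.
print(card.text[:300])
```

In other words, the values drafted in an ethical charter can end up as concrete, machine-readable fields that both users and tooling can check before deploying a model.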


KERRY MCINERNEY:

Yeah, absolutely. And I think hearing your perspective on this is so interesting, because something Eleanor and I often work to do is this translation work between people who maybe work in different fields and actually have a lot of common ground and want a lot of similar things, but might be working in different lexicons.


Geoffrey Hinton, this kind of very large figure in neural networks and machine learning, came to give a talk at Cambridge a couple of weeks ago; I can't remember the exact date now. It was really interesting to hear his perspectives on AI safety. But something that caused quite a lot of controversy, certainly among people in our field, was his response to questions to do with interdisciplinarity, saying, oh, well, the only way you can understand a brain is if you build it yourself.


And this definitely made me a bit sad, because I feel like I spend a lot of time defending engineers to other people in the AI ethics community and saying this mindset, that there's only one way to know, isn't representative of how engineers think. I think there's such a growing, nuanced appreciation of some of these issues.


And I think, yeah, having companies like Hugging Face modeling different ways of doing this is really important.


ELEANOR DRAGE:

We spend a lot of time talking about tech and a lot of time talking about values in relation to technology, but sometimes we don't talk about feminism on its own, or activism on its own: what grounds the ideas that we come up with at work when we think about technology. So, you have a lot of interest and a lot of involvement in activism in Paris.


Paris is its own particular thing, right? I lived there for some years and I absolutely love it, and it has its own politics and set of concerns. So can you just tell us a little bit about what's going on with feminist activism in Paris? Where's it at?


GIADA PISTILLI:

Yeah, so, full disclosure: I was way, way more active before Covid than after Covid.


I don't know if it's just my impression, but I fear that lots of specific political movements have kind of drained away. I'm not talking about the strikes, of course; that's the same old French culture. But what's super interesting about feminism here in France, especially when it's linked to activism, is that it's really fragmented.


I don't know if it's the same elsewhere, but it's really such a pity to see that it's super hard to join forces, and really hard to agree on the common, important things we stand for. Last time I checked, the movements were still pretty fragmented. And there's also this really important value here called secularism.


Long story short, secularism has also been manipulated by the far right and by conservatives, because the interpretation of secularism among French people, at least in French culture, is that no religious symbols are allowed in public space. Unfortunately, this also excludes Muslim women and Muslim trans women, because of their religion, because of their religious symbols. Then we have all those misunderstandings, and the common ground, I would say, kind of gets shadow-banned. I mean, I'm not French, I'm Italian, and of course we also have a really complicated relationship with secularism, because we call ourselves secular but we're not. But I think it's such a pity to kind of forget what's important about feminism. At least, that's what I experienced. I was part of what's called the more radical feminists, where of course Muslim women were included.


But we were a minority at that time, and I'm talking especially about 2019 and 2020. I really have to be honest, I don't know how it is at the moment. But, maybe surprisingly, feminist movements are not that strong here in France, despite what one might believe.


So, yeah, I think it's mostly linked to France's history and its relationship with religion.


ELEANOR DRAGE:

It's a shame how those debates around laïcité, or secularism, and feminism in France are kind of unchanged from when I was first learning about this stuff as a high school student. We've had a series of quite similar leaders in France who have done very little to move this debate on. I was also talking to you before about how, and this is a problem everywhere of course, feminists who are feminists on paper have difficulty being feminists in real life, or enacting that feminism.


And of course nobody is perfect; we all hopefully do our best. But this is an experience I had in Bologna when I was doing my PhD, in a department full of women who had worked really hard to get to the positions they had, to get tenured, and who really slammed the door behind them. They were really mean.


And it very much reminded me of Sara Ahmed, who talks regularly about feminists on paper and feminists in practice. So what's your experience of that in Paris?


GIADA PISTILLI:

Yeah. So, I come from a very conservative university, and we have to say that, of course, the philosophy department at the Sorbonne is amazing; it has really amazing teachers. But the institution itself is a really old institution, dating from the 13th century, really attached to its importance, its history, blah, blah, blah. And coming from the philosophy department, which is usually very much populated by women and also trans women, I mean, we were not that many when I started, we were 40 people, but it was really diverse. With all our women teachers and professors, you could really see that they had worked really hard, maybe extra hard, to be in the positions they were in. But because they worked so hard to get where they are, they want other women to work extra hard to get to that same place, instead of helping each other. So I had some unfortunate experiences with some of them. But I hope this is going to change with the next generations. And I completely agree about activism, and I also want to highlight the importance of really rooting for other women, because sometimes we say it out loud, but in practice we don't really do it; we can be our own worst enemies. And that makes me really sad, because we already have all the problems we're aware of, especially in the tech world, which is really male-dominated, but also in academia. I want to say I'm surprised about your experience in Bologna, but I'm not that surprised, because, once again, as you were saying, it's really easy to say that we're feminists and we're doing lots of stuff and we're supporting each other, but sometimes it's not really the case. So I hope that future generations are going to be more empathetic and more supportive of each other. And that actually makes me think about the language question. I don't know if it's something English speakers face, because you have a non-gendered language, but here in France there was this huge debate about whether we should have inclusive language, and of course feminist movements were really rooting for it. And then, at the end of the day, I think two or three years ago, they decided that it wasn't an issue.


It was a non-issue, as the French say, un non-sujet. And we were all shocked, because even conservative universities such as the Sorbonne were actually using inclusive language in their official communications; lots of my professors were using it too. So the final decision from the Académie française came as a really big shock, and there were also demonstrations at the time. It's kind of sad to acknowledge that we have to go through a very unfortunate and unhappy episode in order to gather around something, but at least there was this huge movement as well. And actually, I think six months or a year ago, they decided to revisit that decision, and now they're still re-evaluating whether they want to make inclusive language real.


And of course, when the first decision came out, no surprise, it was all old white dudes. So it's the same story all over again: why should they be the ones deciding about our own language, especially while saying that language doesn't carry values and doesn't carry social constructs? I'm sure we can agree that that's not true.


So I hope that in the really near future we take steps forward in that direction as well.


ELEANOR DRAGE:

And just really quickly: language is a big deal everywhere, I know, but the French take it really seriously. I mean, the fact that people will protest over language, you know, it doesn't happen everywhere.


And it's super important, because the Académie française is this literal building on the banks of the Seine. It's very beautiful, very grand, sort of curved on both sides. And it's there that the 'integrity' of the French language, and here I have many inverted commas, is defended, according to this bunch of people who protect the French language; they are its guardians, the gatekeepers of the French language. But what's amazing about French is that, like every language, it so exceeds the boundaries of this protective force, because people speak verlan, this mixture of Arabic and French and many other influences.


French is heavily creolized throughout the world. You've got these amazing writers like Césaire and Édouard Glissant who have played with the way that French has been integrated into different ways of speaking. And French is this enormous colonial force, and the destructive force of the French language is still used to impose power over Vietnamese communities.


There's this unbelievable, horrific document that I found while doing my PhD, written I think in 2007, which is really recent, that talked about the authority of the French language and how it was still being used, and should be used, to influence elites in Vietnam.


So this is a massive issue, and for those of you who think we've gone completely off topic: large language models rely on language in order to function. There are huge debates over which languages, and how a language should be developed, in large language models. So this is a really important conversation, because these tools are not just being developed in English, but should also reflect the way that people speak, both within languages and between languages. What about bilingual conversations? What about conversations where people move between two different languages? I have a ton of friends who speak not just one language, but will constantly flicker between languages. So how is that represented in these tools that we are building?


KERRY MCINERNEY:

Yeah, I think that's really interesting. And on a much less intellectual side note than your really interesting exploration of this, Eleanor, as someone who is, I would say, bilingual, who has done a lot of work in French and English and really thought deeply about these questions of translation and multilingualism.


I will say, I'm always really intrigued by, slash slightly scathing of, novels where a character is meant to be multiracial or existing in multiple language worlds, but then, in the non-English language, all they ever say is things like 'my love' or 'mother'. To me, as someone who grew up in a multilingual household, that's a very strange picture of how these different languages might actually be used. Definitely the only things I can say in Cantonese or Fujianese are things that are usually very rude, because they're things my mom didn't want other people to understand.


So I can only have a very mean conversation with people. I think these issues around power and language are so central to the discussions we're having right now about AI ethics and large language models. And for our listeners, I'd definitely encourage you to check out some of the other fantastic episodes we have on this topic.


We have one with Meg Mitchell, which I'll link in the show notes, and also one with David Adelani from Masakhane, the organization that Giada mentioned earlier in the episode. But I want to come back to the question we started off with, which was around feminism in practice, thinking about the relations we have with each other.


Because I'm definitely in agreement with you here: the older I get, the less interested I become in people's stated feminist commitments, even though I think those are very important, and the more interested I become in just how they treat people and how they exist in relation to other people.


But I wanted to ask you about one particular kind of existing relation: women-only spaces. And for Eleanor and me, this is very much a trans-inclusive understanding of women; it's very distressing the way that woman as a category is being used to enact so much violence.


But I know that this is something you have explored at Hugging Face, and I'm just wondering if you could share your experiences of those spaces.


GIADA PISTILLI:

Yeah, sure. So, talking about safe spaces, I guess. For disclosure, we're a hundred percent remote company; we do of course have offices, but most of the conversations we have happen on GitHub or on Slack, which is our workplace. So, coming back again to communication and language, it's really important to have an inclusive language, and we have our DEI people who care really, really deeply about the way we speak.


Our language is always evolving, and speaking for myself, English is not my first language, so I try to learn new things every day to be more inclusive and more respectful of everyone, and our DEI people are really pedagogical about it, I would say. So that's really amazing. But coming back to the women topic: we had this really long conversation about whether or not we should have a women-only Slack channel, which is also private, and about people who don't identify as women. And so we started one, where we discuss lots of stuff, including kind of sensitive stuff. Unfortunately, and it's really sad, lots of my colleagues have experienced things like people asking whether they're old enough to do, I don't know, interviews with journalists. Our comms lead Brigitte, who is also a woman, was just noticing, and it actually happened just a week or two ago, she was like, that's funny: every time I suggest one of our male colleagues, nobody asks what his age is or what his expertise is. They just acknowledge the fact that he's a man, and he's good to go and do interviews. But sometimes when she tries to suggest our women colleagues, she has journalists saying, okay, but what's her expertise, and what's her age? And we're like: if she works here, she already has the expertise. What do we care if she's 21, 26, 35, 45?


I mean, it doesn't make any sense. It's important to have this kind of space, a safe space, where we can talk about our frustrations and the things we encounter. I have one foot in academia and one foot in tech companies, and my other women colleagues experience similar things: when giving conference talks, for instance, having those older male colleagues bring you down, or say that, once again, you don't have the right expertise. And of course we all suffer from this kind of imposter syndrome, which is super sad, and I don't really see it that often in our male colleagues.


And I'm talking about colleagues broadly, of course, not only at Hugging Face. So yeah, we had this discussion, I think six months or a year ago, and we wanted the channel to be private so that at least we could really exchange tips and be really supportive of each other, and say: okay, that's what happened at that conference, and I feel really bad.


Something like that happened to me less than a year ago. I was at this conference, talking about superintelligence and saying that we should focus on more concrete issues, like discrimination and climate change, and there was this male philosopher colleague who actually stopped me, interrupted me, and said that I was being irresponsible, that it wasn't the way of doing philosophy, that we should all be focusing on existential risk and superintelligence and blah, blah, blah.


I got so angry that I shared what happened with my colleagues, and they were all very supportive, because of course when it happens, you feel so isolated. You feel so stupid, and you feel like, okay, maybe I'm the one who's wrong, and you start questioning everything.


So yeah, getting back to Hugging Face, I think it's really important to have this kind of safe space where we can share frustrations and give each other tips. And sometimes it can be really simple, like: okay, next time you should just answer that, or, be more confident in yourself, because you deserve it.


You did amazing work. So I think it's amazing to have this kind of space where you can share everything.


ELEANOR DRAGE:

Totally, thanks so much for bringing that up. It's easy for women to seem mean to men; we always seem like we're being mean, and then a guy will say exactly the same thing and people will be like, yeah, that seems really reasonable. So it's just desperately unfair. And I really like the way you're talking about these female spaces, because it's something that seems to have been left behind in the separatist lesbian fantasies of science fiction or whatever. But all these beautiful, oh gosh, have you been to the Ladies' Pond at Hampstead Heath? There are these kinds of special women-only zones, but they seem to be something that has really gone out of fashion. Or men have moaned about being excluded, as if women-only spaces were somehow analogous or symmetrical to men-only spaces, but it's a totally asymmetric history, and those two things cannot be compared at all. There's that argument of, oh, well, you're excluding men too, and that is just such nonsense; it's the most ahistorical reading of what's going on. But anyway, we could talk to you about this for days.


But thank you so much for joining us. It was a real pleasure.


GIADA PISTILLI:

It's my pleasure. Oh, thank you for having me.


ELEANOR DRAGE:

This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry McInerney, and edited by Eleanor Drage.
