In this episode, we chat with Anita Williams, online counter-abuse policy and platform protection specialist, about the new challenges arising in the area of online abuse and how abusers exploit platforms and systems. We explore the multiple and intersectional harms that can arise from new technologies, the ethical problems around data collection, the protections required for content moderators, and the need to build women’s experiences into new technologies.
Content Warning: This episode contains discussions of online sexual abuse, grooming, child sexual abuse, and gender-based violence.
Anita Williams is a tech professional who specialises in online counter-abuse policy and platform protection. Her work includes envisioning all the ways in which people can abuse systems and building effective safeguards against those abuses. That process of identification, rectification, and implementation also requires acute awareness of, and attention to, disparities across race, sex, language, and localisation.
Born into a Sierra Leonean immigrant family, Anita became interested in counter-abuse issues such as labour and sexual exploitation after witnessing similar problems pervade her community. She obtained her bachelor's degree in Justice and Peace Studies with a concentration in Gender and Violence from Georgetown University. Anita then joined Google, where she worked as a Legal Specialist on child sexual abuse investigations, counterfeit operations, and political advertising transparency policy. After concluding her tenure on the Google legal team, she began a postgraduate degree in Technology Policy at the University of Cambridge's Judge Business School as a 2020 Gates Cambridge Scholar. Anita is now conducting a capstone research project on artificial intelligence standardisation as a graduate researcher with the UK's Centre for Data Ethics & Innovation. Outside of her work, she loves horror movies, hot weather, yoga, and her dog Stevie.
What the Guest is Reading (and Listening to):
The Door by Magda Szabó
The Bluest Eye by Toni Morrison
Here is Where We Meet by John Berger
A Sound of Thunder by Ray Bradbury
#48 What is Moral Progress? (podcast) by Making Sense with Sam Harris
Our descendants will probably see us as moral monsters. What should we do about that? by Robert Wiblin and Keiran Harris
She Makes Money Moves (podcast)
WSJ's The Future of Everything (podcast)
Transcript:
KERRY MACKERETH (0:01): Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.
ELEANOR DRAGE (0:32): Today, we're talking to Anita Williams, who's an online counter-abuse policy and platform protection specialist, about the new challenges arising in the area of online abuse and how abusers exploit platforms and systems. We explore the multiple and intersectional harms that can arise from new technologies, the ethical problems around data collection, the protections required for content moderators, and the need to build women's experiences into new technologies. We hope you enjoy the show.
KERRY MACKERETH (1:03): Content warning: this episode contains discussions of online sexual abuse, grooming, and child sexual abuse.
KERRY MACKERETH (1:11): Anita, thank you so much for being with us today! Just to kick us off, can you tell us who you are, what you do, and what brings you to the topic of feminism, gender, and technology?
ANITA WILLIAMS (1:21): Absolutely, so I'm currently obtaining my Masters in Technology Policy at the University of Cambridge, in the Judge Business School. And before this, I was working in the online counter-abuse space for just about four years. And at Google, where I worked, I was in the legal department, where I worked on a number of topics ranging from child sexual abuse investigations to counterfeit and fraud operations, as well as, most recently, political advertising transparency, and putting the systems in place to prevent people from abusing our online platforms, which is an uphill battle. But yeah, it's an industry and a space I've been really interested in since a young age, really, for a number of reasons. And it's definitely become popular in, like, the last five years with the rise of automation and predictive decision making and, you know, some worries of predatory online behaviour. It's definitely, kind of, a sector that's really booming, I guess, and so there's definitely a need for folks in the counter-abuse and operations space.
ELEANOR DRAGE (2:32): Awesome, thank you. So drawing on that a bit further, our podcast is called The Good Robot, so we ask, what is good technology? Can we even have good technology? And how do we work towards it? Can you tell us what you think?
ANITA WILLIAMS (2:47): Yeah, no, that's a big question. Um, you know, good technology is contingent on both the systems that we deploy, but also the users. And I think a big conversation that I've been having with folks in industry and academia is, you know, can we really rid our platforms of abuse and violence and hate speech and things like this, because really, what these are are vehicles for just some of the nastiest parts of human behaviour, right, and the way we communicate and the way we think can be exacerbated and made more extreme by certain levers that are pulled, without a doubt. But ultimately, I think our technology is only as good as our communities are, when it comes to making our online spaces safe and trustworthy. You know, so good technology, or technology, will be bad for as long as humans are, we're very clever at finding ways to break or bend rules, including social rules, and abuse systems. But, you know, it's an endless activity. But there, there is a lot of hope in that, because we can always work towards more safety. And I think that evolves with our societal contexts. However, work that I have been dedicated to in recent months is figuring out how to create a balance of safety and not overly restrictive legal or regulatory regimes as well, right, because we also want people to exchange ideas and be able to challenge ideas fairly and equitably. And we also, you know, want to encourage innovation, and that comes through that tacit transfer of knowledge. And so we also, maybe this is a very American perspective, because I know, you know, Europe definitely leads the charge in the regulatory energy for AI and other platforms. But, you know, there is definitely a balance that has to be struck in that, and so having more stakeholders involved in the self-governance and kind of oversight space is definitely necessary to make technology good, right? We need people from industry, civil society, academia, you know, nonprofits and governments, like we need a very robust and comprehensive set of folks participating in the self-governing space, right, like the Googles and the IBMs coming up with their AI ethics principles. We need more people in the design room of those things. So that there is a, maybe a lower yield of potentially paternalistic or overly restrictive regulation that could stifle the exchange of those ideas and potentially restrict expression under specific circumstances. So I think that's how we get towards good technology, though I always wonder if good technology can exist objectively and practically. I think it's kind of the beauty in the struggle of humanity, working towards good. And that's the best that we can do.
ELEANOR DRAGE (5:57): Can you tell us, as specifically as you can, what are the kinds of harms that result from the technologies that you were working with?
ANITA WILLIAMS (6:05): So, I think people would be really surprised to know the ways, just the crazy creative and worryingly so ways, that people try to abuse systems. You know, some of the instances I've come across would be in ways or on tools where you can't easily search for this information, right, and the purpose of conducting illegal activity online is to not be found. And so, you know, using certain tools like documentation tools that are very innocuous, or, you know, location tools that are very unsuspecting, would host potentially terabytes and terabytes of something like child sexual abuse imagery. And that is a challenge for many practitioners in the space, to figure out how to imagine all the ways that people can break a tool or use a tool maliciously, and try to deploy resources and policies and automated processes to address that. But yeah, it's, it's really imaginative to see the ways that people can do this, and I think it's something that is an ongoing challenge and requires relationships both internal and external to the firm, and reporting and surfacing these kinds of instances of bad behaviour.
And so the ways that I've been exposed to that have definitely been, like, platform based, but then you also have the, the larger technology development conversation in terms of accuracy, and facial recognition technology or voice recognition technology. And one example I usually come to is a discussion with my mother. So we're originally from Sierra Leone, and she has a West African accent. And, you know, I'm not going to argue how thick or light her accent is, because she will definitely get mad at me. But, you know, devices like Siri and Alexa have a very hard time understanding her, right. And the funny thing there (maybe not so funny) is that they have, those devices have an easier time understanding my father, right? So then you have an issue of maybe decibel level or hertz frequencies, or bass and baritone in the voice being depicted or detected. And her frustration is insurmountable when it comes to these things. Every time I'm home, she's telling me, like, 'Can you, can you talk to the box for me, talk to the box for me'. And that's because she feels really disempowered by the kind of automated voice systems that she has to go through when calling help centres not understanding her, and always having to rely on another person, usually a male, which in our house is my father, to be understood. And so these technologies need to gain access to the data necessary, through obviously consensual and ethical means, to make their, their recognition systems and targeting systems more robust, right, more robust and more accurate.
So in terms of robustness, it should be able to, it should be able to handle different conditions. And when it comes to picking up someone's voice or identifying their face, it should, you know, be able to identify them in the rain and the snow and dark lighting, and so on. But at the same time they, they need to be accurate and reach these benchmarks. And I'm sure a lot of folks who listen to this podcast have, you know, read the reports of facial recognition technology not serving darker-skinned women with the levels of accuracy that it does lighter-skinned men. And so that's, that's an ism, right? That's, that's a sexism, that's a racism, that's an intersectional cross-section of isms. And these technologies only follow the instructions that they're given, right. And if the instructions that they're given are based on data that is not enough, not representative, not nuanced enough, it can always improve, but if there's no effort for that improvement, then that's definitely something that can lead to a lot of harm. And we've seen that, I've definitely seen that, in child sexual abuse investigations, in terms of, you know, misidentifying the agents in the imagery themselves, which can lead to a number of investigative fallouts, but also in terms of law enforcement misattributing certain suspects to people who are wholly unrelated to their features or, or other identifiers, because of very low-quality or non-robust recognition technology, and so on. That's definitely harm if it perpetuates biases and just unfortunate historical discriminations against people. But that's where you need the multi-stakeholder involvement and kind of ethical oversight of the ways in which these systems can improve.
KERRY MACKERETH (11:09): Yes, absolutely. So why do you think it's so hard to fix the kind of problems that you've identified?
ANITA WILLIAMS (11:16): Right, yeah. So it depends on what aspect of the problem you're talking about. And I think, you know, as I've kind of underscored, data, data, data is definitely a big cornerstone of this, right? So the availability of data is really important, because we want to have these developers be able to access very large repositories of information so that they can make the systems better. But that availability also rests on consent and transparency of the collection of that data, which can jeopardise proprietary practices within a company sometimes, and so there's that balance to strike as well. Like, do we publicise our practices for obtaining data, if it's a practice unique to us, which can then undermine our competitive edge? And then maybe, if so, then we just don't get the data. And that's that. And so we continue on with a mediocre product. Or do we democratise this kind of data collection process, and be transparent about that, so that we can get more robust systems? It's not an either/or situation, there are trade-offs, but there is a middle ground that can be struck. And then you also have, within the issue of data, the diversity of sampling and the ethics of gathering data. So, you know, there was a recent exposé, maybe as of a couple of years ago, on how certain agents were trying to collect photos of Black people, darker-skinned people, and went to homeless communities in order to gather those, those kinds of feature databases. Well, you know, where was their consent? Did they know where this was going, and did they agree to be a part of this experiment, or not experiment, this data collection? And, you know, that's, that's definitely a worry. And I think consent is definitely a part of that process. But consent also takes a lot of time to gather from people, right? You know, they've got to read what's going to happen, they have to understand what's going to happen, and they have to consent to it. But the rate at which these technologies develop, you know, sometimes they don't have time for consent, right. And that's also another balance that needs to be struck under data. So that's definitely an issue there. When it comes to fixing these problems, it matters to have the right people investigating these things at a speed that keeps up with the abuse that happens.
And that's why working in an operations space is transformative, because it really teaches you how to streamline and make efficient these investigative processes. But, you know, as we've heard in the news over the past few years, the people that do this work, the people that, you know, view just utterly horrific, violent and morally just reprehensible imagery and videos and comments, like the people who review this in the operations space, they need care, they need to be taken care of as well. And that, in a business sense, is a resource, right? That's a resource that needs to be maintained and sustained, and it costs money to do so. And it sucks that it comes down to a numbers game, but really, sometimes working within the margins of profit and revenue doesn't always yield the most comprehensive treatment of the folks who are doing this very socially necessary and sometimes legally necessary work. But it doesn't necessarily generate revenue in its own right to do so. It's not, you know, a new product that people buy and can bring revenue to the corporation. It's, it's a liability-mitigating investigative process, right. And so they're working within margins, and they're working within financial upper bounds. And so I definitely think that there is a need for more reviewers, whether that takes the form of human or automated, obviously, we have to make sure the automated options are making really good and really accurate judgments and are calibrated to do so. But in order to get more people to do this review, you also have to take care of them. And I'm sure you may be aware, the contracting scheme within tech sometimes means that contractors who do the bulk of this legwork aren't given the kind of care opportunities they need to maintain their own sanity afterwards. And so that's, that's a huge difficulty in handling just the volume of proliferated abuse content, whether it's, you know, photos or videos or comments, which we have seen can lead to entire social movements, right? In one direction or another, whether we think they're good or bad. And, you know, how are 100 contractors in the Bay Area going to handle the speed at which this proliferates, you know, as an example? And so that's definitely a big problem.
And then one of the last difficulties I think I'll highlight is the nuanced, the nuanced challenges of live streaming, and how that's brought the speed issue all the way up to 10, right? Like, it's turned that all the way up to 10. Because now we have content that is being disseminated and just supercharged in real time. And there, there are entire spillovers on, yes, criminal activity and implications of investigation, but also even, you know, maybe sometimes afterthoughts on, like, copyright and trademark and, you know, counterfeit items and practices being transferred in real time through live streaming, but gone within a second of them appearing, right. And so that's a real, that's a real struggle to capture. And I think that's a question that's still ongoing for, you know, platforms whose entire model is based on that, such as, like, your TikToks and Snapchats and Instagram stories. And so, I know YouTube has done a lot of work on this over the past years as, like, a very large video platform trying to capture this. But they have the advantage of having, you know, a stored video as well. And so I really wonder what the solution is to live streaming things such as, you know, soliciting sex from minors, or minors who have access to these streaming platforms who solicit their own sexual services and are then being preyed upon by people who should not be paying attention to that. Or, you know, online grooming, or, you know, all these real-time social exchanges. You know, there's not a very clear answer on how to detect that. I do know that there's a lot of effort going into automating this, because that's really the, it's both our tool and our terror, sometimes. Like, our terror, because it's what has brought these, these issues to such a magnified level. But it's also a very helpful tool to meet the challenge where it's at. And so, you know, calibrating machine learning models to be able to make higher-accuracy identifications of skin tone, of sexually developed organs and features, or, you know, certain words that are being keylogged to identify grooming tactics, like these are all technologies and efforts that are being put in to try to address these problems. And, you know, I wonder how close we are to at least bringing the horses back after they've already been set to run, and so I think only the next couple of years will be able to tell us what practices going on in industry will really help alleviate this real-time speed and volume issue, but also, outside of industry, what may help curb certain features from being used in the first place, so that we can kind of put a lid on some of the worst of human behaviour that takes place online.
ELEANOR DRAGE (19:31): Can you tell us a bit about these teams that are in charge of doing this really important work, and how we should be better looking after them?
ANITA WILLIAMS (19:38): Yeah, that's something I've been thinking about a lot. And to the extent that I can comment on it, you know, I definitely think there's a, it's a human resource problem, right, like taking care of your employees in a way that, like I mentioned, allows you to still meet the bottom line and the yearly budget constraints that you as a department have, but also being able to expend the necessary resources so that there isn't constant burnout or turnover, or lack of vertical movement in these folks' careers, right. You know, we all reach ceilings in our career for different reasons. And some of that is intrinsic to the design of an organisation or a department within an organisation. And if content review is something that is kind of all that exists in your domain, and there's no opportunity for dynamism, such as creating policy, designing policy, or working externally with law enforcement and NGOs and, you know, publications, if all of those opportunities aren't afforded to you from, like, a career progression standpoint, then you're definitely going to get burned out, you're definitely going to leave these spaces. And then what that means is that there's a constant refresh of people who are being, you know, attracted or consigned to these, to these roles, that may not know what lies ahead of them, they may not know fully what impact this kind of work has on them psychologically, but they just, they enter that kind of hamster wheel over time. And so the work that is done may not improve, that could be a fallout of that. The, the level of scaling that needs to be done upwards to have the requisite input to make that happen may not be there. So there are a lot of potential negatives of continuing the system as I've come to know it, but also as it is generally popular or practised at present. And, yeah, I just definitely think, from a human resource perspective as well as from a career advancement perspective, you know, these opportunities for self-care and care provided by companies, this is very necessary to sustaining this, this workforce, and this really good and, again, sometimes legally required work.
But I do know that there is more than enough energy and momentum happening within the walls of Google and Facebook and all the others to meet these needs. I think it's a set of conversations that require trade-offs and strategy and a different prioritisation scheme. But they are happening. So let me, let me say that, you know, it's not all doom and gloom, there is definitely a lot of energy going into rectifying these, these issues, and, you know, really trying to support the people who are doing this work, but it'll, it'll be a while before the best way to do that is hammered out, especially to meet the needs of the problem and the scale, the volume, of the problem, but also to meet the needs of the business, which, you know, they do have a responsibility to go in a certain direction. So, yeah, it's, it's, it's a difficult task, but there's definitely a lot of momentum in trying to solve it.
ELEANOR DRAGE (23:05): On a very different note, what does feminism mean to you?
ANITA WILLIAMS (23:09): Yeah, no, I was preparing for this, for this talk, and I thought, how funny or how loose can I be? But no, feminism, it means, one, it means a lot to me. It means building, you know, female-specific considerations into the architecture of our world, because we do know how many industries and spaces have not taken into account the female form, right, from seat belts not being designed originally with female statures in mind to, you know, period products being designed in a way that was sometimes poisoning people, right. And so taking these considerations into account is exactly what feminism is. It's, it's building a world that is just purely more representative of 50% of the population. You know, though, to do that, there are questions of whether or not positive bias is necessary. Positive bias being, you know, requiring certain elements of female representation in certain spaces to rectify past discriminations, or simply relying on, like, randomly aggregated probabilities in order to generate the most qualified, if you will, candidate for something like recruiting, whereas we know that there are many more structural obstacles in the way of the most qualified candidate being female in the first place. So, um, yeah, it's, it's a question of what is the right way to build back better in that way. But I do think that having these considerations built in from the get-go is very important to equalising the playing field and pushing forward with that equality in the decisions that we make subsequently.
ELEANOR DRAGE (25:04): And in relation to technology, what does feminism mean, in technology today? How can it be practised differently? How are you thinking about it?
ANITA WILLIAMS (25:14): Yeah, yeah. It's funny, I was on, on a Reddit forum just this morning looking at an article that was posted. And it was about a woman named Susan Bennett, who was the voice of Siri. And she originally sat for sampling for the voice of Siri. And then I think it became fully automated over time. And so even though maybe her voice is the template, it's not necessarily her voice anymore, if you will. But, you know, the lack of maybe recognition or attribution of women's contribution to technology is a huge first piece, right? Like making sure that we are giving credit where it's due and where it's owed to the labour that goes into the technology that we consume. Both technology that's aimed towards women, if you will, but also just general-purpose technology. And identifying where women are left out is definitely something I've been thinking about a lot. I'm not sure about you, but I've definitely sat in, in team meetings where my idea has been restated, except more loudly and maybe with more confidence, by a male colleague. And that ends up being the, I don't know, the direction that the team goes in for a specific matter. And just making sure that technology is properly attributing the labour that goes into it, both in terms of what we've been talking about, content moderators, but also women specifically. But then also understanding the ways in which technology can maybe disproportionately harm females and women, and, like, understanding the false positive rates of recognition in facial recognition or skin tone recognition for certain groups of women, is very important to all things, you know, from autonomous vehicle development and object perception, and making sure that what you are perceiving is a human female, all the way to child sexual abuse investigations and ensuring that what a system perceives is the age and the race and the right demographic of the person that's being just horrifically treated, right, and all of the legal and judicial consequences of that. And so, yeah, how does feminism relate to technology and vice versa? It's, it's identifying where women are left out, and it's going about the collection of representative stakeholders to build them back in.
KERRY MACKERETH (27:53): Fantastic. Well, Anita, thank you so much for coming on our show, it's really been such a pleasure and I've learned a lot from our conversation, so thank you!
ANITA WILLIAMS (28:01): Thank you. It's been a pleasure. Thank you.