Kerry Mackereth

Jennifer Lee on Privacy, Surveillance and Civil Rights

In this episode we chat to Jennifer Lee, the technology and liberty project manager at the American Civil Liberties Union, or ACLU, of Washington, a non-profit organisation that fights for racial and gender equality and has been one of the leading voices in opposing facial recognition technology. Jen explains why we need to underscore the power dynamics in any decision to build, design, and use a technology, and why Microsoft's new $22 billion contract to provide the military with technology affects how the tech industry defines good technology. Whether it's the NYPD using automated license plate readers to track Muslim communities, or the 400,000 Michigan residents having their unemployment checks wrongfully withheld due to false fraud determinations, Jen tells us what can be done about the wrongful use of powerful technologies. We hope you enjoy the show.


Jennifer Lee is the tech policy lead at the American Civil Liberties Union of Washington, where she advocates for state and local legislation to regulate powerful surveillance and AI-based technologies. She leads ACLU-WA’s work drafting and implementing community-focused policies related to technology, privacy, and civil liberties.


Reading List


Data Capitalism, Data 4 Black Lives: https://datacapitalism.d4bl.org/


The Age of Surveillance Capitalism, Shoshana Zuboff: https://shoshanazuboff.com/book/about/


Algorithmic Equity Toolkit, ACLU-WA & Critical Platform Studies Group: https://www.aclu-wa.org/AEKit



Litigating Algorithms 2019 US Report, AI Now Institute: https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf


Watching the Watchers Workshop Toolkit, coveillance collective, ACLU-WA: https://coveillance.org/



Take Back Tech: How to Expose and Fight Surveillance Tech in Your City, Just Futures Law: https://justfutureslaw.org/wp-content/uploads/2019/07/Tech-Policy-Report_v4LNX.pdf


Power & Technology: Who Gets to Make the Decisions?, Jennifer Lee, Meg Young, P.M. Krafft, Michael A. Katell: https://dl.acm.org/doi/10.1145/3442420



Transcript


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today, we're talking to Jennifer Lee, the technology and liberty project manager at the American Civil Liberties Union, or ACLU [of Washington], which is a non-profit organisation that fights for racial and gender equality and has been one of the leading voices in opposing facial recognition. Last week the ACLU held Amazon accountable for its Sidewalk surveillance system, which the ACLU calls a 'nightmare to civil liberties'. Jen explains why we need to underscore the power dynamics in any decision to build, design, and use a technology, and why Microsoft's new $22 billion contract to provide the military with technology affects how the tech industry defines good technology. Whether it's the NYPD using automated license plate readers to track Muslim communities, or the 400,000 Michigan residents having their unemployment checks wrongfully withheld due to false fraud determinations, Jen tells us what can be done about the wrongful use of powerful technologies. We hope you enjoy the show.


KERRY MACKERETH:

Thank you so much for being with us today. Just to kick us off, could you tell us a bit about who you are, what you do, and what brings you to the topic of civil rights and technology?


JENNIFER LEE:

Well, first, thank you for inviting me here today. My name is Jen Lee and I lead the tech policy work at the American Civil Liberties Union of Washington, or, for short, the ACLU of Washington. The ACLU is a nonprofit organisation and the biggest public interest law firm in the US. And while I specifically work within Washington State, I often work closely with my colleagues in other states, as well as on federal-level policy. Together, we work in the courts, with legislators, and in communities to defend and preserve the individual rights and liberties guaranteed to everyone in the US by our constitution and our laws. We work on many different intersecting issue areas, which span health care, immigration, technology, gender justice, and reproductive care, just to name a few. But within the tech and liberty work that I do, our overall objective is to protect people's rights in the face of new and powerful technologies, and to make those technologies accountable to people, particularly the people and communities that have always been disproportionately affected by surveillance. This work happens in a few key ways. First, I work with our team to draft and advocate for community-centric policies and laws that create safeguards around data and technologies. Second, we conduct strategic litigation with our legal team to remedy harms where existing laws are unjust or have been violated. And lastly, I work closely with our organiser to convene the Tech Equity Coalition, or TEC for short, a coalition primarily composed of local organisations working to advance racial and social justice by holding technology accountable and lifting up the voices of folks who have been disproportionately disempowered in decisions about if and how technologies are deployed.


ELEANOR DRAGE:

Well, it's a pleasure to have you on - the ACLU has done some amazing work that we've been following recently. So can you help us solve our billion-dollar question: what is good technology? What should it look like? Is it even possible?


JENNIFER LEE:

So to answer the question of, can there be good technology? I think we first need to ask another question, which is: who gets to define what good means? Who gets to define terms like costs and benefits, and who has the power to decide whether a technology is a net positive or negative for our society? I think that if we're aiming to advance equity, it's really important to underscore the power dynamics in any decision to build, design, and use a technology, because it relates to the power dynamics in being able to define what is good and what is bad. Because we live in a structurally inequitable society, the majority of the time it's people in positions of privilege and power making these kinds of decisions, meaning that the technologies we see in the world are going to be reflective of the value choices made by those people in positions of privilege and power. These value choices that go into deciding if and how to build a technology inevitably intersect with how that technology is deployed, used, and regulated in our society. So for example, we know that the government, an entity with enormous power, often drives the production and deployment of technologies for purposes that the public, and even those who build the technologies, may not agree with. In 2018, the US Army awarded Microsoft a $479 million contract that included the acquisition of 100,000 HoloLens augmented reality headsets, to be used in both training and active battlefield situations. After that decision was made, 100 Microsoft employees wrote an open letter to the CEO and President of Microsoft, demanding that Microsoft cancel its contract with the military and stating that they did not sign up to develop weapons of war. But that contract wasn't cancelled. And in March of 2021, Microsoft actually won a new $22 billion contract to provide the military with this technology. Those who are subjected to military surveillance and victims of war would probably not define this technology as good. But clearly, both Microsoft and the US military see this technology as very valuable and worth spending $22 billion of taxpayer money on. And outside of the military context, there are applications for these augmented reality headsets that many would probably define as good in some contexts. But again, we have to be asking: who gets to decide whether a technology is good, who gets to define its harms, and who ultimately gets to decide if and how technology is built, used, and regulated? So it's certainly possible for any technology to be considered good, or at least have little impact, for a group of people while simultaneously being massively harmful for other groups. And it's possible that many harms are collective, societal harms that are not easily visible, like the erosion of privacy rights and other civil liberties. Another example I think of are technologies like automated license plate readers, or ALPRs, which are used for law and parking enforcement as well as for traffic management purposes. This may seem innocuous to many folks, but ALPR technology is a powerful location tracking tool, and thus was a key technology used by the New York Police Department to illegally target the Muslim community for over a decade after 9/11. Using this technology, the NYPD mapped where members of the Muslim community lived, stationed cars with this technology outside of mosques, tracked worshippers attending services, and placed the names of thousands of innocent New Yorkers in secret police files.
And while the surveillance programme was ultimately struck down as illegal, it placed unwarranted stigma and suspicion on the Muslim community, made people fear openly practising their religion or even appearing religious, and chilled free speech, which are harms that still resonate today. So to finally answer your question of what good technology looks like and how we can work towards it: if we're looking to build technology that is good for those who have been and continue to be marginalised, we should be asking whether technologies advance equity from intersectional, anti-racist, feminist, and decolonial perspectives. A technology's existence, design, and purpose should be fashioned with direction from those who have been historically disempowered. Otherwise, technology will undoubtedly serve to reinforce the status quo and perpetuate existing structural inequities.


KERRY MACKERETH:

Absolutely, and thank you for such a rich and detailed answer for thinking about these huge issues around, you know, whose good we are supporting and propagating through these new technologies. I actually wanted to ask you a little bit more about some of the examples and ideas you brought up in the second half of your answer, focusing on this idea of harm. What kinds of civil, social, and political harms might result from the lack of regulation around these new and emerging technologies?


JENNIFER LEE:

Without community-driven and people-centric regulation around technology, technology has the potential to bolster, and is currently bolstering, discrimination, violent police interactions, and both individual and collective privacy harms that threaten democratic freedoms. For example, we know that both government agencies and private companies are increasingly using powerful automated decision systems, or ADS, to make some of the biggest decisions in people's lives, from whether they receive health care, to whether they can receive housing assistance, to how they're sentenced to prison. And a concern with the use of these tools is that they are incredibly non-transparent and unaccountable to the people who are directly affected by the decisions they make. Often, the agencies and companies that develop these secret algorithms may not even know exactly how these tools make their decisions. So when an algorithm decides to deny someone disability benefits, cut somebody off from medications, or hold someone in jail without bail, that person may never know why. And the agency or company using the system may never know either. This kind of lack of transparency and accountability has created serious due process nightmares in states like Arkansas and Idaho, where people with disabilities have lost benefits that allowed them to live meaningful lives on their own. And in Michigan, use of these automated decision systems led to over 400,000 residents having their unemployment checks wrongfully withheld due to false fraud determinations. This algorithmic error ruined the lives of many families, some of whom were fined thousands of dollars and had to declare bankruptcy. It's important to note that even when automated decision-making systems are neither secret nor algorithmic, they can still perpetuate bias because they are based on flawed inputs. For instance, when Washington courts decide whether to release a defendant pre-trial, they sometimes use openly published risk assessment tools. However, because these tools use criminal history as a major input in the decision process, the results reflect the racism inherent in our criminal justice system, from discriminatory policing to discriminatory sentencing to discriminatory incarceration. Without regulation of these types of systems, people's lives, and especially those of the most vulnerable, will continue to be harmed. Additionally, because so many of the technologies we're concerned about regulating today are data-driven, it's incredibly important that we have strong data privacy protections. As you know, today many companies develop technologies that make it easier for corporations and the government to learn about the most intimate aspects of both our online and offline activities: from where we live and work, to what religion we practice, to what we purchase, what we read, and with whom we associate. And with more and more of our lives moving online, especially during this pandemic, we face increasing vulnerability and threats to privacy. This is concerning because privacy violations and misuse of personal information can lead to a wide range of harms, such as unfair price discrimination, domestic violence, abuse, stalking, harassment, and discrimination in areas including employment, health care, housing, and access to financial services. These harms disproportionately affect people with lower incomes and people of colour, subjecting these communities to outsized surveillance and data-based discrimination.
Unlike in the EU, in the US we don't have a comprehensive federal data privacy law. And at present our personal information is collected, used, and shared, often without our knowledge, much less our consent. So we definitely need privacy regulations. But that conversation again has to return to: who gets to decide what kinds of regulations are good or bad? What we've seen in Washington State and in many other states across the country are tech companies lobbying heavily to push forward incredibly weak privacy laws that provide only an illusion of privacy. For example, the Tech Equity Coalition has been opposing an industry-backed bill supported by tech giants like Amazon, Microsoft, and Google, that would not require companies to get permission to collect, use, and share people's information, would prohibit people from suing companies that violate their rights, and would prevent local jurisdictions like cities and counties from passing stronger privacy laws. These types of bills have been opposed by consumer rights organisations, privacy advocates, and civil rights groups like ours. And I'm happy to share that, due to a tremendous advocacy effort, we actually managed to kill this bad industry bill for the third year in a row in Washington State, preventing the passage of a bill that would have only entrenched a status quo that benefits companies but not people. And simultaneously, we are advocating for strong regulations that are driven by communities. So what we need are regulations around technologies that are driven by people, not industry players, and the people who have been and continue to be most harmed by technologies should be the ones driving that conversation.


ELEANOR DRAGE:

Fantastic, thank you. I also wanted to ask you about surveillance, because it's something that you do a lot of work on, and I think what's really interesting now is that you get companies building software that isn't necessarily purpose-built for surveillance, but that is being used by law enforcement and other stakeholders to watch people more closely, not always for good ends. So can you tell me how you and your organisation are thinking about surveillance, and what work you're doing going forwards?


JENNIFER LEE:

The harms caused by surveillance, privacy invasions, and unchecked technologies are neither new nor surprising. At its core, surveillance has been and will always be about power and control. History has shown us that surveillance technologies serve as tools for those in power to control and monitor those most vulnerable, whether or not the technology was originally designed for that purpose. For example, in 1713, New York City passed what was called a lantern law, which required only Black, mixed [race], and Indigenous people to illuminate themselves by carrying a lit candle at night, which was the technology of the time. Lantern laws were an early form of surveillance that incentivized and made it possible for white people to enforce slave systems and incarceration. As Simone Browne explains in her book, Dark Matters, lantern laws deputized any white person to stop those who walked without a lit candle after dark. This was an 18th-century legal framework for stop-and-frisk policing practices, established long before our contemporary era. I think we can say that candles are neither advanced nor high tech, but they were weaponized by people in power as a surveillance tool to restrict Black people's privacy and fuel the slave patrols and the policing of Black bodies and lives that continues today. But now the institutions that continue to oppress and brutalize the lives of Black, Indigenous, and other marginalised communities are equipped with tools truly unprecedented in their surveillance power, like facial recognition, location tracking, drones, and other AI-based tools. A key difference between these new powerful technologies and technologies of the past is that new technologies are powered by unprecedented amounts of data. These tools have powerful surveillance capabilities, and can even make important decisions about people's lives. If increasingly powerful, invisible, and unaccountable technologies continue to be built and deployed without adequate consideration of the impacts on communities, they will continue to exacerbate the structural racism and inequities already present in our society.


ELEANOR DRAGE:

Absolutely. And to combat this, something that you're looking at, and that Kerry and I look at quite closely, is collaboration: in feminist work we love collaboration, and we think collaborative work and community engagement are the key to solving these kinds of issues. So many of the people we look up to are doing incredibly challenging community work to resolve them, and that strikes me as something that the ACLU is doing so well. So can you tell me about how you work as an organisation, the kind of collaborative work that you do, what that entails, and what meaningful community engagement looks like?


JENNIFER LEE:

Definitely. So there is a lot of collaborative work that occurs, and community engagement is an important part of that. But too often, the community engagement processes that are created by corporate, governmental, nonprofit, and educational institutions don't actually serve to equip communities with decision-making power, but rather function to purposely or inadvertently co-opt community voices and legitimise decisions that have already been made. Task forces, community engagement meetings, and outreach processes that ask community members to draft reports, provide feedback, create recommendations, and repeatedly share their lived experiences often demand time and energy without also giving communities meaningful decision-making influence. It's important to recognise that historically marginalised communities are the experts on the impacts of technology and surveillance. So our goal in convening the Tech Equity Coalition is to bolster the ability of communities to share their expertise and exercise decision-making power. That means that we at the ACLU are sharing our power, and we are encouraging others to do the same. Unfortunately, community expertise in decision-making processes is often undervalued, while academic, technical, and legal voices are elevated as the only experts and given authority, even when those expert voices are not coming from impacted communities and are contradicting community expertise. So without articulating the specific objective of ensuring community decision-making power, these community engagement processes can function as a perfunctory and performative means to shield the status quo, even if it's done unintentionally. Some questions that we like to ask ourselves before embarking on a community engagement process surrounding technology and technology policies include: Have communities already vocalised their support for or opposition to the technology in question, and if so, how will this feedback be considered in decision-making? What authority do historically marginalised communities have in deciding if, and not just how, a system or policy is implemented? And lastly, in order to build community decision-making power, we need to focus on changing power structures within the different contexts in which we operate. Whether we're academics, artists, technologists, lawyers, organisers, or policymakers, we should be continuously working on sharing our institutional and personal power with historically marginalised communities and ceding some of our power to those who have less. So those are some of the ways in which we try to be collaborative in these community engagement processes, and also try to speak truth to power when we are engaging in them.


KERRY MACKERETH:

Fantastic, and it sounds like such important work. It's really encouraging to see these kinds of collaborative projects going on when thinking about the various kinds of algorithmic oppression and injustice leveraged through technologies. I want to ask you now: what kinds of political and legal solutions do you think are required to address the kinds of problems that you've discussed throughout this episode, from surveillance, to the invasion of privacy, to the ways that these new technologies often enforce or entrench existing hierarchical relations of power?


JENNIFER LEE:

I think we definitely need strong laws that are fashioned by the people who are directly affected by technologies, and particularly the people who are disproportionately harmed. That means that lawmakers should be actively seeking feedback and guidance that centres, for example, immigrant communities, BIPOC communities, trans folks, people with disabilities, farmworkers, sex workers, and so many more. They should not be centering the voices of industry lobbyists who are working on behalf of big tech companies. We should be passing laws that prohibit discrimination by an algorithm, passing strong data privacy laws, and sometimes banning technologies that pose immense threats to our civil liberties, like face recognition technology. We recently banned government acquisition and use of face recognition technology in King County, Washington, which is home to the headquarters of both Amazon and Microsoft. And this victory was due in large part to the tremendous advocacy of the Tech Equity Coalition, which wrote letters and emails, testified at public hearings, met with lawmakers, and did a tonne of public education that didn't just span this campaign, but actually started many years ago. So I'm really encouraged that we are gaining increasing momentum in this fight to create the policy solutions needed to truly hold tech accountable.


ELEANOR DRAGE:

What incredible victories, and it's difficult to overstate the importance and the efficacy of the ACLU. Everywhere I go, everything I read, the greatest achievements and the most powerful voices seem to be coming from you guys. So thank you so much for being here. I wanted to ask you: what's next? What should we be looking forward to?


JENNIFER LEE:

Well, thank you so much. I mean, I think it's incredibly important to say that these victories are in tremendous part due to the amazing grassroots organisations working locally, in partnership with us, to advance these victories. So I'm incredibly excited to continue working with the Tech Equity Coalition and other grassroots partners to push for strong data privacy legislation in the 2022 session, as well as algorithmic accountability legislation and additional face recognition moratoria and bans. In just the past few years, we've also worked to develop toolkits that help explain the history of surveillance, the surveillance infrastructure around us, and how we can take action. And I really look forward to continuing that work, because it's such an important part of our fight. Yep, that's it.


KERRY MACKERETH:

I mean, you say "that's it", but it's a huge and incredible amount of work that you're doing and have got planned ahead, and we absolutely wish you all the best in it. So once again, thank you so much for appearing on our show, it really has been such a joy. And we hope that for all our listeners, all of these examples, cases, and problems will inspire you to go out and learn more about what the ACLU is doing, and also the ways that you can get involved in these campaigns if you're based in the United States, and in the pushes for more justice around new and emerging technologies. So thank you very much.


JENNIFER LEE:

Thank you for having me.



