
The EU AI Act Part 2, with Amba Kak and Sarah Myers West from AI NOW

In the second instalment of our EU AI Act series we talk to Amba Kak and Sarah Myers West, the Co-Directors of the AI Now Institute, a leading policy think tank based in New York. Amba and Sarah talk about why policy narratives matter, why it's actually fake news that AI is moving too fast for regulation to follow, and why innovation versus regulation is a lazy and outdated maxim. Meanwhile, we chip in with some weird comments about why kitchen whisks are awesome, and why getting inundated by emails is the present-day equivalent of somebody badgering your cows in the 1800s. Don't forget to check out our first instalment of the EU AI Act series with Daniel Leufer and Caterina Daniels from Access Now, which is available on YouTube, Spotify, Apple, or any of your other favourite podcasting platforms. We recorded this episode back in January 2024.


Amba Kak is the Co-Director of the AI Now Institute. She has spent the last fifteen years designing and advocating for technology policy in the public interest, ranging from network neutrality to privacy to algorithmic accountability, across government, industry, and civil society, and in many parts of the world. Amba brings this experience to her current role co-leading AI Now, a US-based research institute, where she leads on advancing diagnosis and actionable policy recommendations to tackle concerns with artificial intelligence and concentrated power. Amba recently completed her term as Senior Advisor on AI at the Federal Trade Commission. Prior to AI Now, she was Global Policy Advisor at Mozilla, and previously served as legal advisor to India's telecommunications regulator (TRAI) on net-neutrality rules. She was also selected as one of TIME Magazine's Influential People in AI for 2024.


Sarah Myers West is the Co-Director of the AI Now Institute. She has spent the last fifteen years interrogating the role of technology companies and their emergence as powerful political actors on the front lines of international governance. Sarah brings this depth of expertise to policymaking in her current role co-directing AI Now, with a focus on addressing the market incentives and infrastructures that shape tech's role in society at large and ensuring it serves the interests of the public. Her forthcoming book, Tracing Code (University of California Press), draws on years of historical and social science research to examine the origins of data capitalism and commercial surveillance. Sarah's award-winning research is featured in leading academic journals and prominent media platforms including the Washington Post, the Atlantic, the Financial Times, Nature, and the Wall Street Journal. She regularly advises members of Congress, the White House, the European Commission, the UK Government, the Consumer Financial Protection Bureau, the City of New York, and other US and international regulatory agencies, and has testified before Congress on issues including artificial intelligence, competition, and data privacy. She recently completed a term as Senior Advisor on AI at the Federal Trade Commission, where she advised the agency on the role of artificial intelligence in shaping the economy by working on competition and consumer protection matters.


READING LIST:


The AI Now 2023 Landscape Report, Confronting Tech Power: https://ainowinstitute.org/2023-landscape


AI Nationalism(s): Global Industrial Policy Approaches to AI: https://ainowinstitute.org/ai-nationalisms



TRANSCRIPT:


KERRY:

Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us. We'd also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode.


KERRY:

In this episode, we talk to Amba Kak and Sarah Myers West of the AI Now Institute, who are the co-directors of this leading policy think tank. In the episode, which is the second instalment of our EU AI Act series, Amba and Sarah explore why different tech policy narratives matter, the differences between the US and EU regulatory landscapes, why the idea that AI is simply outstripping regulation is an outdated maxim, and finally, their policy wish list for 2024. We hope that you enjoy the show.


KERRY:

Brilliant. Thank you so much for joining us, both of you. And just to kick us off, could you introduce yourselves and tell us what has brought you to thinking about AI regulation and power?


AMBA:

Yeah, I'm really happy to be here. I actually began looking at AI and other kind of technology markets through the lens of the law.


So that is my disciplinary background and has been my training. How do you regulate it was the starting point. And if anything, my career, and particularly the last few years working with Sarah and our team at AI Now, has been about unlearning that: not thinking of law as the starting point, but thinking about it more critically.


Yeah, I would say I've worked on a range of the greatest hits of tech policy: network neutrality, digital copyright, big data. And I'm not actually sure when that morphed into becoming about AI, but at some point it did. So all of that feeds into what I said about looking at the law as a starting point.


Because I've seen that law and regulation are always entrenched in a particular set of interests and can reflect existing power dynamics. It is rare that the law actually functions to perturb those power dynamics.


And so much of our work is looking at this particular policy window and this opportunity that we have, particularly domestically in the U.S. right now, to use law to perturb and shape the existing power dynamics, particularly the imbalance, as we would call it, between power as it is distributed within a very small set of tech companies and the rest of the public.


And Sarah.


SARAH:

Yeah. In the last 15 years or so I've been working to interrogate how tech companies have shown up around the world in ways that really change the political landscape.


In prior research roles I wrote a book that delves into how industry lobbying and regulation shaped what became the surveillance business model for the Internet. And I think that's a big part of the origin story of AI as well.


I've spent a lot of time looking at how technologists have historically really over-indexed on hype and the potential for innovation, but not really looked in an in-depth way at the dynamics of power and how that shapes the way that technologies work out in the world. And then more recently, as I've done more policy-oriented work, also seeing the role of regulation as this shaping influence, and how enforcers are able to more assertively shape the role of artificial intelligence in the economy and in society at large. And so in my work at AI Now, we're really trying to blend looking at the market and incentives, the technical infrastructures, and the policy interventions that can shape the role that tech can play in society at large, and really to try and put forward an agenda that, you know, inserts meaningful friction in ways that make sure that the trajectory of AI is serving the interests of the broader public.


ELEANOR:

Ooh, meaningful friction. Kerry and I do a lot of work on frictions, and we're looking forward to hearing you talking about that. But first, can you tell us, what is good technology? Is it even possible? And how can pro justice policymaking help us get there? And we'd love you to talk very specifically about what kinds of technologies are easy or good to regulate, what kinds of good technologies are possible through the policy or regulation lens that you work with.


SARAH:

I mentioned earlier that, in a lot of ways, artificial intelligence really builds on top of this pre-existing surveillance business model that emerged for the Internet, and an important part of that story is that a few tech companies, in developing that business model, began to amass network effects: massive amounts of concentrated power and control over key resources. The infrastructures needed to build AI, through compute and cloud infrastructure; hoovering up lots and lots of data about people; amassing the most skilled labor able to train and deploy AI systems. And to answer your question, our definition of AI is in many ways something that proceeds from that dynamic of concentrated power, and I think that also influences the possible future scope of what could be AI for the public good, because it means that we would need to contest some of the underlying presumptions about what AI is and how it works out in the world.


The notion that AI, in order to be accurate or powerful, needs to operate at a certain level of scale carries with it many different ancillary harms, from environmental effects to the proliferation of hateful and discriminatory content to the way that it is used in the world, which tends to be to either automate austerity or ramp up control.


And I think we very much need a proactive vision for AI that serves the public good, but to get there, I think we're really going to need to break down this existing trajectory and redefine the entire field from the ground up. Now, thankfully, AI has meant lots of different things over the course of almost 70 years.


So there is precedent for thinking about it somewhat differently than in the present moment. I think this flavor of AI really originates back around 2012, but preserving that scope of things that could be otherwise is going to be really critical if we do want technologies that can serve the public good.


AMBA:

We've been fed a particular narrative that the advancement of AI, and now of larger and larger scale AI, intuitively sounds like progress. But I think good technology, again to answer your question, is where technology isn't the starting point. It is the means, not the end. It is not a solution in search of a problem. We start with: okay, we have an education crisis, how do we solve it? And the people answering that question aren't tech executives or startup bros in Silicon Valley, but educators and public school teachers. So that's an example of how we shift the dynamic and the decision making around whether, if at all, and on what terms, technology is part of the solutions for particular social challenges we face.


ELEANOR:

Can I ask you a quick-fire question that we didn't plan? Do you have an example of a good technology? For example, the whisk is one of mine. Kerry, go.


KERRY:

Gosh. I feel like maybe the recipe book or something where you can like record like recipes and ideas, things like that. I definitely use voice notes a lot as well. I'm a big voice note person. So maybe voice notes.


SARAH:

I want to know why the whisk?


ELEANOR:

There's something very beautiful about the way that the metal is shaped, it's easy to produce, it's light, it creates lightness in the act of whipping. I think it's a very nice ornament as well as a useful tool.


AMBA:

I was going to say shared documents. The fact that we can co-edit stuff together has, I hate to use the word revolutionized, but it has revolutionized the way in which we work with each other. I'm trying to think of whether there are negative consequences. Maybe the fact that, and this goes to how you conceptualize or design tools, you can sometimes be surveilling what someone else is doing on your document, as I often do, to be like, where is that person editing my document? That sometimes feels like it might be a little bit invasive in terms of a team dynamic, if you're co-working on something together and someone is watching your every keyboard stroke. But I think in general the idea of enabling certain forms of collaboration is such a powerful change that we've experienced thanks to digital technologies in particular over the last few years.


KERRY:

Yeah, that's a great one. Sarah, do you have a technology in mind from very mundane to very transformative?


SARAH:

Yeah. I keep thinking of a laundry list of things that are technological that I use day to day, from soap and vitamins to, I don't know. I think that there are many things that we don't necessarily think of right now as technological, but that were nevertheless particularly innovative or transformative in the moment that they entered our lives. And I think what makes them good technologies is also likely that moment of social transformation.


Think about the entry of antibiotics into the world and the social dynamics of how that changed medicine. And now we're grappling with the long tail of harms and the risk of antibiotics becoming ineffective. And what do we do then, if we no longer have this technology in our lives?


And I guess there are just so many corollaries to think through in this present moment, which feels like time is so compressed in how we deal with artificial intelligence, that I wish we had more time and space to really think through the lessons from those other "good", in quotation marks, technologies.


KERRY:

Absolutely. And something that we're actually hosting at Cambridge on March 15th, which I've been aggressively advertising, is a conference specifically on AI analogies, or thinking about what we compare AI to. For example, AI is often compared to, say, nuclear weapons and their governance structures.


And what are some alternatives that we could be comparing AI to? So there's a researcher I know, for example, who works on ideas of technological restraint. What happens if we look not at the paths that were taken in history, but actually the paths that deliberately weren't taken, and what could that tell us about AI governance?


So that question's fascinating. And Eleanor, on a side note, if you would like an example of a negative use of a whisk: I went on a school field trip to the Auckland Museum of Technology, which all school children in Auckland have been to. And the school trip sadly ended relatively early because one girl wound her friend's hair into a sort of very old whisk, and then they could not unwind her, and I don't know if we were ever welcomed back. Even the whisk can be misused. Anyway, we have wanted to have you on the podcast for so long because we super admire your work and the work of the AI Now Institute. For our listeners, I also have the privilege of working with Amba and Sarah.


But we particularly wanted to bring you on for this series, which is on the EU AI Act and thinking about the future of AI and regulation. It's been a massive year for thinking about how the regulatory landscape has been shifting. Our previous episodes in this series have very much explored the ins and outs of the EU AI Act, but we wanted to bring you on because you have a particular perspective, one that the AI Now Institute has been, I think, really successful in pushing on the policy front. So we wanted to know: what, in your opinion, are the strengths and the limitations of the EU AI Act as it currently stands? And for our listeners, we're filming this near the end of January; it's a constantly shifting terrain, and we'll probably publish this maybe a month or two on from the recording date. But right now, 22nd January, what are your feelings about its broad strengths and limitations?


AMBA:

Can I start with a slightly controversial opinion in general? There's often a pretty large gap between the stories that are told about a particular law and the change it's going to make in the world, and A, what it actually says if you get down to the nitty gritty, and B, how it is going to be implemented. Or maybe there are actually three different things, right? And the distance between them, depending on the example you take, can actually be very large.


But the controversial part of the opinion, I'm wondering if you guys can hear my baby screaming.


KERRY:

We can't, but after this I would love baby updates.


AMBA:

Yeah, so the distance between these three different stages or parts of the evolution of a regulatory journey can actually be pretty large. And I guess the controversial part here is that each of these parts of the law is important on its own terms, because they actually do shape the market.


So for example, the stories that we are hearing about the EU AI Act, or did hear at some point: that this law is going to regulate the AI sector; that it is a counterpoint to the idea that innovation needs to proceed unrestricted; and then more specific stories, like the EU AI Act is going to draw a red line around the worst uses of AI.


All of those really do send important market signals. They send important signals to the public that actually this whole AI domain is not moving too fast for the regulators. Literally, at the top of my list of sayings that are very popular, which I hate, is: oh, the law is moving too slowly and technology is moving too fast, right?


It pierces through that. And it says: actually, regulators and lawmakers in the EU have been trying to figure out how to regulate AI for five years now, or more than that, since before this whole ChatGPT moment. They've actually reached far enough to legislate on this in 2024. And so that's another important one.


The fact that you might actually have certain prohibited uses of AI technology is another massive signal. So these are the stories, or the best stories, I think, that we have heard about the AI Act. I think they have had value already, before we even get to what the final law says, and it changes every day, and before we get to how any of this will be implemented. That has value on its own terms. But on the other hand, there is a problem when these stories about the law become really watered down when you actually get into the nitty gritty of it, when you find, as European civil society organizations have been pointing out, that the final trilogue-negotiated text of the AI Act seems to have fairly large exceptions where it counts: when it comes to law enforcement use, migration, facial recognition, affect and emotion recognition. We expected really clear boundaries, but it seems like the carve-outs are overshadowing the headline there. And there are craftily drawn-out exceptions that might water provisions down to the point of rendering them meaningless in some cases.


And I think that can be very problematic, because A, not a lot of people are going to get to that level of detail to point out that actually the reality is very far from the headlines you're hearing, and meanwhile, the law is going to pass. If anything, and I think we're not best placed to draw attention to this, really heeding the warnings of civil society organizations like EDRi and Access Now that have been calling attention to these exceptions is really important, especially because this law isn't yet fully baked.


What would victory look like here? If the grand narratives about how the AI Act is going to regulate the AI sector in a very strong and aggressive way meet the reality of what that text eventually says. I think we still have that window of opportunity to bridge that gap, but that gap certainly exists.


ELEANOR:

Very well put. And I'm so happy that you said that thing about regulation not being one step behind, because that's something that we're told all the time and I'm so sick of hearing it. And you're totally right; I think it's too easy to say that we'll never catch up to AI, that it's always going to be in front of us.


Can I ask Sarah: what are the two key differences between the EU AI Act and other legislation, for example in the USA?


SARAH:

The real front lines of policymaking on artificial intelligence have started from these much older existing legal frameworks, and really a much more assertive posture by enforcers toward the tech industry than we've seen in the last several decades.


So one of the distinctions is, and we're certainly hearing plenty about Congress holding forums and introducing proposals, because they have been looking at this for years now, and also potentially exploring a more omnibus framework. But really, the core of how the US is approaching regulation is coming through casework by the Federal Trade Commission. It's coming through work by the Consumer Financial Protection Bureau. It's coming through the EEOC, the Equal Employment Opportunity Commission, where it's focused on how AI is being used in hiring and how that impacts discrimination.


It's these kinds of measures, using existing frameworks, that are, across the board, I think, sending a clear signal, and then enacting that signal, to say that, in fact, there's no exemption from existing laws just because we're talking about artificial intelligence. And what's particularly significant about that in this market is that many AI companies have taken this approach where they're just going to release systems into public use, make them commercially available, but not necessarily engage in deep work on ensuring that they're compliant with existing law, or develop deep testing and validation frameworks. So a good example of this is the recent case against Rite Aid, which is a pharmacy chain in the U.S., where Rite Aid was using facial recognition in its security cameras and had a very widespread issue of basically asking people to leave the stores, or calling the police on people, based on security flags through the facial recognition technology. And on the front line, that choice and process of how they implemented the technology was deeply racist.


It was discriminatory against communities of color. It had higher rates of error for women. So across the board, its effects were racially discriminatory, and they also did very little to test or validate the technology, and they received a five-year ban for that use of the technology. So it's to those kinds of precedents that I'd look to really understand the U.S. regulatory landscape. And then on top of that, there are initiatives from the White House in the recent executive order on AI, and lots of activity within Congress, although we'll see what actually gets passed at this stage, since we're also talking about government shutdowns on almost a monthly basis now.


And so as a comparative framework, I think enforcement-first has been the U.S. regulatory posture.


AMBA:

And actually, what Sarah is saying is often lost in the broader stories we hear about the EU versus the US. I think the laziest perception that you often see in the media is that the EU is regulating but not building technology, and at worst, and we heard this in the context of the EU AI Act, that the AI Act was going to be a further dampener on European business innovation.


On the other hand, the US is this kind of laissez-faire, do-whatever-you-want jurisdiction: we don't take that EU approach towards mandatory regulation, and we focus on voluntary commitments like the ones we saw from the White House. But actually, I think the examples that Sarah highlighted are so critical because they show you that sometimes we're not looking in the right places. There actually has been pretty strong enforcement at the sectoral level from enforcement agencies on what are, again, quote-unquote AI technologies; they may not have AI in their banner, or they might use different phrases, but you've seen a much clearer and more aggressive enforcement posture in the U.S. in this administration, for sure. I think there is a way to go, but this is to say that on both sides the claims are inflated, and I'm not sure that they do the policy fights on the ground much good when they're projected in that way.


SARAH:

I think what's also key is that it's a clear demonstration in response to these questions you sometimes hear, like: how does the law innovate at the speed of technology?


The example that I just gave with the Rite Aid case rested on a federal statute that was passed in 1914, and the law as written can stretch into the technological future. And certainly that doesn't mean that we just leave it at that and ignore the need to pass strong laws. Among many reasons, we need bright-line rules, because with the current state of affairs we're trying to fix so many things after the fact, and bright-line rules would much more clearly circumscribe some of the most harmful uses.


But certainly, it's not lost on me that these much older statutes have very strong applicability to an otherwise fast-paced domain.


KERRY:

That's really fascinating. And I think this is something that Eleanor and I are also really interested in: how can we better leverage existing protections, for example protections against false advertising, to hold tech companies to account for the claims that they make about their products, rather than saying we need a new set of legislation for everything related to AI, which in many ways further exceptionalizes it in a way that perhaps isn't helpful.


On a more fun note, I was talking to a lawyer who works in this area, on the history of legislation in relation to these new technological developments, and he was telling me about some of the more creative uses of earlier statutes, including, I think, one in the UK, where it was something to do with cyberspace and the extent to which you could spam someone's servers. They were using a law from the 1800s which says you're not allowed to go into a paddock and badger someone's cows, or bother their cows, and that's considered badgering.


And so they were trying to argue that this was the modern-day equivalent of going and bothering someone's cloud cows: sending them too many emails. So I quite like to think now, whenever I get junk mail, that I'm being badgered and that myself and my cows are very displeased. But I actually wanted to turn to something that the two of you created last year, which is the AI Now Institute's 2023 flagship report.


It was on the idea of the concentration of power in big tech and how this is a meaningful threat to human rights: specifically, how it is the source of a lot of the issues that we're seeing with AI being developed so quickly and at scale. And so something I wanted to ask you was: with what we know about the EU AI Act right now, how effective do you think it is in tackling this concentration of power in big tech?


AMBA:

Yeah, I think that there are limits, in general, to what regulation can do to perturb the concentration of power and resources in very few companies. I do not think there is one quick fix, or that there ever will be; I think the answer to that challenge will come from a broader political movement, not one particular regulation or policy. So that's one. But more generally, we should make no mistake: once it's actually passed, and once it is law, this is a law that there has been tremendous lobbying to water down, to make sure that maybe it doesn't even get off the ground. Even as recently as the end of 2023, we saw almost an entire U-turn on the question of whether foundation models and general-purpose AI should be regulated at all, on the grounds that this would hamper Europe's competitiveness, and we saw that familiar, and one would think outdated, trope of regulation versus innovation become the meme of the day. And I think it's no mean feat when any regulation actually goes through; you just have to look to the US, where it has been basically impossible to pass legislation on all of these issues, despite there being a glut of proposals for the last several years now, right?


So I guess the first thing to say is, the fact that you will have a regulation that puts in place baseline documentation, transparency, and accountability norms for AI technologies is something; it definitely counts. Where it could be particularly strong is where it's able to actually proscribe or prohibit certain kinds of use cases of AI, because I think that is a message to industry more broadly, to say that this is not a train that you are running independently. Society is going to decide the limits and shape this technological trajectory.


So that's an important one. But I think the way in which concentration of power is sometimes narrowly understood is as a kind of straightforward question of competition. And there, I would say the AI Act is not strong at all on that dimension of the issue. You would probably look to the Digital Markets Act, which, despite its flaws, is pretty landmark legislation when it comes to those issues. The EU Data Act also has provisions that get at self-preferencing, interoperability, and data portability, which are all, I would say, part of the more traditional toolbox of how you understand policy interventions that help create more competition and reduce barriers to entry in the market in general.


ELEANOR:

Wonderful. Thank you. And we're going to end on one policy or regulation wish for 2024 - one minute each. Sarah, can you go first?


SARAH:

So of the many interventions that I would love to see happen on artificial intelligence, particularly in 2024: it's a moment where the market is poised to become even more cemented in ways that favor incumbent firms, like the firms that run their own cloud infrastructure businesses, that are able to absorb the pressure on profit margins that I think a lot of smaller entities are facing.


And that's where we tend to see some of the worst behavior emerge. What I would love to see in 2024 is swift movement on the competition front that will meaningfully curb the concentrated power that these companies have amassed, and in particular really tackling the business model in artificial intelligence in ways that can prevent this race to the bottom that we're currently seeing emerge before our eyes.


ELEANOR:

Brilliant. Amba?


AMBA:

Yeah, I think it would probably be bans. So getting to a point where we have very clear and conspicuous prohibitions on some of the worst and scientifically debunked versions of this technology: emotion recognition, and the application of biometric technologies in the workplace, in policing, and in migration.


And I give these particular contexts because I think the whole conversation on the concerns with AI started because it was people of color, and people who were otherwise marginalized in society, who were being hurt the hardest by experimentation with these new technologies.


And somewhere in the hype cycle of 2023, it seems like these harm cases and these use cases have been literally pushed to the margins. And so bringing them back front and center and saying: we already have close to a decade of evidence that these systems don't work, they disproportionately hurt people who are already systematically marginalized in society, and they should just be banned.


So I think taking a clear position on those would be the wish list for 2024.


KERRY:

Oh, brilliant. I very much hope that your 2024 wish list comes true, but in the meantime, thank you so much for coming on The Good Robot. We honestly have been so looking forward to having you on. And for anyone listening, of course you can find the transcript for this episode at www.thegoodrobot.co.uk. We'll also link you to recent publications by the AI Now Institute, including the 2023 report we mentioned, but also recent reports on things like compute power, and those will all be available for free online. But once more, thank you so much, Amba and Sarah, for coming on. It really is such a pleasure to talk to you.


Thank you for having us.


ELEANOR DRAGE:

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor Drage.


