
Mar Hicks on the Unexpected History of Computing

In this episode, we talk to Mar Hicks, an Associate Professor of Data Science at the University of Virginia and author of Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Hicks talks to us about the lessons that the tech industry can learn from histories of computing, for example: how sexism is an integral feature of technological systems and not just a bug that can be extracted from them; how techno-utopianism can stop us from building better technologies; when looking to the past is useful and when it isn't; the dangers of the 'move fast and break things' approach, where you build technology just to see what happens; and whether regulatory sandboxes are sufficient to make sure that tech isn't deployed unsafely on an unsuspecting public.


Mar Hicks is a historian of technology, gender, and labor, specializing in the history of computing. Hicks's book, Programmed Inequality (MIT Press, 2017), investigates how Britain lost its early lead in computing by discarding the majority of its computer workers and experts--simply because they were women. Their current project looks at transgender citizens' interactions with the computerized systems of the British welfare state in the 20th century, and how these computerized systems determined whose bodies and identities were allowed to exist. Hicks's work studies how collective understandings of progress are defined by competing discourses of social value and economic productivity, and how technologies often hide regressive ideals while espousing "revolutionary" or "disruptive" goals. Hicks is also co-editing a volume on computing history called Your Computer Is on Fire (MIT Press, 2020). They run the Digital History Lab at Illinois Tech and maintain a site about their research and teaching.


 Reading List:


Programmed Inequality


Your Computer Is on Fire


"Hacking the Cis-tem"IEEE Annals of the History of Computing (March 2019)



"A Feature, Not a Bug"SHOT: Technology's Stories (December 2017)


"Computer Love: Replicating Social Order Through Early Computer Dating Systems,"Ada: A Journal of Gender, New Media, & Technology (Fall 2016, issue 10)


Transcript:


KERRY MCINERNEY:

Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot Podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list from every guest. We love hearing from listeners, so feel free to tweet or email us. And we'd also so appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, we talked to Mar Hicks, a historian of technology based at Illinois Tech and author of Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Hicks talks to us about the lessons that the tech industry can learn from histories of computing, for example: how sexism is an integral feature of technological systems and not just a bug that can be extracted from them; how techno-utopianism can stop us from building better technologies; when looking to the past is useful and when it's not helpful; the dangers of the 'move fast and break things' approach, where you build technology just to see what happens; and whether regulatory sandboxes are sufficient to make sure that tech isn't deployed unsafely on an unsuspecting public.


We hope you enjoy the show.


KERRY MCINERNEY:

 So thanks so much for joining us today. So just to kick us off, could you tell us a little bit about who you are, what you do, and what's brought you to thinking about feminism, gender, and technology?


MAR HICKS:

Sure. I'm Mar Hicks and I'm a historian of technology. I'm also a professor who teaches about the history of technology and the reason that I do what I do partly has to do with just what I'm interested in. And it partly has to do with my past. I used to be a UNIX systems administrator and so I've been interested in how these large infrastructural technological systems work for a long time from the side of, you know, the person trying to make sure that they don't fail.


That was my job for a while, and now, um, I also come at it from the perspective of being somebody who looks at things from the bird's-eye view and likes to analyze what's going on with them.


ELEANOR DRAGE:

Wonderful. Well, I was so thrilled to have you on the show. It's a real honor and I can't wait for you to answer our three good robot questions: what is good technology? Is it even possible? And how can feminism help us work towards it?


MAR HICKS:

Those are really big questions. What is good technology? Is it even possible? And how might feminism help us work towards it? Um, yeah, so I think good technology is going to be different depending upon whom you ask, because good technology serves the needs of the individual who is using it, um, but more often the community that it's embedded in.


So technologies that serve individuals well might actually not be good technologies if they're doing harm to the broader community in which that individual is ensconced. Um, you know, one example of this is that there are a lot of convenience technologies that might make something easier for an individual. Like maybe your surveillance camera lets you see exactly when packages are delivered to your porch, but that might be at the expense of your community's privacy, you know, because that means that people who don't know or don't want to be surveilled are surveilled every time they walk by your house.


So I would say that good technology is something that is community-focused, community-based, and that, you know, there has to be some sort of collective understanding and agreement about what makes it good. And it's not just a tool that lets one person sort of get a leg up over everybody else. Um, is it possible?


Yes, it's definitely possible and we actually see examples of good technology around us presently and throughout history. One problem is that although it is possible, oftentimes it's not the most profitable option, so when you have a system that selects for the most profitable technology, whether or not it's the best, then you know we're gonna have trouble getting to better technologies.


And I think that's one thing that we're seeing in the UK and in the United States and in many other countries right now. And then the third thing: how can feminism potentially help us get to better technologies? Well, I think any system of thought or any philosophy that is focused on communities and on equality can help us get to a better set of technologies for living with.


So I would say that that's the role that feminism can play in this.


KERRY MCINERNEY:

That's fantastic. And you know, Eleanor and I come from a background where we work a lot on gender and technology related issues, and I really love what you're kind of saying about how creating good technologies is very much thinking about, you know, how do we make technology that benefits a community rather than kind of a corporate stakeholder, for example, even if that means, you know, that a technology is going to be less profitable, but more people-first. Um, I think when we are talking about issues to do with gender and technology, often these broader kind of structural commitments get a little bit lost, uh, because most of the debate that we see around gender and feminism and tech tends to be about, uh, the inequalities of who is sort of succeeding in the tech industry. And this of course really, really matters, because women are grossly underrepresented in the field of AI, the field that we are in, uh, but also in the tech industry, you know, more broadly. Um, but what I think a lot of people don't know, and what you've done a huge amount of work to bring to light, I think, is that this hasn't always been the case.


We might be thinking of computing in particular right now as, like, a very masculine field, but, you know, previously women had a primary if not fundamental role in the computing industry. So could you tell us about the historical gender dynamics of the British computing industry and why they changed?


MAR HICKS:

Yeah, sure. I would love to, and I think that's a really good point you make, that we have to be attentive to the way that inequalities that exist in the present haven't always existed in the past, and sometimes they haven't existed in the past in ways that are a little bit surprising. So for instance, what you alluded to in my work is that there used to be a lot more women involved in computing on both sides of the Atlantic. But my book is specifically about the United Kingdom, and the reason for that, you know, I'd love to say it was because we had some sort of hidden feminist past, but really the issue was that this was before the field of computer programming and computer operation and systems administration and so on was seen as a high status field. So as a result, it was seen as a field that was appropriate for women. And so it was feminized. And by feminized, I don't just mean that women were disproportionately doing the work, but also that it was perceived as being de-skilled, and the wages were relatively depressed relative to men's work of the same time, but actually higher relative to, um, a lot of other women's work.


So it was still a good place to be for a lot of women, to be in early computing roles. And then as things change, and as management begins to see that computers are going to be very important, not just for speeding up work and making things more efficient, but for really directing how labor forces function, how companies function, how entire nations function, all of a sudden, you know, in the UK in particular, from the top down, and by top I mean government, there is this big push to get different kinds of people into these technical roles, because now the perception is that they're not just technical roles, they're essentially management roles or roles that should be aligned with the concerns of management.


And so that's when you see this strange sort of gendered labor flip as there's more and more computerization. It goes from being a feminized field to being a more masculine-identified one. And the interesting thing about that is that usually in history it goes the opposite way. Usually more mechanization, more automation means that the field becomes more feminized, not the reverse.


So that's why I think computing is sort of an interesting outlier and, um, an example we can, we can get interesting lessons from historically.


ELEANOR DRAGE:

That's what's so important about this historicisation of computing: just to know that it's not that, you know, we started off in a terrible place and we're moving towards a better future, but actually it's much more complicated than that. It hasn't always been this way, and I think it's useful to know that. You've shown how systemic inequality and discrimination shaped the history of computing in the UK, but how does this play out today?


MAR HICKS:

Well, it's interesting, right, because I think a lot of times, as you alluded to earlier, there's this perception that it's an individual's problem, right? That maybe some woman or some set of women won't get ahead in their careers because of inequality, because of discrimination in a field. And then you also pointed out that this is problematic on a broader level, because if, for instance, women aren't at the decision-making table, then that's going to affect what kind of products and decisions are made, or what kind of, um, priorities a field has. And historically, what, you know, we see in this particular example is that as women get squeezed out of these jobs that they had been doing quite expertly, even though they were underappreciated and underpaid, it has a terrible effect on the British computing industry. It actually leads to really negative things on a macro level, not just for the British computing industry, but also for the nation, you know, for the GDP of the country, for Britain's aspirations in the late 20th and 21st centuries. And so I think that that's an important historical lesson.


We're seeing a lot of that going on today, especially, I would say, in the United States, where these built-in systems of discrimination, these discriminatory practices that actually undergird a lot of our technological systems and how they're built, are causing massive, massive problems, not only for the people who are being discriminated against by them, but for everybody. Even the people with relative privilege are being hurt by the fact that these systems are essentially broken from the ground up, and it's gonna be very hard to fix them, because these bugs, if you want to call them that, and I wouldn't quite call them bugs, I would call them maybe systemic flaws, are really deeply built into how these systems work.


KERRY MCINERNEY:

Yeah, that's really interesting. And I think what I really, really like about your work, um, apart from, like, providing this really, really vital historical context, um, to where we are right now, is that I think it also kind of flies in the face of this, like, progress narrative that people want to sell us about the computing industry, or about kind of, you know, STEM subjects more broadly.


And I think people are often really, really shocked not only to hear the outcomes of your work, to kind of trace these historical narratives, but also to hear that actually, you know, things are not getting better in the British computing industry when it comes to forms of gender representation. But I wanna pivot and talk about a different kind of progress narrative, and that's the one that has been sold to us, I think, by kind of big players in Silicon Valley.


Um, a kind of techno-utopian narrative, which suggests that AI and other data-driven technologies are just going to make our lives better and better and better. And I think we've seen, you know, some pushback to this with ChatGPT. Um, and so, you know, to some extent, I think the last couple of months have been, you know, a bit of a wild ride in terms of kind of public explorations of AI.


Um, but we know that this kind of techno-utopian narrative is something that's of big, or kind of massive, interest to you. And so you co-edited a fantastic collection called Your Computer Is On Fire, which directly counteracts and responds to the techno-utopianism of Silicon Valley. And so in this book, uh, you write that sexism is a feature, not a bug, of technological systems.


So could you explain what you mean by this?


MAR HICKS:

Sure. Yeah. These techno-utopian narratives are really harmful, because one of the things they do is they assume that we live in the best of all possible worlds and that technologies are inexorably going to make things better and better for everyone. And that is not true. And when there is that assumption, or a lot of people believe in that marketing, then we have no chance of building better technologies or solving these problems, because people are unwilling to even recognize that there are problems, or that just throwing more and more technology, more advanced technology, at something, um, you know, isn't enough to fix a problem. And I say in that chapter that I wrote for Your Computer Is On Fire that sexism is a feature and not a bug, because in the systems that I'm talking about in that chapter, I'm showing how the ideal of, for instance, a male breadwinner wage in this particular historical context, and the idea that women shouldn't be in charge, that they weren't constitutionally suited to be management, how those two things combine in this system to cause a lot of harm. But also, the people who are in charge, and who are seeing the harms it's causing, don't fix it, because the system is working as designed.


The system is supposed to keep certain people at certain echelons, and it's doing that. So even though the broader outcome is not exactly what we would think they'd want, they don't actually see it as a bug. It's a feature of how the system is meant to work. And you know, unfortunately, we see that today. Just to give you, you know, one example.


If you look at things like Uber or Lyft or some of these gig economy apps and companies, some people might say, 'oh, look, it's a bug in these systems that they are hurting workers, that they're breaking unions, that they've destroyed the viability of, uh, taxi cab drivers, for instance, in New York to make a living'.


Well, that's not a mistake, actually. That is part and parcel of how these systems were designed to work. And if we think that being anti-labor is a bug, that sets us up for not really being able to see in a clear-sighted way what we might have to do to fix these systems. It's not a bug. It's, it's a feature. It's part of how they were designed to work, which means that you can't just patch it, right? You have to go maybe back down to the studs and rebuild that house, so to speak.


ELEANOR DRAGE:

Yes, totally. This was the problem that Kerry and I always had with the euphemism of a bug that's baked into systems but can also just be extracted. And we know that a system can be working perfectly well, it can be error free, but it can still be harmful.


I wrote a paper with Federica Frabetti on the police's use of facial recognition technology, and we showed how a system can be error free, bias free, but it can still be really, really harmful to different populations.


And so when, you know, people say to us, oh yeah, it's about removing the bias, we know that that's not the whole story, that there are systemic issues that also need to be resolved. So I'm really glad that you mentioned this, because a bug sounds like something that is external to the system. A bug lives elsewhere in the wilderness but has made its way in, when actually it's a systemic flaw, it's something that is integral to the system. So I'm very grateful for this correction of that very poor analogy.


We're experiencing a pretty bad world moment. There's a ton of stuff going on that everyone's very familiar with. I almost need not mention ongoing crises like the war in Ukraine and fuel shortages, the cost of living crisis which is plaguing the UK at the moment; in the US, economists are witnessing the beginning of a recession whose effects are unequally distributed; and of course, the ongoing climate catastrophe.


You are a historian by training. Can you tell us how historical crises of the past can help us cope with these contemporary disasters? And also perhaps where looking to the past isn't useful.


MAR HICKS:

Sure. Well, one of the things that I think is so important, as we're trying to figure out what, uh, history can do to help us potentially fix current crises, is that we have to think about it in, I guess, two separate chunks. And the first is when we look back into the past and we find situations where, you know, we see things that are similar to what's happening today.


Things that are strikingly similar. That's the important first step. But if we can't find a situation in the past that has usable or useful, um, lessons, sort of solutions, then it doesn't matter if we find something in the past that, you know, echoes or parallels what's going on today, if we can't somehow derive usable solutions from it.


And sometimes we can do that: even if we see the same failure in the past, we can learn some lessons from it. Sometimes, you know, we have to find a different historical analogy, where maybe the situation was slightly different. It doesn't match up exactly with what's going on today, but it has in it, you know, more hopeful, more usable solutions, things that people tried and that worked. And so that's the second chunk of it, I think. Not just seeing those patterns in the past, but actually looking for historical examples where, for instance, the people who were trying to fix a given technological system had some success, and how did they do that?


You know, uh, they were potentially working against great odds and they won. Well, how exactly, you know, what were the exact mechanics of getting that done? A lot of times that involves labor power. Sometimes that involves citizen protest. Um, it often involves regulatory, um, changes in addition to citizen protest and labor power.


So those are the ways that I like to think about history as a useful discipline.


KERRY MCINERNEY:

That's really fantastic, and I think, you know, it's not only about thinking about which analogies we use, but also what angles we take on those analogies. Cause I feel like in relation to, like, AI development, I've seen a lot of analogies with nuclear, uh, weapons and the sort of, like, rhetoric of the AI arms race, and then kind of the rhetoric of, like, actually also the space race, but particularly sort of the race for nuclear weaponry.


And I guess this analogy concerns or worries me sometimes, like, you know, partly because I don't think that these are necessarily analogous situations, but also because I think sometimes people maybe are drawing what I would think is the wrong lesson from this analogy. They'll say, oh, well, you know, we were able to, like, put these controls on and, you know, make sure that nuclear weapons weren't deployed.


To which I would say, well, firstly, unfortunately that still is not, uh, guaranteed. But also secondly, we know that these weapons were deployed in Japan and across the Pacific, in both, like, war situations and in testing, and that they had a really devastating impact on people. Many, many people died. And many places are still trying to recover from it.


And so I really like what you kind of are saying about how we draw these analogies. Cause I think it's also this kind of politics of, of caution with like what kinds of analogies we draw and what kinds of lessons we're hoping to kind of share from them.


MAR HICKS:

Yeah, definitely. And I think the example of nuclear weapons, which of course kind of then become nuclear power, is such a good example, because as you point out, there are places in Japan that to this day are being studied because of the horrible effects of the detonation of two nuclear bombs by the United States, um, for, very honestly, very little sound military objective. Um, you know, those sites in Japan and those people in Japan unfortunately were made into not just enemy combatants, but a testing site. And that's one of the really grim parts. Um, and I think, you know, maybe that's one of the parts that we can get a little bit of a useful historical analogy from: that a lot of times these technologies are not deployed for the reasons that we're led to believe. They're deployed just to see what happens. They're deployed to test things on an unsuspecting public, and that's one of the reasons that, you know, I'm so thankful for the questions that you are asking about, well, how do we think about structural discrimination within these systems. Because one example that, you know, comes to mind regarding, like, AI technologies: oftentimes AI, and especially when it's a system of quote unquote artificial intelligence that has to do with facial recognition, is paired very tightly with surveillance and surveillance systems, and it's not enough to say, like, oh, we're going to somehow take the bias out of this system so it doesn't, you know, discriminate against, let's say, Black people. You know, it recognizes Black people as well as it recognizes white people.


Well, okay. I mean, in some ways that's important, because it means that, you know, people who have darker skin will be less likely to be subject to, like, false arrest. But on the other hand, if all we're doing is removing bias from a system that is about, you know, surveilling people and curtailing their civil rights, how can that be the answer?


Is that enough? Well, I mean, to me it really doesn't seem like it's enough, and it sounds like the two of you are thinking about things in a similar way. Sometimes it's not about fixing a system, it's about saying, we need to put the brakes on whether this system is allowed to be developed and deployed with very little oversight and very little democratic buy-in or approval from the people that it's going to be used on, tested on, or wielded against.


ELEANOR DRAGE:

Yeah, it's so important to remember that rarely do the people that the technology is tested on have the authority to meaningfully give approval. And I'm thinking also of when the Pill was tested and trialed in Puerto Rico. Even though people may have given some sort of approval, they allowed this drug to be given to them, and they took it themselves, is that real approval? Is that real transparency about what this untested, trialed drug is actually going to do, the effects it's going to have on your body? Certainly not. And I was thinking about this a lot recently in relation to regulatory sandboxing, which is when technologies are trialed in controlled environments. So could you tell me whether you think regulatory sandboxing is actually an ethical way to trial technology?


MAR HICKS:

Yeah, I think in a lot of cases there really isn't an adequate testing phase and things are being released very prematurely; things that are really in the beta phase are being released as though they are products that are ready for primetime. And in fact, in some cases, companies are purchasing and deploying some of these products, and that's enormously problematic and irresponsible on the part of all of the parties involved.


But if there is going to be a strong economic incentive to do that, then, you know, we can't just rely on the fox to guard the hen house. We can't just say, hey, companies, please do better. We have to have something that legally requires them to do better, and then an enforcement agency, so that when they do break those laws there is some sort of enforcement other than just class action lawsuits, which can be very effective.


But they're also very slow. And so, you know, I'm gratified that, with the GDPR, the EU has taken some steps, positive steps, in that direction. I think that in a lot of ways we are going to see useful, um, regulatory and legislative changes probably coming from outside the US, because many of these companies are headquartered in the US and have lobbyists, you know, in our halls of government who are very, very effective.


And that's going to really, uh, slow down and defang the regulatory moves in the United States. Even though there have been, you know, many strides, we're going to potentially see that it's harder to get effective regulation coming from the United States.


KERRY MCINERNEY:

Absolutely. And I mean, I work with the AI Now Institute, and this is kind of a huge sort of concern of theirs, definitely, thinking about how big tech wields its financial capital to specifically lobby against the kinds of regulatory interventions that are absolutely crucial. And then my own personal interest lies in how the geopolitics of AI, and particularly, again, this rhetoric of the, like, US-China AI arms race, is very specifically used to promote a big tech-centric industrial policy, under the idea of, like, oh, but if we are regulated, then Chinese companies, who these American companies say are quote unquote unregulated, which is not true, um, are going to leap ahead of us. And so, you know, I think, um, that regulatory battlefield is just going to be so central over the next few months, if not the next couple of years.


Finally, I wanted to ask you, because, um, I'd really love to hear your thoughts on this: if you were able to see the computing industry go through a big transformation in the way that we saw it go through in the eighties, when we had, unfortunately, this kind of transformation of the field into a hypermasculine one, what kinds of more positive transformations would you like to see happen in this field, let's say, 10 years from now?


MAR HICKS:

Well, one of the transformations that I'd like to see is the profit incentives in the field change, because I think one of the reasons, in the United States and globally, that we've ended up with so many problematic technologies, and technologies that were released probably, you know, in an untested form, and we're seeing that honestly, I think, more and more as time goes on, um, is because the profit structure and the sort of, uh, model of venture capital funding was not aligned with any sort of broader concern about social good.


And I think potentially, given the way that tech stocks have been tanking, um, you know, that gravy train might be kind of over, so we might see the profit incentives and the structure of that industry start to change. However, that might, for instance, lead to, as you sort of implied, a pivot to, you know, the new Cold War, where companies developing AI try to now get, uh, government contracts to make up the losses that they can't, you know, they can't get VC or consumer dollars for.


And that means that, you know, we're going to end up with more and more technologies that are militarized and going towards military ends, and that's not great either. So my hope is that through a combination of worker power and labor organization and consumer organization, and people in communities pushing back on the local and state level, not just the federal level, we might be able to get some changes to our current technological incentive structure and ecosystem, and just start to hold companies accountable in a way that makes it less profitable for them to continue doing some of the harmful things that they're doing. And I'm not going to be naive about that. I think that's going to be a very heavy lift. But most historical changes worth trying to enact are a heavy lift. So, we'll see what happens. And you know, I, I know you are both involved in this, based on what you're, you know, doing in your scholarly work, and I'm sure outside of your scholarly work as well.


There's a lot of us who are on the same page and working towards this end goal. And that's something; that's not nothing.


KERRY MCINERNEY:

Oh, thank you so much. No, I mean, again, like, I don't always want to, like, compel us to end on, like, a quote unquote positive note, because I think, you know, the critical work that we all do as, like, feminists and scholars is, to me, so important. But I guess I wanted to ask you that question because I think, um, what is so hopeful about your work is that, you know, by looking into the past and looking at how things weren't always this way, it offers us the opportunity to think about different futures.


And one of the main reasons we started The Good Robot is because we wanted to try and plot a path between the kind of techno-utopianism we've talked about in Silicon Valley and the kind of overwhelming sense of existential dread that I think hits a lot of us when we start to think meaningfully about, say, the environmental costs of these technologies.


The concentration of power in big tech, you know, the many abuses that happen in the tech industry and by the tech industry. Um, and so I think, you know, one of our kind of platforms, I guess, as feminist thinkers about technology is also trying to think about, you know, without uncritically embracing technologies, how can we start to imagine, like, different and hopefully more just futures.


But most importantly of all, thank you so much for taking the time to chat with us. It's really been such a pleasure and so great to get to continue learning from you and your work.


MAR HICKS:

Yeah. Thank you so much for having me. It was wonderful having the chance to chat with you both.


ELEANOR DRAGE:

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney, and edited by Eleanor Drage.


