We often think that maths is neutral or can't be harmful, because after all, what could numbers do to hurt us? In this episode, we talk to Dr. Maurice Chiodo, a mathematician at the University of Cambridge, who's now based at the Centre for the Study of Existential Risk. He tells us why maths can actually throw up big ethical issues. Take the atomic bomb, or the maths used by Cambridge Analytica to influence the Brexit referendum or the US elections. Together, we explore why it's crucial that we understand the role that maths plays in unethical AI.
Maurice Chiodo addresses the ethical challenges and risks posed by mathematics, mathematicians, and mathematically-powered technologies. His research looks at the ethical issues arising in all types of mathematical work, including AI, finance, modelling, surveillance, and statistics. He set up the Ethics in Mathematics Project in 2016 and has been its principal investigator since then, delivering seminar series, giving invited talks, and producing scholarly articles in the area. Maurice has direct industry experience with over 30 startups, having been a member of the Ethics Advisory Group at Machine Intelligence Garage UK for over 2 years. He comes from a background in research mathematics, holding two PhDs in mathematics, from the University of Cambridge and the University of Melbourne, and has over a decade of experience working as an academic mathematician on problems in algebra and computability theory.
TRANSCRIPT:
Kerry: Hi! I’m Dr Kerry McInerney. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we’d also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode!
Eleanor: We often think that maths is neutral or can't be harmful, because after all, what could numbers do to hurt us? In this episode, we talk to Dr. Maurice Chiodo, a mathematician at the University of Cambridge, who's now based at the Centre for the Study of Existential Risk, which is downstairs from us at the Leverhulme Centre for the Future of Intelligence. He tells us why maths can actually throw up big ethical issues. Take the atomic bomb, or the maths used by Cambridge Analytica to influence the Brexit referendum or the US elections. We think it's crucial that we understand the role that maths plays in unethical AI. We hope you enjoy the show.
Kerry: Amazing. So thank you so much for joining us here today. It's really such a pleasure to get to chat to you about your work. So just to kick us off, could you tell us about who you are, what you do and what's brought you to thinking about technology and ethics?
Maurice: Thanks for that. And thanks for having me. My name is Dr. Maurice Chiodo. I'm a research mathematician by training and by profession. I've been in mathematics departments for the last 20 years now, as an undergraduate and graduate student, a postdoc, and then a researcher.
And midway through my career, I'd say around 2010, I started to think about not just the mathematics I was doing, or mathematics others were doing, but also the impact of this sort of work and what it meant for wider society. So the people that were directly impacted by this, or people indirectly impacted, or just what the consequences were for society.
So around 2010, I started thinking about this. In 2016, I started thinking about it more actively. And that's the point where I set up and founded the Cambridge University Ethics in Mathematics Project, which looks at the societal impact of mathematics, the ethical questions that arise from mathematics, mathematical work, and mathematicians.
And these are not one and the same thing, they are three slightly distinct entities, and I address and approach all three in the sort of work that I do.
Eleanor: Awesome. Thanks so much. And just for listeners, I got interested in your work partly when you told stories about what was happening in the maths department and how your work was being received, which I found really surprising and quite shocking.
And also because when I was looking at all the chat around the A-level algorithm, a lot of it centered around this idea that mathematics couldn't be biased. And eventually there was a U-turn and they found a different way of grading, but maybe we can come back to that. So can you tell us first, from the perspective of an ethical mathematician: what is good technology? Is it even possible? And how can ethics help us get there?
Maurice: Good technology is, in my view, a misnomer. There is technology which can be good. There is no technology which is an unmitigated tool for good. All technology represents an advancement in human ability, in human capacity. And these advancements can always be used for good or for harm.
Some technologies might be a bit better than others, but to say that something is strictly a good technology is, I believe, a misnomer, because of the sort of double-edgedness of all technologies, not just mathematics.
Kerry: It's really fascinating, and I think this idea of technology as an unmitigated good not being possible definitely resonates with a lot of the different perspectives on tech that we've seen throughout the podcast, for example from Leonie Tanczer, who works on feminist approaches to cybersecurity, thinking about how technologies that might seem like they can be good, like smart home tech, can quite easily be exploited for anti-feminist or sexist purposes, for the purposes of intimate partner violence and domestic violence.
So for anyone listening, if you're interested in this particular problem, we'd really encourage you to check out that episode on YouTube, Apple Podcasts, or Spotify, and also on our website. But I wanted to hone in on your special area and ask you, just to kick us off: could you share some examples with us, as a kind of more lay audience, of where maths has had some bad social outcomes?
So for example, I know something that you've talked about and thought about is unethical maths in relation to the Cambridge Analytica scandal, something which is, of course, very close to home here.
Maurice: So what roughly happened, from my understanding, is that it's alleged they were able to obtain some information from Facebook. This was done by offering people a small amount of money, a dollar or something, to download a Facebook app, which let them find out who their friends were, and in the process this app would hoover up their Facebook information and the Facebook information that it could see of their friends. My understanding is that they were able to get 250,000 people to download and use this app, and that reached out to all the friends of these people, which was approximately 80 million people, I believe. So they were able to get this data from 80 million people. And again, it's alleged they were able to use that data to create these sort of psychometric profiles, so the ability to look at anyone's Facebook page and say, okay, this person is of this sort of category, this sort of type, they have these sorts of beliefs, they carry out these sorts of actions in life, they're motivated by these sorts of things. So that's where the psychometrics come in.
But of course you've got this massive data set, the data of 80 million people. So you need to do a lot of data processing to figure out how to learn from the data of 80 million people, to learn how to put people in a box. So the mathematics there was to say, okay, the psychometrics says these are the boxes that we have.
These are the categorizations of people. How do we learn to put someone in a box based on what we read off their Facebook profile? And this is where the data science came in. You had people at Cambridge Analytica who had PhDs in theoretical physics and discrete mathematics, so these are genuinely well-trained people, mathematically trained or trained in adjacent fields. Once you know how to read someone's Facebook page and put them in a box, then you can start administering adverts, because it was very cheap to send adverts on Facebook.
And so it's alleged that Cambridge Analytica was able to use this inference from these 80 million profiles that they accessed data from, and they were able to deploy targeted political advertising on the entire Facebook population in the US in the case of the 2016 US election, and in the UK in the case of the 2016 referendum on membership of the European Union, Brexit. What can you do here? Firstly, it's very cheap to send an advert. This is handy. So you start deciding who to send adverts to based on what you think someone is like. And it's not just simple political persuasion adverts, vote red team, vote blue team sort of thing.
You can do all sorts of things. You can measure someone, you can put someone in a sort of psychometric box and say, oh, this person is very committed to their beliefs, but they're a bit wishy-washy on political engagement. And so you might be able to register that someone is a very staunch supporter of the red team, so to speak, but they may or may not go and vote. So you might put an advert in front of them of the form: hey everyone, the red team is guaranteed to win this, it's a shoo-in for us in the election. And so this person is disincentivized to vote, because they think, we're already going to win, so why would I bother going out to vote?
Or you might detect that, again, they're a very staunch red team supporter, but they're not very diligent when it comes to details. You might send an advert like: hey everyone, don't forget to go and vote on Wednesday, whereas in actual fact the voting is on Tuesday.
So you can move around their voting action, again by registering roughly how they behave, understanding how they behave. So there are all sorts of little tricks you can do. You might also do fairly standard political adverts where you know someone is particularly sensitive to issues around children.
And the advert might say something like, we need tougher gun control to protect our children, and I'm making these examples up here, and that might resonate particularly well with such a voter and change their voting preference, and so on. Now, the actual specifics here don't matter, in the sense that it doesn't matter if you get it wrong some of the time; you're paying pennies for each advert.
So if you get 5, 10, 15, 20% of these adverts completely wrong, it doesn't really matter. What you really want is to be able to swing large numbers of people at the ballot box. So if you can swing 80% of your target audience one way and fail on the other 20%, that's still a significant difference, especially in a tight election.
And this is one of the dangers of mathematical work, because you can't do this without being able to initially process the data of those 80 million people that was allegedly taken in the first instance.
You need that mathematical understanding, statistical analysis, and I believe some machine learning as well, to get that initial learning so that when it comes to administering adverts, you know how to send out those adverts. And remember, you might not actually send adverts to the whole voting population.
You don't bother sending an advert to someone who you know is going to vote, you know is going to vote in one direction, and you know will never change their mind. That's a waste of money. But you might target that soft middle of people who might change their voting actions or their voting preferences, or some combination of the two.
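To make the pipeline described above a little more concrete, here is a minimal, purely illustrative sketch in Python of the two steps Maurice outlines: learning to put people into psychometric "boxes" from profile features, and then choosing an advert per box. All of the feature names, categories, and advert texts below are invented for illustration; this is not anyone's actual data, model, or code.

```python
# A minimal, illustrative sketch of the pipeline described above:
# (1) learn to put people into psychometric "boxes" from profile features,
# (2) pick an advert per box. Every feature, category, and advert here is
# invented for illustration only.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: profile features -> psychometric category.
profiles = [
    {"likes_politics_pages": 1, "posts_per_week": 20, "likes_sports_pages": 0},
    {"likes_politics_pages": 0, "posts_per_week": 2,  "likes_sports_pages": 1},
    {"likes_politics_pages": 1, "posts_per_week": 1,  "likes_sports_pages": 0},
    {"likes_politics_pages": 0, "posts_per_week": 15, "likes_sports_pages": 1},
]
categories = ["staunch_engaged", "apolitical", "staunch_disengaged", "persuadable"]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(profiles)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, categories)  # learn how to "put someone in a box"

# One advert choice per box: the targeting step described above.
advert_for = {
    "staunch_engaged":    None,  # no advert: a waste of money
    "staunch_disengaged": "Don't forget to vote on Wednesday!",  # misdirection
    "persuadable":        "Issue-based persuasion advert",
    "apolitical":         None,
}

new_profile = {"likes_politics_pages": 1, "posts_per_week": 1, "likes_sports_pages": 0}
box = clf.predict(vec.transform([new_profile]))[0]
print(box, "->", advert_for[box])
```

The point of the sketch is only that the targeting step is trivial once the classification step has been learned from a large enough dataset; that initial learning is where the mathematical work sits.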
Eleanor: So you're saying that the mathematicians who were working on this should have thought: is what we're doing right? Is this democratic? And they didn't.
Maurice: I believe, now I can't speak for them because I haven't met them, but I believe they may have shared the general opinion of many mathematicians I have met, which was to argue, I just do the mathematics, the rest is not my problem.
And this is the dominant ideology that I've come across in my over 20 years in mathematics departments around the world. There are exceptions, there are people who don't believe this, but they are in the serious minority; I would guess at about 10 to 15%.
And the vast majority say, I do the mathematics, the rest is not my problem.
Kerry: That's interesting. So I guess I've got two questions on that, which I'm going to give you both at once and you can struggle with the two of them. So that's the risk of coming on as a podcast guest.
And the first is: where do you think that's come from, this kind of dominant belief, by the sounds of it, that I just do the maths and it's other people's job to deal with the ethics side of things? And then secondly: do you think that maths itself doesn't have an ethics, but that it's put to unethical use, or do you think that there's an ethics to maths itself, something that means we need to be able to grapple with the embedded ethics of mathematics?
Maurice: Now, to your first question, where does this ideology come from? You're talking about a discipline. I won't use "profession" because it's not clear mathematics is a well-defined profession yet, but it's a discipline with about a four or five thousand year history.
And that history has led to certain internal belief systems about what we do, how we do it, why we do it: a belief in undertaking mathematics for enjoyment, in a sense almost self-gratification, the aesthetics of doing mathematics, the sort of privilege of being able to do mathematics and defending that, and the lack of instruction to think otherwise. Which comes first, the chicken or the egg: which comes first, the ethical mathematics teacher or the ethical mathematics student? Well, neither has come first yet, so this hasn't been introduced into the discourse, apart from some efforts, including my own, over the last 4,000 to 5,000 years. There's much more to be said here, but of course summarising a 5,000-year-old culture in 30 minutes is a bit of a tall order.
For the second question, is there ethics in the mathematics itself, or is it just how you use it? This is the sort of question you can ask of any technology. Is there ethics in a gun, or is the ethics in how you use it?
Is there ethics in a nuclear weapon? Making a nuclear weapon, even if you just leave it there on the table, is a decision to do something. If you think back to, let's use another example, the advent of nuclear weaponry in the 1940s, one of the key things that happened was that Nazi Germany did not know that this technology was possible.
They didn't have the right model and governing equation to make this prediction. This was Heisenberg's era. He predicted you needed many tons of uranium to produce a self-sustaining nuclear reaction. The Allies predicted you needed about 60 kilos, and those are very different numbers.
You could make 60 kilos of uranium. You couldn't make several tons of uranium, not in any sort of reasonable timeframe. But Heisenberg was using a different mathematical model, and that's why his prediction was way, way off. Now, had he had access to that mathematical model, and I'm literally talking about six A4 sheets of paper, no more.
Six A4 sheets of paper that would be intelligible to a second-year mathematics student doing a mathematics degree by today's standards. Then Heisenberg would have been able to deduce: ah, hang on, I've made a mistake; we can do this with 60 kilos of uranium rather than several tons.
The question is: is there ethics in the mathematics there? If you take those six pages from the Manhattan Project and plonk them in the middle of Berlin in 1941, then you have an ethical dilemma. So it is more than just what people do with technology; it is where you put the mathematics, who is looking at it, how they're looking at it, why they're looking at it, these are the factors here.
It is not just how you do and enact the mathematics, even just the fact of choosing to do or not do mathematics has an ethical component to it, as I think I've just demonstrated.
Eleanor: That was a brilliant explanation, and we're also asking those who, what, why questions about technology, but it's interesting to see mathematics positioned as a kind of technology in itself.
That's something that we don't usually hear, and we know that one of your colleagues has explained that mathematics can also be understood as a language, and that tends to be quite a provocative statement in mathematics as well, surprisingly. There's this high barrier to entry with maths. People feel like, oh, I don't understand, I can't understand it. And therefore people don't really know what's going on and don't try to understand what's going on, which is why the A-level algorithm thing was so interesting to me, because the teachers didn't really understand how it worked, but they definitely knew that what it was doing was wrong. And so they could come at it that way.
But what you're saying is that the people who are working on things in mathematics in relation to these high-risk applications are not taught to think: oh, wait, am I doing the right thing? Is this a good use of what I'm doing? Is this ethical? But they are exactly the right people to point out when things are going wrong, because they're the ones that know how a system works and what its applications are.
So can you talk to us a bit about whistleblowing in mathematics? Because you've said that it's really hard to be a whistleblower, because there's no ethics board, and mathematicians often don't know how to communicate these issues to the public. So what are the problems there?
Maurice: First and foremost, the problem is you need people who care.
And as I explained earlier, when most of the mathematically trained population say, I just do the mathematics, the rest is not my problem, you rapidly run out of people who might care, who might practically do something. Then they need the training and understanding to comprehend not just the work that they're doing, but its impact on wider society.
This is sometimes not an easy thing to understand. Okay, I'm doing the mathematics on the page in front of me; to see how that affects other people requires more thought, requires time spent rehearsing and training and thinking. As you spend time on things, you get better at them. If you spend no time on something, you're probably not very good at it.
You have mathematics students who spend thousands, getting on to tens of thousands, of hours training in mathematics, just doing very stripped-down, abstract mathematical problems in a subservient way. Now, I'm not sure if you've seen this, but in mathematics we ask questions in an instructive way that encourages subservience in the students.
We say things like: compute this number, work out this integral, find this probability. We tell the student to do it. We don't ask them, do you want to find this? Or is this the right thing to find? We tell them, you find this now. And so that subconsciously trains in a level of subservience in them. So when a boss comes up and says, compute the A-level grades of these students, their reflex response is: oh, I've been asked a question, I need to go and do it now. It's a mathematical question. So you get this subservience that's trained into mathematicians. You've got to break through that to understand what the impact is. But then you have to worry about a reward mechanism. Why would a mathematician, in academia, in industry, wherever, spend time and effort, and expose themselves to social and political risk, to call out some bad or harmful mathematics that's being done?
There's no reward mechanism for them. In academia, you get rewarded if you publish a paper or win an award, but you win an award because you're publishing papers. So generally it all comes down to research output. You don't get rewarded for doing work like this. You don't get rewarded for going to journalists and talking about the harmful effects of mathematics.
You get rewarded sometimes for talking about the good effects of mathematics. So if you're one of these public faces of mathematics, and I'm sure you can think of some examples here in the UK and in the US as well, people who go on TV and radio and newspapers and podcasts and say, oh, look how great mathematics is, then these people get a little bit rewarded.
But if you go on some sort of podcast and say, oh, mathematics is being really harmful here, and it's causing damage here and here, there's no professional reward mechanism. In fact, you might get in trouble with your boss. If you're a mathematician in industry, is your company going to pay you to spend time doing this?
And if not, how will you find the time to do this? It's more than a five-minute job to sit down and figure out what the hell went wrong with particular examples in mathematics. Take the Cambridge Analytica example: Paul-Olivier Dehaye, who's a mathematician, worked with a journalist, Carole Cadwalladr I believe her name was, to start to uncover and expose this.
He was doing some serious work on that. So it's very difficult; you need sort of nine planets to align. In this whole process you need people to care. You need them to know how to think about these things and to think about them effectively. They need to be in a position to be able to act; they need to be sufficiently safe and protected so as not to be crushed in the process. And they need to know who to go to. You won't find many mathematicians who have spoken to a journalist at any point in their life.
Eleanor: This is actually fascinating, because I have been working with Carole Cadwalladr's team, The Citizens, who are these investigative journalists, looking at AI that's being used by the Met Police in the UK, that was used to track BLM protests in the States, and to track and monitor protests here.
And what we've done is combine findings from my work with a philosopher of software who used to be a software engineer, who's called Federica Frabetti. And we've collaborated with The Citizens to track what's going on. And I'm also collaborating with another journalist at Liberty. And so it's this combination of academia, technical knowledge, philosophy, and investigative journalism that allows these stories to break. And you're so right, we never get in touch with journalists because they're never really floating around. The only reason I got in touch with them was because my cousin, Louis Barclay, created a plugin for Google Chrome that helps people use Facebook less, and then Facebook sued him.
And then Carole got in touch. So it's through these sorts of, you know, tech activist circles that you create these connections that are so important to being able to tell these stories to the public.
Maurice: One difficulty here is that mathematicians are often trained to solve a problem entirely on their own. Mathematics is work that's done in solitude; not always, but a lot of the time you work by yourself as a mathematician. Of course, there are people who don't, but a lot is done as an individual. And if you see a problem that you can't solve by yourself, or perhaps can't solve with the help and contribution of the other people you usually work with, and these are usually other mathematicians in your area or a very closely related area, then you might just put up your hands and surrender and say, I can't solve this.
Now, of course, these problems can't be solved just with mathematics. You need to engage, in these instances, with journalists, and moving forward with regulators and lawmakers, these sorts of entities, to actually effect meaningful change. There's no point in the world just knowing about bad things.
You need to change stuff. So you've got people in a discipline where, again, one of the dominant ideologies is that you need to be good enough to solve the whole problem by yourself, and if you can't, you fail. So the knee-jerk reaction, which would rule out many mathematicians even trying to do this sort of work, even if the other nine planets have aligned, from my analogy before, would be: I can't solve this; even if I work out what went wrong, I don't know who to tell, how to tell them, and how to get things to change, so I should just give up. So you need ten planets to align and you only have nine planets. This is where the problem starts to arise.
Kerry: Gosh, yeah, it's really fascinating. My brother was a mathematician and is currently a philosopher of maths. He did maths Olympiads, which, for anyone who is not immersed in this world for very good reason, is like the youth Olympics of mathematics.
And it's very intense, but I know what you mean, this kind of overwhelming ideology of solving the problem. There's so much pride and shame and intensity bound up in that, when it comes to this, I don't know, plane or level of mathematics I have absolutely no comprehension of. So I'm just like, yay, solve the problem, go my brother, but I have no idea what's going on. But I want to ask you a bit more: you outlined so well a lot of the problems when it comes to incentives, when it comes to whistleblowing, when it comes to these ideologies in mathematics teaching, but you've also said that you're someone who's trying to change that in maths departments.
So I'd love to know a bit about how you personally try to bring about these conversations in maths departments. How do you try to generate greater ethical awareness among mathematicians?
Maurice: It's a difficult process. And again, as I mentioned before, you're going up against 5,000 years of pretty calcified cultural beliefs and understandings here.
And it's not just that. You're also criticizing the one thing that mathematicians hold in the highest regard, which is their mathematics. In a sense you're insulting their gold idol. You very quickly get a knee-jerk reaction, even before people hear you out, of: you've defiled my gold idol, I don't want to hear from you anymore.
And this has been the main response that I've received from the majority of mathematicians I've interacted with, which is: we believe that mathematics is pure and holy, so to speak, and you're saying otherwise, so what you're saying must be complete rubbish.
So that knocks out most mathematicians that you're trying to engage with. And then you have various mathematicians who may try to suppress this sort of work, or hide it, or conceal it from the students, or make arguments like: the students haven't got time to think about this because they have to go and do their real mathematical work.
Now, this is an issue of priorities. But again, going back to what I was saying before, we train mathematicians in a certain way. One of the things we do is train them to do and learn the maximum amount of mathematics possible. We try and cram as much in as we can. Anything that's deemed to be not mathematics is deemed to be somewhat of a waste of time.
Now, this is not true of all universities around the world or of all academics around the world. There are certainly exceptions, but again, they form a minority. And especially when it comes to universities which are actively doing things like this, they're an extreme minority; you can probably count them on your fingers, perhaps you can count them on your thumbs.
So we're talking very small numbers here. Because even if you do have a mathematician in an institution who sees this as a good idea, they might get completely overwhelmed by other colleagues who say, no, this is a completely bad idea, or this is a waste of time, or other arguments like this.
There are lots of challenges. What I've tried to do is, firstly, create a knowledge base, create resources, create understanding. When I started doing this seven years ago, there was nothing. If you googled ethics and mathematics, you would find half a dozen at most three-page articles, not even articles, more like small notes in little side maths publications.
And these are people saying, oh yeah, there's this problem in mathematics, and they often focus on a particular area. In the eighties, it was about the nuclear deterrent and this Star Wars programme that Reagan was trying to set up; in the nineties and early noughties, you had financial mathematics, and then you had issues of cryptography.
So some subset of mathematics would go kaboom, and then people would say, oh, that was not very good, and they'd write a three-page article saying that was not very good, mathematicians should try and address that. And usually the approach is: right, let's do a Hippocratic oath for mathematics. And I have a paper on why such a thing is necessary, but not sufficient. Getting mathematicians to swear by a Hippocratic oath, this one-paragraph sort of thing, maybe a few paragraphs, doesn't have the same sort of backing that a full ethics training in, say, medicine does, which is where you see the standard Hippocratic oath.
So if you don't give people the training and background and understanding, if you don't develop the corpus of knowledge of what it means to behave ethically and unethically in mathematics, then saying, here, sign this form, or here, put your hand on this book and swear by this paragraph, achieves nothing.
So I and some colleagues have been trying to produce resources so that others can do this more easily, rather than spending months and months, if not years, trying to put together, say, a short course in ethics in mathematics, because people can now do this by reading one document that's easily accessible. If they're endeavouring to teach mathematics in a more open way that helps students see where the mathematics is going and what it's doing, then they'll have ways to incorporate that into their existing teaching without having to reinvent their entire syllabus.
There are ways to insert this thinking and these sorts of exercises into their teaching in a fairly streamlined way. So these are the sorts of things that I and my colleagues are doing to try to make it easier for others to carry out this sort of work, as well as to make it easier for practising mathematicians to carry out their mathematics in a responsible way.
And recently we released a manifesto for the responsible development of mathematical work, which is basically a companion to mathematical work where you can go through and see what work am I doing? What are the factors? What are the aspects I need to consider as I do my work?
It's a very general document, of course, because mathematics is a general field, but there are similarities, there's commonality, in how you do mathematical work, be it in finance, in surveillance and cryptography, in data science, in AI, or in statistics. Wherever you've got maths, wherever you've got a room with mathematically trained people doing work in there, this would, we think, be relevant to them.
Kerry: I think that's just so important. And again, please let us know when the resources are out; we'd love to distribute them through The Good Robot website or any other kind of platform where we can help promote them, because I do think a huge problem when it comes to ethical work in areas like mathematics or computer science is that there's so much reinventing the wheel going on, or people feeling like, I've got to start from scratch.
I don't know what's out there. So I think the fact that you're doing this work and you're making this as seamless as possible for people is awesome. Absolutely crucial. And then finally, I want to ask you about a really interesting example that you shared with us, the Sally Clark case.
Can we use this maybe as a way to hone in specifically on some of the ethical issues you've raised around the use of mathematics, and specifically why it's important to think about how something can be correct in an arithmetic kind of sense versus something being statistically significant?
Maurice: So I'll reverse this, and first I'll address the question of correctness in the arithmetic sense. This is something that mathematicians struggle with, and I think that wider society struggles with as well, and it is the difference between truth and meaning. In mathematics, you can say true things.
I can say two plus three equals five. But what does that mean? It means nothing. You have to find a place, a scenario, where you believe you can infer meaning from that mathematical truth. Truth versus meaning is crucial. Mathematics gives you absolute truth with absolutely no meaning. So, in the Sally Clark case, Sally Clark was accused of murdering her children.
She had a child who died of cot death, and then she had another child who also died of cot death. And there was a process used in the courts at the time to say: if you have more than one child who died of cot death, the chance of that happening is so minutely small that there must have been some foul play at hand.
The chance of one child dying of cot death is approximately one in 10,000. There was a medical practitioner, whose name I forget now, and he has since been struck from the medical register because he was providing this evidence, this argument, in court, which was then shown to be completely fallacious, and I'll explain what happened in a moment.
So the argument was: if the chance of one child dying of cot death is one in 10,000, then the chance of two children dying must be the chance of the first dying times the chance of the second dying, so 1 in 10,000 times 1 in 10,000, which gives you approximately 1 in 100 million or thereabouts. And so the courts could say, well, it's such a minute chance that this was coincidence, it must have been foul play. But of course, just because I can multiply two numbers together doesn't mean it gives me any meaning. You might have scenarios where there is some correlation.
There might be some environmental factors, maybe there was some problem with the cot, maybe there's some genetic predisposition. So, actually, if you've had one child die of cot death, the chance of you having another child die of cot death is much, much higher than 1 in 10,000. So, whilst the arithmetic was correct, multiplying these two numbers does give you 1 in 100 million, the justification for why these were the correct two numbers to multiply together was flawed.
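To make the arithmetic concrete, here is a minimal sketch in Python. The one-in-10,000 baseline is the approximate figure quoted above; the conditional probability used in the second calculation is an invented placeholder, not a real medical estimate, included only to show how dropping the independence assumption changes the answer.

```python
# A minimal sketch of the flaw in the reasoning described above.
# The 1-in-10,000 baseline is the approximate figure quoted in the episode;
# the conditional probability below is an invented placeholder, NOT a real
# medical estimate, used only to show how dependence changes the result.

p_first = 1 / 10_000  # P(first child dies of cot death), approx.

# The court's reasoning: treat the two deaths as independent events.
p_both_independent = p_first * p_first  # ~1 in 100 million

# But shared environmental or genetic factors mean the second death is
# not independent of the first: P(second | first) can be far higher.
p_second_given_first = 1 / 100  # hypothetical, for illustration only
p_both_dependent = p_first * p_second_given_first  # ~1 in 1 million

print(f"Assuming independence: 1 in {1 / p_both_independent:,.0f}")
print(f"Allowing dependence:   1 in {1 / p_both_dependent:,.0f}")
# The arithmetic in both cases is correct; the question is which
# probabilities it was legitimate to multiply in the first place.
```

Both calculations are arithmetically correct; the flaw lay in treating the two deaths as independent events in the first place.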
Now, to give the rest of the story, Sally Clark was convicted of murder, went to jail for many years, and was eventually acquitted, after the Royal Statistical Society intervened and said, no, this is bogus statistics. So this is one of those cases, this time, of a mathematical organization calling out bad mathematics, saying, no, this mathematics was wrong and it has caused harm.
The problem is these organizations take a long time, so it was many years before Sally Clark was released from prison, and in that time her health, in particular her mental health, had declined a lot.
So, unfortunately, mathematics gives authority to things. You hear the phrase, the mathematics says this. Well, mathematics says nothing. I don't know if you've ever listened to a mathematics book, but not much sound comes out of it. Mathematics gives you absolute truth, assuming you've done the arithmetic correctly, but the meaning is human-inferred, human-implied.
Yes, I can multiply these numbers together, but why does multiplying these numbers together tell me whether someone is guilty or innocent? And unfortunately, this is where it can all go horribly, horribly wrong. That was one such example of the authority of mathematics outstripping the scrutiny of it.
And people are often quite scared to question and query mathematics. You can imagine a defence lawyer who's trained in law may not have much experience in mathematics, may not even know who to go to. Just as mathematicians don't know they need to go and speak to a journalist, a lawyer may not know they need to go and speak to a mathematician.
In a case like that, the authority of mathematics shines through. It gives certain arguments a Teflon shield: "I did the mathematics. You can't argue with the mathematics." Whereas in reality, mathematics can be used to launder an ideological or intellectual position. You start with some sort of shady assumptions, use some mathematical reasoning and say, well, I come to this conclusion, my mathematics is watertight, and the equations probably are, and all the steps are probably correct.
But you dig into the initial assumptions and you find some erroneous statement or erroneous claim or something that isn't quite justified or doesn't quite apply to the meaning of the situation at hand.
Kerry: I mean, I think this is just so powerful and so important, not only because of this really tragic case, and I didn't know about this case, and it's just absolutely devastating to hear how someone's life, when they're probably already experiencing such extraordinary grief, gets torn apart completely through the use of really faulty mathematics.
And I completely agree with what you're saying about mathematics providing this Teflon kind of shield, or this seemingly foolproof, seemingly flawless surface for what underneath can actually be completely flawed. And it reminds me, actually, when I had just started out in this field, I talked to Professor Derek McAuley, who is a computer scientist but is also really interested in the ethical use of computer science and mathematics, and he was bemoaning the way that statistics in particular get used in very misleading ways in the public sphere. So he was talking about, for example, recidivism algorithms which say this person has a 73% chance of reoffending, to which he said, well, no, that's completely wrong, either they will or they won't reoffend. So it's either zero or a hundred percent; to even just frame it in those terms completely misses what the statistical analysis is doing. But those kinds of slippages of language, I think, are just so common.
Maurice: That's an interesting example there. So we mathematicians got lucky a few hundred years ago, because we started using mathematics in the field of physics, and in physics you have falsifiability. You do some sort of mathematical calculation, you say, well, the ball should go this high and then come back down if I throw it up in the air, and then you can test this, and if you're wrong you can see that you're wrong. Or you build a bridge, and if it falls down you know you were wrong.
We now use mathematics in areas where we don't have falsifiability, like computing recidivism probabilities, so the chance of somebody reoffending if they're released on bail or something like this. You can't rerun the universe and check what would have happened if you did the other thing.
You can't necessarily falsify it. So whilst society believes that science is a high bar and mathematics is the higher bar, we're now using mathematics in many cases where we don't even have falsifiability. So actually, mathematics in those instances falls below the authority and rigour of science.
But because we use the word mathematics, it must be higher than science, because mathematics, as the public believes, sits on a higher plane, at a higher level. Whereas actually, often now, especially with the use of AI and the mathematics behind how AI is put together and developed, it is actually less scientific than standard science.
Yet it's given more authority than science because it's got the badge, the label, of mathematics. And it's extraordinarily dangerous, but extraordinarily enticing for mathematicians to use this sort of language, because they can sell their wares more easily.
Kerry: Yeah, absolutely. And I think that's really fascinating as well.
The sense that, you know, actually this can't be reproduced. Say, to take the Cambridge Analytica example that we talked about earlier in the episode, we can't rerun those elections and figure out, oh, actually, were people influenced by these advertisements and did they change their voting in such a way.
I think that's a really useful and interesting perspective. But I want to mainly just say thank you so much for coming on the podcast. It's been really, really fascinating to get to hear about the incredible work you're doing, the way that you're kind of bringing these really important ethical conversations into mathematics.
So we wish you the very best of luck in that. But yeah, mainly I just want to say, it's been a real pleasure to talk.
Maurice: Thank you. Thank you very much.
Eleanor: This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry McInerney, and edited by Eleanor Drage.