
Lorraine Daston on the Exorcism of Emotion in Rational Science (and AI)




In this episode, the historian of science Lorraine Daston explains why science has long been allergic to emotion, which is seen to be the enemy of truth. Instead, objective reason is science’s virtue. She explores moments where it’s very difficult for scientists not to get personally involved, like when you’re working on your pet hypothesis or theory, which might lead you to select data that confirms your hypothesis, or when you’re confronted with anomalies in your dataset that threaten a beautiful and otherwise perfect theory. But Lorraine also reminds us that the desire for objectivity can itself be an emotion, as it was when Victorian scientists expressed their heroic masculine self-restraint. She also explains why we should only be using AI for the parts of our world which are actually predictable, and how it’s not just engineers who debug algorithms: that task is now being outsourced to us, the consumers, who are forced to flag downstream effects when things go wrong.


Lorraine Daston is Director at the Max Planck Institute for the History of Science in Berlin, and a regular visiting professor in the Committee on Social Thought at the University of Chicago. Her work focuses on the history of rationality, especially but not exclusively scientific rationality. She has written on the history of wonder, objectivity, observation, the moral authority of nature, probability, Cold War rationality, and scientific modernity. Her current book projects are a history of the origins of the scientific community and a reflection on what science has to do with modernity.

READING LIST:


Daston, L. Rules: A Short History of What We Live By (Princeton University Press, 2022).

Daston, L. (ed.). Science in the Archives: Pasts, Presents, Futures (University of Chicago Press, 2017).

Daston, L. “The Coup d’Oeil: On a Mode of Understanding,” Critical Inquiry 45 (2019): 307-331.

Daston, L. “The History of Science and the History of Knowledge,” KNOW 1 (2017): 1-25.

Daston, L. “When Science Went Modern,” Hedgehog Review 18 (2016): 18-32.

Schuller, K. The Biopolitics of Feeling: Race, Sex, and Science in the Nineteenth Century (Duke University Press, 2018).

Yao, X. Disaffected: The Cultural Politics of Unfeeling in Nineteenth-Century America (Duke University Press, 2021).

Daston, L. and Galison, P. Objectivity (Zone Books, 2007; first paperback edition 2010).


TRANSCRIPT:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, the historian of science Lorraine Daston explains why science has long been allergic to emotion, which is seen to be the enemy of truth. Instead, objective reason is science’s virtue. She explores moments where it’s very difficult for scientists not to get personally involved, like when you’re working on your pet hypothesis or theory, which might lead you to select data that confirms your hypothesis, or when you’re confronted with anomalies in your dataset that threaten a beautiful and otherwise perfect theory. But Lorraine also reminds us that the desire for objectivity can itself be an emotion, as it was when Victorian scientists expressed their heroic masculine self-restraint. She also explains why we should only be using AI for the parts of our world which are actually predictable, and how it’s not just engineers who debug algorithms: that task is now being outsourced to us, the consumers, who are forced to flag downstream effects when things go wrong. I hope you enjoy the show.


KERRY MACKERETH:

So thank you so much for being with us today. It's so lovely to meet you and to hear about what you do. So just to kick us off, could you tell us a bit about who you are, what you do, and what brings you to thinking about gender and feminism and technology?

LORRAINE DASTON:

So my name is Lorraine Daston. I'm a historian of science. I work in Berlin, at the Max Planck Institute for the History of Science (MPIWG), but I also teach at the University of Chicago, in the Committee on Social Thought. And I've worked on quite a number of topics - one of the great advantages of the history of science is that it's a very undisciplined discipline. Among them are the history of probability and statistics, and more broadly of quantification, and also the history of objectivity. I'm interested, I suppose most broadly, in forms of rationality and their history, and especially in formal forms of rationality, which is what led me to an interest in rules and in algorithms.

ELEANOR DRAGE:

That's fantastic, and a brilliant introduction. So to begin, our three big questions: What is good technology? Is it even possible? And how can feminist ideas help us get there? What's your two cents on that?

LORRAINE DASTON:

Well, as Jane Q. Citizen, I definitely think there's good technology, and we're using some right now. It's wonderful that we can actually talk with one another without you having to come to Berlin or me having to go to Cambridge, and it's going to be increasingly wonderful as we all try to fly less. I bless the name of whoever invented the coffee machine every morning. So yes, I think our lives would be nasty, brutish, and short without technology; there is lots of good technology. The problem is much more specific to AI. And as you know better than I do, AI is a many-splendored thing: we might be talking about anything from classical AI - the Herbert Simon and Allen Newell programmes, which were deductive systems, attempts to get as much as possible out of the fewest possible assumptions and axioms, very much like Euclidean geometry - to machine learning, which is very messy in a way but very flexible, and very powerful for that reason, especially because of the enormously increased calculating power. So one way of thinking about the difference between classical AI and machine learning is simply that the gigantic leap in the calculating power of computers has meant that a lot of brute-force calculation has become possible. The problem at the moment with AI - and perhaps this is the feminist perspective on it - is that it takes as its data to train on (let's take machine learning, because that is certainly by now considered to be the cutting edge of the development of algorithms) data about the past. Which means that should the future deviate from the past - imagine a pandemic occurs, or imagine that a whole new group of people, women for example, enter the workforce - then it is very ill-suited for purpose, and it tends to reinforce past arrangements. There's something intrinsically conservative about machine learning. And anyone who has not benefited from the status quo as it existed for decades before - not only women, but especially women - is likely to be at a disadvantage for that reason. So it's not as if there's anything malicious going on, or any kind of ill intention, quite the contrary. It's simply the nature of the beast: if you're training upon past data, and the past encapsulates - fossilises, as it were - certain assumptions about the way the world is, then those people who were never thought of as part of that order will be at a disadvantage. This need not have anything to do with AI. I'm sure that many women can identify with a situation in which the schedule of the working day - including the academic working day - has been implicitly structured from the standpoint of a man who probably has a family, probably has children, but does not have primary responsibility for them. So for example, if the regular colloquium hour, which all members of the department are expected to attend, is set at 6pm, you can be sure the assumption is that there is somebody at home feeding the children and putting them to bed. There's nothing algorithmic about this. This is simply an example of a default assumption which is built into an institutional structure because of a certain thoughtlessness and a certain assumption about the gender division of labour, in this case.


ELEANOR DRAGE:

That's fascinating. Can you connect for us what you've just said about the conservative nature of algorithms to the myth of neutral or objective AI?

LORRAINE DASTON:

The two are in a sense orthogonal to one another. So as I've just said, it's anything but neutral in terms of the assumptions it absorbs from its data, from the examples it's fed. This could be changed - this is another case of market forces at work. If you simply take the cheapest possible data, then you will get the most banal data, and it will almost necessarily repeat, recapitulate, the assumptions of the status quo. You could do this more imaginatively and have a more representative sample, but that would be more expensive. But the reason why it's considered objective has to do not with what it does select, or what it is based upon, but rather with what it is not based upon. So objectivity is an epistemic virtue [relating to knowledge], but it's a negative epistemic virtue. It's not based so much on a positive approach to gaining either impartiality or the truth, but rather a negative one: blocking out certain obstacles to impartiality. And the fear is, in many cases - let's make this concrete. Imagine that a large company is interviewing candidates for a position. The fear - not entirely unfounded - is that certain cues in the candidates' résumés may betray something about the candidates' gender, race, or ethnic background, and that these may in turn be prejudicial when human beings make the first selection of candidates to be interviewed. This is not a fanciful fear: controlled studies have been conducted, and it turns out to be the case. And if that's what you're worried about, the algorithm does have certain advantages, because it can be programmed so as not to respond to those cues. Now, it is not entirely clear how to do this: if you are basing your algorithm on data from the past, it may very well simply reinscribe those prejudices, because it will look as if the workforce hired by that company in the past has always had a certain profile, and candidates will be selected according to that profile. But if you are aware of those prejudices, it may be easier to correct them algorithmically than it would be to try to retrain the human resources officers at your company. So in that sense, you can see why the algorithm might be seen to be objective in a very narrow sense, which is that it can be designed so as not to have the biases that certain humans might have. It might have other biases, but it won't have those biases. And if what you're most worried about are those biases - perhaps for legal reasons, perhaps for ethical reasons - then you might see this as an advantage. But let me give you another example which shows the difficulty of actually realising this ambition. In the United States - I do not know whether this is the case in the United Kingdom - some courts have experimented with algorithms in order to do the sentencing. So the verdict is still decided either by a jury or a judge, but the sentencing - how long a punishment is, or how severe a fine is - is in some cases entrusted to algorithms. These algorithms were meant to be impartial, because it was suspected, again not without reason, that there was racial bias. Unfortunately, it turns out that the algorithms replicate this racial bias, and the reason they replicate it is that they are trained on examples of past decisions. So it's not a trivial matter, even with the best of intentions, to eliminate such biases from machine learning algorithms especially. There's nothing intrinsically neutral or objective about an algorithm per se.
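[Editorial illustration, not part of the conversation: a minimal synthetic sketch of the point above, assuming Python with NumPy and scikit-learn available. The feature names and data are invented. Even though the protected attribute is withheld from the model, a classifier trained on biased past decisions can reproduce the bias through a correlated proxy feature.]

```python
# Sketch: withholding a protected attribute does not remove bias learned
# from past decisions, because correlated "proxy" features carry it along.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)             # hypothetical protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)           # genuinely job-relevant signal
proxy = group + rng.normal(0.0, 0.5, n)   # e.g. a postcode-like feature correlated with group

# Historical hiring decisions: partly skill, partly direct bias against group 1.
past_hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# "Objective" screening model: the protected attribute is withheld from training.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)

# Shortlist the top 30% of the same population by model score.
scores = model.predict_proba(X)[:, 1]
selected = scores > np.quantile(scores, 0.7)
for g in (0, 1):
    print(f"group {g}: shortlisted at rate {selected[group == g].mean():.2f}")
# Group 1 is typically shortlisted at a markedly lower rate,
# even though the model never saw `group`.
```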

ELEANOR DRAGE:

Well, exactly. So why, then, do efforts to de-bias algorithms often assume that it is possible to make these algorithms neutral again - as if they were neutral to begin with and then humans came in and made them biased? Can we go back to the fascinating work you've done on objectivity, on the history of objective science? Can you tell us about how the line between the objective and the subjective has shifted over time in science?

LORRAINE DASTON:

So this is based on a book that Peter Galison and I published in 2007, with the lapidary title Objectivity. And what we looked at in the book was the emergence of objectivity as an epistemic virtue - and the analogy here is with moral virtues. It's not a new idea for us to think that there's more than one moral virtue: there's kindness and honesty and justice. And the same idea holds for epistemic virtues, which are the virtues - so moral virtues help us to be good; epistemic virtues help us to attain the truth -

ELEANOR DRAGE:

And can you explain what 'epistemic' means, for the people listening who may not know?

LORRAINE DASTON:

Epistemic is how we know what we know. So the point here is that, just as all moral virtues are defined by their opposite vices - honesty is defined by the temptation to lie and be dishonest, justice by its opposite, injustice, kindness by cruelty, and so on - there's the same kind of tension built into these epistemic virtues. And the argument of the book is that there are lots of these epistemic virtues, just as there are lots of moral virtues; they emerge at different times, under different circumstances, and they don't always tally with one another, just as moral virtues don't. So we've all been in a situation in which honesty and kindness are at loggerheads with one another: a beloved colleague comes up to you and asks your candid opinion of his latest, really boring book. Are you going to be honest, or are you going to be kind? And the same thing goes for the epistemic virtues, about how we know what we know. Objectivity is one of these, and it's a relatively new one - it really emerges only at the beginning of the 19th century. And it emerges when, instead of being most worried about losing themselves in the complexity of nature, which is the preoccupation of most researchers in the 18th century - they're really worried about the signal being drowned out by the noise, because nature is so complicated, so labyrinthine, so variable - researchers start to worry about themselves. They worry about their own projections, whether conscious or unconscious. Their favouritism for a pet hypothesis might lead them to distort the data, and they start taking precautions against that. So instead of allowing themselves, for example, to take the observation points and look for what seems to be a reasonable middle value, they start having formulas which will do this. One of the earliest examples is in astronomy. If you've ever actually done any astronomical observation - and it's really true for almost any empirical science - you find that you don't get just one value for, say, the position of a comet you're tracking. There are so many things that interfere: the choppiness of the atmosphere, whether your telescope is stable or not, whether you, for example, have a head cold that night. So you always get a scatter of data points. And the question is, what do you do with the scatter? How do you find the true trajectory of a comet? Previous astronomers knew from Newton that it had to be a conic section, a certain kind of curve, and they just chose what seemed to be the best approximation to this curve. Starting in the 1820s and 30s, astronomers began to get very nervous about this - they became very anxious - and they began to develop quantitative methods which replaced their best judgement. And this is an expression of objectivity: it's an attempt to discipline oneself, to restrain oneself. Perhaps the most interesting thing with regard to our conversation more generally is that it divides the world up into the objective and the subjective. And the question is, then, what happens to judgement? Judgement is neither a personal whim - the fact that you like vanilla ice cream, or I like strawberry ice cream - nor is it something that can be mechanised, either by a formula or by a machine. It sits squarely in the middle between objectivity and subjectivity.
And that has been an enormous problem ever since: we require judgement, because the world never aligns with our expectations, with our rules; and yet we suspect judgement, because we imagine that there's something dangerously subjective about it, something perhaps partial or biased about it. And that, sort of, is the modern dilemma: we no longer have a good account of judgement. It's both necessary and precarious.
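[Editorial illustration, not part of the conversation: a minimal sketch, in Python with NumPy, of the kind of formula that 'replaces judgement' - a least-squares fit through scattered observations. The toy parabola and noise levels are invented; a real orbit determination would fit a conic section to angular measurements, but the principle of minimising squared residuals rather than eyeballing a middle curve is the same.]

```python
# Sketch: a formula, not the observer's discretion, decides which curve
# best fits a scatter of noisy observations.
import numpy as np

rng = np.random.default_rng(1)

# "True" trajectory (unknown to the observer): y = 0.5*t^2 - 2*t + 3
t = np.linspace(0.0, 10.0, 25)
true_y = 0.5 * t**2 - 2.0 * t + 3.0

# Each night's observation is perturbed: choppy atmosphere, shaky telescope, a head cold...
observed = true_y + rng.normal(0.0, 2.0, t.size)

# Least squares picks the degree-2 curve minimising the sum of squared residuals,
# with no choice about which points to favour or discard.
coeffs = np.polyfit(t, observed, deg=2)
print("fitted coefficients:", np.round(coeffs, 2))  # close to [0.5, -2.0, 3.0]

# The scatter is quantified as residuals rather than smoothed over by eye.
residuals = observed - np.polyval(coeffs, t)
print("RMS residual:", round(float(np.sqrt(np.mean(residuals**2))), 2))
```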

KERRY MACKERETH:

That's absolutely fascinating. And I just think that work like yours in the history of science is so important, because there are so many things, like objectivity, that we take for granted - certainly I did myself. The idea that objectivity as an epistemic value only came in in the 19th century is amazing, because as an academic, as a scholar, this idea of being objective, being neutral, is so fundamental to so many of our different disciplines. I actually want to come back to some of those dualities that you were talking about at the beginning: the idea that something like honesty only makes sense in relation to dishonesty, which takes us to this binary between the objective and the subjective, and, as you've said, the problem of judgement that sits in the middle. Something that I think we tend to associate with subjectivity is emotion, and emotionality - having a preference for someone or something, feeling a particular way, and that influencing the kind of decision that you make. So how did objectivity come to be associated with a lack of emotion, or a sort of emotional distancing?


LORRAINE DASTON:

Right. So, again, this only makes sense from the standpoint of the negative vice that you're trying to counter, and what you're trying to counter is a kind of overly intrusive self, or subjectivity. And that can take almost any form. As I said, probably the most banal case is a pet hypothesis, which leads you to select data which confirms the hypothesis. Every scholar, every scientist has been confronted with this: what do you do with that uncomfortable, anomalous piece of evidence that refuses to fit into your beautiful theory? But it can also take the form of a desire to be right, a strong emotional desire to be right. And we've all seen this played out, usually in a very unedifying fashion, in scientific and scholarly polemics in which the tone of the debate escalates beyond anything which is reasonable in the pursuit of truth. Another form of emotion is aesthetic: the aesthetic allegiance to a beautiful theory, a beautiful explanation, as opposed to one which is messy, with loose ends and untethered strings here and there. And that too has been the subject of a great deal of controversy, ardent advocacy and equally ardent suspicion on the part of both scientists and scholars. And I should say that, perhaps in an attempt to show how we can imagine a world of knowing without objectivity, I have leaned over to accentuate its strangeness. But there are some kinds of studies where that kind of mechanical self-restraint is perhaps absolutely in place. I think, for example, of studies involving differences in gender or race, where it might be perfectly in order to make sure that every statistical precaution is taken in order to bar implicit bias on the part of the researchers. Or, perhaps even more familiar at the moment, the double blinding that occurs in a randomised clinical trial for new medications. That's an example of: we all desperately want the medicine to work - please, make the medicine work. Especially if you happen to be a sufferer of a certain disease, or the loved one of someone who suffers from a disease, there's nothing you want more than for the medicine to work, and that emotion can have its effects on the outcome of the trial, in the placebo effect. And for that reason double blinding, which is a mechanical form of objectivity, is a reasonable precaution. So that's another case in which emotion can be an enemy to finding out what really is the case.

KERRY MACKERETH:

That's really interesting. And I think the examples that you give do raise questions around how we can try to counteract the ways that emotion doesn't always lead us to make good decisions - even though, as feminists, we're very interested in the way that emotion has been devalued as a form of knowledge. Emotion is also itself not neutral, and there's so much amazing work by scholars like Kyla Schuller and Xine Yao looking at the complicated history of emotion itself, and the way it has also been used in different kinds of sexist and racist hierarchies. So how do we think about that and mitigate it? But I think it also raises the question that objectivity as an epistemic value or approach alone doesn't necessarily make us stop and step back and say, actually, why are we doing this experiment in the first place - particularly in regard to some of the experiments around gender and race.

LORRAINE DASTON:

Quite, and this is an interesting question - you know, why would people want this, right? If, for example, a large research project were proposed to test for differences between people who have blue eyes and brown eyes, eyebrows would be raised into the hairline. People would say, why is this an interesting question? Well, you could very easily address the same kind of sceptical puzzlement to a lot of studies on race and gender.

KERRY MACKERETH:

Absolutely. A friend's advisor used to say: if there's a gap in the literature, it's probably there for a reason - if there really is a gap.

LORRAINE DASTON:

Exactly - “fills a much-needed gap in the literature”, as we say. Yes, right. Exactly. And maybe, you know, just on the subject of emotion: objectivity itself is an emotion. I mean, the literature, especially the early literature, the Victorian literature of objectivity, is absolutely saturated with the emotions of heroic manful self-restraint. There are purple passages in which the researcher wishes to vault ahead to a generalisation, a magnificent theory, and only manful self-restraint reins in this perilous epistemic temptation. I'm obviously overdramatising - they often overdramatised.

KERRY MACKERETH:

I think that's so crucial, it's really interesting. A lot of my own work focuses - in Asian diaspora studies - on Oriental inscrutability and the way that Asian people have been racialized as unfeeling, and so I think it's really helpful and heartening to hear about objectivity itself being this very emotive thing that was used to show how great and how masculine these scientists were, because they could demonstrate both that they felt emotion and that they could restrain it in the right ways. I actually want to come back to the field of AI, which you mentioned at the beginning of the episode - another field which could definitely benefit from the question of: but why are you doing that, and why is it interesting? Something you've talked a lot about is how AI is often modelled after human intelligence, and how this is a failure of the imagination to engage with what intelligence is and could be outside of the human. So we'd love to hear your thoughts on that. What does it mean to think about forms of AI outside of this one very singular mode of intelligent thinking?

LORRAINE DASTON:

It seems to me a real missed opportunity. For decades - really over a century - science fiction writers have speculated about alien forms of intelligence and what form they might take. There are anthropological speculations à la Ursula Le Guin, and there are more sinister thought experiments in more recent 'dark forest' science fiction. But here we have AI, which is our first encounter with an alien form of intelligence, and it's really stupid to try to make it the same as our intelligence. So let me give you an example. As I mentioned before, my 13-month-old granddaughter is staying with us for the summer. If you show a small child - slightly older than she is now, but only slightly - an elephant or a giraffe, after about three examples the kid has got it. They will recognise a drawing of a giraffe, a giraffe in the zoo, a stuffed giraffe, a painted giraffe, a cartoon giraffe - they've got ‘giraffe’. It will take hundreds of thousands of examples to get a machine learning programme to master this task. Perhaps it will eventually master it. But however it masters the task, it is clearly by a different process than the one the toddler is using. So this seems to be prima facie evidence that these are two very different forms of intelligence. And the question is, why would you make the poor AI programme jump through flaming hoops, as it were, to approximate something that a toddler does very, very well after three examples, when it could be used for something else which it does much better than we do? It just strikes me as weirdly inefficient to try to make AI into NI - natural intelligence. The only reason I can think of is, once again, a kind of garden-variety market force: the idea of replacing human workers. But this strikes me as not only perfidious but also doomed. And the reason is very simple, and it returns to our earlier topic of conversation: because all forms of AI depend on assumptions about the past, they're very bad at responding to surprises or exceptions. The world is full of exceptions - and indeed, increasingly, it is full of states of exception. Not just the pandemic: climate change is going to increasingly throw spanners in the works with regard to our expectations of how things work. Human beings can improvise. This is a very difficult challenge for programmes which expect the world to be steady as she goes, stable and predictable. That seems to me, again, a kind of mismatch of tool and task. We should be using AI for the parts of our world which really are predictable - I think it's great at astronomical calculations - but perhaps not using it for those other parts of the world where, once again, judgement, discretion, improvisation will be required.

ELEANOR DRAGE:

I think that's completely it. And part of the issue is that there's a lot about the world that engineers or companies think is stable, that they think is unchangeable and predictable.

Like gender recognition software, for example - which is why attempts have been made to create AI systems that recognise a person’s sexuality or gender. This relies on the assumption that the body can be captured and read. But these things are always in flux; gender can be exciting and very mobile. So attempts to use AI to calculate things that are not predictable say more about us than about the world.

LORRAINE DASTON:

Yes, I absolutely agree. It's very interesting for me to think about how you could ever assume that - you're so right: if you were to remove the engineers who develop this kind of software from their extraordinarily cocooned and stable environment, and thrust them into almost any place in the world where life is full of surprises, both happy and unhappy, they would be thrown into disarray and probably dismay. So the question is, how sheltered do you have to be in order to have this assumption at all? I'm sure it's a good faith assumption, based upon their experience, but it requires extreme hothouse conditions: it's an orchidaceous life. And whenever I read about the complete environment that Google has created for its employees, I can't help but think that this is perilous both for them and for the rest of us, because it is exactly this kind of protected island of stability and order and steadiness, which can only be maintained by enormous effort, and which is entirely out of joint with the rest of our lives. And I think there is a point to be made here - in part at least a feminist point, perhaps a broader point - which is that I suspect women's lives are full of more surprises than the default male life in many ways, in part because the world was never meant for the generation of women who are now trying to make their way in positions that were tailored to a different kind of life, with different kinds of domestic support systems, for example. And it certainly was never a world that was meant for forms of gender which are no longer binary. Those kinds of lives will be more unstable, less predictable, than the lives that are assumed by most algorithms. So there is, in that sense, an intrinsic bias against people who are trying to imagine a new kind of life.


ELEANOR DRAGE:

That's wonderful. And as we're on the feminist utopian path, just to finish, can you draw that out a bit further - how feminist work creates an element of surprise? I know of lots of feminist art and AI design projects which have given us a glimpse of what AI might be like if it wasn't driven only by market forces, or if it were driven by people who have very different incentives and want to create different kinds of products. So can you tell us a little bit more about what you can imagine as a way forward?

LORRAINE DASTON:

I'm not sure this is specifically feminist. I just think it is in the interest of, let us say, 95% of the world's population who don't live on these islands of predictability and stability. And it's obviously in the interest of everyone who uses software that it be standardised - anyone who has ever had to deal with even shifting from a PC to a Mac and back and forth realises the headaches here. The problem is that this is always an opportunity for monopoly capitalism, which we would certainly like to avoid, for every possible reason. And there has long been a solution to this in other industries, which is the International Standards Organisation (ISO), based in Geneva, created very shortly after World War Two, I think in 1947. For purely economic reasons - because it's simply more efficient to have a global standard for electrical plugs and cat food and various other things - they form technical committees which develop standards, world standards, for not just hundreds but probably thousands of products. And this is a body which accepts only technical arguments in favour of one or another standard. I read an ethnography recently of one of these working committees, which was very funny: because the working committees are composed of engineers who represent both national and industrial interests, they are of course tempted to represent those interests in the formation of the standards - wouldn't it be convenient if we just used the standard already established in my country? And when this happens, they are taken aside at the coffee break and given a dressing down by the more experienced members of the committee. And this apparently works very well. So you can imagine, in a kind of microcosmic fashion, how it could work here. This is perhaps utopian, but I don't see why a lot of decisions about standardisation, for example, couldn't be abstracted from their current corporate matrices and handed over to the ISO for standardisation. It would be a small step - it will by no means solve the entire problem - but it will at least stop, for example, programmes being circulated before they are entirely debugged, and stop depending on us, the poor consumers, to be the alarm system that flags that a new programme paralyses the rest of your computer when activated. So that is perhaps utopian, but it seems to me that there's at least prima facie reason for thinking that it might work.

KERRY MACKERETH:

Absolutely - Lorraine, this has been such a delight. It really is so wonderful to get to sit down and have these really fascinating chats with you. I wish we could just keep going all day, but you have a gorgeous granddaughter to get back to, as well as your amazing breadth and depth of scholarly work. So we just want to say thank you so much again for coming on the show. It's been wonderful. It's been a great pleasure to talk with you.

ELEANOR DRAGE:

This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.


