Kerry Mackereth

Soraj Hongladarom on Machine Enlightenment

Updated: Jul 4, 2022

In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive.


Soraj Hongladarom is professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok, Thailand. He has published books and articles on such diverse issues as bioethics, computer ethics, and the roles that science and technology play in the culture of developing countries. His main concern is how science and technology can be integrated into the life-world of people in so-called Third World countries, and what kinds of ethical considerations can be drawn from that relationship. A large part of this question concerns how information technology is integrated into the life-world of the Thai people, and especially how such integration is expressed in the use of information technology in education. He is the editor, together with Charles Ess, of Information Technology Ethics: Cultural Perspectives, published by IGI Global. His work has also appeared in Bioethics, The Information Society, AI & Society, Philosophy in the Contemporary World, and Social Epistemology, among others.


Rens Dimmendaal & David Clode / Better Images of AI / Fish / CC-BY 4.0


Reading List:


S Hongladarom. AI & Society 13 (4), 389-401


S Hongladarom. Minds and Machines 21 (4), 533-548


Cambridge Scholars Press. 2007.



Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

In this episode, we speak to Soraj Hongladarom, a professor of philosophy and Director of the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok. Soraj explains what makes Buddhism a unique and yet appropriate intervention in AI ethics, why we need to aim for enlightenment with machines, and whether there is common ground for different religions to work together in making AI more inclusive.


ELEANOR DRAGE:

It's a great pleasure to have you here today. Can we start by having you tell us a bit about yourself, what you do, and what brings you to the topic of Buddhism and technology?


SORAJ HONGLADAROM:

Great to be here, and thanks for having me. My name is Soraj and I teach philosophy at Chulalongkorn University in Bangkok, Thailand. What brought me to working on Buddhism and technology is that I'm Buddhist, as 95% of Thai people are. In order to find answers that are relevant to my context, it's natural to look at Buddhism, and I believe it can contribute something to a global audience and the global academic community too.


KERRY MACKERETH:

What does Buddhism mean to you?


SORAJ HONGLADAROM:

Oh, that's a good question. We don’t often reflect on that question here in Thailand because Buddhism is so natural to our environment. But whenever we find troubles or problems, we usually look at what Buddhism has to say in order to find answers. It functions as a resource pool, so to speak, where we can get sustenance, or, in my case, find inspiration for answers to all sorts of problems that occur with modern technology.


ELEANOR DRAGE:

We are The Good Robot, so we're asking: what makes good technology? What does good technology look like? This is, of course, a huge question, and only part of it is about creating belief systems for machines. You've said before that in Buddhism, being good isn't just about being ethical, so I'm really interested in how you think Buddhist philosophy can help us approach these questions, particularly now that AI ethics has become a catch-all for dealing with any problems that might arise from AI. So can you tell us what you think makes Buddhism unique in its response to the question of what makes good technology?


SORAJ HONGLADAROM:

I rather like the term good technology, because for me it points towards the English phrase ‘being good’. I understand being good as having two broad meanings, because when the word is translated into my language - into Thai - the two meanings are rendered with two separate words. On the one hand, technology can be good in the sense of being efficient, functional, or effective. If we have a good car, for example, that means it does not break down easily and so on. The other meaning of good is the ethical one. When we talk about a car being good in this second sense, it means that the car operates within an ethical environment and has ethical components. In fact, if we look back to the Greeks, like Plato and Aristotle, these two senses of being good used to be interrelated. The word they used for good is much the same as the one we use in traditions like Buddhism, where these two senses are very much related to each other but have distinct words to reference them.


What makes Buddhism unique? It focuses on the idea of interdependence, and the doctrine of non-self. This is a bit technical, but the idea is that you try to become altruistic and compassionate by opening yourself up and looking at the world not as a place populated by ego, but where all things are connected. From there you can find ethical considerations.


KERRY MACKERETH:

What kinds of harmful outputs or what kinds of problems do you think might come from technology that doesn't abide by these kinds of Buddhist principles?


SORAJ HONGLADAROM:

What goes wrong when technologies don’t follow Buddhist principles? To address this, we have to think about the essence of Buddhist ethics. Once we are clear on that, then we can start to answer the question. Compassion is one of the principles, and an acknowledgement that all things are interdependent is another. These are the things you bear in mind when you cultivate yourself - which is what being good in Buddhism means. It means you try to do things that do not harm others, another very basic principle - really the number one principle. Then you try to cultivate your bodily action, speech and mind. Bodily and speech actions can include trying not to harm others, or not lying or stealing. Cultivating your mind means that you do not covet other people’s possessions, and that you remind yourself of the principles as often as you can so that when you’re having bad thoughts you can catch them before they turn into speech or bodily action. This is what Buddhists try to cultivate in themselves.


So how can these principles materialise in technology? Well, the kinds of technological products or systems that violate these very basic ideas from Buddhist ethics might arise if, for example, technologists metaphorically build a wall around themselves, so that their technology becomes a means only to enrich themselves or their ego consciousness.


KERRY MACKERETH:

I'm really captivated by the foundational principles of Buddhist ethics. And I was wondering, do you think these principles can be codified into a set of rules that a machine can follow? So do you think a machine can achieve enlightenment for example?


SORAJ HONGLADAROM:

Thanks for the question. Well, the term ‘machine enlightenment’ is in my recent book The Ethics of AI and Robotics: A Buddhist Viewpoint, and it demands an explanation! Engineers prefer something that they can grasp easily and embed into their algorithms. If you devised an ethical code of conduct for AI based on Buddhist ethics, the end result would not look terribly different from the kind of ethical guidelines for AI that the EU or any other institution in the West has produced. It would include concepts like privacy rights, transparency, and accountability, because those are some of the concepts that we really need to protect if AI is to behave ethically. In Thailand, there is a company that is developing facial recognition technology, and they are aware of the ethical implications of their products, because, as we know, facial recognition technology has a very strong potential to invade our privacy and to cause an imbalance of power between state authorities and citizens. So we know that we have to be really careful when we begin to use such technology. There are benefits, of course; some applications of facial recognition can provide tangible benefits, but there are also obvious harmful effects. So we need a strong code of conduct or a set of rules which could be translated into legal mechanisms strong enough to deter unethical action without crippling innovation. However, Buddhism can give these codes of conduct more substance by paying more attention to inclusiveness and to the reduction of social injustice and inequality than other codes in existence at the moment.


ELEANOR DRAGE:

Lots of those points resonate with a feminist perspective on technology. For example, we're both trying to respond to the question of how to make technologies more inclusive, we're both putting emphasis on methods and processes rather than structures, and we're both paying attention to relationality, as you just said, the interdependence of humans, technology and the natural world. I think this last one is particularly important - we must take into account the interdependence of all different actors in technology when responding to the problem of who's accountable for AI. So we have a lot of things in common, I think, but how do you see feminism relating to Buddhism?


SORAJ HONGLADAROM:

Yes, that's a fantastic question. But before we go into that, I forgot to talk about machine enlightenment, the key concept in my book. The term is a metaphor - I'm not saying that machines can become enlightened, that would be absurd - or perhaps it will happen in the next 100 years, who knows! But as of now, there is no possibility that a machine will become conscious anytime soon, let alone enlightened. So, that is out of the question. But that does not mean that we cannot talk about machine enlightenment as something to aim for. In Buddhism, we aim for something that is almost impossible: we cultivate ourselves according to guidelines with the goal of enlightenment, which is the supreme ambition of any Buddhist, and which, once attained, means freedom from all suffering. Of course, that's an ideal, and it is very difficult, or perhaps impossible in a single lifetime, to achieve that type of enlightenment. But the structure that we have in Buddhism situates enlightenment at the very top of where we can get to, and all the guidelines on ethical conduct that Buddhists need to follow are designed to move us in that direction. When we extrapolate that to machines, what we have is the goal of supreme perfection: the end result of a really, really good, or perfectly good, technology. Guidelines for any type of machine aim to eventually achieve that end result. Prominent among those guidelines is the idea of inclusiveness, which is very much dependent upon the ideas of compassion and interdependence. From these, we have the belief that no group in society should be marginalised or neglected, and that we have to include everybody. That is very much akin to the idea of social justice. So from that, we can get back to your question about feminism: personally, I think that feminism motivates what we are trying to achieve in ethical thinking on technology, especially AI. In fact, I am a member of a group of scholars and activists who are trying to create a more inclusive, gender-oriented ethics for technology that is more sensitive to marginalised groups. The question that we are concerned with is how to create AI in such a way that all these groups are not neglected and are always included.


KERRY MACKERETH:

Do you think that we can get religions to come together to solve problems in AI? Do you see Buddhism as being particularly compatible with certain other religions or activist traditions like feminism, and if so, how does Buddhism allow for collaboration?


SORAJ HONGLADAROM:

I very much believe that the foundations of all religions have a great deal in common. We know that when we engage in inter-religious conversation or dialogue, we shouldn’t foreground their differences. Of course, we know about the differences already and we know they will always persist, but when we talk about cultivating oneself in Buddhism, I think that my brothers and sisters in other religions share the same feelings. It doesn’t matter so much what these differences are if your end goal is to enter into communion with the divine. In order to achieve that, you need some kind of ethical conduct. Some may think that you need divine grace, but you cannot just lie still and wait for grace to come! You have to at least do something. In which case, we can find a lot of similarities and common ground among the religions. The idea from Buddhism that you should not build a wall around yourself, that you should open yourself to others, can resonate very much with other religions too, because blocking other people out also means closing yourself off from God. And that is a no-no. I'm not a Christian, of course, but turning your back on the divine is not something I think believers in God want to do. So yes, there’s a lot of common ground and I think we can work together.


KERRY MACKERETH:

That's really encouraging. We have one final question for you, which is that you’ve talked a lot about all the amazing insights that Buddhism can bring to AI ethics. But we'd like to flip that question for you and say, what do you think that the study of technology can do for Buddhism?


SORAJ HONGLADAROM:

There is a saying in Buddhism that enlightenment is the end result, but there are always a multiplicity of ways to arrive at the final destination. It is not the case that there is only one way and it’s this way; rather, there are a myriad of ways which take you to the same destination. Your question about what technology can do is a very fascinating one. When I heard this question, I was reminded of all the meditation apps now available, which you just need to download and they lead you through meditation, even if you have not had any training in it before. Even if you are an absolute beginner, you can start doing meditation by listening to those apps and following whatever the apps tell you to do. I have had a look at some of them and they are quite good. That's one example. There are also, of course, lots of websites about Buddhism, so it has been very useful for the propagation of Buddhist beliefs and ideas worldwide. We now have the internet, and that is tremendously helpful.


KERRY MACKERETH:

This has been really, really wonderful. I personally learned so much from this interview, so thank you again.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.


