November 2022   |   Volume 24 No. 1

Cover Story


AI and the Language Barrier

Artificial intelligence (AI) is increasingly making important decisions about our lives. But does AI understand terms like ‘creditworthiness’ and ‘terrorist threat’ the same way humans do? Chair Professor of Philosophy Herman Cappelen argues it is time for more dialogue and effort across disciplines to address this important issue.

AI has become an embedded backdrop to our daily lives. When you go to the bank seeking a loan, the decision will almost inevitably be made by AI. If you have a malignant tumour, your treatment will be informed by AI. Some court decisions in places like China and the US are made by AI. Whether someone should be flagged as a terrorist – and whether a bomb should be dropped on a specific site – are also decisions driven by AI.

“Artificial intelligence is everywhere now. But how can we be sure that it uses the same meaning we do when we say ‘medical treatment’ or ‘loan’ or ‘bomb’? How can we get AI that shares our language, that we can understand and that can understand us?” said Chair Professor of Philosophy Herman Cappelen.

Professor Cappelen has pioneered the use of philosophy of language to consider human interactions with AI. His aim is to understand which questions need to be addressed to make AI more interpretable to humans and to ensure it does not become a threat to them – a very real concern in some circles, particularly as AIs become more powerful. He co-authored a book last year, Making AI Intelligible: Philosophical Foundations, that explores these issues.

“To protect against that threat, some people have introduced the idea of aligning the values of AI with human values. But in order to do that, the moral language that we speak needs to somehow or other be incorporated into AI,” he said.

Language is also a factor in ensuring humans understand why AIs make decisions. The European Union has a law requiring that such decisions be explained, but the technology is not yet sophisticated enough to provide genuine explanations.

Making AI Intelligible: Philosophical Foundations by Herman Cappelen and Josh Dever was published in 2021 by Oxford University Press.

Morality, norms and algorithms

Professor Cappelen argues that AIs need to be able to explain themselves using human language and values, and that this cannot be achieved simply by tweaking algorithms and mathematical formulas, because human language and meaning are sociological phenomena that develop through interaction with others. “It’s as if you just studied the brain to understand language. It is not only the brain that determines whether you understand language, but also your interactions with the larger community,” he said.

This means experts from the humanities and social sciences, not just computer scientists and engineers, also need to be involved in developing AI that is interpretable and can offer explanations. “In order to think that the AI has norms or a morality or an ethics, you need to know what it is for a program to have a norm in it. And that requires understanding of the role of moral language, the nature of moral principles, the nature of morality and so on. Philosophers and social scientists have spent years studying these questions.

“Right now, the people dominating the discussion on the direction and proper use and social implementation of this technology are those who make the technology – which you might think is a bit worrying,” he said.

He is trying to foster interdisciplinary collaborations through the AI & Humanity Lab he has established in the Faculty of Arts, which explores how AI interacts with and transforms humanity, and through his membership of the new HKU Musketeers Foundation Institute of Data Science, which is engaged in cross-faculty research on big data and AI.

Time pressures

Professor Cappelen acknowledges the task of injecting human values into AI is not an easy one, and it becomes even harder when cultural considerations need to be accounted for.

“I think it is the biggest challenge AI faces right now,” he said. “But the solution can’t be to say that AI is not going to have any morals at all. If you’re worried about existential threats as AIs get smarter, then the solution could be that we don’t need to pick a very particular morality, we just need to make sure that the AIs like humans, that they want to support our welfare and that they’re not going to kill us.

“There is an argument that what happens with this technology over the next 20 to 25 years and what we do with it will be the most important decision in human history, because it will shape everything that comes after it. I don’t know if that’s true, but it is certainly not impossible. The technology develops so fast and the consequences are almost impossible for us to understand. If we don’t get some control over it right now, we might just lose the chance to have control.”


Professor Herman Cappelen