November 2023   |   Volume 25 No. 1

Cover Story

The Rise of AI

Illustration by Midjourney. This image was created using the prompts abstract, simplicity, future, pathway, AI exploration, and embrace AI.


ChatGPT and other generative artificial intelligence technologies seemed to burst onto the scene this year out of nowhere. AI scholar Dr Qi Liu and Professor Yiu Siu-ming, developer of the new spinoff Stellaris AI, explain where these technologies came from and where they are going.

The arrival of ChatGPT may have excited the media, social media pundits and the general public about the prospects of artificial intelligence. But to engineers, there was nothing new under the sun.

The concept of AI has been around for decades. Back in 1950, the British scientist Alan Turing even proposed a test for it: a machine could be considered intelligent if a person conversing with it could not tell whether they were communicating with a human or a computer.

And since then, says Dr Qi Liu, Assistant Professor of Computer Science, who received an Honourable Mention in the global 2023 AI 2000 Most Influential Scholars list, the technology has evolved in waves, taking new steps every decade or two. First came rule-based approaches, then probabilistic and statistical methods, and then, when both proved too limited to handle real-world scenarios, neural networks.

The concept of neural networks, which underpins machine learning, actually dates back to the 1940s: layers of nodes pass signals to one another, and the connections between them are adjusted as the network learns from its mistakes. But the approach was out of fashion until about a decade ago, when it was used to recognise images, such as whether a photo showed a dog or a cat. Since then, it has been widely explored and has had game-changing effects in learning from vast amounts of text data.
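The "learning from mistakes" idea can be illustrated with a toy example (a minimal sketch for curious readers, not drawn from the researchers' own work): a single artificial neuron, the simplest building block of a neural network, nudges its connection weights whenever its answer is wrong, here until it has learned the logical AND of two inputs.

```python
# Toy illustration of the core neural-network idea: a single neuron
# ("perceptron") adjusts its connection weights whenever it errs.
# Here it learns the logical AND of two binary inputs.

def step(x):
    """Fire (output 1) only if the weighted input is positive."""
    return 1 if x > 0 else 0

# Training data: (input1, input2) -> desired AND output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # start knowing nothing
rate = 0.1                      # how strongly each mistake nudges the weights

for _ in range(20):             # repeat over the data until mistakes stop
    for (x1, x2), target in data:
        out = step(w1 * x1 + w2 * x2 + bias)
        error = target - out    # the "mistake" the neuron learns from
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

predictions = [step(w1 * x1 + w2 * x2 + bias) for (x1, x2), _ in data]
print(predictions)  # -> [0, 0, 0, 1], matching the AND targets
```

Real networks such as those behind ChatGPT stack billions of such units in layers, but the principle is the same: errors propagate back through the connections and nudge the weights.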

“ChatGPT 4 is like a simulated human brain. It can already do a little bit of reasoning and it understands human text well. Many people feel excited about this progress and want to build on top of it,” Dr Liu said. “But there is a lot of headroom for improvement. It is more like a starting point.”

Developing capabilities

His colleague, Professor Yiu Siu-ming, also in the Department of Computer Science, agrees. “Generative AI is not magic. We’ve been doing research on this for a long time. In fact, right now it’s very similar to Google. The difference is that rather than just giving you links, it gives you a summary. But later on, there should be more useful applications.”

Both academics are working to develop those capabilities and address some of the shortcomings, such as generative AI's poor performance on mathematics and other non-text-based questions, hallucinations, possible misalignment with human values (for instance, providing instructions for making bombs) and huge energy consumption.

Dr Liu, for instance, is trying to get AI to combine other modalities with text, such as recognising an image of Barack Obama and being able to answer questions about him. He is also working on the recognition of structured datasets such as charts and tables, which fall between text and images. The latter could be useful for businesses, and he has already been approached by industry about its use in detecting money laundering.

Dr Liu and his team are also trying to reduce ‘hallucinations’, in which the AI makes up facts and citations, by connecting it to better-quality information, and to make AIs more energy-efficient by reducing the quantity of data or the length of time required to train them.

Embracing it

Professor Yiu, for his part, recently launched the spinoff Stellaris AI with his former PhD student, Dr Jacob Jikun Wu. They have developed an alternative ChatGPT-like system with hundreds of billions of parameters that is unique in supporting Cantonese. The model has shown promise where other models falter, such as in mathematics and logic, and, importantly, is not at risk from copyright or access issues related to overseas-owned AI.

“We are now the only ones in Hong Kong to train a model from scratch without relying on OpenAI [which owns ChatGPT]. The next step is how to make use of this,” Professor Yiu said.

The team are looking into making AI work as a personal assistant, for instance by checking flight availability and booking a seat, or buying stocks based on an individual’s financial background and acceptance of risk.

“AI is here to stay and people should embrace it. Problems like bias and false information are not new, they exist in society and in other technologies, too,” he said.

Dr Liu also believes everyone will have to adapt. “People are worried AI will replace their jobs. I think the tendency is not reversible. They need to embrace AI in their daily work to improve efficiency. That is a good thing. People can always find other things to work on that are more creative or less intensive,” he said, but he added: “We still need to be careful to avoid doing harmful things to human beings or society.”

AI is here to stay and people should embrace it.

Portrait of Professor Yiu Siu-ming