May 2024 | Volume 25 No. 2
Cover Story
The Fact Checkers
About 15 years ago, Professor Masato Kajimoto from HKU’s Journalism and Media Studies Centre (JMSC) noticed a worrying trend. Reputable media outlets were publishing stories from social media without checking if they were true – stories such as the rumoured death of American actor Morgan Freeman (he was alive) and the projection of sunrises on giant outdoor screens in Beijing so people could see the sun despite bad air pollution (the image was an advertisement).
Media outlets soon cottoned on, but the problem of fake news and misinformation on social media was only just beginning. Professor Kajimoto began tracking it around the time of Hong Kong's 2014 social movement, when misinformation was circulated to promote certain narratives. In 2016, the whole world started to pay attention in the wake of Brexit in the UK and the elections of Donald Trump (who popularised the term 'fake news') in the US and Rodrigo Duterte in the Philippines.
“There are certain groups in society that have very strong motivations to produce and disseminate misinformation. They are trying to mislead or deceive the public. It’s definitely something that I think journalism can address,” he said.
This is a tall order, given the vast quantities of content in circulation. On Facebook alone, more than one billion stories are posted each day. Professor Kajimoto is therefore selective, tracking multiple groups and topics at a time on Facebook, Twitter (now called X), Reddit and other platforms. Popular misleading posts shared within private messaging apps such as WhatsApp and WeChat also often surface on these public platforms.
Spotting fake news
His research has identified groups and narratives targeted by fake or misleading posts, showing, for instance, how religion is a factor in Southeast Asia where posts target Muslims in Myanmar and Chinese Christians in Indonesia. In Hong Kong, misleading posts have been more political, with disinformation widely used against opposing political views in the period before the National Security Law was introduced in 2020.
Professor Kajimoto also heads the Annie Lab at HKU – named for ANNIE, the Asian Network of News & Information Educators – a project he started with other journalism educators in Asia in 2013. The Annie Lab trains students in fact-checking skills, such as how to verify whether accounts are real or fake and whether images or videos have been manipulated.
The students focus on a variety of claims circulating on social media. Examples of their work include showing that a post on X claiming China’s blood products account for 80 per cent of the global market was false (it’s normally under one per cent except for the pandemic years) and that a photo claiming to show a Palestinian boy mopping up the blood of his family in the recent Gaza war was misleading (it was taken more than 10 years ago after the slaughter of a cow).
While fact-checking skills are valued in the job market and society in general, there is also a downside. “We often joke about fact-checkers having post-traumatic stress disorder because you see content after content that is clearly trying to mislead and deceive people, and lots of graphic images and conspiratorial narratives. What we can do is very limited – the volume is just getting worse. So in a way, it’s a very bleak future.”
GenAI complicates things further
Generative artificial intelligence (GenAI) makes things even bleaker because it is so difficult to verify sources, with worrying implications for public information. “There are lots of voice fakes now, especially in countries having contentious elections, and you cannot tell if this person really said that or if somebody used AI to generate their voice,” he said.
But there are a few slivers of hope. Big tech companies are looking into digital tags to identify AI-generated images, and many media outlets now fact-check political leaders. The idea of 'pre-bunking' (as opposed to debunking) is also taking hold to help citizens spot fake news and misinformation. The JMSC itself has a dedicated course on GenAI.
Professor Kajimoto also thinks misinformation and fact-checking should be put in perspective. If people have decided on an issue, they are unlikely to change their minds. Think of the US election – no amount of debunking is likely to sway supporters of Donald Trump or Joe Biden to vote the other way. Even during the pandemic, no amount of scientific information changed the minds of anti-vaxxers, because they held deep fears of vaccination.
“When it comes to trust, oftentimes it comes from somebody that people know. So no matter how many times news organisations or medical doctors try to convince them, they may not listen. But if it comes from loved ones, then people might change their minds. Trust is more emotional than rational,” he said.