By Fauwaz Abdul Aziz (Analyst, History & Regional Studies Programme)
Executive Summary
• Widespread concern about the damage wrought by disinformation has led various quarters to call for the prevention and punishment of the creation and purposive spread of false or misleading content on the Internet and social media, specifically.
• The potential and actual effects of what is popularly, but inaccurately, called ‘fake news’ range from damage to personal, political, and business reputations, to harm to social and material health and wellbeing, and even to threats to individual and group safety and security.
• This article briefly considers the three main ‘frames’ in which legal and regulatory regimes have been advocated to tackle the phenomenon of disinformation.
• This article summarizes the principal concerns with the state-centric approach as well as with the self-regulatory approach advocated by government and industry.
• Instead, it calls for a tripartite system of co-regulation that sees the participation of the state, industry, as well as civil society and the public interest.
• The article then proposes some characteristics and features that should be present in the Malaysian system of disinformation regulation.
Introduction
It is apparent to most in Malaysia already, as in other societies around the world that are connected by information and communication technologies (ICT)[1], that disinformation has become a scourge for societal wellbeing. The rise and spread of what is popularly, though inaccurately, labelled ‘fake news’ has driven public debates and policy deliberations on the social consequences of developments in ICT, ranging from specific concerns about artificial intelligence (AI) and its simulations and misrepresentations of human identities, capabilities, activities, and practices; to the use of digital programmes and technologies to invade, access, and extract private and sensitive data; and to scams and frauds[2].
Over a period of just nine months from January to September 2024, for example, RM548.1 million was lost in Malaysia to investment scams[3]. Disinformation can also lead to serious mishaps and even death, as with the social media posts falsely declaring the end of hemodialysis treatments at public hospitals[4]. Then there is the publication and spread of allegations that damage business or personal reputations: false announcements concerning certain groups or individuals can result in either misplaced trust in, or mistrust of, those groups or individuals, or their business or political rivals.
Then there are postings that are not explicit but are otherwise clear (‘those who know, know’) as to the intended message or subject (‘we are under attack’) and object or target (i.e., the ethnic Other). While Malaysia has been fortunate (so far) not to have experienced the lynchings, destruction of whole villages, and other acts of communal violence that followed the spread of false ‘news’ of crimes purportedly carried out by one or another ethnic group (as, for example, in neighbouring Myanmar)[5], we have come dangerously close (e.g. the November 2018 rioting at the Seafield Sri Maha Mariamman Temple in Subang Jaya, Selangor). As Heidi Tworek pointed out in 2021, the policy question five years earlier had been whether social media would be regulated; the question today is how and when such regulation will take place.[6]
Government regulation
Various quarters have called for the authorities to clamp down on disinformation on the Internet by putting in place the legal and regulatory provisions and structure to prevent and punish the creation and spread of ‘fake news’.
However, several problems and questions arise from adopting a government- or ‘state’-centric approach to mis/disinformation.
For one, government-created and enforced legal and policy provisions are regarded as ineffective, or even counter-productive, in tackling some issues. To take the example of Weimar Germany in the early 20th century, Hans Bredow, a German bureaucrat, sought to reform German radio regulations to protect the Weimar Republic from anti-democratic forces by legislating direct state supervision of the content of communications. Unfortunately, the state-centred regulation that Bredow built up and put in place allowed the Nazis, once they came to power in 1933, to exercise immediate and unrivalled mastery over radio communications as another tool in their arsenal of national propaganda.[7] In Malaysia, the Anti-Fake News Act of 2018 is an example of a law that suffered from an overly broad remit, was susceptible to misuse, and was eventually repealed. Examples also abound of legislation being used to suppress information that is in the public interest to know in order to demand accountability and improvement.[8] These examples illustrate the slippery slope towards censorship of vital information and suppression of freedom of expression when the government is given exclusive remit to regulate information.[9]
Governments, even if they enjoy a preponderance of human and material resources in many domains of social activity, are notoriously slow in responding to technological advancements and changes – such as those in ICT. If the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA), with its technological and resource capacities to tackle disinformation[10], can be described as moving at a “glacial pace”[11] relative to the changes and developments occurring amongst private sector technology companies, what more Malaysia, with its small size and limited resources?
Close cooperation between government and ICT experts raises other problems, however, since such experts are overwhelmingly employed by the very industries and companies eyed for regulation. This applies to the situation within Malaysia, but more so in the international political economy of global Internet and social media companies. How do we balance public sector engagement with technology experts and Big Tech corporations when it is those very same corporations that allow their data to be used for political and other purposes? [12] Would forcing Big Tech to reveal their proprietary data (e.g., their black box of algorithms) be legally possible? Would public disclosure of such data address disinformation at all? Would it stop disinformation – or actually make it easier?[13]
Other questions, less technical, arise with the question of government regulation of disinformation: what happens when the state itself is the source of ‘fake news’? Is disinformation only undesirable and punishable when non-state actors or individuals are responsible/involved? What sanctions do state actors and representatives – from politicians, bureaucrats, or enforcement agency officials or their agents, or even by those non-state actors favoured by the state – face when they make or spread disinformation? How many of such actors or individuals have so far been charged, fined, or otherwise punished?
Self-regulation
A second approach that has been mooted is for the ICT industry to regulate itself. The European Commission created the EU Code of Practice (CoP) on Disinformation in 2018 (revised and adopted in 2022), which adopts the self-regulation approach. The objections to this approach are as readily apparent as those to the first option of government regulation, if judged by the criterion of effectiveness. Just as Internet and social media companies have displaced traditional media as the main sources of information for people today, they are also the central sources and channels of disinformation. If the profitability of ICT giants such as Alphabet, Meta, and X (YouTube made US$20 billion in 2019; Alphabet made US$160 billion) lies in a business model and technology design of keeping Internet users “interested, engaged, and logged on as long as possible through the use of sticky content” by way of algorithms and artificial intelligence – even if such algorithms are found to encourage the consumption of “extreme content”[14] – it defies logic to expect such companies to voluntarily and effectively depart from such a model or design.
Co-regulation
This leads us to the third option, which involves both the private sector Internet and social media platforms and the public sector bodies of the government working in cooperation with a ‘civic’ body that is independent of both, yet empowered to regulate and enforce against disinformation. Efforts to raise and expand digital literacy in society so as to develop social resilience to disinformation, on the one hand, and to establish ‘fact-checking’ initiatives and bodies, on the other, are both of critical necessity in tackling the problem of disinformation. Yet, these efforts alone are akin to ‘fire-fighting’ and ‘chasing ambulances’ – sorely inadequate given the powerful political and profit-motivated drives and forces behind disinformation.
There have been a number of proposals in relation to co-regulation to tackle disinformation. In 2018, the London School of Economics and Political Science’s Truth, Trust and Technology Commission published “Tackling the Information Crisis: A Policy Framework for Media System Resilience”, in which it recommended the establishment of an independent body that would research and report on disinformation, coordinate with various government bodies, and impose fines and other penalties.[15] The UK government issued in 2018 a white paper on ‘online harms’, including on the topic of disinformation, that proposed the setting up of an independent regulatory body that would draft a code of conduct for the tech industry containing, among other provisions, the tech companies’ ‘duty of care’ towards Internet users, penalties for non-compliance, and provisions for blocking Internet access.[16]
Co-regulation has, therefore, been mooted in the form, essentially, of an independent commission whose members come from a cross section of public, private, and civil society sectors to regulate and enforce rules against disinformation independently of the government or the private sector.[17] Efforts towards realizing a body in Malaysia to regulate the media as a whole have been in the making for decades in the proposed Malaysian Media Council (MMC). As expressed by the Pro-Tem Committee of the MMC set up in 2019 to revive those efforts, the Council envisions the formulation of “boundaries that promote responsible publishing” encompassing “fair, balanced and accurate reporting” in tandem with the media’s role to “spotlight injustice and inequality, be a forum for public debate”, and to “fact check against fake news”.[18] “At the same time,” the “media itself needs to be accountable for its own conduct, and ensure that the public have a right to question and hold the media accountable for its reporting.” [19]
On disinformation, specifically, the Pro-Tem Committee of the MMC declares,
An independent and professional media industry is the best form of social inoculation against the spread of false and fake news. Armed with news and views based on facts, society becomes less prone to wild accusations and fearmongering. The fabric of knowledge becomes the open platform for debate, understanding and compromise, the best and only counter for divisive misinformation and a language of hate spread in closed social media and chat groups. [20]
In such a vision, the Malaysian Media Council represents “all major stakeholders in the media”, particularly publishers and media practitioners such as journalists, designers, and photographers, and non-media practitioners such as members of the public and civil society organisations. While it is obvious that the Pro-Tem MMC and many civil society groups prefer the smallest role possible in the Council for government or industry, it is not likely that either of the latter will acquiesce to such an arrangement, and all parties will need to arrive at a compromise to achieve a workable platform and mechanisms for co-regulating the media.
Among the other important aspects of the proposal for the MMC are the drafting of a code of conduct for the media industry, a dispute resolution procedure for public complaints against the media, and a mechanism for funding of the MMC that would ensure its effectiveness, independence, and sustainability.
What would regulation tackling disinformation look like?
Fortunately, there is ample existing work proposing some of the defining characteristics of effective regulation targeting disinformation. Ben Epstein, for example, highlights the need for disinformation regulation to be “forward thinking”, clear in focus, and adaptable and responsive to developments in ICT and in the international architecture within which disinformation is carried out.[21] Most challengingly, says Epstein, effective regulation should promote values and independent structures and institutions against forces seeking to overturn democratic practices and ideals.[22]
No less challenging is the need for regulations to target disinformation with sanctions that are proportional to its harms, actual and potential – instead of penalties that are ineffective (e.g., fines that appear significant but are in fact insignificant in the larger scheme of things, or even to the party responsible, such as the US$170 million fine in 2019 on Alphabet’s Google and YouTube when Alphabet raked in US$160 billion that same year). Proportionality would also dictate minimizing the additional harms caused by the regulation itself: ‘overkill’, for example, should be avoided – as in actions by the authorities that serve to suppress creativity and innovation, freedom of expression, or the right to information.
Admittedly, ‘proportionality’ is a principle that does not lend itself to easy definition or application; amongst the plethora of ‘anti-vax’ content created and shared over the past few years, for example, some does not clearly evince an economic or political motivation that would render it ‘clearly’ disinformation.
Conclusion
Within the national context, the multidimensional complexity of disinformation makes formulating an effective regulatory regime against it a near-impossible task, not to mention the many related issues. These range from the particular intricacies of Malaysian politics; philosophical and ideological differences; democratic and other competing processes, values, and considerations; and local and regional differences; to technology-related concerns such as data security and privacy. All of these point to the obvious, yet still sticky, point that context matters and that there is no ‘one size fits all’ solution to the problem of disinformation.
In the even larger scheme of the regional and international political economy and architecture of ICT, there is only so much that can be done within Malaysia’s national borders even if there were constructive and productive collaboration between the government, private industry, and civil society to tackle disinformation.
As Penang Institute has urged elsewhere,[23] an individual government such as Malaysia’s, or even its national economy, cannot invoke sufficient market or political power to persuade such platforms as Alphabet, Meta, X, or Apple to respond to disinformation. As pointed out earlier, in one year, just one of these corporations rakes in the equivalent of half of the total value of the Southeast Asian region’s US$300 billion digital economy. Getting these Big Tech companies to comply with demands to regulate their own content would require an amalgamation of the region’s forces – states, civil society, and media organisations – coming together to assert their collective will and ensure that Big Tech mitigates the harm resulting from the disinformation spread and amplified through their platforms.
Footnotes
[1] While exact definitions vary amongst scholars about such terms as ‘fake news’, ‘misinformation’, and ‘disinformation’, they appear to land on the inadequacy of ‘fake news’, especially over the tendency of parties to weaponize the term subjectively and against any party with which the user has policy or ideological disagreements. ‘Misinformation’ has been defined as the inadvertent sharing or spread of false information, whereas ‘disinformation’ is the purposive or strategic creation and spread of wrong or misleading information. See, for example, S.S. Lim & S. Wilson (2024), Unraveling fake news in Malaysia: A comprehensive analysis from legal and journalistic perspective. Plaridel. Retrieved from https://doi.org/10.52518/2024-4limwil.
[2] Hossein Derakhshan and Claire Wardle, “Information Disorder: Definitions” (paper presented at the “Understanding and Addressing the Disinformation Ecosystem” Workshop, Philadelphia, December 15–16, 2017), cited in Epstein, Why is it so difficult to regulate disinformation?
[3] Farik Zolkepli, “Majority of investment scams in Malaysia rise linked to social media, say cops,” The Star Online, 18 October 2024, retrieved from https://www.thestar.com.my/news/nation/2024/10/18/majority-of-investment-scams-in-malaysia-rise-linked-to-social-media-say-cops.
[4] N. Trisha, “Malaysia committed to universal healthcare, don’t be fooled by fake news, says Dzulkefly,” The Star Online, 20 October 2024, retrieved from https://www.thestar.com.my/news/nation/2024/10/20/malaysia-committed-to-universal-healthcare-don039t-be-fooled-by-fake-news-says-dzulkefly.
[5] Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” New York Times, 15 October 2018, retrieved from https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html on 7 November 2024.
[6] Heidi Tworek, Policy Lessons from Five Historical Patterns in Information Manipulation, in W. Lance Bennett & Steven Livingston (eds), The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States (Cambridge University Press), 184.
[7] Heidi Tworek, International Grand Committee on Big Data, Privacy, and Democracy, “Remarks to the International Grand Committee on Big Data, Privacy, and Democracy,” 2019, retrieved from www.cgai.ca/international_grand_committee_on_big_data_privacy_and_democracy on 10 November 2024.
[8] ‘Refugee activist Heidy Quah given discharge not amounting to acquittal for improper use of network facilities’, Bernama, 25 April 2022, retrieved from https://www.malaymail.com/news/malaysia/2022/04/25/refuge-activist-heidy-quah-given-discharge-not-amounting-to-acquittal-for-i/2055559 on 9 November 2024.
[9] Jon Henley, “Global crackdown on fake news raises censorship concerns,” The Guardian, 24 April 2018, retrieved from www.theguardian.com/media/2018/apr/24/global-crackdown-on-fake-news-raises-censorship-concerns on 9 November 2024.
[10] Pete Norman, “U.S. Unleashes Military to Fight Fake News, Disinformation,” Bloomberg, 31 August 2019, retrieved from www.bloomberg.com/news/articles/2019-08-31/u-s-unleashes-military-to-fight-fake-news-disinformation on 9 November 2024.
[11] Ben Epstein, Why is it so difficult to regulate disinformation? In The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States, W. Lance Bennett & Steven Livingston (eds), Cambridge (UK): Cambridge University Press, 195.
[12] As when Facebook allowed Cambridge Analytica to harvest tens of millions of its users’ data around the world for political interests, including in Malaysia.
[13] Epstein, Why is it so difficult to regulate disinformation? 198.
[14] Ibid.
[15] Commission on Truth Trust and Technology, “Tackling the Information Crisis: A Policy Framework for Media System Resilience,” The London School of Economics and Political Science, 2018, www.lse.ac.uk/media-andcommunications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf.
[16] Epstein, Why is it so difficult to regulate disinformation? 202–203.
[17] An example of such a ‘co-regulatory’ body can be found in the Advertising Standards Authority (ASA) in the UK. The ASA is tasked to “regulate the content of advertisements, sales promotions and direct marketing in the UK”, to investigate “complaints made about ads, sales promotions or direct marketing”, and to decide whether such advertisements comply with the ‘UK Code of Non-broadcast Advertising, Sales Promotion and Direct Marketing’, which the ASA formulated.
[18] Report of the Pro-Tem Committee Malaysian Media Council, 30 July 2020, retrieved from https://mediacouncil.my/ on 9 November 2024, 8.
[19] Report of the Pro-Tem Committee Malaysian Media Council, 8.
[20] Ibid, 9.
[21] Epstein, Why is it so difficult to regulate disinformation? 201.
[22] Ibid.
[23] Malaysia in the ASEAN Chair 2025: Policy Recommendations from Penang Institute, Issues, 8 October 2024, available at https://penanginstitute.org/publications/issues/malaysia-in-the-asean-chair-2025-policy-recommendations-from-penang-institute/.