In an era where technology is reshaping the political landscape, the emergence of artificial intelligence (AI) and deepfake technology poses a profound challenge to the integrity of elections in Malaysia. From fabricated videos to misleading audio clips, these tools have the potential to manipulate public perception, erode trust in democratic processes, and exacerbate polarisation. As Malaysia gears up for future elections, the question looms large: how vulnerable are voters to these digital deceptions, and can the political system withstand their impact?
The use of AI-generated content to influence politics is no longer a distant threat. Across the globe, examples of deepfake manipulation have already surfaced. In the United States, a fabricated robocall mimicking President Joe Biden’s voice urged New Hampshire voters to sit out the Democratic primary last year, illustrating how easily such technology can be weaponised. Closer to home, Malaysian political operatives and commentators are increasingly wary of similar tactics. A senior media consultant, speaking to a local journalist, warned that as elections approach, manipulated video reels depicting politicians in compromising situations could flood social media platforms, sowing confusion and doubt among the electorate.
Yet, the effectiveness of such tactics remains debated. In Malaysia, past instances of alleged personal scandals—whether real or fabricated—have not always swayed voters as expected. A politician implicated in a controversial video lost a parliamentary seat in the 15th General Election (GE15) in 2022, but many argue this was due to opposition to his political stance rather than personal allegations. His subsequent victory in a state election in 2023 suggests that salacious content, even if widely circulated, may not derail a career if voters prioritise policy over scandal. This raises a critical question: do deepfakes hold the power to fundamentally alter electoral outcomes, or are they merely a sensational distraction?
The Mechanics of Deception
Deepfake technology, which uses AI to create hyper-realistic but fabricated audio and visual content, has advanced at an alarming pace. What once required sophisticated equipment and expertise can now be achieved with readily accessible software, making it a tool not just for state actors or well-funded operatives but also for individuals with malicious intent. In Malaysia, a recent example of a crudely edited poster targeting a Sabah lawyer-turned-politician showed how even amateur efforts can deceive at first glance. Though the image was quickly debunked—thanks to its obviously distorted proportions and the politician’s own clarification on social media—it underscored a troubling reality: not all voters have the time or inclination to scrutinise content before forming opinions.
The risk is amplified in a political environment where opacity often fuels rumour and speculation. Malaysian politics, with its complex coalitions and historical undercurrents, provides fertile ground for misinformation to take root. A lie repeated often enough, as a political strategist recently noted, can morph into perceived truth. This phenomenon is not new—consider the enduring narrative surrounding the 2006 murder of Mongolian model Altantuya Shaariibuu, where public belief in the use of C4 explosives persists despite court clarifications to the contrary. With AI and deepfakes, such myths could be turbocharged, backed by seemingly authentic evidence that is harder to disprove.
The Vulnerability of Trust
At the heart of this issue lies the gullibility of the electorate—or, more accurately, the varying levels of digital literacy among voters. While some Malaysians, particularly younger, tech-savvy individuals, may approach suspicious content with scepticism, others could be swayed by a well-crafted deepfake that aligns with existing biases. The speed at which content spreads on platforms like WhatsApp and TikTok, often outpacing fact-checking efforts, compounds the problem. In rural areas or among older demographics, where access to reliable information may be limited, the impact of a fabricated video or audio clip could be particularly pronounced.
Moreover, the potential for foreign interference adds another layer of concern. AI-generated misinformation campaigns could be orchestrated from beyond Malaysia’s borders, exploiting local tensions or historical grievances to destabilise the political landscape. While there is no confirmed evidence of such interference to date, the possibility cannot be dismissed, especially given documented instances of external influence in other democracies.
A Double-Edged Sword
It would be remiss to portray AI solely as a harbinger of doom. The technology also offers opportunities for positive impact in politics, from enhancing voter education through tailored content to streamlining campaign logistics. AI-driven analytics can help politicians understand constituent needs more effectively, while chatbots could provide accessible information on policies or voting procedures. However, these benefits are contingent on ethical use—a standard that, in the heat of political competition, is not always upheld.
The Malaysian government and civil society must grapple with how to regulate this technology without stifling innovation. Existing laws, such as the Communications and Multimedia Act 1998, provide some framework for addressing online misinformation, but they are not tailored to the nuances of AI-generated content. Proposals for stricter regulations and digital literacy campaigns have surfaced, but implementation remains slow. If reforms are introduced, they could curb the spread of deepfakes—though, as with any untested policy, their effectiveness remains speculative.
The Human Factor
Interestingly, while AI and deepfakes dominate headlines as emerging threats, human behaviour remains a significant driver of misinformation in Malaysia. Rumours, often spread through personal networks or informal channels, can gain traction without any technological intervention. A recently circulated rumour with potential political consequences, passed along by a businessman with connections in high places, exemplifies this. In seeking to verify such claims, individuals inadvertently lend them credibility—a cycle that predates AI but could be exacerbated by it.
This interplay between human and technological factors suggests that solutions must address both. Public awareness campaigns, supported by media outlets and educational institutions, could equip voters with the tools to question suspicious content. Meanwhile, political leaders have a responsibility to model transparency, countering opacity that fuels unfounded speculation. Technology companies, too, must play a role, whether through improved detection of deepfakes or clearer labelling of AI-generated content.
Looking Ahead
As Malaysia navigates an increasingly digital political sphere, the spectre of AI and deepfakes hangs over its elections. The technology’s ability to manipulate reality challenges the very foundation of democratic choice, testing the resilience of institutions and the discernment of the public. While it may be tempting to view every viral video or audio clip with suspicion, such an approach risks fostering a cynicism that could undermine legitimate discourse.
The path forward requires a delicate balance: harnessing the benefits of AI while safeguarding against its misuse. If left unchecked, deepfakes could erode trust not just in politicians but in the electoral process itself. Yet, with proactive measures—be they legal, educational, or technological—Malaysia has the opportunity to set a precedent for managing this global challenge. For now, the question remains: will voters, armed with scepticism and supported by robust systems, prove resilient to digital deception, or will the allure of a fabricated “truth” prevail?