Artificial intelligence (AI) will continue to grow in popularity and be applied in many areas, including elections. While the potential of AI to improve the electoral process is recognized, the misuse and abuse of AI technology have also come to the fore, as seen in recent elections around the world.
In Indonesia, the use of AI and deepfakes became more visible ahead of the presidential elections in February 2024.1 In Turkey, AI-generated deepfake videos entered the campaign arena ahead of the elections on March 31, 2024.2 Ahead of South Korea's parliamentary elections in April 2024, 129 election-related deepfakes were discovered.3
In response to the misuse of AI in elections, the Commission on Elections (COMELEC) of the Philippines has urged lawmakers to pass a law banning the use of AI and deepfakes, with less than a year to go until the 2025 midterm elections.
The National Movement for Free Elections (NAMFREL) has expressed opposition to a ban on the use of AI in elections for the following reasons:
- It could stifle innovation and unintentionally limit the benefits of AI in improving the election process.
- AI technology is evolving so rapidly that regulations may become ineffective.
- Banning or restricting the use of AI could infringe on freedom of speech and expression.
- The COMELEC may face challenges in implementing laws banning or regulating AI, as implementation and enforcement will require expertise and a new set of skills.
NAMFREL instead recommended that the COMELEC draft a code of conduct embodying a set of ethical principles that all stakeholders involved in elections would be asked to adhere to. The set of principles was discussed at a roundtable held at the University of Asia and the Pacific (UA&P) on June 26, 2024, which included representatives from election monitoring organizations, the information technology industry, AI experts, academia, and the Commission on Elections.
Principle 1: Transparency
The use of AI in generating election-related content, including political ads, must be disclosed and such election-related content must be appropriately marked. Disclosure should include funding sources, expenditure, AI techniques used, data on the target audience, and the sources of such data. Transparency should extend to the entire AI ecosystem, from content creation to audience targeting, and social media platforms should actively participate in and adhere to codes of conduct.
Principle 2: Respect for human rights
AI-generated content must not infringe on individuals' suffrage, digital, or privacy rights. Harmful uses of AI may be penalized, but penalties must be balanced with freedom of speech, supported by mechanisms to respond promptly to complaints about AI and to inform people of potential rights violations by AI-generated content.
Principle 3: Accountability
Candidates and political parties should register their intention to use AI in their election campaigns and be open to auditing of AI-generated content. Legal liability and penalties should apply to candidates, political parties, election teams, and PR and advertising companies that develop or commission AI-generated content. Shared accountability is crucial, as detection and monitoring will be difficult for election management bodies alone.
Principle 4: Integrity
AI-generated content must maintain data integrity, and social media platforms should actively moderate election-related content. Ensuring veracity will involve candidates, political parties, and the media, and will be supported by clear, truthful sources and mechanisms for fact-checking information.
Principle 5: Impartiality and non-discrimination
AI-generated content must be reviewed to detect discrimination based on race, sex, age, socio-economic status, religion, or other protected characteristics, and safeguards must be in place to prevent such bias. AI-generated content that displays discrimination must not be published.
Principle 6: Supervision by COMELEC
In the exercise of its oversight functions, the COMELEC may set up a committee or task force to monitor the use of AI in generating election-related content, including AI-generated political ads, with a focus on detecting misinformation, disinformation, and deepfakes. The COMELEC should implement a reporting and complaints process to regulate AI-generated election paraphernalia (AI-GEP). The COMELEC may encourage candidates, political parties, and other stakeholders to adopt self-regulatory mechanisms regarding the use of AI in elections and election-related activities.