Objectives
The Education on Risks to Electoral Integrity (EREI) project is designed to advance the study of electoral integrity through the integration of AI, offering a pedagogical toolkit for students and researchers. Specifically, the project aimed to identify and analyze eight elections held around the world between April and June 2024, selected to cover diverse electoral contexts across regions and stages of democratic consolidation. By analyzing historical and contemporary data on electoral risks—including voter fraud, electoral violence, disinformation, and cyberattacks—EREI provides a structured approach to studying how these threats could affect future elections.
The research defined electoral integrity in terms of the democratic principles of universal suffrage and political equity, focusing on professional, impartial, and transparent election processes. By posing multiple versions of the same prompts to the Claude AI platform, researchers sought to understand how prompt formulation affects risk assessment and to develop a nuanced approach to electoral risk forecasting.
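The report does not reproduce the prompts the team used. As a minimal sketch of what posing multiple prompt versions might look like in practice, the snippet below queries Claude through the Anthropic Python SDK with three formulations of the same question; the model identifier, country, and prompt wordings are illustrative assumptions, not the project's actual materials.

```python
# Minimal sketch: posing several formulations of the same electoral-risk
# question to Claude and collecting the answers for side-by-side comparison.
# Assumptions (not from the project): the model ID and prompt wordings below
# are placeholders for illustration only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

COUNTRY = "Mongolia"
PROMPT_VERSIONS = [
    f"What are the main risks to electoral integrity in {COUNTRY}'s 2024 election?",
    f"Acting as an election observer, assess the risk of voter fraud, electoral "
    f"violence, disinformation, and cyberattacks in {COUNTRY}'s 2024 election.",
    f"Based on historical precedent, forecast threats to the integrity of "
    f"{COUNTRY}'s upcoming election and rate each threat as low, medium, or high.",
]

responses = []
for prompt in PROMPT_VERSIONS:
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # placeholder model identifier
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(message.content[0].text)

# Reading the answers side by side shows how prompt formulation shifts which
# risks the model emphasizes and how strongly it hedges its forecasts.
for prompt, answer in zip(PROMPT_VERSIONS, responses):
    print(f"--- {prompt}\n{answer}\n")
```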
Outcomes
The project produced the Forecasting Electoral Risks Toolkit, a pioneering platform that revealed both the potential and the limitations of AI in electoral integrity research. Key findings demonstrated that AI’s reliability varies significantly with the depth of media coverage and available documentation. For instance, Mongolia, despite its small population, yielded rich insights thanks to extensive international media reporting, while countries like Mauritania and the Dominican Republic offered only limited information.
Students were exposed to the risks AI poses to electoral integrity, such as AI-generated misinformation, and engaged with case studies that allowed them to critically assess how AI might affect democratic processes. Through this research, they gained practical insight into the complex interactions between technology and democratic governance. As AI increasingly becomes part of the curriculum in political science and governance courses, this project points toward a future where students not only learn about AI but also engage in critical discussion of its broader societal impacts.
The research uncovered critical insights about AI’s capabilities and constraints, emphasizing that large language models should be used as preliminary analytical tools rather than definitive sources. Researchers noted important limitations, such as Claude’s 12-month source restriction, which prevented it from capturing recent developments like Mexico’s political violence in late 2023 and early 2024.
Two potential follow-up research initiatives emerged: creating a global electoral risk database with a consistent methodology to enable comparative regional analysis, and developing a best practices guide for preventing, managing, and mitigating risks to electoral integrity.
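The report does not specify what the proposed database would look like. One hypothetical sketch of a schema supporting a consistent methodology and comparative regional analysis appears below; the table and column names, risk categories, and rating scale are assumptions for illustration, not the project's design.

```python
# Hypothetical schema sketch for a global electoral risk database; every
# table, column, and category name here is illustrative, not the project's.
import sqlite3

conn = sqlite3.connect("electoral_risks.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS elections (
    election_id   INTEGER PRIMARY KEY,
    country       TEXT NOT NULL,
    region        TEXT NOT NULL,   -- enables comparative regional analysis
    election_date TEXT NOT NULL,   -- ISO 8601 date
    election_type TEXT NOT NULL    -- e.g., presidential, parliamentary
);

CREATE TABLE IF NOT EXISTS risk_assessments (
    assessment_id INTEGER PRIMARY KEY,
    election_id   INTEGER NOT NULL REFERENCES elections(election_id),
    risk_category TEXT NOT NULL,   -- voter fraud, violence, disinformation, cyberattacks
    risk_level    TEXT NOT NULL,   -- low / medium / high, applied consistently
    source        TEXT,            -- documentation behind the rating
    assessed_on   TEXT NOT NULL    -- date the assessment was made
);
""")
conn.commit()
```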
Team
Jeffrey Fischer
Senior Fellow and Adjunct Lecturer, Democracy and Governance Program, Department of Government, School of Arts & Sciences
Tala Alahmar
Research Assistant, M.A. student, Democracy and Governance Program, Department of Government, School of Arts & Sciences
Jonathan Mendoza
Research Assistant, M.A. student, Democracy and Governance Program, Department of Government, School of Arts & Sciences