Without proper governance and input from multiple stakeholders artificial intelligence poses risks to freedom of expression and elections. Credit: Unsplash/Element5 Digital
By Naureen Hossain
UNITED NATIONS, May 7 2025 – The prevalence of artificial intelligence (AI) is changing how information flows and is accessed, with wider consequences for freedom of expression. National and local elections demonstrate the particular strengths and vulnerabilities that can be exploited as AI is used to influence voters and political campaigns. As people grow more critical of institutions and the information they receive, governments and tech companies must exercise their responsibility to protect freedom of expression during elections.
This year’s World Press Freedom Day (May 3) focused on AI’s effect on press freedom, the free flow of information, and how to ensure access to information and fundamental freedoms. AI brings the risk of spreading misinformation, disinformation, and online hate speech. In elections, this can violate free speech and privacy rights.
A parallel event hosted in the context of the World Press Freedom Global Conference 2025 coincided with the launch of a new issue brief from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the United Nations Development Programme (UNDP) detailing the growing influence of AI and the potential risks—and opportunities—for freedom of expression during elections.
Recommender algorithms that determine what information a user sees and interacts with can have wider implications for the information that user has access to during an election cycle, according to Pedro Conceição, UNDP Director of the Human Development Report Office.
“I think we need the humility to recognize that they are so complex and they have this element of novelty that requires us to bring together perspectives from across a range of stakeholders,” said Conceição.
Freedom of expression is essential for elections to be run in a credible, transparent environment. Fostering this freedom and access to information allows for public engagement and discourse. Countries are obligated under international law to respect and protect the freedom of expression. During elections, this responsibility can become challenging. How this responsibility is handled across state authorities varies between countries. The increased investments in AI have allowed for actors in the electoral process to make use of this technology.
Electoral management bodies are responsible for informing citizens on how to participate in elections. They may rely on AI to disseminate information more readily through social media platforms. AI can also support strategic communications and public awareness efforts, as well as online analysis and research.
Social media and other digital platforms have been visibly employing generative AI as their parent companies experiment with how it can be integrated into their services. They are also employing it in content moderation. However, there has been an emphasis on increasing platform engagement and retention, at the risk of compromising information integrity. Young people in particular increasingly use social media as their main source of information, according to Cooper Gatewood, Senior Research Manager focusing on mis/disinformation at BBC Media Action.
“Audiences are aware of and understanding of the quantity of false information circulating at the moment,” said Gatewood. He discussed the findings of surveys conducted in Indonesia, Tunisia, and Libya, where 83, 39, and 35 percent of respondents, respectively, reported concerns about encountering misinformation or disinformation on a regular basis. Conversely, a “parallel trend” emerged in reports from Tunisia and Nepal, where many users agreed that it was more important for information to be spread quickly than for it to be fact-checked.
“So this clearly demonstrates that AI-generated disinformation, especially in situations like elections, humanitarian contexts, crisis situations… where information can be spotty, or difficult to access, or move quite quickly… [the] false information that is shared quickly by audiences can very quickly have an impact and can produce a harm,” Gatewood warned.
Within the context of freedom of expression and elections, AI poses several risks to electoral integrity. For one, technological capabilities vary widely among countries. Developing countries with smaller tech infrastructures are less likely to have the tools to make use of AI or to address the issues that emerge. The frameworks governing digital spaces, and AI in particular, also affect how effectively countries can regulate them.
Frameworks outlined in documents such as UNESCO’s Guidelines for the Governance of Digital Platforms (2023) and their recommendations on the Ethics of Artificial Intelligence (2021) provide stakeholders with insight into their responsibilities in protecting freedom of expression and information in the governance process. They also provide policy recommendations around data governance, ecosystems, and the environment, among other areas, based on the core need to protect human rights and dignity.
As Albertina Piterbarg, a UNESCO Electoral Project Officer in the Freedom of Expression and the Safety of Journalists Section, remarked at the panel, the organization found early on that it was “increasingly complex” to address digital information in only a “black-and-white” way. What they realized was that it was important to “create a multi-stakeholder approach” in dealing with digital technology and AI. This meant working with multiple stakeholders, such as governments, tech companies, private investors, academia, the media, and civil society, to build up a “common understanding” of the impacts of AI through capacity-building, for example.
“We need to address this in a human rights-based approach. We need to address this in an egalitarian way. And in every election, every democracy is important. It doesn’t matter the commercial impact or other private interests,” said Piterbarg.
Pamela Figueroa, President of the Board of Directors of the Electoral Service of Chile, spoke at the panel on her country’s experiences with AI during the electoral process, notably the risk of “information pollution.” She warned that the deluge of information thanks to AI could “generate asymmetry in the political participation,” which can in turn affect the level of trust in institutions and the whole electoral process itself.
Information has become increasingly complex in the digital age, and AI has only added to that complexity. Even as people grow more aware of the presence of AI, AI-generated content, namely “deepfakes,” is being used to undermine the political process and discredit political candidates, and the technology to create deepfakes is unfortunately easily accessible to the public.
It has been proven that AI models are not immune from human biases and discrimination, and this can be reflected in their outputs. AI has also been used in spreading gender discrimination through harassment and cyberstalking. Women politicians are more likely to be victims of deepfakes depicting them in sexualized contexts. When used in social media, gender discrimination and harassment can discourage women from political participation and public debate during elections.
With that said, AI also presents opportunities for freedom of expression. The brief points out that a multi-stakeholder approach is needed to address the specific needs for information integrity in the face of AI. Ensuring trust in the electoral process is more important than ever. State authorities can achieve this through effective and reliable strategic communications campaigns, with the support of other stakeholders such as the media, civil society, and tech companies. Media and information literacy must be further cultivated to navigate complex information spaces, with investments in both long-term and short-term interventions targeting youths and adults.
Digital platforms also have the responsibility to implement safeguards on AI and ensure protections in election-specific contexts. The brief outlines certain measures that can be taken, including investing in adequate content moderation for election needs; prioritizing the public good in how algorithms recommend electoral information; conducting and publishing risk assessments; promoting high-quality and accurate electoral information; and consulting civil society and electoral management bodies.
What this demonstrates is that the dynamics between AI, freedom of expression, and elections require multi-stakeholder approaches. Shared understanding and structured methods will be critical in conducting elections in a fast-moving environment, and the insights drawn from this specific context can provide strategies for how to cultivate AI’s broader potential for humanity. This must be taken into account when we consider that modern generative AI technology has been made more accessible and mainstream in the last two years and has already resulted in transformations across multiple sectors.
“We’ve taken these AI tools and they’re basically in everyone’s phone. And… to some extent it’s free,” said Ajay Patel, Technology and Election Expert, UNDP, and the author of the issue brief. “So, where is that going to lead? What happens? What kind of innovation is going to be unleashed? For good? Sometimes for ill, when everyone has access to this sort of powerful flat technology?”
IPS UN Bureau Report