Experts warn of misinformation risks as AI technology advances in U.S. political campaigns

29th May 2023 – (Washington) With the 2024 presidential race on the horizon, there are concerns that fast-evolving AI technology will turbocharge misinformation in U.S. political campaigns. The widespread use of advanced tools powered by artificial intelligence is expected to blur the boundary between fact and fiction, with campaigns on both sides likely to harness the technology for voter outreach and fundraising. While AI has the potential to become a game-changing tool for understanding voters, critics warn that bad actors could exploit it to sow chaos, especially as many voters already dispute verified facts such as Donald Trump’s election loss in 2020. Technologists also warn that AI could be used to deny reality itself, with candidates dismissing authentic recordings as fake.

In a sobering sign of what may come, fake images of Trump being hauled away by New York police officers, created with an AI art generator, went viral in March. Last month, after Biden announced that he will run for re-election in 2024, the Republican National Committee released a video built from AI-generated images depicting a dystopian future should he win. Earlier this year, a lifelike but entirely fabricated AI audio clip of Biden and Trump hurling insults at each other made the rounds on TikTok.

“The impact of AI will reflect the values of those using it – bad actors in particular have new tools to supercharge their efforts to fuel hate and suspicion, or to falsify images, sound, or video in an effort to bamboozle the press and public,” warned Joe Rospars, founder of left-leaning political consultancy Blue State. While AI can produce campaign newsletters and other content at breakneck speed, it can just as readily spread falsehoods and misinformation. The concern is that AI-generated content can be churned out so quickly and cheaply that it floods information channels, making it difficult for the average person to distinguish what is true from what is false. And as it becomes easier to manipulate media, it becomes easier to deny reality, warned Hany Farid, a professor at the UC Berkeley School of Information.

Experts agree that combating the risks of AI misinformation will require vigilance from the media, tech companies, and voters themselves. While AI has the potential to help campaigns understand voters at a granular level, it must be used responsibly to avoid fuelling hate and suspicion. Ultimately, the technology must be harnessed for the greater good rather than to sow chaos and misinformation.