23rd May 2023 – (Washington) On 22nd May, a fake image of an explosion at the Pentagon went viral on social media, causing a brief dip in the markets. The image, which many suspected was generated by artificial intelligence, was spread by several accounts and amassed over half a million views within a few hours.
The spread of the fake image caused alarm, with many concerned about the potential for generative AI to fuel disinformation. Generative AI tools make it easy for non-specialists to create convincing images in moments, without the expertise needed to use programs such as Photoshop.
The Pentagon was forced to comment on the incident, stating that there was no explosion at the building. The Arlington, Virginia fire department also confirmed that there was no incident taking place at or near the Pentagon.
The incident followed other fake images that recently went viral, including ones depicting former US president Donald Trump being arrested and Pope Francis in a puffer jacket. The spread of fake news and disinformation on social media has become a growing concern in recent years, with many worried about the potential for these falsehoods to cause harm.
The shared image briefly rattled markets, with the S&P 500 dipping 0.29 per cent from its Friday close before recovering. Some experts believe the drop was likely triggered by automated trading systems reacting to the spread of the fake news.
The incident highlights the potential for fake news to cause real-world consequences, such as market fluctuations and public panic. It also underscores the importance of fact-checking and responsible sharing on social media platforms.
It is crucial for individuals to verify the sources of the information they consume and share, and to avoid spreading unverified claims. Social media companies also have a responsibility to combat the spread of fake news and disinformation on their platforms.
As generative AI technologies continue to advance, it is essential to consider the potential risks and implications of their use. AI-generated content can be convincing and difficult to distinguish from real images or videos, which could lead to further instances of fake news and disinformation.