A fake Bloomberg Twitter account, complete with a verified blue check-mark, posted an apparently AI-generated photo of an explosion at the Pentagon this morning, and the stock market reacted.
100 Percent Fed Up reports – The reportedly AI-generated image showed a fake explosion at the Pentagon and spread like wildfire across social media platforms this morning, causing a brief selloff in the US stock market. According to the Kobeissi Letter, the fake image triggered a $500 billion market cap swing, with the S&P 500 falling roughly 30 points in a matter of minutes before recovering.
The report on the stock-market impact came in a tweet from the Kobeissi Letter, a self-described “industry-leading commentary on the global capital markets.”
This morning, an AI generated image of an explosion at the US Pentagon surfaced.
With multiple news sources reporting it as real, the S&P 500 fell 30 points in minutes.
This resulted in a $500 billion market cap swing on a fake image.
It then rebounded once the image was… pic.twitter.com/DpHgflkMXP
— The Kobeissi Letter (@KobeissiLetter) May 22, 2023
According to the New York Post, the fake photo, which showed smoke billowing outside the Pentagon, was shared by the Russian state media outlet RT and other accounts alongside claims that an explosion had occurred at the complex. RT later deleted the image.
In a tweet, Nick Waters explained why this image of an “explosion near the Pentagon” is AI-generated:
Confident that this picture claiming to show an “explosion near the pentagon” is AI generated.
Check out the frontage of the building, and the way the fence melds into the crowd barriers. There’s also no other images, videos or people posting as first hand witnesses. pic.twitter.com/t1YKQabuNL
— Nick Waters (@N_Waters89) May 22, 2023
The Arlington County Fire Department quickly tweeted a message debunking the hoax photo.
@PFPAOfficial and the ACFD are aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public. pic.twitter.com/uznY0s7deL
— Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023
Elon Musk has appeared in several recent interviews warning about the dangers of AI-driven misinformation.
Elon Musk has been warning everyone about the potential dangers of AI since at least almost a decade ago:
“There are some scary outcomes, and we should try to make sure the outcomes are good, not bad.” (2014)@elonmusk @cb_doge pic.twitter.com/vagnZyn275
— Wojciech Pawelczyk (@WojPawelczyk) April 15, 2023
Dr. Geoffrey Hinton, nicknamed the “Godfather of AI,” was so concerned by the dangers posed by AI technology that he quit his job at Google last month so that he could speak out without hurting his former employer.
After years of laying the foundation for AI technology, Geoffrey Hinton, the groundbreaking British computer scientist known as the “Godfather of AI,” is leaving his position at Google to join other specialists warning about the danger AI now presents. The seventy-five-year-old Hinton worked as a vice president and engineering fellow at Google in the field of artificial intelligence.
In an interview with The New York Times, Hinton said of current AI technology: “It is hard to see how you can prevent the bad actors from using it for bad things.”
The March launch of GPT-4, the latest version of OpenAI’s chatbot technology, has deepened concern in the AI world. AI professionals signed an open letter organized by the nonprofit Future of Life Institute (FLI), warning that the technology poses “profound risks to society and humanity.”
“The reaction has been intense,” FLI, a nonprofit group seeking to mitigate large-scale technology risks, wrote on its website about the response to the open letter.
“We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”
Those who have driven AI technology in recent years are now saying they are terrified by the implications of their work and what it could mean for the future. Hinton agrees, finding recent advancements in AI “scary.”
This story originally appeared on The Gateway Pundit.