After several high-profile cases of fake images going viral on the web, on May 30th Twitter launched an update to Community Notes, its program that encourages users to collaborate by adding notes to tweets, providing context to shared posts and keeping people well informed.
But first, let's take a quick look at what is going on in the AI landscape that has caused concern across the technology community.
Fake images go viral on social media
AI-generated images circulate freely on the web every day, sometimes as innocent jokes among people who create images with AI apps and then share them on their social media accounts.
On the one hand, they can be harmless fun; on the other hand, they can be used maliciously, causing panic and real danger.
Recently, a photo of an “explosion” near the Pentagon went massively viral, shared even by many verified accounts. According to CNN, “Under owner Elon Musk, Twitter has allowed anyone to obtain a verified account in exchange for a monthly payment. As a result, Twitter verification is no longer an indicator that an account represents who it claims to represent.” Even Republic TV, a major Indian television network, reported the alleged explosion using the fake image, and reports from the Russian news outlet RT were withdrawn after the story was debunked.
Pope Francis, according to The New York Times, is “the star of the AI-generated photos”. The image of Francis supposedly wearing a puffy white, French-fashion-style jacket earned more views, likes, and shares than most other famous AI-generated photos.
Donald Trump was also a target of fake news, with AI-generated photos showing an alleged escape attempt and further images of “his capture” by police in New York City, at a time when he was in fact facing several criminal investigations.
Tech giants worry about AI and make “apocalyptic” predictions
Meanwhile, renowned figures in AI are reacting to its risks. Efforts to alert the world to the technology's dangers are not new. Let's look at some recent examples:
Pause on big AI projects
An open letter signed by several prominent names in the technology community, including leaders of the giants such as Elon Musk himself, calls for a 6-month pause in AI research and development.
The letter divided experts around the world. While some support the pause due to imminent risks such as misinformation, others see no point in taking a break because they believe that artificial intelligence is not yet self-sufficient.
AI godfather's warnings
Last month, AI godfather Geoffrey Hinton resigned from Google so that he could speak freely about the risks the technology may pose to humanity. Hinton believes machines may soon become smarter than humans and warned about AI chatbots falling into the hands of what he called “bad actors”.
22-word statement
The most recent high-profile warning about AI risk was signed by Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as two of the 2018 Turing Award winners: Yoshua Bengio and the previously mentioned former Google employee Geoffrey Hinton.
It is literally 22 words long and says “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
According to The Verge, “both AI risk advocates and skeptics agree that, even without improvements in their capabilities, AI systems present a number of threats in the present day — from their use enabling mass-surveillance, to powering faulty “predictive policing” algorithms, and easing the creation of misinformation and disinformation.”
How Twitter's fact-checking can be useful in fighting misinformation
Elon Musk's social network believes people should choose what gets displayed on Twitter, and the company has been developing features that rely on users' help to flag potential misinformation.
Community Notes is a feature that appears below a tweet, where users can add useful context to shared posts and flag potentially misleading information. Contributors then rate the note, and only if it is considered helpful does it remain on the tweet.
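To make that propose-rate-publish flow concrete, here is a minimal sketch in Python. Twitter's real Community Notes ranking is a more sophisticated open-source algorithm, so everything below (the class, the minimum number of ratings, and the 0.6 threshold) is a hypothetical simplification, not the actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical values for illustration only; the real Community Notes
# algorithm weighs ratings in a more sophisticated way.
MIN_RATINGS = 5
HELPFULNESS_THRESHOLD = 0.6

@dataclass
class CommunityNote:
    tweet_id: str
    text: str
    ratings: list[bool] = field(default_factory=list)  # True = "helpful"

    def rate(self, helpful: bool) -> None:
        """Record one contributor's rating of the note."""
        self.ratings.append(helpful)

    def is_visible(self) -> bool:
        """Keep the note on the tweet only once enough raters call it helpful."""
        if len(self.ratings) < MIN_RATINGS:  # too few ratings: stay hidden
            return False
        score = sum(self.ratings) / len(self.ratings)
        return score >= HELPFULNESS_THRESHOLD

note = CommunityNote("12345", "This image is AI-generated; no explosion occurred.")
for vote in (True, True, False, True, True):
    note.rate(vote)
print(note.is_visible())  # True: 4 of 5 raters found the note helpful
```

The point of the rating step is quality control: a note only stays attached to the tweet after other contributors vouch for it, rather than appearing the moment someone writes it.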
This alone wouldn't be enough to stop viral fake images. Now, with the new update, notes can be attached directly to the media itself rather than to a single tweet, which helps curb its spread: once an image carries a note, it becomes easier to identify AI-generated pictures that have already been flagged on the platform.
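The practical challenge is recognizing the same image when it is reposted by other accounts, often slightly resized or recompressed. Twitter has not published how its media matching works, so the sketch below is only an assumed illustration of the general idea, using a simple perceptual "average hash" with the Pillow library: near-identical images produce nearly identical hashes, letting one note follow the image around the platform.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to grayscale, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def same_image(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Treat two images as copies when their hashes differ in only a few bits."""
    distance = bin(average_hash(path_a) ^ average_hash(path_b)).count("1")
    return distance <= max_distance

# Hypothetical usage: a note written for one upload of a fake photo could be
# surfaced on any later upload that matches it closely enough.
# same_image("original_upload.jpg", "reposted_copy.jpg")  -> True
```

Under this kind of scheme, a note attached to the hash rather than to a single tweet would automatically appear on every repost of the flagged image, which is exactly what makes notes on media more powerful than notes on individual tweets.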
Even so, because adding and rating a note takes time, this may not be the fastest safeguard against viral sharing, which usually happens in seconds. We still have a long way to go toward a safer world in which AI works for the best, as a great partner to humanity rather than an enemy.
Do you want to keep up with Marketing best practices? I strongly suggest you subscribe to The Beat, Rock Content's interactive newsletter. We cover all the trends that matter in the Digital Marketing landscape. See you there!