AI-Faked Photos and Videos Concerns

Social media analytics company Graphika has reported identifying social media profile pictures that appear to have been faked using machine learning and deployed as part of a China-based campaign against the U.S. government.

Graphika Detects

Graphika, which advertises a “Disinformation and Cyber Security” service for detecting strategic influence campaigns, has reported detecting AI-generated fake profile pictures and videos that were used in June to attack American policy and the administration of U.S. President Donald Trump, at a time when rhetoric between the United States and China had escalated.

Graphika has published a 34-page report detailing its findings on what it calls the “Spamouflage” campaign. See: https://public-assets.graphika.com/reports/graphika_report_spamouflage_goes_to_america.pdf

Spamouflage Dragon

The China-based network that, according to Graphika, has been making and spreading the anti-U.S. propaganda material via social media has been dubbed “Spamouflage Dragon”. Graphika says that Spamouflage Dragon’s political disinformation campaigns began in 2019, initially attacking the Hong Kong protesters and exiled Chinese billionaire Guo Wengui (a critic of the Chinese Communist Party), and have more recently turned to the U.S. and the Trump administration.

Two Differences This Time

The two big differences between Spamouflage Dragon’s anti-U.S. campaign and its earlier anti-Hong Kong protester campaign appear to be:

1. The use of English-language videos, many of which appear to have been produced in less than 36 hours.

2. The use of AI-generated profile pictures that appear to have been made using Generative Adversarial Networks (GANs), a class of machine-learning frameworks that allows a computer to generate synthetic photographs of people (a minimal sketch of the idea follows below).
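
For readers unfamiliar with the technique, the minimal sketch below shows the core GAN idea in PyTorch: a generator network learns to turn random noise into images, a discriminator network learns to tell real photos from generated ones, and the two are trained against each other. The network sizes, the 64x64 image shape and the random stand-in data are illustrative assumptions only; Graphika's report does not describe the specific models used.

```python
# Minimal GAN training loop (PyTorch) -- illustrative only.
# The layer sizes, 64x64 image shape and random "real" data are
# placeholder assumptions, not details from the Graphika report.
import torch
import torch.nn as nn

LATENT_DIM, IMG_PIXELS = 100, 64 * 64

# Generator: maps random noise to a flattened synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores how "real" a flattened image looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_PIXELS)   # stand-in for a batch of real face photos
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator update: label real images 1, generated images 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In practice, convincing face fakes come from much larger purpose-built architectures (such as StyleGAN) trained on large sets of real face photos, but the adversarial training loop is the same.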

Faked Profile Photos and Videos

Graphika reports that Spamouflage Dragon’s U.S. propaganda attacks have taken the form of:

– AI-generated photos used to create fake follower accounts on Twitter and YouTube.  The photos, produced by GANs drawing on large numbers of stolen profile photos taken from different social media networks, were recognisable as fake because they all shared the same blurred-out background, showed asymmetries where there should be symmetry, and the subjects’ eyes all looked straight ahead (a rough illustrative check for the blurred-background tell is sketched after this list).

– Videos made in English and targeting the United States, particularly its foreign policy, its handling of the coronavirus outbreak, its racial inequalities, and its moves against TikTok.  The videos were easy to identify as fake because they were clumsily made, with language errors and automated voice-overs.
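
As an illustration of how crude some of these tells are, the sketch below uses OpenCV to compare image sharpness in a central “face” region against the border region; a sharp face over a uniformly smeared background is one weak signal consistent with GAN output. The crop proportions, the threshold and the filename are arbitrary assumptions for illustration, not a method described by Graphika.

```python
# Rough heuristic: GAN face generators of this era often produced a sharp,
# centred face over a blurred background. Comparing Laplacian variance
# (a standard sharpness measure) between the centre and the border gives
# one weak signal. The crop fractions and the 4x ratio are arbitrary
# assumptions chosen for illustration.
import cv2
import numpy as np

def blurred_background_signal(path, ratio_threshold=4.0):
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape

    # Central region stands in for the face; the four strips around it
    # stand in for the background.
    y0, y1, x0, x1 = int(h * 0.25), int(h * 0.75), int(w * 0.25), int(w * 0.75)
    centre = gray[y0:y1, x0:x1]
    strips = [gray[:y0, :], gray[y1:, :], gray[y0:y1, :x0], gray[y0:y1, x1:]]

    centre_sharpness = cv2.Laplacian(centre, cv2.CV_64F).var()
    border_sharpness = float(np.mean(
        [cv2.Laplacian(s, cv2.CV_64F).var() for s in strips]))

    # True if the centre is much sharper than the surrounding background.
    return centre_sharpness > ratio_threshold * max(border_sharpness, 1e-6)

# Example usage with a hypothetical downloaded profile picture:
# print(blurred_background_signal("profile.jpg"))
```

A single tell like this is weak on its own and easy to get wrong; it is shown only to illustrate why the uniform blurred backgrounds Graphika describes stood out.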

What Does This Mean For Your Business?

With a U.S. presidential election just around the corner and tensions escalating between the two superpowers, the speed and ease with which these videos, AI-generated photos and fake accounts can be produced is a cause for concern in terms of their potential for political influence and interference.

For businesses, the same technology is a concern when it is used in fraud or social engineering attacks. Criminals using AI-generated voices, photos and videos to gain authorisation or to obtain sensitive data are a growing threat, particularly for larger enterprises.

Mike Knight