After the Taylor Swift porn images: deepfakes are on the rise



February 3, 2024

Now that just about anyone can use AI to create believable fake videos and audio clips, deepfakes are flooding the internet. From singer Taylor Swift to VRT anchor Annelies Van Herck, many have fallen victim. “If we really want to change anything, we have to regulate social media.”

At least 45 million times. That’s how often the fake porn video of Taylor Swift that surfaced on X late last week was viewed after going viral in just a few hours. It wasn’t the only sexually explicit video of the megastar. The social network, which has exercised virtually no control over its platform since Elon Musk’s takeover, was suddenly overrun. Blocking the search term “Taylor Swift” was the only barrier, drastic as it was, that it could put up.

The images of Swift, perhaps the most powerful celebrity of the moment, were of course not real. They were deepfakes: computer-generated media that look ever more realistic. Such video and audio clips are fabricated by malicious actors to put someone in a compromising position.


On the internet you can assume: when it rains in America, it drips in our country.

Olivier Cauberghs

Researcher at Textgain

The phenomenon has existed for some time. The term deepfake was coined in 2017 as a blend of “fake” and “deep learning,” an important technique in artificial intelligence (AI). Thanks to rapid progress in that field, what experts had long warned about has come to pass: fakery has gone mainstream.

Not long ago, creating fake videos that looked somewhat real required thousands of photos, considerable computing power, and patience to paste someone’s face into existing footage. Today there is a wide range of free, everyday apps that generate new images with AI. The images of Swift were crafted with Designer, a generative AI tool from Microsoft that converts text prompts into images.

Deception and fraud

“The more technology evolves, the greater the impact of excesses such as deepfakes. Today they are accessible to anyone with a web browser,” says Olivier Cauberghs, researcher at Textgain, a company specialized in detecting online hate speech. An ecosystem for producing and distributing fake videos has grown up on fringe channels such as 4chan and Telegram, and from there they make their way to the surface of the internet, including mainstream social media.

Because the target was Swift, America’s sweetheart, the consequences were commensurate. Microsoft immediately took action and closed the loophole in its Designer software that had been used to create the images. CEO Satya Nadella gave a TV interview in which he called the incident “alarming and terrible.” Even the White House spokeswoman was asked for a response. But deepfakes cause damage here too. “On the internet you can assume that when it rains in America, it drips in our country,” says Cauberghs.


The algorithms are specifically trained on women’s bodies.

Catherine Van de Heyning

Law professor

This is confirmed by Catherine Van de Heyning, attorney and law professor at the University of Antwerp specializing in cybercrime. “As with all image-based sexual abuse, the trends are global.”

In almost all cases, the victims are women. The reality is that image manipulation technology is primarily used to create non-consensual porn. A study by Home Security Heroes showed that by 2023 the number of deepfake videos circulating on the internet had increased by 550 percent compared with four years earlier; 98 percent of them are sexual in nature. “The algorithms are specifically trained on women’s bodies,” says Van de Heyning.

Deepfakes are not only a powerful weapon in the arsenal of online misogynists’ troll armies. The technology is also used to commit fraud and mislead people. In January, for example, a fake advertisement surfaced in which VRT anchor Annelies Van Herck appears to promote a gambling app that swindles people out of their money. The quality was questionable, but tens of thousands of people saw it on Facebook and Instagram.

Biden’s catchphrase

The political impact of this kind of AI-driven disinformation is also increasing. In the American state of New Hampshire, where primary elections recently took place, Democrats received calls from a voice that sounded remarkably like President Joe Biden’s — even his trademark word “malarkey” (nonsense) was not missing — urging them not to vote.

In September, an audio clip surfaced in Slovakia in which party leader Michal Šimečka appears to discuss how he wants to buy votes. It was fake, but Šimečka, who was leading in some polls, lost the election a few days later. In the United Kingdom, an audio clip embarrassed Labour leader Keir Starmer: he could be heard berating staff. But his voice, too, turned out to be cloned.

Examples from other countries abound, and experts warn that with the 2024 election bonanza, in which half of the world’s adult population will vote, the risks to the democratic process are real. Especially via audio, which is easier to fake. A quick Google search immediately turns up at least ten voice-generation providers. Moreover, people are less alert to audio manipulation, and it does not have to be as polished as fake video to be credible. Background noise even lends a clip an aura of authenticity, because it makes it sound like a secret recording.


Deepfakes and deepnudes are classified in the legislation as only moderately risky, and that is not enough.

Catherine Van de Heyning

Law professor

The phenomenon is difficult to combat. In the digital world, it is almost impossible to make something that has been created and distributed disappear again. There are some technological solutions, such as embedding a digital watermark in the pixels of a photo, which makes it possible to detect later whether an image has been manipulated. Another technique is adding a digital protective layer that makes it much harder or impossible for an AI model to recognize images and use them as training material.
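By way of illustration, here is a minimal sketch of the first idea: a naive least-significant-bit (LSB) watermark in Python with NumPy. This is not the method of any specific product mentioned in this article, and the key, threshold, and function names are illustrative assumptions; real provenance systems are considerably more robust.

```python
import numpy as np

def embed_watermark(img: np.ndarray, key: int = 42) -> np.ndarray:
    """Hide a key-derived bit pattern in each pixel's least significant bit."""
    rng = np.random.default_rng(key)  # the key makes the pattern reproducible
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return (img & 0xFE) | pattern  # clear each LSB, then write the pattern bit

def watermark_intact(img: np.ndarray, key: int = 42, threshold: float = 0.99) -> bool:
    """Report whether enough LSBs still match the expected pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    match = np.mean((img & 1) == pattern)  # fraction of surviving pattern bits
    return match >= threshold

# Demo on a synthetic 64x64 grayscale image.
original = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed_watermark(original)
print(watermark_intact(marked))      # True: image untouched

tampered = marked.copy()
tampered[16:32, 16:32] //= 2         # simulate a local manipulation
print(watermark_intact(tampered))    # False: part of the watermark destroyed
```

Any edit that touches the pixels — a pasted face, for instance — destroys part of the hidden pattern, which is exactly what the check detects. It offers no protection against an image generated from scratch, which is why such watermarks are only one piece of the puzzle.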

But the tools have their limitations. There may be more salvation in analog guardrails. In the US, Washington is feeling the pressure for stricter rules now that the Swifties, as Taylor Swift’s fans are known, are making themselves heard. The EU already has the Digital Services Act, which makes platforms responsible for the content circulating on them, and the AI Act, which requires disclosure when AI has been used to create content. But an opportunity has been missed there too. “Deepfakes and deepnudes are classified in that legislation as only moderately risky, and that is not enough,” says Van de Heyning.

According to the professor, more is needed. ‘Criminalizing the practice is of course a good thing. But if we really want to change something, we have to regulate the industry, so that platforms do not simply state in their terms of use that distributing deepfakes of children, for example, is prohibited, but make it technically impossible.’


When Photoshop was introduced thirty years ago, many feared only chaos, and that did not happen. So we shouldn’t panic.

Olivier Cauberghs

Researcher at Textgain

The most important weapon is probably greater media literacy, so that people learn to look more critically and consciously at what they see on the internet, and to gauge its human impact. “It’s not just about young people,” says Zara Mommerency of Mediawijs, a Flemish knowledge center that provides training on, among other things, resilience against disinformation.

And while the intention is certainly not to talk only about the damage technology can cause, the challenge is great. ‘The speed at which technology is developing makes it difficult to keep up and threatens to undermine general trust in all forms of media.’

Are we then in danger of sliding into a world in which everything digital is suspect by default? That too can become a political instrument: the more common deepfakes become, the easier it is for politicians, for example, to invoke them to deny inconvenient truths. ‘The pendulum must not swing too far,’ says Cauberghs. ‘When Photoshop was introduced thirty years ago, many feared only chaos, and that did not happen. So we should not panic, but we should stay vigilant. Fortunately, only a very small share of internet users actually intend to do harm.’
