Justin Trudeau deepfake ad promoting ‘robot trader’ pulled off YouTube

A deepfake advertisement depicting Prime Minister Justin Trudeau’s likeness promoting a financial “robot trader” has been pulled off YouTube.

Google, which owns the online video-sharing and social media platform, told Global News it also suspended the advertiser accounts associated with these “scams.”

Deepfakes are realistic yet fabricated images, audio and video created using artificial intelligence. Recent advances in the technology have made them increasingly difficult to distinguish from authentic, human-created content.

“Protecting our users is a top priority and we have strict policies that govern the ads on our platform,” a spokesperson said.

“These scams are prohibited on our platforms and when we find ads that breach our policies we take immediate action, including removing the ads and suspending advertiser accounts when necessary, as we did in this case.”



News footage of Justin Trudeau (centre) shaking hands with dignitaries, seen in a deepfake advertisement on YouTube on March 27, 2024. YouTube has removed the advertisement of a deepfake Trudeau endorsing a “robot trader.” (YouTube/Global News screenshot)

Global News came across the deepfake ad on Wednesday morning. Running about two-and-a-half minutes, the ad featured manipulated audio of Trudeau’s voice endorsing a website and product promising “passive income from $10,000 CAD per month.”

It was taken out by “J.U. Lanz” in Switzerland, the YouTube page showed. The ad directed users to a webpage for the product.

The audio played over visuals of Trudeau, including what appeared to be generated video of his face, with mouth movements mimicking the audio.

“I am confident in the robot trader and guarantee financial results to every investor,” the audio of Trudeau’s voice is heard saying.

“Embrace the future of wealth creation and act now. Visit our website and start the registration process. The door to financial prosperity is open – are you ready to step through it? Visit the site, and register.”



Video: Implications of AI Deepfakes


The images and audio of Trudeau were fake and manipulated, a senior government official told Global News.

Jenna Ghassabeh, a spokesperson in the Prime Minister’s Office, called the video an example of “concerning and unacceptable” behaviour.

“We have seen how malicious accounts and users can proliferate falsehoods,” Ghassabeh said in a statement.

“The amount of deceptive, fake and misleading information and accounts targeting elected officials is increasingly concerning and unacceptable, particularly in an era with deepfake technology.”

Trudeau video matches ‘common trends’ of deepfake scams: expert

The Trudeau deepfake scam that Global News viewed matches “common trends” of others like it, said Suzie Dunn, an assistant professor of law and technology at Dalhousie’s Schulich School of Law.


Dunn told Global News in an email Thursday it can still be tricky to create a visual deepfake that doesn’t look glitchy, but with audio, the technology is much more convincing.

“Recently, there has been a trend of fake celebrity ads, such as Taylor Swift advertising Le Creuset deals that are actually just scams. This looks like one of those,” she said.


Video: Taylor Swift deepfake images: Why people are concerned over pornographic AI photos


Dunn said it’s “increasingly easy” to make audio deepfakes nowadays.

“The technology is open source and readily available online,” Dunn said.

“A person with no technological skills might struggle to figure it out, but for someone with some basic understanding of programming who has enough real audio recordings of a person, it wouldn’t be a challenge to create.”



Video: Tech Talk: YouTube cracks down on deepfakes & China unveils EVs with drones


Dunn added there are risks deepfakes could cause political challenges and confusion among the general public.

“Right now, the most common harmful use of deepfakes has been scamming people out of money or creating non-consensual sexual deepfakes of women,” Dunn said. Italian Prime Minister Giorgia Meloni recently sued two men who allegedly made pornographic deepfakes of her.

“There are risks that a deepfake could cause political challenges and confusion among the general population, as was seen with the (U.S. President Joe) Biden faked audio, so politicians and governments will need to be alert to those risks and be on the ready to debunk deepfakes as they come out.”

Where do Canadian laws stand on deepfakes?

Current Canadian laws governing the issue vary by province, Dunn told Global News earlier this month.


The country introduced criminal provisions against sharing intimate images without someone’s consent in 2015, before deepfakes were publicly available, Dunn pointed out.

And while deepfakes could potentially be prosecuted under extortion or harassment criminal laws, she said that has never been tested, adding that criminal law for now “is limited to actual, real images of people” and not AI-created content.


Video: Cyber threats, AI, deepfakes targeting elections on the rise: CSE


Some provinces, such as Saskatchewan, New Brunswick and Prince Edward Island, have introduced civil statutes since 2015 that refer to altered images, which include deepfakes, allowing victims to ask a judge for an injunction to have the images removed.

Manitoba introduced updated legislation this week, and British Columbia has a “fast-track” option that lets people request that intimate (and altered intimate) images of themselves be taken down quickly, rather than waiting the weeks or months of typical court processing times.


The Online Harms Act, which the federal Liberal government tabled last month, calls on social media platforms to continuously assess and remove harmful content, including content that incites violence or terrorism, content that could push a child to harm themselves and intimate images shared without consent, including deepfakes.


Video: Breaking Down the Online Harms Bill


Platforms would need to remove content, whether flagged by the platform itself or by users, within 24 hours.

Dunn called the Italian prime minister’s case “courageous,” and said if Meloni’s case is successful, it could offer Canadian lawmakers and lawyers a method for prosecuting people who release deepfakes.

— with files from Nathaniel Dove
