The CEO of the world’s largest advertising group was the target of an elaborate deepfake scam that used an artificial intelligence voice clone. In a recent email to leadership, WPP CEO Mark Read shared details of the attempted fraud and warned employees to be wary of calls purporting to come from top executives.
Emails obtained by the Guardian revealed that scammers utilized a publicly available image of Mr. Read to create a WhatsApp account, posing as him and another senior WPP executive. They arranged a Microsoft Teams meeting that appeared legitimate. During the meeting, the imposters used voice clones of executives and YouTube videos to deceive participants. The scammers also impersonated Read through the meeting’s chat window. The failed scam targeted “agency leaders,” soliciting funds and personal information to start a new business.
“Thankfully, the attacker was unsuccessful,” Read wrote in the email. “We must all be vigilant against tactics that extend beyond email to manipulate virtual meetings, AI, and deepfakes.”
A WPP spokesperson confirmed that the phishing attempt was unsuccessful, attributing this to the vigilance of employees, including the executives involved. WPP did not disclose when the attack took place or which executives beyond Read were targeted.
Previously, concerns about deepfakes primarily focused on online harassment, pornography, and political disinformation. However, the number of deepfake attacks in the corporate realm has surged in recent years. AI voice clones have tricked banks, defrauded financial institutions of millions, and raised alarms in cybersecurity circles. Notable cases include an Ozy Media executive impersonating a YouTube executive on a 2021 fundraising call with Goldman Sachs.
The fraud attempt at WPP appears to have combined generative AI voice cloning with simpler tactics, such as using a publicly available image of the CEO as a display picture. The attack exemplifies the array of tools fraudsters now possess to mimic legitimate corporate communications and impersonate management.
“We are observing increasing sophistication in cyberattacks against our colleagues, especially senior executives,” Read remarked in the email.
Read’s email outlined several red flags to watch for, including requests for passports, money transfers, and mentions of secretive dealings unknown to others.
“Just because the account has my picture doesn’t mean it’s me,” Read cautioned in the email.
WPP, a publicly traded company valued at approximately $11.3 billion, also indicated on its website that it was combatting fake sites misusing its brand name and collaborating with authorities to prevent fraud.
A notice on the company’s contact page warns: “The names of WPP and its affiliates have been fraudulently used on unofficial websites and apps by third parties, often communicating through messaging services. Please take note.”
Many companies are navigating the rise of generative AI, investing in the technology while grappling with its potential risks. WPP announced a partnership last year with chipmaker Nvidia to create ads using generative AI, showcasing it as a game-changer for the industry.
“Generative AI is revolutionizing the marketing world at a rapid pace. This new technology will transform the way brands create content for commercial purposes,” Read stated in a May press release.
Recently, low-cost audio deepfake technology has become more accessible and realistic. Some AI models can produce lifelike imitations of a person’s voice from only minimal audio input, enabling scammers to fabricate recordings of individuals. Deepfake audio has targeted political figures globally, including a robocall impersonating Joe Biden during the Democratic primary, created by a consultant who had worked for candidate Dean Phillips. It has also infiltrated other unsuspecting circles: a Baltimore school principal faced controversy after an AI-generated audio clip appeared to depict him making racist and antisemitic remarks, later revealed as a deepfake created by a colleague.
Source: www.theguardian.com