Threat Intelligence

    Malicious Deepfake Technology: A Growing Cyber Threat
     

    Be aware: Attackers can now clone real people’s voices to increase the success of business email compromise attacks and sway public opinion.
     

    by Mike Faden

    Key Points

    • Malicious actors are using AI-based deepfake audio impersonations to enhance the success of business email compromise, though reports of these cyberattacks remain relatively rare.
    • Advances in deepfake technology mean it’s now possible to clone voices at scale using small audio samples that may be readily available to attackers.
    • Governments and social media platforms are taking steps to prevent the use of deepfakes to deceive the public and influence elections, but it remains to be seen how effective those efforts will be.

    In the never-ending cybersecurity arms race, attackers are turning to artificial intelligence as they seek new ways to evade defenses. One area of particular concern is the use of AI-based “deepfake” technology to create realistic audio or video impersonations of real people that can deceive—and help to defraud—unsuspecting users.

     

    As highlighted in an earlier post by Jonathan Miles, Mimecast’s Head of Strategic Intelligence and Security Research, reports emerged last year of deepfake audio being used in business email compromise phishing attacks, including one cyber fraud that conned a company out of $243,000. The use of deepfake technology in such attacks should worry all organizations, since business email compromise caused more financial losses than any other type of cyberattack in 2019.  

    Since then, reports of a handful of other deepfake audio-powered attacks have surfaced. At the RSA conference in February, Vijay Balasubramaniyan, CEO of voice authentication specialist Pindrop, said his company had investigated about a dozen cases during the past year, with attempted thefts of as much as $17 million.[1]

    So deepfake technology is clearly available to criminals, even if it’s not yet in widespread use. A broad set of stakeholders—including legislators, government agencies, social-media platforms and cybersecurity companies—is concerned enough to investigate deepfake technology and take steps to defend against it. They’re concerned that deepfakes can be used for financial cyber fraud as well as other nefarious purposes, such as influencing elections.

    What is Deepfake Technology?

    Deepfake—the name combines “deep learning” and “fake”—employs machine learning and artificial intelligence to create a synthetic human voice or video. A deep learning model is trained using existing samples of a real person’s voice or image; with enough training, the model can generate an imitation good enough to fool people.
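    As a loose illustration of that train-then-generate loop, the toy sketch below fits a simple linear autoregressive model to a synthetic “voice” waveform and then free-runs the model to produce an imitation. Real voice cloning uses deep neural networks trained on recorded speech; the signal, model order and NumPy-only approach here are simplified stand-ins chosen for demonstration, not production deepfake code.

```python
import numpy as np

# "Training data": two seconds of a synthetic 'voice' (a sum of two tones),
# standing in for recorded samples of a real person's speech.
t = np.linspace(0, 2, 16000)
voice = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

# Fit an order-p autoregressive model: predict each sample from the
# previous p samples. This is the (greatly simplified) analogue of
# training a generative model on voice samples.
p = 32
X = np.stack([voice[i:len(voice) - p + i] for i in range(p)], axis=1)
y = voice[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Clone" the voice: seed the model with a short snippet, then let it
# free-run to generate new samples that mimic the original signal.
clone = list(voice[:p])
for _ in range(1000):
    clone.append(np.dot(coeffs, clone[-p:]))
clone = np.array(clone)
```

    The point of the sketch mirrors the article’s: given enough representative samples, a fitted model can continue generating output in the same “voice” indefinitely.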

    “People have been mimicking voices for years, but the technology has advanced to the point where we can clone voices at scale using a very small audio sample,” said Laura DeMartino, an associate director at the Federal Trade Commission, at a February FTC workshop that explored the implications of deepfake audio.[2]

    “All you need is five minutes of someone’s audio and you can create a fairly realistic clone,” Balasubramaniyan said at the RSA conference. “If you have five hours or more of their audio, then you can create something that’s not perceptible by humans.”[3]     

    Audio recordings of executives and others likely to be impersonated in business email compromise scams are often readily available via videos posted online, earnings calls and company briefings.

    Multifactor Deception: How Deepfake Audio is Used in Business Email Compromise

    Deepfake audio can help malicious actors mount much more convincing business email compromise attacks, said U.S. Department of Justice attorney Mona Sedky, who described herself as the “voice of doom” at the FTC workshop. It enables them to overcome a key obstacle: “It’s difficult to convincingly pose as someone else, especially if you’re a foreign national and have an accent,” she said.

    Using deepfake technology, attackers can combine email phishing with authentic-sounding voice messages. By using multiple vectors for deception, they can increase the likelihood that they’ll be able to deceive users.

    Voice-enhanced business email compromise may be “very fruitful,” Sedky said. “Classic scenario: You’re a university. I send you an email pretending to be your construction company. I say the bank account has changed; please wire funds to the new bank. If I can follow up that fake spear-phishing email with a [deepfake] phone call, it’s huge dollar losses instantly.”[4]

    But Attackers Take the Easiest Path to Compromise

    Still, it’s important to remember that malicious actors tend to use the easiest attack method that will work—and there may be easier ways to deceive users than with deepfake technology. For example, Mimecast’s Miles said that many attacks use phony notifications that appear to contain voicemail messages but in reality carry malicious links. “I think that a majority of the traffic seen is an attempt to lure interaction from targets by clicking on a supposed voice message/mp4 recording,” he said. “While these potentially could be considered as deepfake on first look, the intended attack vector is clicking on a link to download malware.”

    “Although the technology exists for creation of deepfakes, if the source material isn’t available for manipulation, deepfake material will not be produced. It’s easier to send a file claiming to be a missed voice message, getting the target to click on it and downloading malware via that vector. It requires less effort, but has the potential to produce the same results.”
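    The voicemail-lure pattern Miles describes can be approximated with a simple content heuristic: flag messages that pair voicemail language with an embedded link. The keyword list, regular expressions and threshold logic below are illustrative assumptions for this sketch only, not Mimecast’s actual detection rules.

```python
import re

# Phrases commonly used in fake "missed voice message" lures
# (an assumed, illustrative list).
VOICEMAIL_LURES = re.compile(
    r"(voice\s?mail|voice message|missed call|audio message|\.mp4|\.mp3)",
    re.IGNORECASE,
)

# Any http(s) link in the body — the actual attack vector.
URL_PATTERN = re.compile(r"https?://[^\s\"'>]+", re.IGNORECASE)

def looks_like_voicemail_lure(subject: str, body: str) -> bool:
    """Flag messages that pair voicemail language with an external link."""
    has_lure = bool(
        VOICEMAIL_LURES.search(subject) or VOICEMAIL_LURES.search(body)
    )
    has_link = bool(URL_PATTERN.search(body))
    return has_lure and has_link
```

    A message claiming a missed voicemail and urging a click on a link would be flagged; a voicemail mention with no link, or a link with no lure language, would not. Real email security products layer many more signals (sender reputation, link analysis, attachment sandboxing) on top of anything this simple.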

    Governments and Social Media Platforms Try to Restrict Malicious Deepfakes

    The growing sophistication and availability of deepfake technology is leading governments and social media platforms to take steps to prevent use of the technology to mislead consumers, especially during election cycles. California recently passed a bill, AB 730, designed specifically to prevent the use of deepfakes to influence political campaigns, starting this year. It prohibits distributing "with actual malice" materially deceptive audio or visual media showing a candidate for office within 60 days of an election, with the intent to injure the candidate’s reputation or deceive a voter.[5]

    But the effectiveness of such state-level laws may be limited, Miles said. “With the law only applicable in one state, it still leaves 49 states where voters may believe what they see. Then add in the world stage, and hostile nation states intent on causing damage to U.S. democracy, and the problem still remains.”

    Miles also points to a Vietnam cybersecurity law that went into effect in 2019, prohibiting the spread of false information for a variety of purposes.

    However, “policing the entire internet to identify the root of the information will never be achievable, so there will continue to be an influx of material online—that will always be in circulation,” Miles said.

    Social media platforms are also taking steps to deal with potential deepfakes. Facebook and Twitter both announced early this year that they will remove deepfake content created with malicious intent, and Facebook even held a competition to find algorithms that can spot manipulated videos.[6],[7],[8]

    The Bottom Line

    Even though deepfake technology hasn’t yet become mainstream in cyberattacks on businesses, businesses and consumers need to remain alert to the potential threat. “Everyone should be aware that there is the capability and intent to produce deepfake material. What they see online and in social media may not necessarily be 100% accurate,” said Mimecast’s Miles. “At times of confusion and uncertainty, criminal entities will seek to exploit and sway the opinion of the vulnerable by whatever means possible.”

     

    [1] “Is AI-Enabled Voice Cloning the Next Big Security Scam?,” PCMag

    [2] “You Don't Say: An FTC Workshop on Voice Cloning Technologies,” FTC transcript

    [3] “Is AI-Enabled Voice Cloning the Next Big Security Scam?,” PCMag

    [4] “You Don't Say: An FTC Workshop on Voice Cloning Technologies,” FTC transcript

    [5] “Two New California Laws Tackle Deepfake Videos in Politics and Porn,” Davis Wright Tremaine LLP

    [6] “Facebook just banned deepfakes, but the policy has loopholes — and a widely circulated deepfake of Mark Zuckerberg is allowed to stay up,” Business Insider

    [7] “Facebook contest reveals deepfake detection is still an ‘unsolved problem’,” The Verge

    [8] “Twitter Just Released Its Plan To Deal With Deep Fakes,” BuzzFeed News

     
