
    Countering Synthetic Media Attacks With Security Policy

    Scams That Digitally Impersonate People’s Voices and Likenesses Are Poised to Become the Next Wave of Cyber Risk. How Should Employees Train Up?

    by Dr. Matthew Canham

    Key Points

    • Deepfakes are infiltrating the business world.
    • Cyber scammers can use advanced technology to impersonate executives on voice and video calls.
    • Employees need to be aware and to double down on some time-tested security measures.

    Deepfakes and other forms of synthetic media are poised to revolutionize social engineering, ushering in a new paradigm of cyberattacks. The FBI anticipates a significant uptick in synthetic media-enabled social engineering attacks over the next 12 to 18 months.[1] Synthetic media will add new capabilities to traditional attacks such as vishing, by impersonating the voices of senior executives, and may also enable entirely new attacks such as zishing (Zoom phishing).[2]

    Educating employees about these threats now and enacting good security policy will not only reduce the likelihood that these attacks succeed, but will also reduce the effectiveness of more traditional social engineering attacks.

    The term “deepfakes” refers to a category of synthetic media that repurposes existing audio or video clips to impersonate a person, controlling a synthetic representation (a “syn-puppet”) of the person being impersonated. The technology has been available for several years and has been used in big-budget films such as Rogue One to recreate the Princess Leia character.

    Deepfakes in the Wild

    At least one audio deepfake has been reported in the wild, used in a vishing business identity compromise (BIC) attack on a U.K. company in which criminals impersonated an executive’s voice to convince an employee to wire funds to an unauthorized account. The impersonation was convincing enough that the criminals successfully repeated the scam three times before the employee became suspicious on the fourth attempt.[3]

    In recent years, the technology has become more widespread and more accessible to non-technical users. At the time of this writing, a Pennsylvania woman stands accused of creating deepfake videos that depicted teenaged cheerleaders engaging in prohibited behaviors, in an attempt to have the girls removed from their cheer program.[4] The case illustrates how even a non-technical individual can put this technology to malicious use.

    Minimizing the Threat with Good Security Policy

    While several technology-based methods for detecting synthetic media exist, they are often difficult to apply in real time on personal devices. Fortunately, this is where good “old-fashioned” security policy can provide an effective counter to this new and emerging threat. Here are four easy-to-implement policies that cost very little to roll out:

    • The Shared Secret Policy.
    • The Never Do Policy.
    • The Multi-Person Authorization Policy.
    • The Multi-Factor (Multi-Channel) Verification Policy.

    The Shared Secret Policy

    This is a quick and easy way to validate the person on the other end of the communication. Spies and their agents have been relying on this form of validation for centuries.

    To implement this policy, simply arrange a signal (probe) and countersignal (response) ahead of time, with the understanding that when one party to the communication issues the probe, the other will respond with the agreed countersignal. A probe question might be, “Which coversheet should go on this sales report?” The recipient would then reply with a predetermined response such as “The two Bobs said sales coversheets are no longer required.”

    To encourage adoption among employees, the exchange could even be turned into an inside joke. Steer clear of probe-response pairs (such as movie lines, song lyrics or company slogans) that an adversary might guess. To maximize effectiveness, the probe question should not tip off the criminal that it is a probe; in this example, if sales reports are a normal part of the interaction, the exchange would appear routine.
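
    For illustration only, the Python sketch below models the probe-response idea in software: expected responses are registered as keyed hashes and checked with a constant-time comparison, so a leaked table does not reveal the countersignals themselves. The helper names (register_pair, verify_response) and the key handling are assumptions, not part of any real product.

        import hashlib
        import hmac

        _SECRET_KEY = b"rotate-me-out-of-band"  # assumption: distributed offline, rotated regularly
        _pairs: dict[str, bytes] = {}

        def _digest(text: str) -> bytes:
            """HMAC the normalized response so the raw countersignal is never stored."""
            return hmac.new(_SECRET_KEY, text.strip().lower().encode(), hashlib.sha256).digest()

        def register_pair(probe: str, expected_response: str) -> None:
            """Record the expected countersignal for a given probe question."""
            _pairs[probe] = _digest(expected_response)

        def verify_response(probe: str, response: str) -> bool:
            """True only if the response matches the registered countersignal."""
            expected = _pairs.get(probe)
            if expected is None:
                return False  # unknown probe: fail closed
            # compare_digest avoids leaking information through comparison timing
            return hmac.compare_digest(expected, _digest(response))

        register_pair(
            "Which coversheet should go on this sales report?",
            "The two Bobs said sales coversheets are no longer required.",
        )
        print(verify_response(
            "Which coversheet should go on this sales report?",
            "The two Bobs said sales coversheets are no longer required.",
        ))  # True

    In practice, of course, the pair lives in two people’s heads rather than in code; the sketch simply shows how the same check could back a chat bot or help desk workflow.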

    The Never Do Policy

    After a rash of gift card scams targeting employees, the director of one organization stated emphatically and unambiguously that under no circumstances would he ever ask employees to purchase gift cards. Clear direction about what a high-ranking executive will (or will never) request helps employees understand what a “normal” request looks like.

    Additionally, this sort of clear direction should include instructions on how an employee should handle a questionable communication. Over the past year, I have received six emails that I was convinced were phishing attempts. After reporting them to my Security Incident Response Team (SIRT), I learned that they were in fact legitimate. False alarms like these are bound to happen, but that’s not a bad thing. Quite the contrary: employees who are on guard for social engineering attempts are invaluable to your organization. Maximize the utility of these cyber resilience stewards by letting them know how to respond appropriately.
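
    A rule like the director’s can also be encoded as a simple inbound-message check. The sketch below is illustrative only: the patterns and the surrounding mail-pipeline hook are assumptions, and a real deployment would flag matching messages for human review rather than block them outright.

        import re

        # A "never do" list expressed as message patterns. Leadership has
        # stated these requests will never legitimately be made, so any
        # inbound request matching a pattern gets flagged for review.
        NEVER_DO_PATTERNS = [
            r"\bgift\s*cards?\b",                      # "I will never ask you to buy gift cards"
            r"\bwire\b[\s\S]*\bpersonal\s+account\b",  # never wire funds to a personal account
        ]

        def violates_never_do(message_body: str) -> bool:
            """True if the message asks for something on the never-do list."""
            body = message_body.lower()
            return any(re.search(pattern, body) for pattern in NEVER_DO_PATTERNS)

        msg = "Urgent. Please buy five gift cards and text me the codes."
        if violates_never_do(msg):
            print("Flag for review: request matches the never-do list.")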

    The Multi-Person Authorization Policy

    While employees need autonomy to do their jobs, certain circumstances are best handled with multiple levels of authorization. BIC scams could be significantly reduced simply by requiring multiple people to authorize transactions.

    A junior-level employee might be less likely to question a superior, making them more susceptible to scammers impersonating high-level executives. In the case of the U.K. company, a supervisor familiar with the executive being impersonated might have prevented the repeat victimizations. A supervisor, on more equal footing with the executive and armed with a shared secret, would also be more willing to invoke the probe question.
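
    As a rough illustration of the mechanics, the sketch below models a transfer request that executes only after a quorum of distinct approvers, none of them the requester, signs off. The class and field names are hypothetical, not a real treasury API.

        from dataclasses import dataclass, field

        @dataclass
        class TransferRequest:
            """A funds transfer that requires sign-off from multiple people."""
            requester: str
            amount: float
            destination: str
            required_approvals: int = 2
            approvers: set = field(default_factory=set)

            def approve(self, approver: str) -> None:
                if approver == self.requester:
                    raise ValueError("requesters cannot approve their own transfers")
                self.approvers.add(approver)  # a set ignores duplicate sign-offs

            @property
            def authorized(self) -> bool:
                return len(self.approvers) >= self.required_approvals

        req = TransferRequest(requester="junior.clerk", amount=243000.0,
                              destination="ACCT-REDACTED")
        req.approve("supervisor.one")
        print(req.authorized)   # False: one approval is not a quorum
        req.approve("controller.office")
        print(req.authorized)   # True: two distinct approvers have signed off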

    The Multi-Factor (Multi-Channel) Verification Policy

    Requiring verification through multiple factors (or channels) is arguably the single most effective method of foiling BIC scams. The important point is that the second factor or channel must be distinct from the one on which the request arrived. If a request is received via email (the first channel), for example, then confirmation should be arranged over the phone (the second channel).

    One victim I interviewed confided that they had sent an email to the address from which the initial request came, asking whether it was legitimate. Unfortunately, that account had been compromised, and the criminal replied that the request was legitimate and that the victim should proceed with the funds transfer.
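
    That failure is exactly what a channel-distinctness check prevents. The sketch below encodes the rule; the channel names and the confirm callback are illustrative assumptions.

        from typing import Callable

        KNOWN_CHANNELS = {"email", "phone", "video", "in_person"}

        def confirmed_out_of_band(request_channel: str,
                                  confirm_channel: str,
                                  confirm: Callable[[], bool]) -> bool:
            """True only if confirmation succeeds on a channel distinct
            from the one the request arrived on."""
            if {request_channel, confirm_channel} - KNOWN_CHANNELS:
                raise ValueError("unknown communication channel")
            if confirm_channel == request_channel:
                # Replying on the same channel (e.g., emailing the requesting
                # address back) proves nothing if that channel is compromised.
                return False
            return confirm()

        # Usage: a wire request arrived by email, so confirm by phone, using
        # a number from the company directory, never one supplied in the email.
        ok = confirmed_out_of_band("email", "phone", confirm=lambda: True)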

    The Bottom Line

    The greatest challenge to implementing these policies will be overcoming the human tendency to circumvent them out of time pressure, convenience or sympathy for a person in distress. Enacting the policies is only part of the solution; making them regular habits is the other, and possibly greater, part of defeating these attacks.

    [1] “Private Industry Notification,” FBI

    [2] “Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering,” Matthew Canham

    [3] “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” Wall Street Journal

    [4] “Pennsylvania Woman Accused of Using Deepfake Technology to Harass Cheerleaders,” New York Times
