Mimecast Fights Back Against AI-Powered Phishing
Mimecast’s new AI defenses, combined with cutting-edge threat intelligence, help organizations better manage human risk
When it comes to digital defenses, these are tough times for organizations of all kinds and sizes. Online attacks from cybercriminals and other malicious actors are becoming increasingly sophisticated and are occurring at an unprecedented rate.
The growing availability of technological tools, including many supercharged by artificial intelligence, is making it easier for malicious actors to launch highly customized assaults against more targets.
And it’s not just the usual state-sponsored actors and overseas criminal gangs. Companies, including Mimecast’s customers, also face threats from within their own borders.
The Cybersecurity & Infrastructure Security Agency (CISA), in partnership with the FBI and authorities in Canada and Australia, recently issued an updated joint cybersecurity advisory on “Scattered Spider,” a loosely affiliated international cybercriminal group that this year has launched online attacks against everything from grocery store suppliers to airlines and insurance companies.
In just the first six months of this year, retailers including Adidas, Marks & Spencer, Harrods, Cartier, Victoria’s Secret, and The North Face, along with major Whole Foods supplier United Natural Foods and the UK-based grocery chain the Co-operative Group, were all hit with cyberattacks attributed to Scattered Spider that affected their operations.
Authorities in the UK arrested four alleged members of the Scattered Spider group in July. Even so, CISA warns that the group and its tactics, which emphasize social engineering rather than the exploitation of technical vulnerabilities to breach company systems, still pose a danger.
The challenge for organizations remains how to defend against these and other threats without impacting productivity.
For the experts at Mimecast, the answer lies in combining the latest threat intelligence with human-risk management, says Andrew Williams, the company’s Principal Product Marketing Manager.
“There’s no way to get rid of risk; you can just manage it,” Williams said. “So it’s how we can go ahead and put some measures in place to, let’s say, cushion the users to make sure that they don’t do the wrong things.”
The rising threat of AI
According to the FBI, Scattered Spider attackers have been known to use a variety of ransomware variants in data extortion attacks, including, most recently, DragonForce ransomware.
DragonForce emerged in 2023 and uses a “Ransomware-as-a-Service” (RaaS) model, meaning that cybercriminals without much of a technical background can purchase it online, customize it, and launch it against whatever targets they see fit. Originally used in politically motivated attacks, it’s now being used in extortion campaigns motivated by money.
While the FBI says Scattered Spider often changes its tactics, techniques, and procedures to stay under defenders’ radar, there are some hallmarks of its attacks, including heavy use of social engineering techniques such as phishing, push bombing, and SIM swap attacks. Whatever the method, the aim is to steal account credentials, get around MFA protections, and potentially install spyware.
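Push bombing in particular leaves a telltale footprint in authentication logs: a burst of MFA push requests against a single account in a short window. As a minimal, hypothetical sketch of how a defender might surface that pattern (the log format, window, and threshold below are illustrative, not drawn from the FBI advisory or from Mimecast’s products):

from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds: 5+ push requests to one account within 10 minutes.
WINDOW = timedelta(minutes=10)
THRESHOLD = 5

def flag_push_bombing(events):
    """events: iterable of (timestamp, username) MFA push log entries.
    Returns the set of usernames that saw a suspicious burst of pushes."""
    by_user = defaultdict(list)
    for ts, user in events:
        by_user[user].append(ts)

    flagged = set()
    for user, stamps in by_user.items():
        stamps.sort()
        start = 0
        for end in range(len(stamps)):
            # Shrink the window until it spans at most WINDOW of time.
            while stamps[end] - stamps[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                flagged.add(user)
                break
    return flagged

# Six pushes to one user, one minute apart, so "j.doe" gets flagged.
pushes = [(datetime(2025, 8, 1, 9, i), "j.doe") for i in range(6)]
print(flag_push_bombing(pushes))  # {'j.doe'}

Real identity providers offer similar burst detection natively; the point is that the behavioral signal is simple to spot even when the social engineering around it is not.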
The FBI’s assertions are backed by Mimecast research released in May. That report announced the discovery of more than 150,000 phishing campaigns impersonating service providers, including SendGrid, HubSpot, Google, and Okta. The campaigns, which the researchers said were likely linked to Scattered Spider, involved fake notifications designed to get users to hand over their login credentials.
Now, those more traditional and less technical attack methods are being boosted with AI tools that allow cybercriminals to do all of that faster and on a much larger scale, Williams said.
Mimecast researchers first began spotting the use of AI in emails of all kinds after the release of ChatGPT, Williams said. Activity spiked again after Chinese competitor DeepSeek launched at the start of this year, and sightings of AI-generated malicious emails have already far surpassed 2024’s total.
“So it's definitely very prevalent and relevant to what attackers are doing,” Williams said.
Human social engineering gets an AI boost
Experts will tell you that AI, whether used for defense or for malicious purposes, still has a long way to go before it can replace humans. But it has already become a key tool for attackers.
Notably, cybercriminals are using AI to craft phishing emails that are polished, professional, and highly targeted to their recipients. And it’s becoming increasingly difficult to separate AI-generated communications from those written by humans.
Mimecast’s Threat Research team recently identified a Business Email Compromise (BEC) campaign that uses automated fake email threads to commit invoice fraud on a massive scale. What’s notable about the campaign is that it combines traditional human-powered social engineering with new AI tools that create legitimate-looking, but fake, conversations between executives and outside companies.
The fabricated email chains read like legitimate business communications, with each thread crafted to suggest that the company’s CEO or other senior executives urgently need to approve invoice payments.
Mimecast’s researchers noted that the campaigns show clear signs of automation, pointing to the inclusion of AI-generated content, along with PDF attachments generated with headless browser technology right before the emails were sent. In addition, technical analysis of the campaigns revealed several signs of automated distribution, they said.
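That just-in-time PDF generation is itself a usable signal: a PDF’s internal creation timestamp can be compared against the email’s Date header, and a near-zero gap across many messages suggests automation. The sketch below is a hypothetical illustration of that idea, not Mimecast’s actual detection logic; it assumes a raw .eml file on disk and relies on the third-party pypdf library.

import sys
from datetime import timedelta, timezone
from email import message_from_binary_file
from email.utils import parsedate_to_datetime
from io import BytesIO

from pypdf import PdfReader  # third-party: pip install pypdf

FRESHNESS = timedelta(minutes=5)  # illustrative threshold

def just_in_time_pdfs(eml_path):
    """Return PDF attachments created within FRESHNESS of the send time."""
    with open(eml_path, "rb") as f:
        msg = message_from_binary_file(f)
    sent_at = parsedate_to_datetime(msg["Date"])
    if sent_at.tzinfo is None:
        sent_at = sent_at.replace(tzinfo=timezone.utc)

    hits = []
    for part in msg.walk():
        if part.get_content_type() != "application/pdf":
            continue
        reader = PdfReader(BytesIO(part.get_payload(decode=True)))
        created = reader.metadata.creation_date if reader.metadata else None
        if created is None:
            continue
        if created.tzinfo is None:
            created = created.replace(tzinfo=timezone.utc)  # assume UTC
        if abs(sent_at - created) <= FRESHNESS:
            hits.append((part.get_filename(), created))
    return hits

if __name__ == "__main__":
    for name, created in just_in_time_pdfs(sys.argv[1]):
        print(f"just-in-time PDF: {name} (created {created})")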
Meanwhile, linguistic and structural analysis of the emails’ body text revealed several signs that they were generated by Large Language Models (LLMs), including a high level of fluency in the target language, contextually appropriate content, and a lack of the usual grammatical errors, the researchers said. In short, they were too perfect to have been created by a human.
The researchers pointed to one AI-generated fake email chain that started with a phony invoice from what looked like a third-party consulting firm. That email was followed by a fake confirmation, purportedly from the company’s CEO, saying that he was forwarding the invoice to the finance department for payment.
The next fake email in the chain appeared to come from the consulting firm, noting that the invoice had not yet been paid and adding a tone of urgency. It was followed by another fake CEO email promising payment, and ultimately by a “FINAL NOTICE TO ACCOUNTING” email addressed to the company’s finance department.
The “Final Notice” email was the first email actually sent to the company’s finance department, and the fabricated chain behind it gave it the appearance of legitimacy. The idea was to trick the recipient into thinking they missed the previous emails, then make them panic and pay the fake invoice without verifying its authenticity.
Protection through tech and training
The question then becomes, how can companies best protect themselves against those rising threats?
For Mimecast, the solution lies in combining threat intelligence with human risk management, Williams said. The two are different, but they work together: two sides of the same coin.
Threat intelligence gives you the “what” and the “how,” Williams said, meaning what is being attacked and how those attacks are happening. That includes any particular social engineering techniques being used or specific vulnerabilities that are being exploited.
Much of that intelligence comes from the data gathered from Mimecast’s 42,000 customers and the 1.8 billion emails the company scans each day, Williams said. In total, the company analyzed 90 billion interaction points in the second half of last year, covering everything from DLP events and clicks to BEC messages, detections of AI-generated content, and exfiltration events.
“There's so much that we can utilize in terms of an understanding of what is, ultimately, the attack chain of what our customers are being targeted with,” Williams said. “And, reciprocally, what's the insight we can glean from that?”
At the same time, human risk management gives you the “who” and the “why,” meaning which people at a company are being targeted and why they’re being targeted, he said. That lets defenders put proactive measures in place where they matter most.
Cracking down on everyone at a given company isn’t an option, because overly restrictive controls can stymie productivity. And according to Mimecast data, just 8% of employees account for 80% of security incidents. So one-size-fits-all policies just don’t work.
This is where Mimecast’s adaptive policies come in. They’re designed to proactively analyze patterns and spot risky behavior — like clicking on phishing emails, mishandling sensitive data, or interacting with malware — and provide tailored security solutions.
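As a rough illustration of how such a score-then-adapt loop can work in principle, the hypothetical sketch below keeps a running risk score per user and maps it to progressively stricter controls. The event weights, thresholds, and control tiers are invented for the example and do not reflect Mimecast’s actual model.

from dataclasses import dataclass

# Invented weights for this example; a real system would derive them from
# behavioral data rather than hard-coding them.
EVENT_WEIGHTS = {
    "phishing_click": 5.0,
    "dlp_violation": 4.0,
    "malware_interaction": 8.0,
    "reported_phish": -2.0,  # good behavior lowers the score
}

@dataclass
class UserRisk:
    user: str
    score: float = 0.0

    def record(self, event: str) -> None:
        self.score = max(0.0, self.score + EVENT_WEIGHTS.get(event, 0.0))

    def policy_tier(self) -> str:
        # Higher running scores map to progressively stricter controls.
        if self.score >= 10:
            return "strict: sandbox attachments, rewrite links, assign training"
        if self.score >= 5:
            return "elevated: rewrite links, add banner warnings"
        return "baseline: standard filtering"

u = UserRisk("j.doe")
for event in ("phishing_click", "dlp_violation", "phishing_click"):
    u.record(event)
print(u.user, u.score, u.policy_tier())
# j.doe 14.0 strict: sandbox attachments, rewrite links, assign training

In a real deployment, those tiers would drive policy engines for filtering, DLP, and training assignment rather than returning strings; the sketch only shows the shape of the feedback loop.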
Mimecast announced its new adaptive security features on the trade show floor at the annual Black Hat conference in Las Vegas. The system’s AI-powered controls are designed to automatically adjust security measures based on real-time risk assessment, behavioral science data, and threat intelligence.
The idea is to ensure optimal protection levels and restrictions for the users who need them most, helping organizations stay ahead of threats and prevent the loss of critical data, while also reducing costs.
“Security teams are under constant pressure to do more with less, and that starts with being smarter about how they use their time and tools,” said Ranjan Singh, Chief Product & Technology Officer at Mimecast.