5 Common Examples of Social Engineering
Learn how to spot and prevent common forms of social engineering used by cyberattackers to exploit businesses.
Key Points
- Social engineering attacks manipulate employees into doing a fraudster’s bidding, often by impersonating a boss, vendor, or partner.
- These attacks try to trick users into installing malware, transferring funds, or sharing sensitive information.
- Security awareness training remains the best protection against social engineering, but organizations also need help from automation and artificial intelligence to keep up with evolving tactics.
The word “social” implies good times, sharing, and community. But in cybersecurity, “social engineering” has a dark, dangerous implication. Social engineering attacks are a rising and increasingly sophisticated threat. At the same time, security vendors like Mimecast are continually innovating the defenses against social engineering with advanced technologies such as artificial intelligence (AI).
What Is a Social Engineering Attack?
As its name implies, social engineering is a method of attack in which the fraudster weaponizes personal information to target a user. The information could be a person’s job title or duties, the name of a supervisor or top officer in the organization, or details about an important upcoming event. Often impersonating other people or organizations, such as peers, partners, or supervisors, the fraudster crafts a convincing message that leads the recipient into malicious activity: unintentionally installing malware, transferring funds, or sharing sensitive information with cybercriminals.
Five Types of Social Engineering Attacks
Social engineering has been increasing since 2017, according to Verizon’s 2021 Data Breach Investigations Report. Most recently, Verizon reported a “meteoric” increase in what it calls “misrepresentation” tactics, which grew 15 times higher during the Covid-19 pandemic.[1]
Social engineering methods keep evolving along with the channels and technology available to fraudsters. Just as phishing has expanded beyond “click here for a prize” emails to “smishing” (by text), fraudsters have become more sophisticated in their use of social engineering. Thanks to social media and to the sale of databases of stolen information on the Dark Web, cybercriminals can acquire large stores of data to enable their attacks. Their approaches include:
- Whaling: Just as spear phishing uses information to target one user with a personalized message, whaling goes a step further to target a big fish in the organization. It’s also known as CEO or CFO fraud for that reason.
- Pretexting: Usually, pretexting involves an email that appears to be from a vendor or partner, looking to solve an urgent issue. But this pretext is a way for the impostor to con the user out of passwords or sensitive information. For example, the fraudster may send an email, claiming to be a customer who needs access to a business account to pay an invoice.
- Quid pro quo: As the name implies, this kind of attack involves an exchange of information or services. The fraudster may impersonate an admin trying to “resolve” a technical issue and ask the employee for access to their computer. Once inside, the fraudster can move around the network as that user and access files without being spotted by security.
- Watering hole: Cybercriminals sometimes target websites frequented by members of a particular industry or organization. The fraudsters infect the site with malware and wait for targeted users to visit, unwittingly carrying the malicious code back into their own organizations’ networks.
- Angler phishing: This is a form of man-in-the-middle fraud where fraudsters intercept users posting about customer service issues on social media platforms. They impersonate the company, using a lookalike fake account, and post or DM the users, offering help. When the user responds, the fraudsters make off with their personal information or credentials, or trick them into downloading malware onto their networks. Not only does this attack hurt users, but it also damages the reputation of the company being spoofed.
Examples of Real Social Engineering Attacks
As some of the top phishing attacks in the last decade have shown, high-profile cybercrimes often involve a dose of social engineering:
- Whaling: The CEO and CFO of a European aerospace manufacturer lost their jobs after a whaling incident that cost the company over $47 million. An email claiming to be from the CEO asked an employee to transfer funds to support an acquisition. Both the email and the deal were fake, and the money went into an account held by the thieves. In terminating the officers, the board of directors said they should have done a better job protecting their emails.[2]
- Pretexting: A pretexting attack targeted two tech giants when a thief impersonated a hardware vendor and sent fake invoices, which were paid to offshore bank accounts. More than $100 million was stolen over a period of years in multiple attacks.[3]
- Quid pro quo: Phony tech support fraud surged along with the rise of remote work, turning what had been more of a consumer scam into a business risk.[4]
- Watering hole: An international aviation trade group affiliated with the United Nations was the unwilling partner of cyberspies. State-sponsored hackers infiltrated its network in 2016 and used it as a watering hole to breach member airlines and aviation authorities around the world for as long as a year.[5]
- Angler phishing: Security professionals in the UK spotted a rash of angler phishing attacks in 2016, targeting a number of British banks. The fraudsters created lookalike Twitter profiles that mimicked the banks’ customer service accounts and used them to collect credit card numbers, PINs, and other sensitive information from unsuspecting account holders.[6]
How Technology Can Block Social Engineering Attacks
As in so many cases of cybercrime, the best defense against social engineering attacks is security awareness training. Train all users in the system to be skeptical of any messages requesting sensitive information, payments, or software installations, even if they seem to come from the boss.
As the FBI recommended in a recent alert about business email compromise (BEC), employees should make sure the URLs in any emails actually match the organization they claim to represent, check that any links included in the email are spelled correctly (fraudsters often use lookalike addresses), and never share personal information over email. Organizations should also ensure that employees’ email clients are configured to display full email addresses, so users can spot phishing messages that spoof a legitimate sender by replacing a “.com”, for example, with a “.org”.
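To make that domain check concrete, here is a minimal Python sketch of the kind of comparison an employee, or a simple mail filter, could run against a sender’s address. The vendor domain and addresses are hypothetical, and a real check would also need to handle subdomains and display names.

```python
# Minimal sketch, using a hypothetical vendor domain: check that a sender's
# address exactly matches the domain the organization really uses, and call
# out extension swaps such as ".com" replaced with ".org".

TRUSTED_DOMAIN = "examplevendor.com"  # hypothetical: the vendor's real domain

def check_sender(address: str, trusted_domain: str = TRUSTED_DOMAIN) -> str:
    """Return a short verdict on the sender's domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain == trusted_domain:
        return "match"
    if domain.split(".")[0] == trusted_domain.split(".")[0]:
        return "suspicious: same name, different extension"
    return "suspicious: domain does not match the expected organization"

print(check_sender("billing@examplevendor.com"))   # match
print(check_sender("billing@examplevendor.org"))   # suspicious: same name, different extension
print(check_sender("billing@examp1evendor.com"))   # suspicious: domain does not match
```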
But awareness can only go so far, especially when attackers keep evolving their social engineering tactics. Artificial intelligence (AI) and machine learning are helpful in keeping up with the evolution of the fraudsters, building stronger defenses as they learn from current attacks:
- Automation can screen email traffic, searching for lookalike URLs, misspelled addresses, and suspect websites that can be signs of fraud in progress. Those emails can be flagged for the recipient with alerts showing their level of risk, or quarantined in a virtual “sandbox” where they can’t infect any systems. Tools that block email trackers also stop tactics fraudsters use to identify victims and refine their messaging. (A minimal screening sketch appears after this list.)
- AI can provide real-time air cover for your company’s network by analyzing the behavior of users. Not only can AI flag activity that is out of the norm, such as a message sent from an unusual location, but it can also analyze the text and spot writing that does not read like something that person would have sent. Identity graph technology powered by machine learning can match users to their usual context, such as the servers and devices connected to a person’s profile, and notice unusual behaviors that may signal an impostor is on the loose. (A context-matching sketch also appears after this list.)
- Machine learning and AI can also help defenses adapt as fraud tactics evolve, analyzing patterns and learning from them to continually improve threat detection models and update the rules they apply.
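As a rough illustration of the screening idea above, the Python sketch below compares an inbound sender’s domain against a short list of trusted domains and flags near-matches as likely lookalikes. The domains, similarity threshold, and risk labels are hypothetical; a production filter would use far richer signals than string similarity alone.

```python
# Minimal sketch of lookalike-domain screening: flag sender domains that are
# very similar, but not identical, to a trusted domain (a typosquatting sign).
# The trusted domains and the similarity threshold are hypothetical choices.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"examplebank.com", "examplevendor.com"}

def lookalike_risk(sender_domain: str, threshold: float = 0.85) -> str:
    """Classify a sender domain as trusted, a likely lookalike, or unknown."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return f"high risk: lookalike of {trusted}"
    return "unknown: apply standard checks"

for domain in ("examplebank.com", "examp1ebank.com", "newsletter.example.net"):
    print(domain, "->", lookalike_risk(domain))
```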
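And as a loose illustration of the context-matching idea, this sketch keeps a toy profile of the locations and devices a sender normally uses and flags messages that fall outside that baseline. The profiles, field names, and senders are hypothetical; real identity graph systems learn this context with machine learning rather than hard-coding it.

```python
# Minimal sketch, with hypothetical profiles: flag a message whose sending
# context (country, device) does not match what is usual for that sender.
from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    usual_countries: set = field(default_factory=set)
    usual_devices: set = field(default_factory=set)

# Hypothetical baseline built from past, verified messages.
PROFILES = {"cfo@example.com": SenderProfile({"US"}, {"laptop-123"})}

def is_anomalous(sender: str, country: str, device: str) -> bool:
    """Return True when the message context falls outside the sender's profile."""
    profile = PROFILES.get(sender)
    if profile is None:
        return True  # unknown sender: treat as out of the norm
    return (country not in profile.usual_countries
            or device not in profile.usual_devices)

# A funds-transfer request from an unfamiliar location and device gets flagged.
print(is_anomalous("cfo@example.com", country="US", device="laptop-123"))      # False
print(is_anomalous("cfo@example.com", country="RO", device="unknown-device"))  # True
```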
The Bottom Line
Social engineering is a growing issue in cybersecurity, but the tools to counteract this practice are on hand. Security awareness training is the best defense, but a number of automated technologies can also help security teams stay on point and evolve their defenses to block the attackers’ latest tactics. See how Mimecast uses AI to thwart social engineering.
[1] “2021 Data Breach Investigations Report,” Verizon
[2] “Aerospace firm, hit by cyber fraud, fires CEO,” Business Insurance
[3] “How this scammer used phishing emails to steal over $100 million,” CNBC
[4] “Phony Tech Support Scams Target Remote Workers during the Pandemic,” Cognizant
[5] “Montreal-based UN aviation agency tried to cover up 2016 cyberattack, documents show,” CBC News
[6] “Twitter phishing campaign targets customers of all major UK banks,” ZDNet