    (Deep)fake news: Recent data reveals gaps between perception and reality

    Deepfakes today can be so convincing that they often lead to fraud, theft, and phishing.

    by Cheryl Zupan

    Key Points

    • Understanding the critical gap between consumers’ confidence that they can spot a deepfake and their actual ability to do so.

    • Why consumer trust in online content is quickly declining, with 71% of consumers reporting a drop in trust over the last six months due to deepfakes.

    • How to close the deepfake detection gap with Human Risk Management. 

    Generative AI (GenAI) is fueling the next wave of innovation. And the next wave of deception. While GenAI can do a lot for us, it can also do a lot for bad actors taking advantage of people online. Enter deepfakes.

    The rapid pace of advancement in AI and the increased accessibility of AI tools play a significant role in the growing presence of deepfakes. Where earlier deepfakes had more glaring tells, the quality and realism threat actors can now achieve make it easier than ever to manipulate consumers.

    Mimecast surveyed 1,000 U.S. adults to explore and understand consumer perceptions around deepfakes. Our data sheds light on consumer opinions of their own detection skills and how the increasing prevalence of deepfakes fueled by AI is eroding trust and confidence.  

    Consumers know a deepfake when they see one… Or do they?  

    Our data shows: 

    • 52% of respondents say they encountered deepfake content 

    • 64% are either somewhat or very confident they can spot one 

    On the flip side, a 2025 study that exposed consumers to both real and deepfake content found that only 0.1% of participants could tell what was real.

    There is a critical gap between consumers’ belief that they can uncover a deepfake and their actual ability to pick one out of the crowd:

    • A third of Americans are unsure if they have seen more or fewer deepfakes in the past six months 

    • In the same period, half (49%) believe their ability to identify a deepfake improved 

    Consumers are largely confident they’ll know a deepfake when they encounter it, but at the same time, they don’t really know what to look for. This raises the question: Do consumers know how to spot a deepfake? Or do they just assume they will, based on outdated tells? The data is clear: We need better security awareness training to help consumers identify deepfakes. And we need to use the power of AI to fight AI threats.

    Confidence check: Younger eyes aren’t sharper at spotting fakes  

    The data shows that younger generations are more confident in their ability to identify a deepfake: 

    • 81% of Gen Z  

    • 75% of Millennials   

    • 57% of Gen X  

    • 42% of Baby Boomers   

    Further, 61% of Gen Z respondents and 56% of Millennials think their ability to identify deepfakes has improved over the past six months. And Gen Z shows a greater reliance on community consensus than other generations:

    • 40% of total respondents look at comments to verify content validity  

    • 60% of Gen Z head to the comments and 32% will ask a friend 

    But comment sections and friends can also be misleading, since people often use social media, discussion threads, and commentary platforms to share opinions rather than facts. Millennials are more skeptical, with 57% turning to their own online research to verify content. The reality is, we do not always have the visibility we need into whether someone else’s take is rooted in facts and evidence.

    Awareness is not action: The deepfake behavior gap 

    Across all age demographics, 47% of respondents conduct their own online research to verify content, but only a third disengage from content they suspect is fake.

    Gen X and Gen Z are aligned on research but not engagement: 

    • 45% of each age group do their own research to verify content 

    • But 38% of Gen X will disengage, while only 21% of Gen Z will 

    For Millennials, the gap increases: 

    • 57% do their own online research 

    • But only 24% would disengage from potential deepfakes 

    This reveals a significant behavioral gap: awareness alone is not always sufficient to push consumers to act.

    Younger generations are typically more aware of AI and interact with it more often. It is possible they feel comfortable being exposed to deepfakes so long as they know the content is fake and can choose how to engage or respond. This is the inflection point where awareness needs to become action. It is not enough to know that deepfakes are out there; consumers of every age need to be diligent about verifying content and deliberate about how they engage with it online.

    The trust fallout: How deepfakes fuel social and security risks   

    Consumer trust in online content is quickly declining: 71% of consumers report that their trust has dropped in the last six months due to deepfakes. Additionally, 91% of consumers believe GenAI will only escalate the deepfake problem.

    The truth is, deepfakes are tough to identify:  

    • 27% of respondents struggle the most with images and text-based content  

    • 25% are least confident in their ability to identify videos as deepfakes  

    • 35% say the presence of deepfakes online is their biggest cause for concern

    Deepfakes blur the lines between what’s real and what isn’t, undermining the credibility of digital spaces and creating social polarization. As a result, existing social divisions can intensify, and people can become less willing to engage in collective problem-solving around online content.

    From a security standpoint, deepfakes can be used to bypass security controls, authorize fraudulent transactions, and manipulate processes. Scams and social engineering attacks leveraging deepfakes can break down trust in digital interactions and lead to real financial losses.

    Raising confidence in deepfake identification requires moving beyond siloed technical solutions to a holistic security strategy. By integrating targeted, scenario-based security awareness, deploying real-time AI-powered detection, and building deepfake scenarios into incident response training and resilience planning, organizations can develop both the technical rigor and the trust needed to counter increasingly sophisticated synthetic threats. A layered, adaptive model demonstrates that confidence is achieved not just through detection accuracy, but through continuous training, transparent reporting, and cross-functional readiness.
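    To make the layered model concrete, here is a minimal sketch of how a triage policy might combine an AI detector’s score with human report signals rather than trusting either alone. The thresholds, field names, and actions below are hypothetical illustrations of the layering idea, not any specific product’s logic.

```python
from dataclasses import dataclass

# Minimal sketch of a layered triage policy for suspected deepfake
# content. All thresholds, field names, and actions are hypothetical
# illustrations of the layering idea, not any vendor's actual logic.

@dataclass
class ContentSignal:
    detector_score: float   # AI model output: 0.0 (likely real) .. 1.0 (likely synthetic)
    user_reports: int       # number of users who flagged the item
    source_verified: bool   # e.g., came from an authenticated, known sender

def triage(signal: ContentSignal) -> str:
    """Combine model output with human signals instead of trusting either alone."""
    # Layer 1: high-confidence model detection acts automatically.
    if signal.detector_score >= 0.9:
        return "quarantine"
    # Layer 2: ambiguous model output or clustered human reports go to an analyst.
    if signal.detector_score >= 0.5 or signal.user_reports >= 3:
        return "escalate_to_analyst"
    # Layer 3: unverified sources still get a caution banner for the user.
    if not signal.source_verified:
        return "warn_user"
    return "allow"

if __name__ == "__main__":
    print(triage(ContentSignal(0.95, 0, False)))  # quarantine
    print(triage(ContentSignal(0.60, 1, True)))   # escalate_to_analyst
    print(triage(ContentSignal(0.20, 0, False)))  # warn_user
```

    The design point is that a mid-confidence model score never decides the outcome alone; user reports and source verification tip the decision, mirroring the detection-plus-human-readiness layering described above.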

    Close the deepfake detection gap with Human Risk Management 

    Just because people are aware of the risks from deepfakes doesn’t automatically mean they’re protected. Though consumers already possess a healthy skepticism of suspicious content, AI continues to disrupt the threat landscape, meaning defensive strategies need to evolve quickly to stay ahead.  

    In organizations, raising confidence in deepfake identification is not a single-technology challenge but a cross-functional, continuously evolving process. Success depends on the ability to fight AI with AI and to integrate adaptive training. Leveraging AI for predictive threat prevention, automated security controls, and accelerated governance will help close the gap and foster a digital environment grounded in both smart detection and human awareness.
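    As a rough illustration of what adaptive training could look like in practice, the sketch below scores a user from simulated deepfake exercises and assigns a matching training intensity. The weights, thresholds, and module names are hypothetical stand-ins, not a description of any specific Human Risk Management product.

```python
# Rough sketch of adaptive, risk-based training assignment. The
# weights, thresholds, and module names are hypothetical stand-ins.

def risk_score(failed: int, reported: int, total: int) -> float:
    """Score a user 0..1 from simulated deepfake/phishing exercises."""
    if total == 0:
        return 0.5  # no history yet: treat as medium risk
    fail_rate = failed / total        # fell for a simulation
    report_rate = reported / total    # proactively reported one
    # Failing raises risk; reporting suspicious content lowers it.
    return max(0.0, min(1.0, fail_rate - 0.3 * report_rate))

def assign_training(score: float) -> str:
    """Match training intensity to measured risk, not a fixed schedule."""
    if score >= 0.7:
        return "hands-on deepfake spotting workshop"
    if score >= 0.4:
        return "scenario-based refresher module"
    return "quarterly awareness digest"

if __name__ == "__main__":
    for name, (failed, reported, total) in {
        "user_a": (4, 0, 5),   # fails most simulations
        "user_b": (1, 3, 5),   # mostly reports them
    }.items():
        s = risk_score(failed, reported, total)
        print(f"{name}: risk={s:.2f} -> {assign_training(s)}")
```

    The idea is that training intensity follows measured behavior rather than a fixed annual schedule, which is the adaptive half of fighting AI threats with AI-driven defenses.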

    Want to know more? Read about our approach to AI, human risk, and future-proofing cybersecurity.  
