Phishing Has a New Face and It’s Powered by AI.
Forget the Nigerian prince. Today's phishing scams have evolved dramatically thanks to generative adversarial networks (GANs). Since GANs were introduced for image generation in 2014, rapid advances in deep learning have enabled AI to convincingly mimic faces, movements, and voices. Alarmingly, a mere 15-second Instagram video can be the source material for a new deepfake. With open-source GAN tutorials, platforms, and libraries freely available at low cost, and public social media profiles supplying the raw footage, malicious actors have everything they need to launch sophisticated deepfake campaigns.

What Are Deepfake Attacks?
Unsurprisingly, deepfake spear phishing attacks have surged more than 1,000% over the last decade, targeting an ever-wider range of victims. In 2024, around 60% of individuals reported encountering a deepfake video, and 1 in 10 companies admitted to being targeted. So what is a deepfake spear phishing attack? In this type of attack, cybercriminals leverage deepfake technology to make their spear phishing attempts even more believable and effective.
Common schemes include:
Deepfake audio
In this scheme, cybercriminals use AI-generated audio to mimic the voice of a trusted individual such as a CEO, manager, or colleague in a phone call or voicemail, requesting sensitive information or urgent actions like fund transfers. Some of the latest AI models offer “instant voice cloning” with as little as a 3-5 second voice sample. So the next time you get a call from a number you don’t recognize, maybe let it go to voicemail.
Deepfake images
A common tactic is generating realistic-looking fake IDs (driver's licenses, passports, employee badges) that feature the target's name or a fabricated one, sometimes paired with the likeness of someone the target knows or trusts. Today’s advanced AI models can generate a synthetic face, or manipulate an existing one, with as few as 1-5 clear, high-quality photos of the target. The key is image diversity: photos of the target from different angles and with varying expressions.
Note: Unfortunately, there isn’t much you can do about deepfake images. Facial recognition technology doesn’t work as well as it should. It may be accurate 99% of the time, but it’s the remaining 1% where these systems break down. Even the most advanced facial recognition solutions can be impaired by poor lighting or low image quality, and tricked by masks, multiple faces in an image, and more. Techniques like liveness detection and mask detection have yet to significantly slow scammers down. If someone wants to create a likeness of you, it’s as easy as grabbing a few photos off social media.
Deepfake video
With video content, cybercriminals generate fake videos or messages where the impersonated person appears and speaks convincingly, directing the target to take malicious actions. With today’s AI capabilities, attackers need just minutes of high-quality video footage of their target to create a believable visual impersonation.
Why Deepfake Attacks Are Successful
The effectiveness of deepfake spear phishing hinges largely on the sluggish implementation of defenses, fueled by our ingrained trust in familiar sources, an inflated sense of our own and our technology's ability to spot fakes, and a general underestimation of the danger. Our natural inclination to believe information from known individuals or authority figures significantly hinders our ability to identify deepfake manipulations, with human detection accuracy averaging a mere 62%.
While AI-powered detection platforms show a wider range of accuracy (49% to 94%), their effectiveness varies with the type of deepfake. This human tendency to trust automatically also creates a false belief in our capacity to distinguish genuine interactions from malicious impersonations. Alarmingly, this lack of preparedness extends to the highest levels, with 61% of executives and 80% of companies admitting they haven't established deepfake fraud mitigation procedures. Given that 72% of companies experienced fraud in 2024, this lag in adopting protective measures is deeply concerning.
The impact of fraud and scams is staggering. The FBI documented more than 21,000 instances of email fraud in 2022 alone, resulting in losses exceeding $2.7 billion. Projections paint an even grimmer picture, anticipating that these financial losses could balloon to $11.5 billion by 2027. And as technology advances, the sophistication and "credibility of content" from scammers will only increase.
Real-Life Deepfake Attack Examples
The AI Incident Database is a great source of information on deepfakes and other AI incidents. For example, one recently reported incident shows that between 2021 and 2025, California community colleges faced a surge in fraudulent applications, now estimated at 34% of all submissions. Reports indicate that scammers used generative AI tools, including ChatGPT, to produce identity-verifying responses that enabled them to impersonate students and obtain financial aid. The fraud allegedly resulted in over $13 million in losses over the past year alone, straining administrative and instructional systems and displacing legitimate applicants.
In a more chilling example of deepfake phishing, phishers targeting YouTube creator credentials reportedly used an AI-generated deepfake of YouTube CEO Neal Mohan. The fake video announced false changes to YouTube’s monetization policy and was designed to trick creators into clicking malicious links or downloading malware. See Phishing Campaign Using Private Video Sharing.
And of course, the risk is global. The popular Chinese actor Jin Dong has been targeted by scammers and is warning fans about criminals using deepfake technology to impersonate him. The scammers are reportedly cloning his likeness and voice to deceive and defraud elderly individuals. In addition to his warning, Jin Dong has called for legislation and tech regulations to curb AI abuse.
How to Stop Deepfake Scams
The good news is that the same technology powering AI deepfakes is also being used to detect and combat them. The key, of course, is to instill healthy skepticism, collectively break our implicit trust biases, and temper our overconfidence in our own detection abilities by replacing it with narrower, better-informed mitigation practices.
Here are some things that businesses can do.
1. Stay informed
Reviewing policies, procedures, and protections is the number one recommendation of every article like this, and for good reason: it remains important and relevant. The key is staying up to date with the latest scams and phishing techniques.
2. Practice open communication
One thing that some forward-leaning companies have found “attention grabbing” is periodically sharing real phishing and scam attempts with employees. Nothing is more interesting than having Betty from accounting recount the email or phone call urging her to send a check to a scammer impersonating the CFO. Share statistics with employees as well. Knowing the scope of the problem is as important as periodically live-testing employees with internal phishing attempts.
3. Use technology
The number one thing that we recommend is employing a true identity proofing solution that offers high-assurance identity verification, synthetic and identity theft detection, document verification, and step-up authentication capabilities.
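To illustrate the step-up authentication idea, the sketch below shows a risk-based check that forces out-of-band verification before any high-risk request is honored. Every name, action list, and policy here is hypothetical, not drawn from any specific identity proofing product:

```python
# Hypothetical sketch of a risk-based step-up check. The action names
# and the callback rule are illustrative assumptions, not a real API.

HIGH_RISK_ACTIONS = {"wire_transfer", "change_payout_account", "share_credentials"}

def requires_step_up(action: str, channel: str, verified_callback: bool) -> bool:
    """Decide whether a request needs additional out-of-band verification.

    action: what the requester is asking for
    channel: how the request arrived ("email", "phone", "video_call", ...)
    verified_callback: True only if the request was confirmed by calling
        the person back on a number from the company directory
    """
    # Voice and video can both be deepfaked, so the inbound channel alone
    # is never sufficient proof of identity for a high-risk action.
    if action in HIGH_RISK_ACTIONS and not verified_callback:
        return True
    return False

# A "CFO" phone call demanding a wire transfer, not yet confirmed via a
# directory callback, should trigger step-up verification.
print(requires_step_up("wire_transfer", "phone", False))  # True
```

The design point is that the decision keys on the action's risk and on independent confirmation, never on how convincing the caller sounded or looked.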
By adopting these mitigation strategies, businesses and individuals alike can greatly reduce the risk of financial loss and other catastrophic fallout from deepfake spear phishing attacks.