Society Isn’t Ready for AI Deepfakes: How Spoofed Communications Put Everyone at Risk
AI deepfakes are no longer limited to viral videos or fake images online.
Today, artificial intelligence can replicate a person’s face, voice, and mannerisms in real time, convincingly enough to be used in phone calls, video chats on FaceTime or Teams, and other live interactions where people assume what they see and hear is real.
If you don’t think you can be fooled by AI, think again. AI deepfakes are not only nearly indistinguishable from real life; you are now a target.
Deepfakes: Synthetic Media That Manipulate Reality
Deepfake technology—AI‑generated synthetic audio, image, or video—is becoming more accessible and more convincing. Cybercriminals can now create hyper‑realistic fake personas, speeches, and video clips using only a few seconds of source material.
These attacks have exploded in scale. Incidents rose by 257% from 2023 to 2024, and by mid‑2025, the number of deepfake cases had already surpassed the previous year’s total.
In real‑world cases, deepfaked executives have convinced employees to transfer millions of dollars, while everyday individuals are increasingly targeted through voice cloning, email spoofing, and phone number impersonation.
To prove it can be done, we made AI deepfakes of our Solutions Director AND our IT Director.
In just 30 minutes, we generated AI deepfakes of two prominent members of Century & Catalyst, putting words in their mouths that they never said. The most unsettling part wasn’t the speed of the generation, but the simplicity: each video was created from a single photograph.
We created these videos using regulated, professional‑grade tools trusted across the creative industry—and were shocked by how easily the content could be generated, downloaded, and shared, with no watermark or clear indication it was AI‑produced. If this is what’s possible with reputable platforms, imagine what bad actors can accomplish using unregulated or dark‑web tools.
As AI becomes faster, cheaper, and more accessible, trust can no longer be guaranteed by seeing a face or hearing a familiar voice.
Personalized Deception at Scale (Email/Phone Number Spoofing)
Experts used to advise you to verify the phone number or check the email address for typos. That is no longer enough.
You can now receive a phishing email from a perfectly matching email address, or a fraudulent phone call from a number you recognize.
AI has transformed phishing. Modern campaigns scrape social media data and generate emails in flawless language, tailored to match your communication style. These messages convincingly mimic coworkers, financial institutions, government agencies, or loved ones, often bypassing spam filters and fooling even experienced users.
Job seekers are increasingly targeted as well, receiving calls or emails from scammers posing as legitimate employers to extract money or personal information during fake “interviews.”
- Phone Number Spoofing: Attackers falsify caller ID information so calls appear to come from trusted numbers, such as a bank, an employer, or a family member.
- Email Spoofing: Cybercriminals forge the “from” address of an email so it looks like it was sent by a trusted person or organization (see the header-check sketch after this list).
- Phishing (Email): Scammers send emails that appear legitimate and contain links or attachments designed to steal sensitive information.
- Spear Phishing: A highly targeted form of phishing in which attackers personalize messages using specific details about an individual or organization.
- Smishing (SMS Phishing): Phishing attacks delivered via text message, often prompting recipients to click malicious links or call fake support numbers. Have you “committed toll evasion” recently?
- Vishing (Voice Phishing): Scammers make phone calls posing as legitimate entities to pressure victims into sharing sensitive information or sending money.
- AI Voice-Cloning Vishing: Attackers use AI to replicate real voices from short audio samples, making fraudulent calls sound convincingly authentic. (See our video example above.)
- Clone Phishing: A legitimate email previously received is copied and resent with malicious links or attachments swapped in.
- QR Code Phishing: Malicious QR codes direct victims to fake websites when scanned, often appearing on posters, emails, or payment notices.
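For readers who want to see what email authentication looks like in practice, here is a minimal sketch of a spoofing check in Python. It assumes you have the raw message as a .eml file and that the receiving mail server added an Authentication-Results header (most major providers do); header formats vary by provider, and the helper name below is our own, so treat this as illustrative rather than production-grade.

```python
# Minimal sketch: flag common signs of a spoofed email.
# ASSUMPTIONS: the raw message is available as bytes, and the receiving
# mail server added an Authentication-Results header; formats vary by provider.
from email import policy
from email.parser import BytesParser

def spoofing_red_flags(raw_bytes: bytes) -> list[str]:
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    flags = []

    # SPF, DKIM, and DMARC results as reported by the receiving server.
    auth = str(msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            flags.append(f"{check.upper()} did not pass (or was never checked)")

    # A Reply-To domain that differs from the From domain is a classic tell.
    def domain(addr: str) -> str:
        return addr.rsplit("@", 1)[-1].rstrip(">").lower() if "@" in addr else ""

    from_domain = domain(str(msg.get("From", "")))
    reply_domain = domain(str(msg.get("Reply-To", "")))
    if reply_domain and reply_domain != from_domain:
        flags.append(f"Reply-To domain {reply_domain!r} differs from From domain {from_domain!r}")

    return flags

# Example usage:
# with open("suspicious.eml", "rb") as f:
#     print(spoofing_red_flags(f.read()))
```

A clean result here is not a guarantee of safety, and a missing header is not proof of fraud; it is one more data point alongside the verification habits described below.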
We’ve said it before and we’ll say it again. The best time to implement zero-trust was yesterday.
Everyone Is a Target Now—Not Just Celebrities
Deepfakes once seemed like a celebrity‑only problem. But cybercriminals don’t care about fame; they care about vulnerability and opportunity.
Current data shows:
- Individuals and the general public are now primary targets alongside politicians, executives, and public agencies.
- Criminals use cloned voices to deceive everyday people, increasing global losses year after year.
- Only 0.1% of people can reliably detect a deepfake, meaning nearly everyone is susceptible.
Your voice, your photos, and your writing style can be weaponized against you. If you don’t think your voice has been posted online, think harder: voicemails, radio interviews, and viral social media challenges are just a few sources of usable audio for voice cloning.
The threat is no longer about celebrities or CEOs. It’s about anyone with a smartphone, social media, or digital footprint.
How to Protect Yourself and Your Organization
The reality is that when several of these tactics are combined, even a cautious person could be fooled. A scammer could call your grandmother using your cloned voice and urgently ask for emergency funds, creating panic and bypassing rational verification.
In those moments, traditional instincts like recognizing a voice or a face are no longer reliable safeguards.
Here are five ways you can help protect yourself and your organization from AI deepfakes and scammers:
- Always verify requests: For money, sensitive information, or urgent changes, confirm through a known, separate communication channel. Hang up and call the vendor back using the number on their official website. NEVER give out private information over the phone until you have verified the caller.
- Use “safe words” with family, friends, and coworkers: A simple phrase known only to trusted individuals can stop a voice-cloning scam. “What’s the name of our family group chat?” or “What poster do you have hung up in your office?” are simple questions that could shut down a lot of potential fraud calls.
- Strengthen communication security: Organizations must implement strict verification protocols and employee training to combat AI-powered deception. Enable MFA wherever possible.
- Limit what you share online: Public audio, video, and personal information fuel AI-powered attacks. If it doesn’t need to be shared, think twice about posting. Make your profiles private and don’t accept requests from strangers.
- Adopt AI-based detection tools: New cybersecurity tools can spot synthetic-media artifacts or unusual communication patterns (see the sketch below for intuition).
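For intuition about what “synthetic-media artifacts” means, below is a minimal sketch of one classic (pre-AI) image-forensics heuristic, error level analysis. Spliced or regenerated regions often recompress differently from the rest of a JPEG and show up brighter in the difference map. This is only a rough heuristic; real detection products layer many signals, including machine-learned classifiers, and the Pillow-based helper here is purely illustrative.

```python
# Minimal sketch: JPEG error level analysis (ELA), a classic image-forensics
# heuristic. Pasted-in or regenerated regions often recompress differently
# from the rest of the image and appear brighter in the difference map.
# This is a rough heuristic, NOT a reliable deepfake detector on its own.
# Requires Pillow: pip install Pillow
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress at a known JPEG quality, then diff against the original.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    recompressed = Image.open(io.BytesIO(buffer.getvalue())).convert("RGB")

    # Amplify the per-pixel difference so uneven compression stands out.
    diff = ImageChops.difference(original, recompressed)
    return diff.point(lambda value: min(255, value * 20))

# Example usage:
# error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```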

Conclusion
Deepfakes, spoofed communications, cloned voices, and AI‑generated phishing attacks are no longer niche threats. They are mainstream, scalable, and highly effective.
Digital trust is eroding, and vigilance is no longer optional. Understanding the threat is the first step. Preparing for it is the next.
Society isn’t ready for AI deepfakes. Are you?
If anyone asks you for ANY information, take their name and number and tell them you’ll call them back using the organization’s official phone number.

Contact Catalyst IT
Consider reaching out to Catalyst IT for a free consultation.