How attackers use AI to launch phishing campaigns  

In 2024, a finance employee at a multinational firm based in Hong Kong received an urgent message from the company's CFO requesting an immediate fund transfer. The CFO claimed that the transaction was confidential and that details couldn't be disclosed due to its sensitive nature. But thanks to the cybersecurity awareness programs the employee had previously gone through, they suspected it might be a phishing attempt.

So the employee responded that the request would need further verification before any funds were transferred. The "CFO" then arranged a video call with what appeared to be other employees of the company, and the finance worker joined the call to verify their identities. Reassured by the familiar faces, the worker transferred HK$200 million, roughly US$25.6 million, after the call.

Only later was it revealed that the company had been defrauded of millions of dollars: the video call was a deepfake in which the attackers used AI-generated likenesses of real employees to hide their identities.

Sadly, these scams have only grown more innovative and nuanced in the last two years. The use of AI in phishing has become a formidable force that cybersecurity experts are now dealing with on a daily basis. In this article, we'll explore how phishing has evolved, what AI phishing is, how it works, some common use cases of AI phishing, and how organizations can protect themselves.

The evolution of phishing attacks 

Just a couple of decades ago, cybercriminals crafted one phishing email that was sent to thousands of email addresses in bulk. The famous "Nigerian Prince" scam is one such example. As time went on, threat actors became more creative and started drafting more contextual emails, such as delivery updates, security alerts, bank transaction alerts, and more.

As email users grew savvy to these tactics, cybercriminals began conducting extensive research on their targets to launch personalized attacks known as spear phishing campaigns. These campaigns, however, are time-consuming and effort-intensive.

Threat actors with more expertise started providing phishing kits as a service to less experienced criminals so they could launch attacks at scale. With the advent of AI, gathering personalized information has become much easier, and phishing attacks have taken a new form.

What is AI phishing? 

AI phishing refers to the use of AI by cybercriminals to create phishing campaigns. In this method of phishing, threat actors use AI at every stage of the campaign creation process: identifying the targets of the attack, conducting extensive research on the organization or the employees being deceived, creating or spoofing a persona that will convince the targets, and crafting messages convincing enough for the targets to fall for.

AI phishing works efficiently because there's deep research involved in creating scenarios that seem familiar and legitimate to the recipients. In the message content, threat actors fabricate a scenario and nudge the recipients to perform a sensitive action such as transferring funds, sharing confidential information, or resetting account passwords. Because of the effort that goes into making it seem genuine, recipients tend to trust the email and go ahead with the requested action.

How do cybercriminals use AI for phishing? 

AI phishing follows the same stages as traditional phishing; the difference is that AI is applied at each step. This section looks at how.

Gathering intelligence  

AI has significantly streamlined the collection and structuring of OSINT (open-source intelligence) for phishing campaigns. Attackers use AI to look up data from public sources such as LinkedIn, corporate websites, job postings, press releases, and social media activity. Large language models (LLMs) are then used to parse this data to identify hierarchies, communication patterns, and active business initiatives.

This, coupled with data from breaches, provides insights into credentials and interaction histories. By correlating these sources, attackers build target profiles that can be directly fed into phishing workflows, eliminating the need for manual reconnaissance.

Targeting at scale 

Traditional phishing campaigns required a trade-off between scale and personalization. AI removes that constraint. Attackers can generate thousands of tailored emails simultaneously by feeding target data into language models and automating delivery workflows. These systems can dynamically adjust tone, content, and context based on specific recipient attributes such as job function or seniority.

Campaign orchestration tools further enable segmentation and scheduling, ensuring that messages are delivered at optimal times to maximize engagement. This allows adversaries to run campaigns that are both target-specific and wide-reaching.

Drafting personalized messages 

LLMs enable attackers to generate error-free, contextually relevant emails with minimal effort. By conditioning prompts on target-specific data, such as recent company announcements or internal terminology, attackers can produce messages that closely mirror legitimate business communication. These models can also replicate formatting conventions, writing styles, and tone consistently across emails, making phishing attempts harder to distinguish from real conversations.

Unlike older template-based approaches, AI-generated content avoids repetitive patterns, reducing the likelihood of detection by both users and traditional filtering systems, which rely largely on signature-based detection.

Realistic impersonation  

AI enhances impersonation beyond simple display name spoofing. Attackers can model the writing style of specific individuals by analyzing publicly available communications, generating emails that reflect their tone and phrasing. In more advanced scenarios, this extends to other media, including voice cloning and deepfake video, enabling multi-channel impersonation attacks such as vishing or executive fraud. When combined with compromised or lookalike domains, these techniques create a high level of authenticity that exploits trust within organizations, particularly in finance and executive workflows.

Iterating and refining campaigns 

AI enables the continuous optimization of phishing campaigns through feedback loops. Attackers can track engagement metrics such as open rates, click-through rates, and response patterns, and feed this data back into their models to refine future outputs.

Variations in subject lines, messaging tone, and call-to-action phrasing can be automatically tested at scale by running A/B experiments without manual intervention. Over time, this process improves both delivery success and user deception rates. Unlike static campaigns, AI-driven phishing evolves dynamically, adapting to defenses and user behavior in real time.

Real-world AI-powered phishing techniques 

Threat actors find innovative ways to implement AI-based campaigns. Some of the most common techniques are discussed here.

Deepfake attacks 

Deepfake phishing uses AI-generated audio or video to impersonate trusted individuals, typically executives or senior stakeholders. Attackers train models on publicly available recordings to replicate facial expressions, voice, and speech patterns, and then deploy these synthetic personas in live calls or pre-recorded messages. This shifts phishing from text-based deception to video or audio-based trust exploitation, making verification significantly harder in high-pressure scenarios.

Vishing attacks  

AI-powered vishing leverages voice cloning to impersonate individuals over phone calls or voice messages. Modern models can reproduce accent, tone, and modulation using minimal audio samples, allowing attackers to mimic executives or known contacts convincingly. These attacks are often timed to create urgency, reducing the likelihood of secondary verification.

In a widely reported 2019 incident, attackers used AI-generated voice technology to impersonate the CEO of a UK-based energy firm. The call convinced a senior executive to transfer €220,000 to a fraudulent account, believing the request was legitimate.

Phishing websites 

AI enables the rapid generation of highly realistic phishing websites that replicate legitimate login portals with precision. Attackers use generative models to recreate UI layouts, branding elements, and even dynamic behaviors such as error messages or redirects. These sites are often paired with tailored phishing emails, creating a consistent and convincing user experience from inbox to credential capture.

In campaigns observed during 2023–2024, attackers deployed AI-generated replicas of Microsoft 365 and Google login pages, complete with localized content and device-aware rendering. These sites were difficult to distinguish from legitimate portals, increasing credential harvesting success rates.

Polymorphic phishing attacks  

Polymorphic phishing uses AI to generate continuously varying versions of phishing emails, altering the language, structure, and indicators of compromise with each iteration. This prevents pattern recognition by traditional security systems, as no two emails share identical signatures. AI models automate this variation at scale, enabling campaigns to adapt dynamically and evade detection.

Security researchers have documented campaigns where each phishing email was uniquely generated, with different phrasing, subject lines, and embedded links. This level of variation significantly reduced the effectiveness of conventional filtering techniques.

AI-enhanced BEC 

AI enhances BEC attacks by enabling highly contextual and accurate impersonation of business communication. Attackers analyze prior email threads, organizational language, and transaction patterns to generate messages that align with ongoing workflows. This allows fraudulent requests, often related to payments or sensitive data, to blend seamlessly into legitimate conversations.

Multiple BEC incidents involved AI-generated emails that mimicked executives or vendors, referencing real invoices and active deals. These messages were often part of longer, AI-assisted exchanges that built credibility over time before initiating financial requests. The FBI has warned that AI is significantly increasing the effectiveness and scale of such attacks.

How to protect your company from AI-powered phishing  

People 

Human users remain the primary target in AI-powered phishing attacks, which makes awareness a critical control layer rather than a soft measure.

Employee awareness and training

Training needs to move beyond generic phishing indicators and focus on contextual anomalies, such as unexpected urgency, deviations in communication patterns, and requests that bypass established workflows. As AI-generated emails become linguistically accurate and context-aware, users must be trained to validate intent rather than rely on surface-level cues like grammar or formatting.

Phishing simulations 

Phishing simulations play a key role in putting this awareness into action. Modern simulations should reflect current attack techniques, including highly personalized messaging and multi-step interactions, rather than simplistic, template-based emails. By tracking user behavior, such as click rates and credential submission, organizations can identify high-risk users and continuously refine training. This creates a feedback loop that aligns human response with evolving threat patterns. It's also essential to include AI-generated phishing emails in simulations so employees experience firsthand how convincing they can be.
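To make that feedback loop concrete, simulation results can be rolled up into a per-user risk score that weights riskier behavior more heavily. The sketch below is purely illustrative: the event fields, action names, and weights are assumptions, not part of any specific product.

```python
# Hypothetical sketch: score a user's simulation history so training can
# target the highest-risk individuals. Field names and weights are
# illustrative assumptions, not a real tool's schema.

def risk_score(events: list[dict]) -> int:
    """Weight credential submission more heavily than a click,
    and a click more heavily than merely opening the email."""
    weights = {"opened": 1, "clicked": 3, "submitted_credentials": 10}
    return sum(weights.get(e["action"], 0) for e in events)

user_events = [
    {"campaign": "q3-invoice-lure", "action": "opened"},
    {"campaign": "q3-invoice-lure", "action": "clicked"},
    {"campaign": "ceo-deepfake-sim", "action": "submitted_credentials"},
]
print(risk_score(user_events))  # 14
```

A score like this can then drive which users receive additional or more advanced training modules.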

Process 

Process controls act as a safeguard when social engineering succeeds.

Multi-level approvals 

Multi-level approval systems introduce friction into high-risk actions such as fund transfers, credential changes, or access requests. These controls are most effective when they're enforced consistently and cannot be bypassed through email-based instructions alone. AI-driven phishing often targets moments of urgency, so requiring secondary authorization, especially through independent channels, reduces the likelihood of a single compromised interaction leading to a successful attack.
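As a rough illustration of such a policy, the sketch below requires a second approver and an out-of-band confirmation for transfers above a threshold. The threshold, field names, and roles are made-up assumptions for the example, not a recommended configuration.

```python
# Minimal sketch of a multi-level approval policy. The threshold and the
# shape of TransferRequest are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class TransferRequest:
    amount: float
    approvers: set = field(default_factory=set)
    verified_out_of_band: bool = False  # e.g., call-back on a known number


def is_authorized(req: TransferRequest, threshold: float = 10_000) -> bool:
    """Low-value transfers need one approver; high-value transfers need two
    distinct approvers plus independent, out-of-band verification."""
    if req.amount <= threshold:
        return len(req.approvers) >= 1
    return len(req.approvers) >= 2 and req.verified_out_of_band


# An urgent email "from the CFO" alone cannot authorize a large transfer:
req = TransferRequest(amount=250_000, approvers={"cfo"})
print(is_authorized(req))  # False
```

The point of the design is that no single email-based instruction, however convincing, can satisfy the policy on its own.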

Verification workflows

Verification workflows further strengthen this layer by formalizing how sensitive requests are validated. Instead of relying on implicit trust in email communication, organizations should mandate secondary verification for critical actions, such as confirming requests via known phone numbers or internal systems. Standardizing these workflows ensures that even highly convincing, AI-generated messages are subjected to the same scrutiny, limiting the attacker's ability to exploit trust and context.

Technology

Technical controls provide the first line of defense by preventing malicious emails from reaching users or limiting their impact post-delivery.

Email authentication protocols

Email authentication mechanisms such as SPF, DKIM, and DMARC establish sender legitimacy and reduce domain spoofing. When properly configured and enforced, these protocols help receiving systems verify that incoming messages align with authorized sending infrastructure, making it harder for attackers to impersonate trusted domains at scale.
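As a small illustration of how DMARC enforcement is expressed, a domain publishes its policy as a DNS TXT record of tag/value pairs. The sketch below parses a hypothetical record string and reads out its policy; the record value and report address are made-up examples, not a real domain's configuration.

```python
# Minimal sketch: parse a DMARC TXT record into tag/value pairs.
# The example record is hypothetical.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC record ("v=DMARC1; p=reject; ...") into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # reject -> mail failing SPF/DKIM alignment is rejected
```

A policy of `p=reject` tells receiving servers to discard messages that fail SPF/DKIM alignment, which is what blunts direct domain spoofing; `p=none` merely reports failures without blocking anything.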

Behavioral threat detection 

Behavioral threat detection addresses the limitations of static filtering by analyzing patterns across email content, sender behavior, and user interaction. Instead of relying solely on known indicators of compromise, these systems identify anomalies such as unusual sending patterns, atypical request types, or deviations in communication style. This is particularly important for AI-powered phishing, where each email may be unique and free of known signatures. By continuously learning from existing communication patterns, behavioral systems can detect and respond to threats that would otherwise evade traditional defenses.

Wrapping up 

AI has fundamentally changed how phishing attacks work. What once required time, effort, and skill can now be automated, scaled, tested, and continuously optimized. As attacks become more precise and convincing, the telltale signs once used to spot phishing no longer apply. Defending against this shift requires both building a security culture and using a security solution that offers protection against such attacks.


eProtect is a cloud-based email security and archiving solution that protects your organization from AI-based email threats. The solution offers advanced threat detection mechanisms to protect on-premises and cloud email accounts from evolving email threats. eProtect is the security solution that powers Zoho Mail, a platform that millions of users trust.
