AI Generated Attacks - Defence Training

Your people are your
greatest vulnerability

AI-generated deepfakes, voice clones, and hyper-personalised phishing are bypassing every technical safeguard you have. The only defence left is a trained human mind.

$25.6M
Stolen in a single deepfake call
3,000%
Rise in deepfake fraud attempts
21 sec
To click a malicious link
68%
Of breaches involve human error

AI Has Changed the Rules of Fraud

Criminals no longer need technical sophistication. With consumer-grade AI tools, they can impersonate your CEO on a video call, clone a colleague's voice from a podcast, or craft a phishing email indistinguishable from the real thing.

Deepfake Video Calls

Real-time face and voice manipulation lets attackers impersonate executives on live video conferences, authorising fraudulent transfers.

$25.6M
Stolen from engineering firm Arup via a single deepfake video call impersonating the CFO. — CNN, 2024

AI Voice Cloning

A few seconds of publicly available audio is enough to clone someone's voice with near-perfect accuracy, enabling convincing phone-based fraud.

1,600%
Surge in deepfake-enabled voice phishing attacks in Q1 2025 vs Q4 2024. — Keepnet Labs

AI-Generated Phishing

Generative AI creates perfectly written, hyper-personalised phishing emails in seconds — with dramatically higher success rates than human-crafted attacks.

54%
Click-through rate on AI-generated phishing emails, vs 12% for conventional phishing. — Brightside AI, 2024
£14.4B
Annual cost of fraud in England & Wales
UK Home Office, 2024
Every 5 mins
A deepfake attack is attempted globally
Entrust, 2024
$40B
Predicted AI fraud losses by 2027
Deloitte
80%
Of companies have no deepfake defence protocols
Keepnet Labs

Three Stages to Organisational Resilience

A structured approach that moves from awareness to action to ongoing vigilance — building genuine psychological self-defence across your workforce.

1

The Accelerator Talk

100 – 200+ attendees

A high-impact presentation featuring live demonstrations of AI-generated impersonation. Audiences see a convincing deepfake of a trusted figure — then experience the reveal. This is the moment that creates urgency.

  • Live deepfake demonstration
  • Real-world case studies of AI fraud
  • The psychology of why people fall for it
  • Designed to be talked about afterwards
2

Intensive Training

Small groups · Half or full day

Hands-on workshops exploring how the human mind processes information, creates trust, and becomes vulnerable to manipulation under pressure.

  • How urgency and authority bypass rational thinking
  • Recognising emotional manipulation tactics
  • Verification protocols that actually work
  • Building intellectual humility as a defence
3

Simulated Testing

Ongoing · Organisation-wide

Controlled, bespoke deepfake attacks deployed across the organisation over the following months. Those who are caught out receive targeted refresher training.

  • Custom deepfakes tailored to your organisation
  • Unannounced simulated attacks over time
  • Measurable results and vulnerability reports
  • Refresher training for those who need it

Human Interaction You Can't Download

eLearning can teach facts. It cannot change behaviour under pressure. That requires a very different kind of training.

Behaviour Changes in the Room

Psychological self-defence isn't a concept you learn — it's a reflex you build. Live facilitation creates the emotional context needed for real behavioural change. eLearning creates awareness. We create resilience.

Shared Experience Builds Culture

When a team goes through a deepfake reveal together, it becomes a shared reference point — a moment they talk about. That cultural memory is what keeps people vigilant long after the training ends.

Emotional Impact Drives Retention

We remember what shocks us. A live demonstration where your own colleague — or your own voice — appears as a deepfake creates the kind of visceral understanding that no video module ever could.

Tailored to Your Organisation

A skilled facilitator reads the room, adapts to your industry's specific threat profile, and answers the questions your people are actually asking. No algorithm does that.

Practice Under Pressure

Role-play scenarios, live simulations, and group challenges let people practise the “pause and verify” habit in a safe environment — so it becomes instinctive when it matters most.

Measurable Results

Our simulated deepfake testing phase provides hard data on vulnerability before and after training — giving your board concrete evidence that the investment is working.

Why Smart People Get Fooled

The uncomfortable truth about AI-generated attacks is that intelligence is no protection. Fraudsters don't exploit ignorance — they exploit the very cognitive shortcuts that make high-performing people effective.

Understanding your own psychological architecture is the first step to defending it.

01

Authority Bias

When a request appears to come from the CEO, the brain downgrades scepticism. AI attackers exploit this weakness by creating flawless impersonations of senior leadership, deliberately triggering deference before rational thought can kick in.

02

Urgency & Scarcity

Phrases like “this must be done today” or “don't discuss this with anyone yet” are designed to force immediate compliance. Urgency narrows attention, suppresses doubt, and crowds out the instinct to verify.

03

Familiarity & Trust

Attackers study their targets for weeks before striking — referencing real projects, known colleagues, and recent events. Familiarity creates comfort. Comfort overrides verification. That's the attack vector.

04

Confirmation Bias

Once people believe something is legitimate — a credible email, a familiar face on a call — they unconsciously filter out signals that contradict it. Our training teaches people to invert that habit: to actively seek out evidence that a request might be a scam.

The goal isn't to make your people paranoid. It's to give them the intellectual humility to pause, the emotional presence to notice when something feels off, and the confidence to verify — even when under pressure from apparent authority.

Psychological Self-Defence

Traditional cybersecurity training tells people what to watch for. But AI-generated attacks don't look like attacks. They look completely genuine.

Our programme is built on a different principle: training the human mind to stay emotionally present under pressure, to question authority with intellectual humility, and to build verification reflexes that become second nature.

Because when urgency, authority, and realism combine, technical awareness isn't enough. You need psychological resilience.

People asked to spot deepfake videos identify them correctly just 24.5% of the time — worse than a coin flip. The answer isn't better eyes. It's better thinking.

The idea is to keep people emotionally present, taking responsibility with intellectual humility and being so well trained they can see through and sidestep these attacks.
Nick Smallman — Founder, Working Voices

How a $25 Million Deepfake Fraud Unfolds

In January 2024, an employee at engineering firm Arup joined what appeared to be a routine video call with senior colleagues. Every other face and voice on the call was AI-generated.

Week 1–2: Reconnaissance

The attackers gather intelligence

Publicly available LinkedIn profiles, conference videos, podcast appearances, and company announcements are scraped to build detailed profiles of key executives. This is spear-phishing at its most targeted.

Week 3: Building trust

Legitimate-looking emails arrive

The target receives credible emails that appear to come from senior leadership, establishing context for an upcoming “confidential” financial discussion. Nothing seems unusual.

The attack: Live deepfake call

A video conference with AI-generated colleagues

The employee joins a video call where the CFO and multiple colleagues appear to be present. Real-time face and voice manipulation makes the impersonation convincing, and the employee is instructed to begin transferring funds to designated accounts.

Aftermath

$25.6 million lost before detection

Multiple transfers were made across 15 transactions before the fraud was discovered. The employee had followed what appeared to be legitimate instructions from trusted superiors.

Built for Organisations Under Threat

Particularly relevant for defence, finance, professional services, and any organisation where a single fraudulent authorisation could cause catastrophic loss.

Defence & Government

Where information security is paramount and state-sponsored actors use increasingly sophisticated AI tools for social engineering.

Financial Services

$2.77 billion in Business Email Compromise losses reported to the FBI in 2024 alone. Finance teams are the primary target for AI-enabled fraud.

Enterprise & Professional Services

Any organisation where executives are public-facing, decisions move fast, and a single compromised employee can authorise significant transactions.

85%
Of organisations experienced a social engineering attack in 2024
PhishLabs
5 mins
For AI to create a phishing campaign vs 16 hours for a human
IBM X-Force
49%
Of businesses globally have experienced a deepfake incident
Keepnet Labs
£5.2B
Annual cost of fraud against UK businesses
UK Home Office

Protect Your Organisation

Book the Accelerator Talk for your leadership team and see how AI-generated attacks could target your people — before a real attacker does.

Get in Touch

robert@workingvoices.com