Awareness Training Won't Protect Employees from Their Own AI Tools

When an AI tool influences an employee's decision, audit logs record the human's action and miss the AI's role. Addressing that blind spot requires escalation procedures and engineering controls that go beyond what awareness programs can deliver.

AI tools that employees use every day shape their decisions, but that influence is hard to recognize. Addressing this through AI awareness training risks repeating the mistakes we made with security awareness. We told colleagues to “be suspicious” of links and attachments they needed for work. We extolled the virtues of vigilance, setting unrealistic expectations rather than explaining a specific process, such as reporting a security anomaly.

Now, as enterprises embed AI into daily workflows, employees build trust in systems that speak insightfully and project confidence. Many organizations offer responsible AI training that covers data privacy, acceptable use, and intellectual property. Employees are told they’re responsible for verifying AI output. But accountability rules don’t help people recognize when a trusted tool is shaping their judgment.

A large-scale survey found that 66% of respondents rely on AI output without checking its accuracy. Employees using AI tools their organization chose and deployed have even less reason to question the results. The natural response will be to add “be careful with AI” to the awareness curriculum. But “be careful” is the same vigilance instruction that didn’t work before.

Trusted AI tools are harder to question than trusted colleagues.

An AI tool that helps a person do better work every day earns their trust. This amplifies the negative effects of a compromised agent, a poisoned model, or a misaligned recommendation. Even more than phishing emails that appear legitimate, guidance from a trusted tool arrives with credibility already established.

When something goes wrong, audit logs miss the AI’s role.

Traditional social engineering leaves forensic traces if we know where to look. A phishing email sits in an inbox, a pretexting call shows up in phone logs, and an unauthorized access attempt appears in authentication records.

In most enterprises, AI-driven influence doesn’t appear in audit logs. The AI recommends an action, and the employee carries it out. Audit logs of the downstream application capture the employee’s decision as a legitimate human action. The AI interaction is rarely linked to the action it influenced, if it’s recorded at all. OWASP’s Top 10 for Agentic Applications recognizes this issue, describing the agent as an untraceable influence that manipulates humans into performing the final, audited action.

Awareness frameworks don’t address AI-driven influence as of this writing. CIS Control 14, for example, trains employees to recognize “phishing, business email compromise, pretexting, and tailgating,” all human-to-human persuasion tactics.

Teach specific procedures, not general suspicion.

Telling employees “don’t trust your AI tools” fails for the same reason “be suspicious of links” isn’t practical. People who interact with AI tools throughout the day can’t maintain a constant state of skepticism. Even employees who know AI makes mistakes can still be influenced by it.

The response to this risk has four parts, and only one of them involves training.

Teach when to escalate, not what to fear. If an AI tool recommends something outside normal parameters or suggests circumventing a process, employees should contact security. Escalating to a person matters more than debating the tool. This mirrors what works for other awareness topics. Tell people when and how to ask for help, not just to “be cautious.”

Require confirmation for high-impact actions. Financial transactions, permission changes, and data exports recommended by AI need human confirmation steps that the agent can’t bypass. Organizations already require dual approval for wire transfers, and AI-recommended actions with comparable consequences deserve the same control.
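As a minimal sketch of the dual-control idea (all names and categories here are hypothetical, not from any specific product), an approval gate might refuse to execute an AI-recommended high-impact action until someone other than the requester signs off:

```python
from dataclasses import dataclass, field

# Hypothetical categories that always require dual approval
# when the action was recommended by an AI agent.
HIGH_IMPACT = {"financial_transaction", "permission_change", "data_export"}

@dataclass
class ProposedAction:
    category: str
    recommended_by_ai: bool
    requested_by: str
    approvals: set = field(default_factory=set)

def can_execute(action: ProposedAction) -> bool:
    """Allow execution of an AI-recommended high-impact action only
    after at least one person other than the requester approves it."""
    if action.recommended_by_ai and action.category in HIGH_IMPACT:
        other_approvers = action.approvals - {action.requested_by}
        return len(other_approvers) >= 1
    return True
```

The key design choice is that the gate lives outside the agent's reach: the agent can propose, but only a second human can add the approval that unblocks execution.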

Close the audit trail gap. Investigative teams need to see what the agent suggested, not just what the employee did. Without that visibility, they’ll attribute AI-driven decisions to employees. This is an engineering and product feature problem.

Test AI interactions in exercises. Add AI-driven scenarios to red team and tabletop exercises. Measure whether employees reported anomalous AI behavior, not whether they “fell for it.” Phishing exercises should reward reporting over punishing clicks, and AI exercises should do the same.

The AI audit trail and confirmation controls require engineering investment and partnership with the teams that own AI agent infrastructure and products. This is a cross-functional challenge security leaders have navigated before.

Awareness training works when it tells people what to do, not what to fear. For AI tools, that means teaching escalation and building the engineering controls that training alone can’t replace.

About the Author

Lenny Zeltser is a cybersecurity executive with deep technical roots, product management experience, and a business mindset. He has built security products and programs from early stage to enterprise scale. He is also a Faculty Fellow at SANS Institute and the creator of REMnux, a popular Linux toolkit for malware analysis. Lenny shares his perspectives on security leadership and technology at zeltser.com.
