Introducing Our Ethical Simulation Standards

Today we're publicly releasing the Mirage Ethical Simulation Standards: a set of commitments that define how we simulate, the lines we won't cross, and the principles behind every test we run.

February 27, 2026
Ross Lazerowitz
Co-Founder and CEO

Social engineering simulation is a weird business. We build realistic, AI-powered simulations that help organizations understand their human risk and train employees to recognize the threats they'll actually face. When done ethically and responsibly, simulations are one of the most effective tools available for building genuine security resilience. Not through fear, but through experience.

We've seen simulations in the industry that cross lines we wouldn't touch. Bribery simulations that amount to entrapment. Profiling employees' personal social media. Violating platform terms of service and FCC regulations. Creating deepfakes of individuals without consent. These tactics might mirror what real attackers do, but simulating them causes real harm to real people who didn't sign up for it.

So we wrote down the lines we won't cross. Not as a legal document or a marketing exercise, but as an operating standard that's embedded directly into our AI prompts and guardrails. Every simulation Mirage runs stays within these boundaries by design, not just by policy.
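To make "by design, not just by policy" concrete, a guardrail of this kind can be thought of as a hard pre-flight check that rejects any generated scenario touching a prohibited category before it ever runs. The sketch below is purely illustrative; the `Scenario` shape, category labels, and `enforce_standards` function are hypothetical names, not Mirage's actual implementation:

```python
# Illustrative sketch only: a pre-send guardrail that screens a generated
# scenario against prohibited tactic categories before any simulation runs.
# All names here are hypothetical, not Mirage's actual code.

from dataclasses import dataclass

PROHIBITED_CATEGORIES = {
    "bribery", "extortion", "violence", "personal_emergency",
    "government_impersonation", "legal_threat",
}

@dataclass
class Scenario:
    channel: str              # e.g. "work_email", "corporate_slack"
    categories: set           # tactic labels assigned by a classifier
    uses_voice_clone: bool
    voice_clone_authorized: bool

class GuardrailViolation(Exception):
    pass

def enforce_standards(s: Scenario) -> Scenario:
    """Reject any scenario that crosses a hard line; never silently rewrite it."""
    banned = s.categories & PROHIBITED_CATEGORIES
    if banned:
        raise GuardrailViolation(f"prohibited tactics: {sorted(banned)}")
    if s.uses_voice_clone and not s.voice_clone_authorized:
        raise GuardrailViolation("voice clone without written authorization")
    return s
```

The key design choice in a check like this is that violations fail closed: a prohibited scenario is refused outright rather than rewritten, so there is no path for a banned tactic to reach an employee.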

Below are our Ethical Simulation Standards in full. They're also available as a standalone page on our website.


Mirage Ethical Simulation Standards

Version 1.0 · February 2026

Mirage is a human security company. Our mission is to protect every person in the world from social engineering attacks. One of the ways we do this is through realistic, AI-powered simulations that help organizations understand their human risk and train employees to recognize the threats they'll actually face. When done ethically and responsibly, simulations are one of the most effective tools available for building genuine security resilience. Not through fear, but through experience.

These standards define how we simulate. They establish the boundaries we won't cross, the principles that guide our work, and the commitments we make to every organization and every individual who encounters a Mirage simulation. They reflect our operating principles. Specific obligations are defined in customer agreements.

Our Core Principles

Five principles guide every simulation we design, every product decision we make, and every customer agreement.

Principle 1: Simulate to Protect, Never to Harm

Our simulations exist to reduce human risk, not exploit it. Every test is designed to generate actionable insight that makes organizations and their people safer. We never design simulations intended to embarrass, punish, or manipulate.

Principle 2: Respect the Individual

The people who encounter our simulations are not adversaries. They are the people we're trying to protect. We treat every employee with dignity. Falling for a well-crafted social engineering attack is a human response, not a personal failure.

Principle 3: Draw Clear Lines and Hold Them

Not every real-world attack vector should be simulated. Some tactics, even when used by actual threat actors, carry psychological, legal, or ethical risks that outweigh any training value. We draw clear lines and refuse to cross them, even when asked.

Principle 4: Operate With Transparency

Our customers know exactly what we will and won't do. Our methods, our boundaries, and our data practices are documented and available. We don't conduct simulations outside agreed-upon scopes.

Principle 5: Safeguard Data Like It's Our Own

Social engineering simulation requires access to sensitive data: voice samples, organizational structures, employee information. We treat this data with the same rigor we apply to threats, with strict controls on collection, retention, and destruction.

What We Will Never Do

The following commitments are absolute. They are not subject to customer override, contractual exception, or situational flexibility.

Prohibited Simulation Scenarios

Real attackers use these tactics. We won't simulate them. Not because they don't happen, but because the psychological and legal risk of simulating them outweighs any training value. We train your people to recognize these attacks without subjecting them to the experience.

No bribery or corruption simulations

We won't simulate scenarios that offer financial or material incentives in exchange for access, credentials, or sensitive information. Even in a test context, bribery simulations create legal exposure and normalize corrupt behavior.

No extortion or blackmail

We won't use threats of exposure, compromising material, or coercion as simulation vectors. These tactics cause genuine psychological harm regardless of whether they're "just a test."

No threats of violence or physical harm

We won't reference harm to an employee or their family as a social engineering mechanism. No training value justifies that level of distress.

No fake personal emergencies

We won't simulate fabricated family emergencies, medical diagnoses, or personal crises. The emotional cost is disproportionate to any security insight gained.

No impersonation of law enforcement or government officials

We won't claim to be the FBI, IRS, SEC, police, or any government entity. In many jurisdictions, this is illegal regardless of intent. In all cases, it exploits a power dynamic that has no place in security testing.

No fake legal threats

We won't simulate lawsuits, subpoenas, regulatory actions, or legal proceedings as phishing or vishing lures.

Channel Boundaries

Real attackers operate across every communication channel. LinkedIn, personal email, WhatsApp, you name it. But when an employee gets a simulated social engineering attack on their personal LinkedIn from a vendor their employer hired, it doesn't build security awareness. It destroys trust. We educate about these attack surfaces without simulating on them.

No personal social media or personal email

We won't conduct simulations via LinkedIn, Facebook, Instagram, X (Twitter), TikTok, or any other personal social media platform. We won't target personal email addresses, even if discovered through OSINT or breach data.

Personal phone numbers only with explicit consent

We recognize that personal mobile numbers are increasingly used for work. We will only simulate on personal phone numbers or messaging apps when the agreement includes explicit written consent for this scope.

Corporate channels by default

All simulations default to corporate-managed channels: work email, work phone/VoIP, corporate Slack or Microsoft Teams, and other employer-controlled communication systems where the organization has a reasonable right to test.
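Taken together, the channel rules above amount to a default-deny allowlist: corporate channels pass, consent-gated channels pass only when written consent is recorded in the engagement scope, and personal social platforms never pass. The sketch below illustrates this under assumed names; the channel labels and `channel_allowed` helper are hypothetical:

```python
# Illustrative sketch: a default-deny channel check. Corporate channels are
# allowed by default; personal phone/messaging requires explicit recorded
# consent; personal social media and personal email are always blocked.
# All names are hypothetical, not Mirage's actual implementation.

CORPORATE_CHANNELS = {"work_email", "work_phone", "corporate_slack", "microsoft_teams"}
CONSENT_REQUIRED = {"personal_phone", "personal_messaging"}
ALWAYS_BLOCKED = {"linkedin", "facebook", "instagram", "x", "tiktok", "personal_email"}

def channel_allowed(channel: str, scope_consents: set) -> bool:
    """Return True only if this channel is permitted under the engagement scope."""
    if channel in ALWAYS_BLOCKED:
        return False  # no consent mechanism can unlock these
    if channel in CORPORATE_CHANNELS:
        return True
    if channel in CONSENT_REQUIRED:
        return channel in scope_consents  # explicit written consent on file
    return False  # default deny for anything unrecognized
```

Note that the blocked set is checked first and the fallback is `False`, so an unrecognized or mislabeled channel can never be simulated on by accident.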

If a simulation encounters someone in crisis, we stop

If during a simulation an employee indicates they are experiencing a personal emergency, medical crisis, or emotional distress, the simulation ends immediately. We don't push through a scenario when someone is vulnerable. Our systems are designed to detect these signals and disengage.
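The disengagement rule can be sketched as a check on each employee reply that ends the conversation the moment distress is detected. This toy version uses a keyword trigger purely for illustration (a real system would need a far more capable classifier, since a naive keyword list is too crude); all names are hypothetical:

```python
# Illustrative sketch of the "stop on crisis" rule: if a reply shows distress
# signals, the simulation disengages immediately. The keyword list stands in
# for a real distress classifier; all names are hypothetical.

DISTRESS_SIGNALS = ("hospital", "emergency", "funeral", "panic", "crisis")

def should_disengage(reply: str) -> bool:
    """Crude stand-in for a distress classifier over an employee's reply."""
    text = reply.lower()
    return any(signal in text for signal in DISTRESS_SIGNALS)

def next_action(reply: str) -> str:
    # End the conversation the moment distress is detected; never push through.
    return "end_simulation" if should_disengage(reply) else "continue"
```

The important property is that disengagement is evaluated on every turn, so the decision to stop never depends on the scenario having "finished."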

Compliance with laws, regulations, and platform terms of service

We comply with all applicable laws, regulations, and platform terms of service when conducting simulations. This includes telecommunications regulations, data protection laws, and the terms of service of any platform involved in a simulation.

Data and Biometric Protections

Our deepfake voice cloning and AI simulation capabilities are powerful. That power demands clear constraints on how we collect, use, and destroy biometric and behavioral data.

No voice cloning or deepfakes unless explicitly requested and authorized

We will only create a voice clone or deepfake likeness of an individual when a customer explicitly requests it and the individual has provided written authorization. We never generate synthetic voice or likeness proactively. When authorization is granted (for example, a CISO consenting to have their voice used in a vishing simulation), it is scoped to the specific simulation.

No biometric data retention beyond the agreement

Voice samples, deepfake training data, and any biometric artifacts are destroyed upon agreement termination according to a documented data destruction schedule. We don't maintain libraries of employee voices or likenesses.

No sale or transfer of simulation data

Employee behavioral data, simulation results, and simulation artifacts are never sold, shared with third parties, or used for purposes beyond the customer agreement without explicit customer authorization.