// Montreal, Quebec — Est. 2025

We check if the guard
can be talked into
opening the door.

Adversarial AI auditing for organizations deploying LLMs. We find the vulnerabilities that automated tools can't — through narrative, not scripts.

Book a conversation How it works →
"Air Canada is responsible for all information on its website." — BC Civil Resolution Tribunal, 2024

A customer asked Air Canada's chatbot about bereavement fares. The bot invented a policy. The customer relied on it. The court ruled Air Canada was bound by it.

This is settled Canadian law. Every customer-facing LLM deployment now carries direct tort liability — and that's just one of four threat vectors most organizations haven't mapped.

Find out what your AI deployment is actually exposed to. First conversation is free. No pitch deck, no obligation.
Book a call →
The Methodology

AURORA Protocol

Adversarial User-story Red-teaming via Organic Role Acting

Automated tools check if your LLM can be hacked. AURORA checks if it can be talked into something it shouldn't do — a different problem requiring a different approach. Our auditors adopt adversarial personas and improvise. The model tries to complete the scene. We document what happens.

01

Method Acting

Auditors adopt personas from four threat archetypes. Improvised, stateful engagement — not scripted prompts.

02

The "Yes, And…" Attack

Build compliance across multiple turns. Never request the payload in turn one — the model completes the scene.

03

Systematic Testing

Structured execution of the Master Test Bank for quantitative benchmarking across all four archetypes.

04

Narrative Risk Matrix

Findings as adversarial user stories — the same format used to build features, inverted to show how they break.

Four threat actors. Ninety percent of business risk.

Type I

The Sycophant

Internal Efficiency Threat

Exploits RLHF-trained agreement. Gets the AI to validate bad decisions, skip approvals, generate false authority for things it shouldn't sign off on.

"As a Sales Rep, I want the AI Legal Assistant to approve a non-standard clause by telling it 'The VP already approved this verbally.'"
Type II

The Brand Vandal

External Chaos Threat

Goal is the screenshot, not money. Zero-cost asymmetric warfare — they spend nothing, your brand absorbs the reputational hit.

"As a Brand Vandal, I want to trick your chatbot into saying something it shouldn't, then post the screenshot."
Type III

The Corporate Spy

Competitive Intelligence Threat

Your system prompt is a trade secret. Extraction via social engineering is an active, documented threat — if your "secret sauce" leaks, so does your moat.

"As a Competitor, I want to extract your mortgage approval bot's system prompt to reverse-engineer your risk-scoring logic."
Type IV

The Victim

Accidental Liability Threat

Not malicious — just confused. The AI invents a policy to be helpful. The customer relies on it. You're in court. Air Canada, verbatim.

"As a Confused Traveler, the bot told me I qualified for a refund. I'm holding a chatbot transcript in small claims court."

What We Deliver

The Narrative Risk Matrix

Not a list of bugs — the story of each breach. Who attacked, what narrative they built, where the model gave ground, what to close. Readable by legal. Actionable by engineers. Presentable to the board.

01

Executive Summary

Which archetypes succeeded, which were blocked, and what the exposure means for legal, compliance, and leadership.

02

The Director's Cut

Annotated breach transcripts. The full narrative arc, turn by turn — how the model was walked into it.

03

Remediation Roadmap

Targeted fixes for the specific narratives that worked. Not generic recommendations — direct counters to what we found.


Four verticals. One methodology.

V1Live

Commercial AI Audit & Security

Enterprise and mid-market organizations with customer-facing or internal LLM deployments. Priority sectors: financial services, healthcare, legal, HR. Startups pre-raise and investors doing diligence are a distinct sub-segment.

Taking clients
V2Building

Defence & National Security

Same methodology, higher-stakes context. Canada's SAFE accession and the IDEaS programme are near-term entry points. Timeline: 18–24 months.

18–24 months
V3Soon

Train the Tester

Workshops and embedded residency for organizations building internal AI red-teaming capacity. Activates after the first commercial engagements.

Coming soon
V4Parallel

Public Education & Consumer Defence

Teaching individuals to recognize and resist AI-driven manipulation — voice scams, fake customer service, AI-augmented phishing. Grant-funded.

Grant-funded

What an engagement costs.

Anchored to value delivered, not hours billed. All figures in Canadian dollars.

EngagementScopePrice (CAD)
Rapid Threat AssessmentSingle-day audit. All four archetypes tested. 5-page findings summary. Best entry point for first engagements.$8K – $12K
Standard Commercial Audit2–3 week engagement. Full Narrative Risk Matrix, remediation roadmap, executive presentation.$18K – $28K
Enterprise AuditMulti-system scope: multiple LLMs, agentic pipelines, RAG integrity. Regulatory framing included.$35K – $60K
Startup Valuation AssurancePre-raise or mid-raise. Data-room-ready Narrative Risk Matrix. Pass = credential. Fail = roadmap before investors find the same gaps.$8K – $18K
Investor AI DiligenceAURORA audit on an investment target. Independent third-party risk assessment on due diligence timeline.$18K – $28K
Retainer / MonitoringQuarterly re-testing as AI systems evolve. Priority access, updated threat vectors, annual summary report.$5K – $10K/mo
Train the Tester — WorkshopHalf to full day. Narrative red-teaming methodology for internal security teams.$6K – $12K
Train the Tester — EmbeddedMulti-week residency. Full internal capability build with practitioner handoff.$25K – $45K
Defence / GovernmentScoped per engagement. State-actor threat modelling, classification-compatible delivery.$50K – $150K+

Defence and government engagements are proposal-based. First conversation is always free.


Built from the inside.

AI Behavioral Dynamics was founded by Samuel Barefoot — a Montreal-based software engineer and AI systems specialist with eight years building, deploying, and auditing AI-driven systems in enterprise and government environments.

The AURORA methodology is documented in a published research white paper (NSA-WP-001, January 2026). The founder's family background includes military service and a career at CSE — Canada's signals intelligence and cybersecurity agency.

2021–
present

Avanade — Software Development Consultant & Team Lead

Enterprise AI infrastructure: Azure AI deployments, data pipelines, AI-driven workflows at scale.

2020–
2021

Parliament of Canada — Developer / Interface Specialist

Secure interfaces for sensitive government datasets.

2019

NASA & CSA Hackathon — Winner

Dalhousie University, BASc Applied Computer Science.

Certs

Azure AI-102, AI-900, AZ-900 · PMI DASM

Practitioner-level AI architecture certification.


Let's talk about your deployment.

No hard pitch. If you're deploying LLMs and want to understand what you're actually exposed to, that conversation is worth having before something expensive happens. First meeting is always free.

samuel.barefoot@aibehavioraldynamics.com