
AI Security Architecture


Securing AI Systems in Regulated Environments

When AI meets compliance, architecture decisions become security decisions


The Problem

Regulated organizations — banks, defense contractors, payment processors, critical infrastructure operators — are adopting AI and LLM-based systems under intense pressure to move fast. But their compliance obligations have not changed. FIPS, Common Criteria, PCI, and government security frameworks still apply, and AI introduces attack surfaces that traditional security models were not designed to address.

The gap between “AI capability” and “AI security” is where risk accumulates.


What I Do

I design security architectures for AI-integrated systems that satisfy both innovation goals and regulatory requirements. My approach draws on 30 years of building hardened systems in environments where compromise is not acceptable.

Threat Modeling for AI Systems

  • Structured threat analysis of LLM/ML pipelines (STRIDE, MITRE ATLAS, FAIR)
  • Adversarial input and prompt injection risk assessment
  • Data poisoning and model extraction attack surface mapping
  • Supply chain risk analysis for model dependencies and training data
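The structured threat analysis above can be illustrated with a small sketch of how STRIDE findings might be recorded per pipeline component. This is a hypothetical data model for illustration only, not a tool or methodology deliverable; the component names and fields are assumptions.

```python
# Illustrative sketch: recording STRIDE-style findings for components of an
# LLM/ML pipeline. Categories are the six standard STRIDE threat classes.
from dataclasses import dataclass

STRIDE = {
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
}

@dataclass
class Threat:
    component: str    # e.g. "prompt assembly", "model endpoint" (illustrative)
    category: str     # must be one of the STRIDE classes
    description: str
    mitigation: str

def validate(t: Threat) -> Threat:
    """Reject findings that don't use a recognized STRIDE category."""
    if t.category not in STRIDE:
        raise ValueError(f"unknown STRIDE category: {t.category}")
    return t
```

In practice each finding would also carry a likelihood/impact rating (e.g. FAIR-style) to drive prioritization; that is omitted here for brevity.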

Secure AI Integration Architecture

  • Isolation boundaries between AI components and trusted systems
  • Cryptographic protection of model assets, inference data, and API channels
  • Privilege separation and least-privilege enforcement for AI agents
  • Secure deployment patterns for on-premises and hybrid environments
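The isolation and least-privilege points above can be sketched as a deny-by-default gate that mediates an AI agent's tool calls. The role names, tool names, and policy table are illustrative assumptions, not a specific product API.

```python
# Minimal sketch: a policy gate enforcing least privilege on AI agent tool use.
# An agent's call succeeds only if its role is explicitly allowlisted for that
# tool; anything unlisted is denied by default.

ALLOWED_TOOLS = {
    "analyst_agent": {"read_logs", "query_threat_intel"},  # illustrative roles
    "triage_agent": {"read_logs"},
}

def authorize(agent_role: str, tool: str) -> bool:
    """Deny by default: unknown roles and unlisted tools are refused."""
    return tool in ALLOWED_TOOLS.get(agent_role, set())

def invoke_tool(agent_role: str, tool: str, handler, *args):
    """Mediate every tool invocation through the policy check."""
    if not authorize(agent_role, tool):
        raise PermissionError(f"{agent_role} may not call {tool}")
    return handler(*args)
```

The design choice worth noting is the default-deny posture: an agent gains a capability only through an explicit policy entry, which keeps the trusted system's attack surface enumerable and auditable.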

Compliance-Aligned AI Deployment

  • Mapping AI system risks to existing compliance frameworks (NIST AI RMF, FIPS, PCI)
  • Audit trail and explainability architecture for regulated decision-making
  • Data governance design to satisfy privacy and retention requirements
  • Documentation to support certification and regulatory review
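One common pattern behind the audit-trail point above is a tamper-evident log, where each decision record is chained to the previous one by a hash. The sketch below assumes JSON-serializable decision records; field names are illustrative.

```python
# Illustrative sketch: hash-chained audit records for regulated decisions.
# Each record commits to the previous record's hash, so any after-the-fact
# modification breaks verification from that point forward.
import hashlib
import json

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(log: list, decision: dict) -> list:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"decision": decision, "prev": prev_hash}
    log.append({**body, "hash": _digest(body)})
    return log

def verify(log: list) -> bool:
    """Recompute the chain; False if any record was altered or reordered."""
    prev = "0" * 64
    for rec in log:
        body = {"decision": rec["decision"], "prev": rec["prev"]}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True
```

A production audit trail would add signing, timestamps, and durable storage; the chaining shown here is only the integrity core of that architecture.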

AI-Enhanced Security Operations

  • Automated threat analysis using MITRE ATT&CK and AI-driven classification
  • AI-assisted vulnerability triage and risk prioritization
  • Intelligent monitoring and anomaly detection architecture
  • Productivity and quality gains through AI-augmented security workflows
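The AI-driven classification workflow above typically starts by narrowing an alert to candidate MITRE ATT&CK techniques before any model-based pass. Below is a toy keyword pre-filter to illustrate the shape of that step; the keyword table is a deliberately small illustration, not a complete mapping, and no LLM call is shown.

```python
# Illustrative sketch: keyword pre-filter mapping an alert description to
# candidate MITRE ATT&CK techniques, ahead of an AI classification pass.
# T1566, T1059, and T1110 are real ATT&CK technique IDs; the keywords are toy.

CANDIDATE_TECHNIQUES = {
    "T1566": ("Phishing", {"phishing", "malicious attachment", "lure"}),
    "T1059": ("Command and Scripting Interpreter",
              {"powershell", "bash", "script"}),
    "T1110": ("Brute Force", {"failed login", "password spray", "brute force"}),
}

def candidate_ids(alert_text: str) -> list:
    """Return technique IDs whose keywords appear in the alert text."""
    text = alert_text.lower()
    return [tid for tid, (_name, keywords) in CANDIDATE_TECHNIQUES.items()
            if any(kw in text for kw in keywords)]
```

Pre-filtering like this keeps the expensive AI step focused on a short candidate list, which improves both cost and classification precision.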

Relevant Experience

Bank of Canada — AI-Automated Threat Analysis

Researched and prototyped automation of mobile threat analysis using the MITRE ATT&CK framework, OpenAI API, and GPT-4. Evaluated how AI could accelerate threat classification and risk assessment for the CBDC initiative’s mobile endpoint security program.

Ciena — AI-Augmented Security Engineering

Applied AI tools (ChatGPT, Claude Code) to achieve measurable productivity and quality gains across requirements analysis, feature design, implementation, testing, and certification documentation for FIPS 140-3 and Common Criteria readiness.

Irdeto — Application Protection at Scale

Designed security architectures for software protection and DRM systems where adversarial reverse engineering was the primary threat model — the same class of attack surface now facing AI model deployments.


Standards & Frameworks

Area                   Standards
AI Security            NIST AI RMF, MITRE ATLAS, OWASP ML Top 10
Threat Modeling        STRIDE, FAIR, MITRE ATT&CK
Cryptography           FIPS 140-3, Common Criteria, NIST PQC
Payment & Compliance   PCI-MPoC, PCI-DSS, EMV

Engagement Models

  • Architecture review — assess AI integration security for a new or existing system
  • Threat model — structured analysis of AI/ML attack surfaces with prioritized mitigations
  • Compliance mapping — align AI deployment with regulatory and certification requirements
  • Advisory retainer — ongoing technical guidance during design and implementation

Contact me to discuss your AI security architecture needs.