January 11, 2026 · AI Governance · 11 min read

Shadow AI Audit Framework: 5-Step Enterprise Detection Methodology

Comprehensive Shadow AI audit framework for enterprises. Learn our 5-step methodology to discover, assess, and govern unauthorized AI tools across your organization.

QAIZEN AI Governance Team

What is a Shadow AI Audit?

A systematic assessment to identify, classify, and evaluate unauthorized AI tools within an organization. It includes technical discovery, risk assessment, and the development of remediation and governance plans to bring AI usage under organizational control.

  • 67: average AI tools per enterprise (Source: Prompt Security 2025)
  • 68%: using AI through personal accounts (Source: Telus Digital 2025)
  • 57%: have shared sensitive data with AI (Source: Telus Digital 2025)

Key Takeaways
  • The average enterprise has 67 AI tools - most unmanaged
  • Our 5-step framework: Discovery, Classification, Risk Assessment, Remediation, Governance
  • Start with network analysis and endpoint discovery for quick wins
  • Employee surveys reveal tools technical scans miss
  • Remediation should offer approved alternatives, not just blocking

The Shadow AI Reality

The average enterprise has 67 AI tools in use across the organization. Most IT and security teams are aware of fewer than 10.

This gap between perceived and actual AI usage is Shadow AI - the unauthorized use of AI tools that creates security, compliance, and operational risks.

Why auditing matters:

  • 68% of employees use AI through personal accounts
  • 57% have shared sensitive data with AI tools
  • Most Shadow AI goes undetected by standard security tools

The 5-Step Audit Framework

Our methodology provides a structured approach to discovering and governing Shadow AI:

  1. Discovery
  2. Classification
  3. Risk Assessment
  4. Remediation
  5. Governance

Each step builds on the previous, creating a comprehensive view of AI usage and a path to control.


Step 1: Discovery

Objective

Identify all AI tools in use across the organization, regardless of how they were acquired or deployed.

Discovery Methods

| Method | What It Finds | Effort | Accuracy |
|---|---|---|---|
| Network analysis | API calls to AI services | Medium | High |
| Endpoint monitoring | AI applications, browser extensions | Medium | High |
| SaaS discovery | Cloud AI subscriptions | Low | Medium |
| Expense analysis | Paid AI tools | Low | Medium |
| Employee surveys | Self-reported usage | Medium | Variable |
| IT ticket analysis | AI-related requests | Low | Low |

Network Analysis Checklist

| Target | Indicators | Tools |
|---|---|---|
| OpenAI/ChatGPT | api.openai.com, chat.openai.com | Firewall logs, proxy |
| Anthropic/Claude | api.anthropic.com, claude.ai | Firewall logs, proxy |
| Google Gemini | generativelanguage.googleapis.com | Firewall logs, proxy |
| Microsoft Copilot | copilot.microsoft.com | M365 admin |
| Midjourney | midjourney.com, discord.com | Firewall logs |
| Other AI services | /api/, /v1/chat, /completions | Pattern matching |
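
As an illustration, the domain indicators above can be turned into a simple log scanner. This is a minimal sketch, assuming proxy log lines contain the requested URL; the log format and pattern set are illustrative, not tied to any specific proxy product (discord.com is omitted here to avoid flagging unrelated Discord traffic):

```python
import re

# Indicator patterns from the checklist above. The generic pattern is
# listed last so named services are matched first.
AI_PATTERNS = {
    "OpenAI/ChatGPT": re.compile(r"api\.openai\.com|chat\.openai\.com"),
    "Anthropic/Claude": re.compile(r"api\.anthropic\.com|claude\.ai"),
    "Google Gemini": re.compile(r"generativelanguage\.googleapis\.com"),
    "Microsoft Copilot": re.compile(r"copilot\.microsoft\.com"),
    "Midjourney": re.compile(r"midjourney\.com"),
    "Other AI services": re.compile(r"/v1/chat|/completions"),
}

def scan_proxy_log(lines):
    """Count log lines per AI service; the first matching pattern wins."""
    hits = {}
    for line in lines:
        for service, pattern in AI_PATTERNS.items():
            if pattern.search(line):
                hits[service] = hits.get(service, 0) + 1
                break
    return hits
```

Fed a firewall or proxy log export, the hit counts per service become the first draft of the discovery inventory.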

Endpoint Discovery Checklist

| Target | Detection Method |
|---|---|
| AI browser extensions | Extension inventory |
| Desktop AI apps | Application inventory |
| AI dev tools | Code repository scan |
| Mobile AI apps | MDM inventory |
| AI browser tabs | Browser history (with consent) |
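
The extension-inventory check can be approximated with a keyword match against an endpoint-management export. A hedged sketch: the keyword list and the plain-name inventory format are assumptions, and a production check should also match known extension IDs rather than names alone:

```python
# Illustrative keyword list; extend with tools relevant to your environment.
AI_KEYWORDS = ("chatgpt", "copilot", "claude", "gemini", "gpt", "ai assistant")

def flag_ai_extensions(inventory):
    """Return extension names from an inventory export that suggest AI functionality."""
    return [name for name in inventory
            if any(keyword in name.lower() for keyword in AI_KEYWORDS)]
```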

Employee Survey Template

Section 1: AI Tool Usage

1. Do you use AI tools for work? (Y/N)
2. Which AI tools do you use? (Select all)
ChatGPT / Claude / Copilot / Gemini / Other: ___
3. How often? (Daily/Weekly/Monthly/Rarely)
4. What tasks do you use AI for? (Free text)

Section 2: Data Handling

5. What types of data do you input into AI tools?
Public info / Internal docs / Customer data / Code / Other
6. Are you aware of any AI usage policies? (Y/N)
7. Would you use an IT-approved AI tool if available? (Y/N)

Discovery Deliverables

| Deliverable | Contents |
|---|---|
| AI inventory | All discovered tools, users, usage patterns |
| Traffic analysis | Volume, frequency, data flows |
| Survey results | Self-reported usage summary |
| Gap analysis | Known vs. discovered tools |
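
The gap-analysis deliverable is essentially a set comparison between the IT-sanctioned tool list and the discovery results. A minimal sketch, assuming tool names have been normalized to matching strings:

```python
def gap_analysis(known, discovered):
    """Compare the sanctioned tool list against discovery results."""
    known, discovered = set(known), set(discovered)
    return {
        "shadow": sorted(discovered - known),     # discovered but not sanctioned
        "unused": sorted(known - discovered),     # sanctioned but not observed
        "confirmed": sorted(known & discovered),  # sanctioned and in use
    }
```

The "shadow" bucket feeds directly into Step 2; the "unused" bucket is a useful side finding for license reviews.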

Step 2: Classification

Objective

Categorize discovered AI tools by type, business function, and organizational status.

Classification Taxonomy

| Category | Description | Examples |
|---|---|---|
| Enterprise-approved | IT sanctioned, contracts in place | Licensed Copilot, enterprise ChatGPT |
| Shadow - Known | Detected but not approved | Free ChatGPT, Claude |
| Shadow - Personal | Personal accounts for work | Personal OpenAI account |
| Shadow - Embedded | AI in other SaaS tools | AI features in Notion, Slack |
| Shadow - Development | AI in code/products | GitHub Copilot, AI APIs |
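
The taxonomy can be encoded as a simple precedence check: approved status first, then development and embedded use, then personal accounts, with "Shadow - Known" as the default for anything else detected. This is an illustrative encoding of the table, not a prescribed algorithm:

```python
def classify_tool(approved, personal_account, embedded, dev_use):
    """Assign a discovered tool to a taxonomy category (booleans from discovery data)."""
    if approved:
        return "Enterprise-approved"
    if dev_use:
        return "Shadow - Development"
    if embedded:
        return "Shadow - Embedded"
    if personal_account:
        return "Shadow - Personal"
    return "Shadow - Known"
```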

Business Function Classification

| Function | Typical AI Use Cases | Risk Level |
|---|---|---|
| Engineering | Code generation, debugging | High |
| Marketing | Content creation, analysis | Medium |
| Sales | Email drafting, research | Medium |
| HR | Resume screening, policies | High |
| Finance | Analysis, reporting | High |
| Legal | Contract review, research | Critical |
| Customer Service | Response drafting | High |
| Executive | Briefings, presentations | Medium |

Data Sensitivity Classification

| Level | Description | Example Data |
|---|---|---|
| Critical | Regulated, high-value | PII, PHI, financial data |
| Confidential | Business sensitive | Strategy, IP, contracts |
| Internal | Not public but not sensitive | Internal communications |
| Public | No sensitivity | Marketing content |

Classification Deliverables

| Deliverable | Contents |
|---|---|
| Classified inventory | Tools by category, function, sensitivity |
| Heat map | Usage by department and risk |
| Data flow map | What data goes where |

Step 3: Risk Assessment

Objective

Evaluate risks for each discovered AI tool and prioritize remediation.

Risk Assessment Criteria

| Criterion | Weight | Assessment Questions |
|---|---|---|
| Data sensitivity | 30% | What data types flow to this tool? |
| Regulatory exposure | 25% | GDPR, HIPAA, SOX implications? |
| Vendor security | 20% | Data handling, training policies? |
| Usage volume | 15% | How many users, how often? |
| Business criticality | 10% | Impact if removed? |

Risk Scoring Matrix

| Score | Data Sensitivity | Regulatory | Vendor | Volume | Criticality |
|---|---|---|---|---|---|
| 5 | PII/PHI/financial | High (HIPAA, SOX) | No agreements | >100 users | Critical |
| 4 | Confidential | Medium (GDPR) | Consumer ToS | 50-100 | High |
| 3 | Internal | Low | Enterprise ToS | 20-50 | Medium |
| 2 | Low sensitivity | Minimal | BAA/DPA available | 5-20 | Low |
| 1 | Public | None | Signed agreements | <5 | Minimal |

Composite Risk Score

Risk Score = (Data × 0.30) + (Regulatory × 0.25) + (Vendor × 0.20) + (Volume × 0.15) + (Criticality × 0.10)

| Composite Score | Risk Level | Action Timeline |
|---|---|---|
| 4.0-5.0 | Critical | Immediate (24-48 hours) |
| 3.0-3.9 | High | Short-term (1-2 weeks) |
| 2.0-2.9 | Medium | Medium-term (1 month) |
| 1.0-1.9 | Low | Long-term (quarterly) |
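
The weighted formula and level thresholds translate directly into code. A minimal sketch; the dictionary keys are illustrative names for the five criteria, each scored 1-5 per the matrix above:

```python
# Weights from the assessment criteria table.
WEIGHTS = {
    "data": 0.30,
    "regulatory": 0.25,
    "vendor": 0.20,
    "volume": 0.15,
    "criticality": 0.10,
}

# Lower bounds of the composite-score bands, highest first.
LEVELS = [(4.0, "Critical"), (3.0, "High"), (2.0, "Medium"), (1.0, "Low")]

def risk_score(scores):
    """Composite risk score and level from per-criterion scores (1-5)."""
    composite = sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())
    level = next(name for threshold, name in LEVELS if composite >= threshold)
    return round(composite, 2), level
```

For example, a tool handling PII on a consumer ToS with moderate usage scores (5, 4, 5, 3, 2), giving a composite of 4.15 and a Critical rating.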

Risk Assessment Deliverables

| Deliverable | Contents |
|---|---|
| Risk register | All tools with risk scores |
| Priority ranking | Remediation order |
| Risk report | Executive summary |

Step 4: Remediation

Objective

Address identified Shadow AI risks through a combination of controls, alternatives, and policy.

Remediation Options

| Option | When to Use | Implementation |
|---|---|---|
| Block | Critical risk, no legitimate need | Network/endpoint controls |
| Replace | Legitimate need, unsafe tool | Provide approved alternative |
| Govern | Acceptable with controls | Policies, monitoring |
| Accept | Low risk, business value | Document acceptance |

Remediation Decision Framework

Is there a legitimate business need?
  No → Block
  Yes → Is an approved alternative available?
    Yes → Replace (migrate users)
    No → Can we make it safe?
      Yes → Govern (policies, controls)
      No → Can we build/buy an alternative?
        Yes → Replace (timeline)
        No → Risk accept with controls
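
The decision tree reads naturally as a chain of conditionals. A sketch, with each branch returning the option named in the tree:

```python
def remediation_decision(legit_need, alternative_exists, can_make_safe, can_build_alt):
    """Walk the remediation decision tree; each argument is a yes/no answer."""
    if not legit_need:
        return "Block"
    if alternative_exists:
        return "Replace (migrate users)"
    if can_make_safe:
        return "Govern (policies, controls)"
    if can_build_alt:
        return "Replace (timeline)"
    return "Risk accept with controls"
```

Encoding the tree this way also makes the audit trail reproducible: the four recorded answers determine the outcome.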

Remediation Priorities

| Priority | Action | Target |
|---|---|---|
| P1 | Block critical risk tools | 24-48 hours |
| P2 | Migrate to approved alternatives | 2 weeks |
| P3 | Implement governance controls | 1 month |
| P4 | Document risk acceptance | Quarterly |

Approved Alternative Requirements

When selecting approved AI tools:

| Requirement | Description | Verification |
|---|---|---|
| Enterprise agreements | DPA, BAA, security addendum | Legal review |
| No training on data | Data not used to improve models | Vendor confirmation |
| Data residency | Meets geographic requirements | Architecture review |
| Audit logging | Usage can be monitored | Technical testing |
| SSO/SCIM | Integrates with identity | Technical testing |
| Admin controls | Centralized management | Feature review |

Remediation Deliverables

| Deliverable | Contents |
|---|---|
| Remediation plan | Actions, owners, timelines |
| Approved tools list | Sanctioned AI tools with use cases |
| Policy updates | AI acceptable use policy |
| User communications | Migration guidance |

Step 5: Governance

Objective

Establish ongoing governance to prevent Shadow AI recurrence and enable secure AI adoption.

Governance Framework

| Component | Purpose | Owner |
|---|---|---|
| AI policy | Clear rules for AI usage | Legal/Compliance |
| Approved catalog | List of sanctioned tools | IT |
| Request process | How to get new tools approved | IT/Security |
| Monitoring | Continuous Shadow AI detection | Security |
| Training | User awareness | HR/Security |
| Review cadence | Regular reassessment | Governance committee |

AI Acceptable Use Policy Elements

| Element | Coverage |
|---|---|
| Scope | Who and what is covered |
| Approved tools | List or link to catalog |
| Prohibited uses | What's never allowed |
| Data handling | What can/cannot go into AI |
| Accountability | Who's responsible |
| Consequences | What happens for violations |

Continuous Monitoring

| Monitoring Activity | Frequency | Owner |
|---|---|---|
| Network traffic analysis | Continuous | Security |
| SaaS discovery scans | Weekly | IT |
| Employee surveys | Quarterly | HR |
| Policy compliance audits | Quarterly | Compliance |
| Risk reassessment | Semi-annual | Security |

Governance Metrics

| Metric | Target | Purpose |
|---|---|---|
| Shadow AI tool count | Decreasing | Governance effectiveness |
| Approved tool adoption | Increasing | Alternative success |
| Policy awareness | >90% | Training effectiveness |
| Request-to-approval time | <5 days | Process efficiency |
| Incident count | Decreasing | Risk reduction |
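
A lightweight way to operationalize these metrics is to encode each target as a predicate over the first and latest measurements. The metric names and this encoding are hypothetical; the thresholds follow the table (awareness above 90%, approval time under 5 days, counts trending down):

```python
# Each predicate takes (first, latest) measurements and returns pass/fail.
TARGETS = {
    "shadow_tool_count": lambda first, latest: latest < first,  # decreasing
    "approved_adoption": lambda first, latest: latest > first,  # increasing
    "policy_awareness": lambda first, latest: latest > 0.90,    # >90%
    "approval_time_days": lambda first, latest: latest < 5,     # <5 days
    "incident_count": lambda first, latest: latest < first,     # decreasing
}

def dashboard(metrics):
    """metrics: {name: ordered list of measurements}; returns pass/fail per metric."""
    return {name: TARGETS[name](values[0], values[-1])
            for name, values in metrics.items()}
```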

Governance Deliverables

| Deliverable | Contents |
|---|---|
| Governance charter | Structure, roles, responsibilities |
| Policy documents | AI acceptable use, data handling |
| Monitoring plan | Tools, frequency, escalation |
| Training materials | User awareness content |
| Dashboard | Ongoing metrics visibility |

Implementation Timeline

Week 1: Discovery

  • Configure network monitoring
  • Deploy endpoint discovery
  • Launch employee survey
  • Analyze SaaS inventory

Week 2: Classification

  • Categorize discovered tools
  • Map to business functions
  • Assess data sensitivity
  • Create classified inventory

Week 3: Risk Assessment

  • Score each tool
  • Prioritize risks
  • Document findings
  • Present to stakeholders

Week 4: Remediation Planning

  • Define remediation actions
  • Identify approved alternatives
  • Create migration plan
  • Draft policy updates

Weeks 5-8: Remediation Execution

  • Block critical risks
  • Migrate users to approved tools
  • Implement controls
  • Deploy policies

Ongoing: Governance

  • Continuous monitoring
  • Regular reassessment
  • Policy enforcement
  • Training delivery

The Bottom Line

Shadow AI auditing isn't a one-time project - it's an ongoing program. The 5-step framework provides structure:

Key takeaways:

  1. Discovery is foundational - You can't govern what you don't know about
  2. Classification drives prioritization - Not all Shadow AI is equal risk
  3. Risk scoring enables decisions - Objective criteria for remediation
  4. Alternatives beat blocking - Users need tools that work
  5. Governance prevents recurrence - Build sustainable controls

Organizations that master this framework will transform Shadow AI from a hidden risk to a governed capability.


Sources

  1. Prompt Security. "State of AI Security Report 2025." Prompt Security, March 20, 2025.
  2. Telus Digital. "GenAI in the Workplace Report." Telus Digital, June 15, 2025.
  3. CyberArk. "Shadow AI: The Hidden Cybersecurity Threat." CyberArk, September 10, 2025.
  4. Gartner. "Managing Shadow AI in the Enterprise." Gartner, August 22, 2025.
