Shadow AI Audit Framework: 5-Step Enterprise Detection Methodology
Comprehensive Shadow AI audit framework for enterprises. Learn our 5-step methodology to discover, assess, and govern unauthorized AI tools across your organization.
QAIZEN
AI Governance Team
Shadow AI Audit
A systematic assessment to identify, classify, and evaluate unauthorized AI tools within an organization. Includes technical discovery, risk assessment, and the development of remediation and governance plans to bring AI usage under organizational control.
- 67: average AI tools per enterprise (Source: Prompt Security 2025)
- 68%: of employees using AI through personal accounts (Source: Telus Digital 2025)
- 57%: have shared sensitive data with AI tools (Source: Telus Digital 2025)
- The average enterprise has 67 AI tools - most unmanaged
- Our 5-step framework: Discovery, Classification, Risk Assessment, Remediation, Governance
- Start with network analysis and endpoint discovery for quick wins
- Employee surveys reveal tools technical scans miss
- Remediation should offer approved alternatives, not just blocking
The Shadow AI Reality
The average enterprise has 67 AI tools in use across the organization. Most IT and security teams are aware of fewer than 10.
This gap between perceived and actual AI usage is Shadow AI - the unauthorized use of AI tools that creates security, compliance, and operational risks.
Why auditing matters:
- 68% of employees use AI through personal accounts
- 57% have shared sensitive data with AI tools
- Most Shadow AI goes undetected by standard security tools
The 5-Step Audit Framework
Our methodology provides a structured approach to discovering and governing Shadow AI:
1. Discovery → 2. Classification → 3. Risk Assessment → 4. Remediation → 5. Governance
Each step builds on the previous, creating a comprehensive view of AI usage and a path to control.
Step 1: Discovery
Objective
Identify all AI tools in use across the organization, regardless of how they were acquired or deployed.
Discovery Methods
| Method | What It Finds | Effort | Accuracy |
|---|---|---|---|
| Network analysis | API calls to AI services | Medium | High |
| Endpoint monitoring | AI applications, browser extensions | Medium | High |
| SaaS discovery | Cloud AI subscriptions | Low | Medium |
| Expense analysis | Paid AI tools | Low | Medium |
| Employee surveys | Self-reported usage | Medium | Variable |
| IT ticket analysis | AI-related requests | Low | Low |
Network Analysis Checklist
| Target | Indicators | Tools |
|---|---|---|
| OpenAI/ChatGPT | api.openai.com, chat.openai.com | Firewall logs, proxy |
| Anthropic/Claude | api.anthropic.com, claude.ai | Firewall logs, proxy |
| Google Gemini | generativelanguage.googleapis.com | Firewall logs, proxy |
| Microsoft Copilot | copilot.microsoft.com | M365 admin |
| Midjourney | midjourney.com, discord.com | Firewall logs |
| Other AI services | /api/, /v1/chat, /completions | Pattern matching |
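The domain indicators above can be matched against proxy or firewall log exports as a first pass. The following Python sketch is illustrative only: the domain-to-service map mirrors the checklist, the generic path pattern catches unknown hosts, and the line-based parsing is a placeholder to adapt to your proxy's actual log format.

```python
import re

# Indicator list mirroring the checklist above; extend per environment.
AI_DOMAINS = {
    "api.openai.com": "OpenAI/ChatGPT",
    "chat.openai.com": "OpenAI/ChatGPT",
    "api.anthropic.com": "Anthropic/Claude",
    "claude.ai": "Anthropic/Claude",
    "generativelanguage.googleapis.com": "Google Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "midjourney.com": "Midjourney",
}

# Generic URL path patterns that often indicate LLM APIs on unknown hosts.
AI_PATH_PATTERN = re.compile(r"/api/|/v1/chat|/completions")

def scan_proxy_lines(lines):
    """Return {service: hit_count} for AI indicators found in log lines.

    Assumes each line contains a URL or host field; adapt the parsing
    to your proxy's real log format.
    """
    hits = {}
    for line in lines:
        matched = None
        for domain, service in AI_DOMAINS.items():
            if domain in line:
                matched = service
                break
        if matched is None and AI_PATH_PATTERN.search(line):
            matched = "Unknown AI service (pattern match)"
        if matched:
            hits[matched] = hits.get(matched, 0) + 1
    return hits

# Hypothetical log lines for illustration.
sample = [
    "10.0.0.5 GET https://api.openai.com/v1/chat/completions 200",
    "10.0.0.7 GET https://claude.ai/ 200",
    "10.0.0.9 GET https://internal.example.com/api/v1/chat 200",
    "10.0.0.4 GET https://news.example.com/ 200",
]
print(scan_proxy_lines(sample))
```

The pattern fallback is what surfaces the "Other AI services" row: hosts you have never heard of but whose request paths look like LLM APIs.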
Endpoint Discovery Checklist
| Target | Detection Method |
|---|---|
| AI browser extensions | Extension inventory |
| Desktop AI apps | Application inventory |
| AI dev tools | Code repository scan |
| Mobile AI apps | MDM inventory |
| AI browser tabs | Browser history (with consent) |
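For the code-repository scan row, a lightweight pattern check over source files can surface AI SDK usage. A minimal sketch, assuming Python-style imports and hard-coded API hosts as hypothetical indicators; real repositories will need patterns for every language and SDK in use.

```python
import re

# Hypothetical import/endpoint patterns hinting at AI usage in code;
# tune for the languages and SDKs present in your repositories.
AI_CODE_PATTERNS = [
    re.compile(r"\bimport openai\b"),
    re.compile(r"\bimport anthropic\b"),
    re.compile(r"api\.openai\.com|api\.anthropic\.com"),
]

def scan_source(text):
    """Return True if a source file shows signs of AI SDK/API usage."""
    return any(p.search(text) for p in AI_CODE_PATTERNS)

print(scan_source("import openai\nclient = openai.OpenAI()"))  # True
print(scan_source("import os\nprint(os.getcwd())"))            # False
```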
Employee Survey Template
Section 1: AI Tool Usage
1. Do you use AI tools for work? (Y/N)
2. Which AI tools do you use? (Select all: ChatGPT, Claude, Copilot, Gemini, Other: ___)
3. How often? (Daily/Weekly/Monthly/Rarely)
4. What tasks do you use AI for? (Free text)
Section 2: Data Handling
5. What types of data do you input into AI tools? (Public info / Internal docs / Customer data / Code / Other)
6. Are you aware of any AI usage policies? (Y/N)
7. Would you use an IT-approved AI tool if available? (Y/N)
Discovery Deliverables
| Deliverable | Contents |
|---|---|
| AI Inventory | All discovered tools, users, usage patterns |
| Traffic analysis | Volume, frequency, data flows |
| Survey results | Self-reported usage summary |
| Gap analysis | Known vs. discovered tools |
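The gap analysis deliverable reduces to a set difference between the IT-sanctioned tool list and everything discovery surfaced. A minimal sketch with hypothetical tool names:

```python
# Hypothetical inputs: what IT believes is sanctioned vs. what discovery found.
known_tools = {"Microsoft Copilot", "Enterprise ChatGPT"}
discovered_tools = {
    "Microsoft Copilot", "Enterprise ChatGPT", "Claude (free tier)",
    "Midjourney", "Notion AI", "GitHub Copilot",
}

shadow = sorted(discovered_tools - known_tools)   # discovered but not sanctioned
unused = sorted(known_tools - discovered_tools)   # sanctioned but no usage observed

print(f"Shadow AI gap: {shadow}")
print(f"Sanctioned but unobserved: {unused}")
```

The second difference is a useful by-product: approved tools nobody uses often explain why the shadow tools exist.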
Step 2: Classification
Objective
Categorize discovered AI tools by type, business function, and organizational status.
Classification Taxonomy
| Category | Description | Examples |
|---|---|---|
| Enterprise-approved | IT sanctioned, contracts in place | Licensed Copilot, enterprise ChatGPT |
| Shadow - Known | Detected but not approved | Free ChatGPT, Claude |
| Shadow - Personal | Personal accounts for work | Personal OpenAI account |
| Shadow - Embedded | AI in other SaaS tools | AI features in Notion, Slack |
| Shadow - Development | AI in code/products | GitHub Copilot, AI APIs |
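The taxonomy can be encoded directly in the inventory data model so the later steps (heat map, risk scoring) work from consistent labels. A sketch with hypothetical example entries:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    ENTERPRISE_APPROVED = "Enterprise-approved"
    SHADOW_KNOWN = "Shadow - Known"
    SHADOW_PERSONAL = "Shadow - Personal"
    SHADOW_EMBEDDED = "Shadow - Embedded"
    SHADOW_DEVELOPMENT = "Shadow - Development"

@dataclass
class AITool:
    name: str
    category: Category
    function: str      # business function, e.g. "Engineering"
    sensitivity: str   # data sensitivity level, e.g. "Confidential"

# Hypothetical classified inventory entries.
inventory = [
    AITool("Licensed Copilot", Category.ENTERPRISE_APPROVED, "Engineering", "Confidential"),
    AITool("Free ChatGPT", Category.SHADOW_KNOWN, "Marketing", "Internal"),
    AITool("Notion AI", Category.SHADOW_EMBEDDED, "Sales", "Internal"),
]

# Count tools per category - raw material for the heat map deliverable.
counts = {}
for tool in inventory:
    counts[tool.category.value] = counts.get(tool.category.value, 0) + 1
print(counts)
```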
Business Function Classification
| Function | Typical AI Use Cases | Risk Level |
|---|---|---|
| Engineering | Code generation, debugging | High |
| Marketing | Content creation, analysis | Medium |
| Sales | Email drafting, research | Medium |
| HR | Resume screening, policies | High |
| Finance | Analysis, reporting | High |
| Legal | Contract review, research | Critical |
| Customer Service | Response drafting | High |
| Executive | Briefings, presentations | Medium |
Data Sensitivity Classification
| Level | Description | Example Data |
|---|---|---|
| Critical | Regulated, high-value | PII, PHI, financial data |
| Confidential | Business sensitive | Strategy, IP, contracts |
| Internal | Not public but not sensitive | Internal communications |
| Public | No sensitivity | Marketing content |
Classification Deliverables
| Deliverable | Contents |
|---|---|
| Classified inventory | Tools by category, function, sensitivity |
| Heat map | Usage by department and risk |
| Data flow map | What data goes where |
Step 3: Risk Assessment
Objective
Evaluate risks for each discovered AI tool and prioritize remediation.
Risk Assessment Criteria
| Criterion | Weight | Assessment Questions |
|---|---|---|
| Data sensitivity | 30% | What data types flow to this tool? |
| Regulatory exposure | 25% | GDPR, HIPAA, SOX implications? |
| Vendor security | 20% | Data handling, training policies? |
| Usage volume | 15% | How many users, how often? |
| Business criticality | 10% | Impact if removed? |
Risk Scoring Matrix
| Score | Data Sensitivity | Regulatory | Vendor | Volume | Criticality |
|---|---|---|---|---|---|
| 5 | PII/PHI/financial | High (HIPAA, SOX) | No agreements | >100 users | Critical |
| 4 | Confidential | Medium (GDPR) | Consumer ToS | 50-100 | High |
| 3 | Internal | Low | Enterprise ToS | 20-50 | Medium |
| 2 | Low sensitivity | Minimal | BAA/DPA available | 5-20 | Low |
| 1 | Public | None | Signed agreements | <5 | Minimal |
Composite Risk Score
Risk Score = (Data × 0.30) + (Regulatory × 0.25) + (Vendor × 0.20) + (Volume × 0.15) + (Criticality × 0.10)
| Composite Score | Risk Level | Action Timeline |
|---|---|---|
| 4.0-5.0 | Critical | Immediate (24-48 hours) |
| 3.0-3.9 | High | Short-term (1-2 weeks) |
| 2.0-2.9 | Medium | Medium-term (1 month) |
| 1.0-1.9 | Low | Long-term (quarterly) |
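The weighted formula and band cutoffs above translate directly into a scoring helper. A sketch using the weights and thresholds from the tables, applied to a hypothetical tool (PII exposure, GDPR scope, consumer ToS, ~30 users, medium criticality):

```python
# Weights from the risk assessment criteria table.
WEIGHTS = {
    "data": 0.30,
    "regulatory": 0.25,
    "vendor": 0.20,
    "volume": 0.15,
    "criticality": 0.10,
}

def composite_risk(scores):
    """Weighted composite of 1-5 scores per dimension."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())

def risk_level(composite):
    """Map a composite score to its action band."""
    if composite >= 4.0:
        return "Critical"
    if composite >= 3.0:
        return "High"
    if composite >= 2.0:
        return "Medium"
    return "Low"

# Hypothetical tool: data 5, regulatory 4, vendor 4, volume 3, criticality 3.
score = composite_risk({"data": 5, "regulatory": 4, "vendor": 4,
                        "volume": 3, "criticality": 3})
print(round(score, 2), risk_level(score))  # 4.05 Critical
```

Note how heavily the data-sensitivity dimension dominates: a tool handling PII lands in the Critical band even with moderate usage.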
Risk Assessment Deliverables
| Deliverable | Contents |
|---|---|
| Risk register | All tools with risk scores |
| Priority ranking | Remediation order |
| Risk report | Executive summary |
Step 4: Remediation
Objective
Address identified Shadow AI risks through a combination of controls, alternatives, and policy.
Remediation Options
| Option | When to Use | Implementation |
|---|---|---|
| Block | Critical risk, no legitimate need | Network/endpoint controls |
| Replace | Legitimate need, unsafe tool | Provide approved alternative |
| Govern | Acceptable with controls | Policies, monitoring |
| Accept | Low risk, business value | Document acceptance |
Remediation Decision Framework
1. Is there a legitimate business need?
   - No → Block
   - Yes → continue
2. Is an approved alternative available?
   - Yes → Replace (migrate users)
   - No → continue
3. Can we make it safe?
   - Yes → Govern (policies, controls)
   - No → continue
4. Can we build/buy an alternative?
   - Yes → Replace (timeline)
   - No → Risk accept with controls
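The decision framework can be expressed as a short function so remediation choices are applied consistently across the whole inventory rather than argued tool by tool. A sketch following the branches above:

```python
def remediation_action(legitimate_need, approved_alternative,
                       can_be_governed, can_build_or_buy):
    """Walk the remediation decision framework to a single option."""
    if not legitimate_need:
        return "Block"
    if approved_alternative:
        return "Replace (migrate users)"
    if can_be_governed:
        return "Govern (policies, controls)"
    if can_build_or_buy:
        return "Replace (timeline)"
    return "Accept (risk accept with controls)"

# Hypothetical tool: needed, no alternative exists, but controllable.
print(remediation_action(True, False, True, False))  # Govern (policies, controls)
```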
Remediation Priorities
| Priority | Action | Target |
|---|---|---|
| P1 | Block critical risk tools | 24-48 hours |
| P2 | Migrate to approved alternatives | 2 weeks |
| P3 | Implement governance controls | 1 month |
| P4 | Document risk acceptance | Quarterly |
Approved Alternative Requirements
When selecting approved AI tools:
| Requirement | Description | Verification |
|---|---|---|
| Enterprise agreements | DPA, BAA, security addendum | Legal review |
| No training on data | Data not used to improve models | Vendor confirmation |
| Data residency | Meets geographic requirements | Architecture review |
| Audit logging | Usage can be monitored | Technical testing |
| SSO/SCIM | Integrates with identity | Technical testing |
| Admin controls | Centralized management | Feature review |
Remediation Deliverables
| Deliverable | Contents |
|---|---|
| Remediation plan | Actions, owners, timelines |
| Approved tools list | Sanctioned AI tools with use cases |
| Policy updates | AI acceptable use policy |
| User communications | Migration guidance |
Step 5: Governance
Objective
Establish ongoing governance to prevent Shadow AI recurrence and enable secure AI adoption.
Governance Framework
| Component | Purpose | Owner |
|---|---|---|
| AI Policy | Clear rules for AI usage | Legal/Compliance |
| Approved catalog | List of sanctioned tools | IT |
| Request process | How to get new tools approved | IT/Security |
| Monitoring | Continuous Shadow AI detection | Security |
| Training | User awareness | HR/Security |
| Review cadence | Regular reassessment | Governance committee |
AI Acceptable Use Policy Elements
| Element | Coverage |
|---|---|
| Scope | Who and what is covered |
| Approved tools | List or link to catalog |
| Prohibited uses | What's never allowed |
| Data handling | What can/cannot go into AI |
| Accountability | Who's responsible |
| Consequences | What happens for violations |
Continuous Monitoring
| Monitoring Activity | Frequency | Owner |
|---|---|---|
| Network traffic analysis | Continuous | Security |
| SaaS discovery scans | Weekly | IT |
| Employee surveys | Quarterly | HR |
| Policy compliance audits | Quarterly | Compliance |
| Risk reassessment | Semi-annual | Security |
Governance Metrics
| Metric | Target | Purpose |
|---|---|---|
| Shadow AI tool count | Decreasing | Governance effectiveness |
| Approved tool adoption | Increasing | Alternative success |
| Policy awareness | >90% | Training effectiveness |
| Request-to-approval time | <5 days | Process efficiency |
| Incident count | Decreasing | Risk reduction |
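These metrics can be checked programmatically each review cycle to feed the dashboard deliverable. A sketch comparing a hypothetical current snapshot against the prior period and the stated targets:

```python
def evaluate_metrics(current, previous):
    """Flag which governance metrics meet the targets in the table above."""
    return {
        "shadow_tool_count": current["shadow_tool_count"] < previous["shadow_tool_count"],
        "approved_adoption": current["approved_adoption"] > previous["approved_adoption"],
        "policy_awareness": current["policy_awareness"] > 0.90,   # target >90%
        "approval_days": current["approval_days"] < 5,            # target <5 days
        "incident_count": current["incident_count"] < previous["incident_count"],
    }

# Hypothetical quarter-over-quarter snapshots.
prev = {"shadow_tool_count": 67, "approved_adoption": 0.20,
        "policy_awareness": 0.55, "approval_days": 12, "incident_count": 9}
curr = {"shadow_tool_count": 41, "approved_adoption": 0.48,
        "policy_awareness": 0.84, "approval_days": 4, "incident_count": 5}
print(evaluate_metrics(curr, prev))
```

In this example every metric is on target except policy awareness, which tells the committee exactly where the next quarter's training effort belongs.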
Governance Deliverables
| Deliverable | Contents |
|---|---|
| Governance charter | Structure, roles, responsibilities |
| Policy documents | AI acceptable use, data handling |
| Monitoring plan | Tools, frequency, escalation |
| Training materials | User awareness content |
| Dashboard | Ongoing metrics visibility |
Implementation Timeline
Week 1: Discovery
- Configure network monitoring
- Deploy endpoint discovery
- Launch employee survey
- Analyze SaaS inventory
Week 2: Classification
- Categorize discovered tools
- Map to business functions
- Assess data sensitivity
- Create classified inventory
Week 3: Risk Assessment
- Score each tool
- Prioritize risks
- Document findings
- Present to stakeholders
Week 4: Remediation Planning
- Define remediation actions
- Identify approved alternatives
- Create migration plan
- Draft policy updates
Weeks 5-8: Remediation Execution
- Block critical risks
- Migrate users to approved tools
- Implement controls
- Deploy policies
Ongoing: Governance
- Continuous monitoring
- Regular reassessment
- Policy enforcement
- Training delivery
The Bottom Line
Shadow AI auditing isn't a one-time project - it's an ongoing program, and the 5-step framework provides its structure.
Key takeaways:
- Discovery is foundational - You can't govern what you don't know about
- Classification drives prioritization - Not all Shadow AI is equal risk
- Risk scoring enables decisions - Objective criteria for remediation
- Alternatives beat blocking - Users need tools that work
- Governance prevents recurrence - Build sustainable controls
Organizations that master this framework will transform Shadow AI from a hidden risk to a governed capability.
Sources
- [1] Prompt Security. "State of AI Security Report 2025." Prompt Security, March 20, 2025.
- [2] Telus Digital. "GenAI in the Workplace Report." Telus Digital, June 15, 2025.
- [3] CyberArk. "Shadow AI: The Hidden Cybersecurity Threat." CyberArk, September 10, 2025.
- [4] Gartner. "Managing Shadow AI in the Enterprise." Gartner, August 22, 2025.