1. What Is Shadow AI?
Shadow AI refers to artificial intelligence tools, services, and models that are used within an organization without the knowledge, approval, or governance oversight of IT and security teams. It is the AI-era evolution of Shadow IT — and it is far more dangerous.
When an employee signs up for ChatGPT with their work email and pastes customer data into a prompt, that is Shadow AI. When a development team deploys a fine-tuned model on a personal cloud account, that is Shadow AI. When a business analyst uses an AI-powered spreadsheet tool that sends data to a third-party API, that is Shadow AI.
The core risk is not that employees are using AI — it is that they are using it outside the organization's security perimeter, compliance controls, and data governance policies. Every Shadow AI interaction is an unmonitored, unaudited data exchange with a third-party AI system.
2. The Scale of the Problem
The numbers are stark. Multiple industry surveys from late 2025 and early 2026 converge on similar findings:
- 60% of enterprises have employees using unauthorized AI tools with corporate data (Gartner, Q4 2025).
- Average of 12 unauthorized AI tools per 1,000 employees across industries (Forrester AI Security Survey, 2026).
- 43% of sensitive data exposures in 2025 involved data shared with AI services (IBM X-Force Threat Intelligence Index).
- Only 28% of organizations have any visibility into employee AI tool usage (McKinsey AI Governance Report, 2026).
The problem is growing exponentially. The number of commercially available AI tools grew from roughly 200 in 2023 to over 15,000 in 2025. Every one of these tools is a potential Shadow AI endpoint. And unlike traditional SaaS applications that require IT procurement, most AI tools can be accessed with just an email address and a credit card — or are entirely free.
3. Discovery Methods
Finding Shadow AI requires looking at multiple data sources, because no single signal provides complete visibility. ASTRA BASTION uses four complementary discovery methods:
3.1 Network Traffic Analysis
The most direct method: analyze outbound network traffic for connections to known AI service endpoints. This includes API calls to api.openai.com, api.anthropic.com, api.cohere.ai, and hundreds of other AI providers, as well as connections to AI-powered SaaS applications.
Known AI API Endpoints (maintained list of 500+):
api.openai.com → ChatGPT, GPT API
api.anthropic.com → Claude API
generativelanguage.googleapis.com → Gemini
api.cohere.ai → Cohere
api-inference.huggingface.co → HuggingFace Inference
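To make the matching concrete, here is a minimal sketch of how outbound flows could be checked against an endpoint list like the one above. The flow-record fields (src_ip, dst_host) and the short AI_ENDPOINTS table are illustrative assumptions, not ASTRA BASTION's actual schema or its full 500+ entry list:

```python
# Minimal sketch: flag outbound flows whose destination matches a known
# AI endpoint. Endpoint table and flow shape are illustrative assumptions.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI (ChatGPT, GPT API)",
    "api.anthropic.com": "Anthropic (Claude API)",
    "generativelanguage.googleapis.com": "Google (Gemini)",
    "api.cohere.ai": "Cohere",
    "api-inference.huggingface.co": "HuggingFace Inference",
}

def match_ai_endpoint(host: str) -> str | None:
    """Return a provider label if host is (a subdomain of) a known AI endpoint."""
    host = host.lower().rstrip(".")
    for endpoint, provider in AI_ENDPOINTS.items():
        if host == endpoint or host.endswith("." + endpoint):
            return provider
    return None

def scan_flows(flows):
    """Yield (src_ip, dst_host, provider) for flows that touch AI endpoints.

    Each flow is assumed to be a dict with 'src_ip' and 'dst_host' keys,
    e.g. from a forward proxy log or a TLS SNI capture.
    """
    for flow in flows:
        provider = match_ai_endpoint(flow["dst_host"])
        if provider:
            yield flow["src_ip"], flow["dst_host"], provider

# Example: one flagged flow
for hit in scan_flows([{"src_ip": "10.0.4.17", "dst_host": "api.openai.com"}]):
    print(hit)
```

Suffix matching matters here because providers expose regional and versioned hosts under the same parent domain.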
Detection signals:
- POST requests with JSON payloads to AI endpoints
- Bearer token authentication to AI APIs
- WebSocket connections to chat-based AI services
- Large request bodies (indicating prompt + context data)
- Response streaming patterns characteristic of LLM output
3.2 OAuth Application Scanning
Many AI tools integrate with enterprise systems through OAuth. By scanning the organization's OAuth grants (Google Workspace, Microsoft 365, Slack, GitHub), ASTRA BASTION identifies AI applications that have been granted access to corporate data — often with broad permissions.
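As a sketch of how that grant review might be automated, the snippet below flags grants whose application name suggests an AI tool and whose scopes are broad. The grant record shape, keyword list, and scope set are assumptions; in practice the records would come from the Google Workspace Admin SDK or Microsoft Graph:

```python
# Sketch: flag OAuth grants to likely AI applications with broad scopes.
# Record shape, keywords, and scope set are illustrative assumptions.
AI_APP_KEYWORDS = ("gpt", "openai", "claude", "copilot", "ai assistant")

BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",          # full Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read all mail
    "Mail.Read",                                       # Microsoft Graph
    "Files.ReadWrite.All",
}

def flag_ai_grants(grants):
    """Return grants whose app name suggests AI and whose scopes are broad.

    Each grant is assumed to be a dict: {"app_name", "user", "scopes"}.
    """
    findings = []
    for g in grants:
        name = g["app_name"].lower()
        if any(kw in name for kw in AI_APP_KEYWORDS):
            broad = BROAD_SCOPES.intersection(g["scopes"])
            if broad:
                findings.append({**g, "broad_scopes": sorted(broad)})
    return findings
```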
3.3 DNS Log Ingestion
DNS queries reveal intent before data is transferred. By analyzing DNS logs for lookups to AI service domains, organizations can detect Shadow AI usage even when the actual API traffic is encrypted and invisible to content inspection.
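A minimal sketch of that screening, assuming one "timestamp client_ip queried_domain" record per line (real resolver formats vary by vendor, so the parsing here is an assumption):

```python
# Sketch: surface hosts querying AI service domains from DNS logs.
# Log format and the short domain list are illustrative assumptions.
from collections import Counter

AI_DOMAINS = ("openai.com", "anthropic.com", "cohere.ai", "huggingface.co")

def ai_lookups(dns_log_lines):
    """Count AI-domain lookups per client IP."""
    counts = Counter()
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed records
        _, client_ip, domain = parts[:3]
        domain = domain.lower().rstrip(".")
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            counts[client_ip] += 1
    return counts

# Example
print(ai_lookups(["2026-01-10T09:14:02Z 10.0.4.17 chat.openai.com"]))
```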
3.4 Browser Extension Detection
AI-powered browser extensions (writing assistants, code completers, summarizers) are a rapidly growing Shadow AI vector. These extensions often have access to all page content and can exfiltrate data to third-party servers. Endpoint management tools can inventory installed extensions and flag those with AI capabilities.
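As an illustration, a discovery job might cross-reference that extension inventory against known AI extensions and content-reading permissions. The extension IDs, name hints, and record shape below are hypothetical:

```python
# Sketch: flag AI-capable browser extensions from an endpoint inventory.
# IDs, name hints, and the inventory shape are illustrative assumptions.
KNOWN_AI_EXTENSIONS = {"hypothetical-ext-id-1": "AI Writing Assistant"}
AI_NAME_HINTS = ("ai", "gpt", "copilot", "summarize", "assistant")
RISKY_PERMISSIONS = {"<all_urls>", "tabs", "clipboardRead"}  # can read page content

def flag_extensions(inventory):
    """Return extensions that look AI-powered and can read page content.

    Each record is assumed to be {"id", "name", "permissions", "device"}.
    """
    flagged = []
    for ext in inventory:
        known = ext["id"] in KNOWN_AI_EXTENSIONS
        named = any(h in ext["name"].lower() for h in AI_NAME_HINTS)
        risky = RISKY_PERMISSIONS.intersection(ext["permissions"])
        if (known or named) and risky:
            flagged.append(ext)
    return flagged
```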
4. ASTRA BASTION's Shadow AI Discovery Engine
ASTRA BASTION's Shadow AI Discovery Engine combines all four discovery methods into a unified platform that provides continuous, automated visibility into Shadow AI usage across the organization.
- Automated inventory: Discovered AI tools are automatically cataloged with metadata including provider, data access scope, user count, and risk classification.
- Risk scoring: Each Shadow AI tool is scored based on data sensitivity exposure, regulatory implications, and provider security posture (a simplified scoring sketch follows this list).
- User attribution: Discovery links to specific users and departments, enabling targeted remediation without disrupting the entire organization.
- Continuous monitoring: Discovery is not a one-time scan. The engine runs continuously, detecting new Shadow AI usage as it appears.
- Alert escalation: High-risk discoveries (e.g., AI tools accessing customer PII, financial data, or health records) trigger immediate alerts to security and compliance teams.
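To illustrate the scoring idea, here is a deliberately simplified model: a weighted blend of the three risk factors, scaled by how widely the tool is used. The weights and factor scales are illustrative, not ASTRA BASTION's production model:

```python
# Simplified risk-scoring sketch for a discovered Shadow AI tool.
# Weights, scales, and the adoption multiplier are illustrative.
from dataclasses import dataclass

@dataclass
class ShadowAITool:
    name: str
    data_sensitivity: int  # 0 (public data) .. 10 (regulated PII/PHI)
    regulatory_scope: int  # 0 (none) .. 10 (multiple regimes apply)
    provider_posture: int  # 0 (strong vendor security) .. 10 (unknown vendor)
    user_count: int

def risk_score(tool: ShadowAITool) -> float:
    """Weighted 0-100 score; wider adoption scales the base risk up."""
    base = (0.5 * tool.data_sensitivity
            + 0.3 * tool.regulatory_scope
            + 0.2 * tool.provider_posture)            # 0..10 weighted blend
    adoption = min(1.0, 0.5 + tool.user_count / 200)  # 0.5..1.0 multiplier
    return round(base * 10 * adoption, 1)

# Example: a transcription tool handling meeting audio, used by 40 people
tool = ShadowAITool("MeetingScribe", data_sensitivity=8,
                    regulatory_scope=6, provider_posture=5, user_count=40)
print(risk_score(tool))  # 47.6
```

The 0.5 floor on the adoption multiplier keeps a high-sensitivity tool risky even when only one person uses it.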
5. Case Study: How a Fortune 500 Bank Found 47 Unauthorized AI Tools
A major financial institution engaged ASTRA BASTION to assess their Shadow AI exposure. With 35,000 employees across 12 countries, they believed their AI governance was solid — they had approved 3 AI tools for enterprise use and communicated clear usage policies.
The 30-day discovery period revealed a dramatically different reality:
Shadow AI Tools Discovered: 47
├── Generative AI (ChatGPT, Claude, Gemini, etc.): 8 tools
├── AI-powered code assistants: 6 tools
├── AI writing/editing tools: 9 tools
├── AI data analysis platforms: 7 tools
├── AI-powered browser extensions: 12 tools
└── AI meeting transcription services: 5 tools
Data Exposure:
├── Customer PII shared with AI services: 3,200+ instances
├── Internal financial data in AI prompts: 1,800+ instances
├── Source code shared with AI assistants: 4,500+ instances
└── Confidential documents summarized by AI: 900+ instances
Users involved: 4,100 (11.7% of workforce)
Departments: All 12 major departments had Shadow AI usage
Regulatory violations identified: 23 (FINRA, SEC, GDPR)
The discovery revealed that Shadow AI was not a marginal problem confined to tech-savvy employees — it was pervasive across every department. The remediation plan prioritized high-risk exposures (customer PII, financial data) and resulted in a structured AI governance program that reduced unauthorized usage by 89% within 60 days.
6. From Discovery to Governance
Finding Shadow AI is only step one. The harder — and more important — work is bringing it under governance without killing productivity. Employees use Shadow AI because it makes them more effective. A governance program that simply blocks all unauthorized AI tools will drive usage underground and destroy trust.
- Categorize, do not just block: Classify discovered tools into "approve," "monitor," and "block" categories. Many Shadow AI tools can be legitimately approved with proper data handling controls (see the disposition sketch after this list).
- Provide alternatives: If employees are using ChatGPT because they need AI assistance, provide an approved AI tool (routed through ASTRA BASTION's gateway) rather than leaving them with no option.
- Educate, do not punish: Most Shadow AI usage is not malicious. Employees are trying to be more productive. Education about data risks is more effective than punitive measures.
- Measure continuously: Shadow AI governance is not a project — it is a program. Continuous monitoring through ASTRA BASTION ensures that new Shadow AI usage is detected and addressed in real time.
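A disposition table along the lines of the first item above might look like this; the three categories mirror the approve/monitor/block split, while the example tools and conditions are purely illustrative:

```python
# Sketch: disposition rules for discovered Shadow AI tools.
# Example entries are illustrative; real ones come from the inventory.
DISPOSITIONS = {
    "approve": [  # sanctioned, with data-handling controls in place
        {"tool": "ChatGPT Enterprise", "condition": "no customer PII"},
    ],
    "monitor": [  # allowed while under evaluation; usage is logged
        {"tool": "AI meeting transcriber", "condition": "internal meetings only"},
    ],
    "block": [    # denied at the proxy/DNS layer
        {"tool": "Unvetted browser extension", "condition": "always"},
    ],
}
```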
7. Building an AI Usage Policy That Works
An effective AI usage policy balances security requirements with employee productivity. Based on our experience with dozens of enterprise deployments, here are the key elements of a policy that employees actually follow:
- Clear data classification: Define which data categories can and cannot be used with AI tools. "No customer PII in external AI tools" is clear and actionable. "Use AI responsibly" is not. (An executable sketch of such rules follows this list.)
- Approved tool list: Maintain a current list of approved AI tools with documented use cases. Update it monthly as new tools are evaluated and approved.
- Request process: Make it easy for employees to request approval for new AI tools. A 2-week approval process drives Shadow AI; a 48-hour process reduces it.
- Incident reporting: Provide a blame-free channel for employees to report accidental data exposure to AI tools. Early detection of data leaks is more valuable than punishment.
- Regular training: Quarterly AI security awareness training that includes real examples of data exposure risks. Make it practical, not theoretical.
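The data-classification rule from the first item above is simple enough to be executable. A minimal sketch, with hypothetical data categories and tool names:

```python
# Sketch: an executable version of "which data may go to which AI tool".
# Categories, tool names, and the policy table are illustrative assumptions.
POLICY = {
    # data category -> set of tools approved to receive it
    "public":       {"ChatGPT Enterprise", "internal-llm"},
    "internal":     {"internal-llm"},
    "customer_pii": set(),   # never leaves the perimeter
    "source_code":  {"internal-llm"},
}

def is_allowed(data_category: str, tool: str) -> bool:
    """True only if the policy explicitly approves the tool for that category."""
    return tool in POLICY.get(data_category, set())

assert is_allowed("public", "ChatGPT Enterprise")
assert not is_allowed("customer_pii", "ChatGPT Enterprise")
```

Defaulting to "deny" for unknown categories and tools is the design choice that makes a policy like this auditable.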
Shadow AI is not going away. As AI tools become more capable and more numerous, the pressure on employees to use them — with or without approval — will only increase. The organizations that thrive will be those that build governance frameworks that embrace AI adoption while controlling its risks. ASTRA BASTION provides the visibility and controls to make that possible.