Beth McDaniel

Only 1 in 5 Employees Know Your AI Policy. That's Everyone's Problem.

Only 1 in 5 employees using AI at work can point to a company policy that tells them how to do it safely. That means 4 out of 5 are making their own decisions about what to share, which tools to use, and where your data goes. And most of them aren't doing it maliciously. They just have no idea there's anything to worry about.

That gap, between what your workforce is doing every day and what your governance program actually covers, is where your biggest AI compliance risk lives right now.

Most organizations have a general acceptable use policy: a document that covers appropriate use of company systems, data handling expectations, and approved software. It was written before generative AI existed and probably hasn't been updated since ChatGPT became a household name.

Some companies have taken steps toward a dedicated AI policy, whether written or informal. Many haven't. And a significant number are still trying to figure out where they stand, unsure whether to address AI usage directly, add a clause to existing policy, or wait and see how the landscape settles before making any formal decisions.

The problem with waiting is that your employees aren't waiting. They're using AI tools right now, with or without guidance, and the compliance and governance implications of that are accumulating whether your policy reflects it or not.

The Gap Between Policy and Reality

A 2025 survey of over 12,000 white-collar employees found that 60% had used AI tools at work, but only 18.5% were aware of any official company policy on AI use. Read that again: fewer than 1 in 5 employees using AI at work could point to a governance policy that told them how to do it safely. That number makes sense when you consider how many organizations haven't yet written one. Employees are filling the governance vacuum on their own, defaulting to whatever tools make their jobs easier and assuming that anything not explicitly prohibited must be fine.

That 18.5% number isn't just an internal governance problem. It's a third-party compliance problem too. When an employee submits a vendor contract to an AI tool to speed up a review, that vendor's data is now in a system neither party agreed to. When a customer service rep pastes a client's account details into a chatbot to draft a response faster, that customer's data has just been processed by a third party outside any data processing agreement your organization has in place. GDPR, HIPAA, CCPA, and most enterprise data agreements require organizations to document and control where third-party data goes. An employee who doesn't know your AI governance policy exists has no way to honor those obligations, and your organization is on the hook for what they do anyway.

The compliance exposure doesn't stop at your own data. Every partner, customer, and vendor whose information touches your systems is affected by the governance gaps your employees are unknowingly creating every day.

Even among organizations that have made some attempt at AI policy, enforcement is almost nonexistent. According to research from Kiteworks, 83% of organizations lack automated controls to prevent sensitive data from entering public AI tools, and 86% have no visibility into their AI data flows at all. The remaining controls? Mostly training sessions and email reminders. Not exactly a compliance infrastructure.

The perception gap at the leadership level makes this worse. One-third of executives believe their company tracks all AI usage. Only 9% actually have working governance systems in place. That disconnect means many organizations are carrying far more AI-related risk than leadership realizes, and far more than any general acceptable use policy was designed to address.

What Employees Are Actually Doing

Shadow AI, the use of AI tools that haven't been approved or vetted by IT and security teams, is now the norm rather than the exception, and it's one of the biggest governance blind spots in the enterprise today. A 2026 BlackFog survey of 2,000 workers at companies with more than 500 employees found that nearly half of employees admit to using AI tools without employer approval. And the problem starts at the top: 69% of presidents and C-suite members said they prioritize speed over privacy when adopting AI tools, which means AI governance is being undermined by the very people who are supposed to champion it.

The data being shared is not trivial. Research shows that 77% of employees who use AI tools paste sensitive business data into them. Customer records. Financial projections. Source code. Legal documents. Contract terms. It flows into consumer AI platforms with no audit trail, no access controls, and no way to retrieve it once it's gone. Every one of those interactions is a compliance event that most organizations have no record of.

According to Cisco's 2025 study, 46% of organizations have already experienced internal data leaks through generative AI, meaning data that walked out the door through employee prompts rather than traditional exfiltration methods. This is a category of data loss that most DLP tools weren't designed to detect, because AI inference looks nothing like traditional file transfer or email forwarding.

The Regulatory Problem Is Getting Harder to Ignore

Here's where this stops being a security problem and starts being a legal one. Regulators don't distinguish between a data exposure that happened through a phishing attack and one that happened because an employee pasted customer records into ChatGPT. The obligation to protect that data is the same either way.

U.S. federal agencies issued 59 new AI-related regulations in 2024 alone, more than double the year before, and new frameworks targeting AI data handling have continued to emerge through 2025 and into 2026. GDPR requires organizations to maintain records of all data processing activities, but you can't document what you can't see. HIPAA demands audit trails for any access to protected health information, but shadow AI usage makes those trails impossible to produce. SOX and other financial reporting controls face the same problem.

In practice, most organizations can't answer the basic questions a regulator might ask: which AI tools have access to customer data, what was submitted to those tools, and how that data would be deleted if required. Without visibility, every unmonitored employee prompt is a potential compliance failure sitting in a queue.

Gartner projects that 40% of enterprises will experience a data breach attributable to shadow AI by 2030. That's not a distant risk. The groundwork for those incidents is being laid today, one unmonitored prompt at a time.

Why Policies Without Enforcement Don't Work

The reason most AI policies fail in practice comes down to three things. First, there's no technical enforcement. A policy that relies on employees remembering to follow it is not a control; it's a suggestion. People default to whatever tool gets the job done fastest, and right now, that's often a consumer AI product with no enterprise security guardrails.

Second, there's no auditability. If AI usage isn't logged and monitored, there's no way to prove compliance, investigate an incident, or respond to a regulator. The policy exists as a document, but there's no evidence trail to back it up.

Third, the policies don't fit how people actually work. Employees are told what not to do, but they still need to do their jobs. Without sanctioned tools that actually meet their needs, they find workarounds. Blocking access to ChatGPT doesn't make the underlying productivity need go away. It just pushes the usage somewhere harder to see.
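
To make that concrete, here's a minimal sketch of what technical enforcement with an evidence trail could look like at the endpoint. Everything in it is an assumption for illustration: the regex patterns, the allowlist, and the ai_audit.jsonl log format are invented, not any particular product's behavior. The point is structural: every submission gets a decision, and every decision leaves a record.

```python
import json
import re
import time

# Illustrative patterns only; a real deployment would use organization-specific
# classifiers and data labels, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
}

APPROVED_TOOLS = {"enterprise-ai.example.com"}  # assumed allowlist

def inspect_prompt(user, destination, prompt, audit_log="ai_audit.jsonl"):
    """Return True if the prompt may be submitted, and log the decision either way."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    allowed = destination in APPROVED_TOOLS and not findings
    record = {
        "ts": time.time(),
        "user": user,
        "destination": destination,
        "findings": findings,
        "allowed": allowed,
        "prompt_chars": len(prompt),  # log size, not content, to limit exposure
    }
    with open(audit_log, "a") as f:
        f.write(json.dumps(record) + "\n")  # the evidence trail a policy document can't produce
    return allowed
```

Even this skeleton does the two things an unenforced policy can't: it makes a decision before the data leaves, and it leaves evidence that the decision was made.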

Fewer than one-third of organizations have deployed comprehensive AI governance frameworks, according to ISACA's 2025 research. Of those that have made some progress, only one in five has reached the level of governance maturity that includes access logs, model version control, and audit policies. For security and compliance teams, that means the vast majority of organizations are carrying governance gaps they can't fully see or measure.

What Governance Actually Requires

Real AI governance isn't a document. It's visibility, enforceability, and an audit trail that can survive scrutiny. Without all three, a governance program is just a policy waiting to be tested.

That starts at the endpoint. Security teams need to know which AI tools employees are actually using, not just which ones are on the approved list. They need to see what data is being submitted to those tools, whether that's a sanctioned enterprise platform or a consumer tool someone found on their own. And they need to be able to act on that information before a regulator asks for it, not after.

It also means connecting AI governance to the broader security stack. An employee who submits proprietary code to an AI tool and then forwards the output to a personal email account is showing a pattern that only becomes visible when those data points are seen together. Endpoint-level monitoring that covers AI activity alongside cloud storage, SaaS applications, and traditional data movement is the only way to get a complete compliance picture.
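
As a sketch of what that looks like in practice, the snippet below scans a stream of normalized endpoint events for exactly the pattern described above: an AI submission followed within an hour by an outbound send to a personal account. The event schema and field names here are invented for illustration, not any specific product's telemetry format.

```python
from datetime import datetime, timedelta

# Hypothetical normalized events from endpoint telemetry.
events = [
    {"user": "jdoe", "ts": datetime(2026, 3, 2, 9, 14), "type": "ai_prompt",
     "detail": "source code submitted to public AI tool"},
    {"user": "jdoe", "ts": datetime(2026, 3, 2, 9, 31), "type": "email_send",
     "detail": "attachment sent to personal address"},
]

def correlate(events, first="ai_prompt", second="email_send", window=timedelta(hours=1)):
    """Yield (user, a, b) where the same user produced event `second`
    within `window` of event `first` -- invisible when each feed is viewed alone."""
    by_user = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        by_user.setdefault(e["user"], []).append(e)
    for user, evs in by_user.items():
        for i, a in enumerate(evs):
            if a["type"] != first:
                continue
            for b in evs[i + 1:]:
                if b["type"] == second and b["ts"] - a["ts"] <= window:
                    yield user, a, b

for user, a, b in correlate(events):
    print(f"ALERT {user}: {a['detail']}, then {b['detail']} {b['ts'] - a['ts']} later")
```

Neither event on its own would trip a traditional DLP rule; the pair, seen together, is the story.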

For compliance teams specifically, governance is about being able to answer questions, not just have policies. Which tools hold customer data? What was submitted in the last 90 days? If a GDPR deletion request comes in, can you trace everywhere that customer's data may have gone? These are not hypothetical questions. They are the questions that follow an incident, an audit, or a regulatory inquiry, and right now most organizations don't have the answers.
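
If an audit trail like the one sketched earlier exists, those questions become queries. The snippet below, again assuming the hypothetical ai_audit.jsonl format from above, answers "what was submitted to a given tool in the last 90 days?" in a few lines. Without the log, there is nothing to query.

```python
import json
import time

def submissions_since(audit_log="ai_audit.jsonl", days=90, destination=None):
    """Yield audit records newer than `days` days, optionally for one destination."""
    cutoff = time.time() - days * 24 * 3600
    with open(audit_log) as f:
        for line in f:
            record = json.loads(line)
            if record["ts"] < cutoff:
                continue
            if destination and record["destination"] != destination:
                continue
            yield record

# e.g., everything that reached an unapproved consumer tool:
for r in submissions_since(destination="chat.example-ai.com"):
    print(r["user"], r["findings"], r["allowed"])
```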

The Window Is Narrowing

The organizations that build real AI governance programs now are the ones that will be in a position to prove their compliance posture when it matters. Those that don't will eventually be forced into it by an incident, a fine, or a regulator demanding answers they can't provide.

AI adoption is not slowing down. Employees are going to keep using AI tools because those tools make them more productive. The goal isn't to stop that. It's to make sure your governance program is strong enough that when someone asks what your employees are doing with AI, and they will ask, you can actually answer the question.

If you can't do that today, the gap between your compliance policy and your compliance reality is wider than you think.


Want to see how InnerActiv gives security and compliance teams real-time visibility into AI tool usage across every endpoint? Learn more about our AI governance capabilities.
