Jim Mazotas

Layoffs Are an HR Event. They’re Also a Security Event.

Every reduction in force creates a short but dangerous window where employees still have access to sensitive systems. In the AI era, data can move faster than security teams can react.

Layoffs are a business decision. But for security teams, they're also a starting gun.

The moment a termination notice goes out, the clock starts ticking. Employees who are about to lose their jobs, or who already know they're on the list, don't always wait to be escorted out before they start moving data. And in a workplace where AI tools can summarize, package, and transfer large volumes of information in minutes, that data moves faster and leaves a smaller trail than it ever has before. Sometimes the intent is malicious. Sometimes it's not. Either way, the result is the same: sensitive files, customer records, source code, or intellectual property walking out the door before anyone notices.

This isn't a theory. It's a pattern that plays out every time organizations go through workforce reductions, and the data is hard to ignore.

The Numbers Behind the Risk

Insider threat incidents spike by roughly 40% during layoffs and 35% during broader organizational changes, according to research from the Insider Risk Index. That's not a marginal increase. It means your threat landscape changes the moment a reduction in force is announced.

The scale of the problem in 2025 makes this more urgent than ever. The U.S. saw more than 1.17 million job cuts announced through November 2025 alone, with the tech sector contributing roughly 245,000 of those. Every one of those organizations had employees with legitimate access to sensitive systems trying to navigate uncertainty, financial stress, and in many cases, a sense they had little left to lose.

The cost reflects that. According to the Ponemon Institute's 2025 Cost of Insider Risks report, the average malicious insider incident now costs $715,366. And that's just the direct hit. Insider threats account for approximately 34% of all data breaches in 2025, up from 28% in 2023.

Why the Transition Window Is So Dangerous

Most organizations have offboarding processes. What they often lack is the bandwidth to execute them well under pressure. During large-scale layoffs, HR and IT teams are stretched thin across dozens or hundreds of departures at once. Access revocation gets delayed. Unusual activity doesn't get flagged because nobody is watching closely enough. The offboarding checklist becomes a formality rather than a real control.

At the same time, outside threat actors are paying close attention. Layoff announcements are public. Disgruntled employees become recruitment targets. In 2025 alone, Flashpoint observed 91,321 instances of insider recruiting activity, with an average of more than 1,100 insider-related posts per month on platforms like Telegram. Ransomware groups actively recruit employees who have just received termination notices, dangling financial incentives at the exact moment people are most vulnerable.

So you end up with internal risk and external pressure hitting at the same time, and security teams are often the last to know.

Now Add AI Into the Mix

The insider threat problem doesn't exist in a vacuum. It's running parallel to a rapid and often ungoverned expansion of AI tools in the workplace, and the two trends are making each other worse.

Employees are using AI tools, both approved and unsanctioned, to move and process data faster than ever. A departing employee can use AI to summarize, reformat, or package large amounts of sensitive information before walking out. That makes the exfiltration harder to catch, because the files look different, the volume appears smaller, and the activity can register as routine productivity rather than a red flag.

There's also the question of what happens to data that gets pasted into public AI tools in the first place. An employee who drops customer records or proprietary source code into a chatbot isn't just breaking policy. They're creating a permanent exposure that no legal team can fully remediate. According to Secureframe's Q4 2025 research, 60% of security leaders are concerned about AI enabling automated data exfiltration, and that number keeps climbing as more AI tools enter the enterprise environment.

For organizations going through layoffs, this creates a specific problem: employees who know they're on their way out have much less reason to care about AI usage policies. They may take shortcuts, pull outputs from internal AI tools, or use AI to help them organize and package the data they plan to take with them.

What Exfiltration Actually Looks Like

It rarely looks like the movie version. There's no shadowy figure plugging in a USB drive at 2 a.m. More often, it looks like an employee doing things they do every day: downloading files, syncing to cloud storage, forwarding documents to a personal email. The difference is volume, timing, and destination.

A sudden spike in downloads during an employee's final two weeks. Access to folders they've never touched before. A large file sent to a personal Gmail account the day after a termination notice was issued. These are the signals that matter, and they're invisible to organizations relying on traditional DLP tools or periodic access reviews.
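To make that concrete, here's a rough sketch of what a rule keyed to those three signals (volume, novelty of access, and destination) could look like. The thresholds, the personal-domain list, and the input fields are illustrative assumptions, not a reference to any particular product's detection logic.

```python
# Illustrative only: thresholds, domain list, and inputs are assumptions.
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def flag_departure_risk(daily_bytes: float, baseline_bytes: float,
                        folders_today: set[str], known_folders: set[str],
                        destinations: set[str]) -> list[str]:
    """Return human-readable reasons a day of activity looks like exfiltration."""
    reasons = []
    if daily_bytes > 3 * baseline_bytes:  # volume: well above this user's norm
        reasons.append(f"download volume {daily_bytes / baseline_bytes:.1f}x baseline")
    novel = folders_today - known_folders  # novelty: folders never touched before
    if novel:
        reasons.append(f"access to {len(novel)} previously untouched folders")
    personal = destinations & PERSONAL_DOMAINS  # destination: personal accounts
    if personal:
        reasons.append("data sent to personal domains: " + ", ".join(sorted(personal)))
    return reasons

# Example: a volume spike plus a file forwarded to a personal Gmail account.
print(flag_departure_risk(
    daily_bytes=2.4e9, baseline_bytes=3e8,
    folders_today={"hr/", "eng/roadmap/"}, known_folders={"eng/roadmap/"},
    destinations={"corp-share.internal", "gmail.com"}))
```

None of these signals is damning on its own. The point is that each one is only meaningful relative to the specific user's history and timing, which is exactly what static rules miss.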

The Intel case from mid-2024 is a good example of how this plays out. A software engineer who received a termination notice during Intel's mass layoffs allegedly exfiltrated approximately 18,000 files, some marked Top Secret, before disappearing. Intel's DLP tools actually blocked one of his attempts, but he switched to a different device and completed the transfer anyway. The company filed a $250,000 lawsuit, but the data was already gone.

That's the thing about data loss: once it's out, you can't take it back.

The Detection Window Is Short

Speed matters more in insider threat cases than most security teams realize. According to the Ponemon Institute, incidents contained in under 31 days cost organizations an average of $10.6 million. Let that drag past 91 days and the average climbs to $18.7 million. The gap isn't just financial. It's the difference between catching a problem before the data gets used and dealing with the fallout after a competitor files a patent based on your research, or a regulator asks where the customer records went.

The challenge is that most organizations aren't set up to move fast. Only 40% of companies can detect an insider threat within a week, which means the majority of incidents give bad actors plenty of time to exfiltrate data, cover their tracks, and move on before anyone connects the dots.

Closing that gap takes behavioral context. A static rule can't surface the fact that an employee who just received a termination notice is downloading files at three times their normal rate, accessing systems outside their usual scope, and syncing to an external drive. That takes analytics that understand what normal looks like for each person and flag deviations the moment they happen, across endpoints, SaaS tools, cloud storage, and AI applications.
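As a minimal sketch of what "understanding normal" means in practice, the snippet below scores today's activity against a user's own recent history. The seven-day window and the 3-sigma threshold are assumptions for illustration; production analytics would baseline far more dimensions than download counts.

```python
import statistics

def deviation_score(history: list[float], today: float) -> float:
    """Standard deviations between today's activity and this user's own
    recent baseline (e.g. daily download counts)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against flat history
    return (today - mean) / stdev

# A user averaging ~40 downloads a day suddenly pulls 140 in their notice period.
history = [38, 42, 35, 44, 40, 37, 41]
if deviation_score(history, today=140) > 3.0:
    print("flag: activity far outside this user's normal range")
```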

What Effective Protection Actually Looks Like

Traditional DLP tools were built for a different era. They block specific file types or flag known-sensitive data patterns, but they were never designed to catch a trusted employee with legitimate access doing something wrong. Context is everything here, and most legacy security stacks simply can't provide it.

Catching insider threats during workforce transitions requires full-environment visibility, not just coverage of the managed endpoint. Behavioral analytics need to establish a baseline for each user and surface deviations in real time. That monitoring has to include AI tool usage, not just email and cloud storage, because that's where a growing share of data movement is actually happening.

It also needs to be connected to HR workflows. When a termination event is entered into the HR system, the security posture for that employee should shift automatically: elevated monitoring, tightened access, and real-time activity flagging before the forensic review, not after.
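One way to picture that hand-off is an event hook that fires the moment HR records a termination. The event shape and the two helper functions below are hypothetical stand-ins for whatever IAM and monitoring APIs an organization actually runs.

```python
# Hypothetical stand-ins for real IAM / monitoring integrations.
def set_monitoring_tier(user: str, tier: str) -> None:
    print(f"monitoring for {user} set to {tier}")

def restrict_access(user: str, allowed_scopes: list[str]) -> None:
    print(f"{user} limited to {allowed_scopes}")

def on_hr_event(event: dict) -> None:
    """Shift security posture the moment a termination is recorded,
    not on the scheduled offboarding date."""
    if event.get("type") != "termination_notice":
        return
    user = event["user"]
    set_monitoring_tier(user, "elevated")         # real-time activity flagging
    restrict_access(user, ["email", "calendar"])  # tighten to exit-window essentials

on_hr_event({"type": "termination_notice", "user": "jdoe"})
```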

The Intel engineer bypassed a DLP block by switching devices. That's a coverage failure, not just a policy failure. Visibility that follows the user across devices and environments would have caught what the traditional tool missed.

The Bigger Picture

Layoffs are going to keep happening. Economic pressure, automation reshaping roles, and ongoing market volatility mean workforce reductions are a permanent feature of the business landscape, not a phase to weather and move on from. Organizations that treat every reduction in force as a security event, not just an HR event, are the ones that hold onto their data.

AI proliferation is only accelerating the risk. Data that leaves your environment today may show up in a competitor's product next quarter, in a regulatory inquiry next year, or in a breach notification your customers never expected to receive.

The signals are almost always there before data walks out the door. The question is whether your security program is built to see them in time.
