AI and AI tools seem to be almost everywhere today, and we are bombarded with advertisements and glowing claims about what AI can do to improve both our personal and business worlds. AI tool usage in businesses is increasing significantly, with adoption rates varying across company sizes and industries.
As of 2024, approximately 40% of global companies report using AI in their business operations, while another 42% are exploring AI use, meaning over 80% of companies are either using or exploring AI. AI adoption in large organizations (10,000+ employees) exceeds 60%. However, these statistics don’t reveal the full picture of AI usage.
Problems Due to ‘Shadow AI’
‘Shadow AI’ refers to the unauthorized use of AI tools outside an organization’s control. Many AI tools are being used without the permission of company management. Some scary statistics for businesses include:
- 80% of employees at small and medium-sized companies are using AI discreetly, without a go-ahead from higher-ups.
- 74% of ChatGPT AI tool usage at work occurs through non-corporate accounts.
- Nearly 83% of company legal documents shared with AI tools are shared through unauthorized channels.
The statistics above should be alarming to company leaders, at least to informed leaders. Employing the most modern technologies such as AI tools doesn’t come without a cost, and with the explosion of AI tools and AI usage have come many new problems.
The Shadow AI phenomenon increases the likelihood of data breaches and complicates compliance with data protection regulations. AI tool usage of all types requires new corporate governance to properly control the associated risks.
Shadow AI Can Lead to Security Vulnerabilities
Shadow AI can expose sensitive data, intellectual property, and confidential information to unauthorized access or breaches. Employees may inadvertently upload proprietary information to unsecured platforms, making it vulnerable to exposure or misuse.
AI Usage, Corporate Governance, and the Old Adage “If You Don’t Look, You Won’t See”
AI tool usage in companies is difficult to keep up with. Employees often lack awareness about what happens to their data once it is shared with these tools, leading to unintentional security violations. While utilizing AI tools in the workplace improves efficiency, the high percentage of workers keeping their AI use secret puts company data at risk, in an environment where leaders’ No. 1 concern for the year ahead is cybersecurity and data privacy.
- Most AI tools are cloud-based and governed by end-user license agreements (EULAs) or service-level agreements (SLAs). These user agreements state that any data a user enters into the AI tool can be stored and used by the company that produces the tool.
- As with the recent explosion of data and data governance challenges, many businesses are already far behind in understanding what data is being captured by the AI tools their employees use, and some of this data is company IP leaking outside the company.
- Some companies do have network management tools, such as gateways, that inspect network traffic to and from the Internet and can tell which external AI tools their employees are using; a minimal log-review sketch appears below this list. But many companies do not.
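As an illustration of this kind of gateway-log review, the sketch below scans web-proxy logs for requests to well-known GenAI services. The log format (CSV lines of timestamp, user, host) and the domain list are assumptions for illustration only, not a definitive detection method or an exhaustive list.

```python
# Minimal sketch: flag requests to known GenAI services in web-proxy logs.
# Assumptions (hypothetical): each log line is CSV in the form
# "timestamp,user,host", and the domain list below is illustrative.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "gemini.google.com", "copilot.microsoft.com",
}

def audit_proxy_log(path: str) -> Counter:
    """Count requests per (user, host) for known GenAI hosts."""
    hits = Counter()
    with open(path, newline="") as f:
        for timestamp, user, host in csv.reader(f):
            # Match the host itself or any subdomain of a listed domain.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(user, host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in audit_proxy_log("proxy.log").most_common():
        print(f"{user} -> {host}: {n} requests")
```

Even a simple report like this gives leadership a first look at which employees are sending traffic to external AI services, which is the prerequisite for any governance conversation.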
Prevalence of Data Leaks Due to Shadow AI Tool Usage
It is not uncommon for company employees to use AI tools to debug or write software. Many companies are only partially aware, if aware at all, of the AI tools their employees are using and of what company IP is being leaked to and captured by those tools.
Many companies are already behind on this issue and falling farther behind fast. The risks associated with company data leaking to AI tools are growing. Some additional scary statistics for business leaders from recent reports include:
- The 2024 Work Trend Index Annual Report released by Microsoft and LinkedIn found that usage of GenAI tools like ChatGPT among knowledge workers across the globe nearly doubled over the preceding six months, with 75% of knowledge workers acknowledging that they use AI tools at work.
- Nearly half (46%) of those AI users started using the tools recently, within the past six months, and the majority of them (78%) are using AI tools at work “without guidance or clearance from the top.”
- A survey conducted by the National Cybersecurity Alliance (NCA) and CybSafe found that 38% of employees using AI tools admitted to submitting sensitive work-related information to these applications without their employer’s knowledge. This behavior is particularly prevalent among younger employees, with 46% of Gen Z and 43% of Millennials reporting similar actions.
- At small and medium-sized companies, the percentage of workers taking this “bring your own AI” approach is even higher: 80% of employees use AI discreetly, without a go-ahead from higher-ups.
- Research from Cyberhaven indicates that 74% of ChatGPT usage at work occurs through non-corporate accounts, which poses a significant risk because these tools can potentially use or train on the data provided. Furthermore, nearly 83% of legal documents shared with AI tools are shared through unauthorized channels, including personal accounts.
- These trends apply across generations: 73% of boomers and 85% of Gen Z reported using AI tools not provided by their companies. The “bring-your-own-AI” (BYOAI) trend is not just happening among younger workers; the study found it crossed all generations.
Lack of Training, Awareness, and Data Governance Contributing to AI Tool Data Leaks
A significant contributing factor to these data leaks is the lack of training on secure AI use. An NCA report revealed that only 48% of employees had received any form of AI training, highlighting a critical gap in organizational preparedness.
“Employers are faced with the challenge of locking down access to the tools that could expose a company to a data breach, but also with finding a way to bring new technology into the workplace. This is an imperative part of the AI governance and security posture of companies, and creating a framework that can adapt to impending regulations will help protect company data, limit restrictions, and alleviate concerns as employees look to use these new tools.” IP data breaches are a major risk when organizations use external AI tools. For example:
- In 2023, Samsung suffered a serious IP breach when engineers pasted confidential software code written by other Samsung teams into ChatGPT in order to understand it, and OpenAI’s system then used that code as part of its training data set.
- 42% of organizations are concerned that GenAI jeopardizes control of data and intellectual property assets. This is why 54% of organizations are experimenting with private versions of GenAI models rather than using publicly available versions.
Shadow AI Can Increase Data Integrity Risks
Unauthorized Shadow AI tools may also compromise the integrity of business data, increasing the risk of tampering or inaccurate information. This can reduce business efficiency and negatively impact many other aspects of a business, including customer support and company reputation.
Shadow AI Can Create Compliance and Legal Risks
Unauthorized AI tools often violate data privacy laws and licensing agreements, exposing organizations to regulatory fines and legal action. For example, using unapproved AI in healthcare could lead to HIPAA violations, while improper data handling may breach GDPR requirements.
Shadow AI Can Create Cybersecurity Threats
Unapproved and unvetted AI solutions can introduce bugs, malware, or faulty code into business processes, expanding the attack surface for potential cyber threats.
Shadow AI Mitigations Recommended for Organizations
To mitigate the risks of Shadow AI, organizations should consider implementing the following strategies:
- Regular Audits and Monitoring of AI tool usage: Conduct regular audits to monitor how AI tools are being used within the organization and identify any unauthorized usage patterns.
- Data Governance and Risk Management: Include AI tools in existing data governance and risk management efforts.
- Strict Access Controls: Limit access to sensitive data based on employee roles to minimize exposure risks.
- Comprehensive Training Programs: Develop training that emphasizes the potential consequences of unsafe AI use and educates employees on best practices for handling sensitive information.
- AI Acceptable Use Policies: Establish clear guidelines regarding the use of AI tools within the workplace, ensuring that employees understand what is permissible and what is not; a minimal sketch of such a policy check follows this list.
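To make the access-control and acceptable-use ideas above concrete, here is a minimal sketch of an automated policy check. The tool names, roles, and sensitivity tiers are hypothetical; a real implementation would integrate with an identity provider and a data-loss-prevention system rather than hard-coded tables.

```python
# Minimal sketch: check a proposed AI tool use against an acceptable-use
# policy. Tool names, roles, and sensitivity tiers are hypothetical.
from dataclasses import dataclass

APPROVED_TOOLS = {"internal-gpt", "copilot-enterprise"}  # sanctioned tools only

# Highest data sensitivity tier each role may share with an approved tool:
# 0 = public, 1 = internal, 2 = confidential, 3 = restricted (never shared).
ROLE_MAX_SENSITIVITY = {"engineer": 2, "analyst": 1, "contractor": 0}

@dataclass
class Request:
    user_role: str
    tool: str
    data_sensitivity: int

def is_allowed(req: Request) -> tuple[bool, str]:
    """Return (allowed, reason) for an AI usage request."""
    if req.tool not in APPROVED_TOOLS:
        return False, f"'{req.tool}' is not on the approved AI tool list"
    limit = ROLE_MAX_SENSITIVITY.get(req.user_role, 0)
    if req.data_sensitivity > limit:
        return False, (f"role '{req.user_role}' may not share "
                       f"tier-{req.data_sensitivity} data")
    return True, "permitted under acceptable-use policy"

# Example: a contractor pasting confidential data into an unapproved tool.
print(is_allowed(Request("contractor", "chatgpt-personal", 2)))
```

The key design point is that the policy is explicit and machine-checkable: approved tools and role-based data limits are written down once, so audits, training, and enforcement can all reference the same rules.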
Summary
Companies should not allow ‘Shadow AI’ usage due to significant risks and potential consequences, including:
- Security vulnerabilities: Shadow AI can expose sensitive data, intellectual property, and confidential information.
- Compliance and legal risks: Shadow AI usage can violate data privacy laws and industry regulations, such as GDPR and the EU AI Act.
- Data integrity risks: Unauthorized AI tools may compromise the integrity of business data.
- Cybersecurity threats: Unapproved and unvetted AI solutions can introduce bugs, malware, or faulty code into business processes.
By allowing Shadow AI, companies expose themselves to these risks and more.
References
- ‘AI adoption statistics by industries and countries: 2024 snapshot’
- ‘93% of IT leaders will implement AI agents in the next two years’
- ‘Agents, shadow AI and AI factories: Making sense of it all in 2025’
- ‘Knowledge management takes center stage in the AI journey’
- ‘How Many Companies Use AI? (New Data)’
- ‘Joint survey report from LinkedIn and Microsoft’ (May 2024)

