The advent of Zero Trust spawned the mantra ‘Trust no one, verify everyone.’ After a spate of serious data breaches involving their own employees, organizations began to admit that insider threats were no less a danger than malicious external actors. Insider threats, however, could still be categorized by intent. The first category comprised intentional threats: disgruntled employees eager to get their own back for some organizational grievance, and others looking to secure illicit financial gain.
The second comprised employees who were a weak link in the organization owing to poor cyber awareness or sheer negligence when working on company networks. The Ponemon Institute’s 2020 Cost of Insider Threats study found that insider incidents caused by negligence far outnumbered those of an intentional or malicious nature, putting negligence behind 63% of all internal breaches.
After November 2022, when OpenAI released ChatGPT and Google followed with Bard, a wave of hysteria over LLM-driven AI chatbots ensued. Within months, CISOs the world over were grappling with a new kind of insider threat arising from negligent and injudicious use of generative AI tools.
What is Shadow AI?
Shadow AI is closely akin to Shadow IT, a situation that arose from the unfettered use of unsanctioned devices, apps, and IoT endpoints by employees. What ensued was a herculean task: to this day, SOCs and CISOs struggle to assert governance over the technology tools and apps their employees use in the workplace. Because it carried the risk of exfiltration of sensitive organizational data, and because organizations sought to shore up data security, access, reliability, backups, and the like, Shadow IT resulted in draconian policies governing the devices and apps employees may use on company networks.
Shadow AI is similar to Shadow IT in that it involves the uncontrolled and negligent use of generative AI tools like ChatGPT and Bard by employees. Gartner(1) foresaw the current situation as far back as 2020, when it presented its Top 10 Strategic Technology Trends. That report described Shadow AI as an offshoot of the explosion in generative AI driven by disruptive technology, and predicted that it would be a key issue organizations would have to contend with in the coming years.
What is the risk?
In April 2023, barely five months after the launch of ChatGPT, IT leaders(2) woke up to what they might face at their own organizations when Samsung temporarily banned such tools after some employees inadvertently leaked sensitive internal data to ChatGPT.
The highly compelling nature of these tools – their ease of use and quick-fire results – is central to Shadow AI. Generative AI can expose sensitive and proprietary information to public view, compromising intellectual property and inviting regulatory penalties. Moreover, generative AI output can be inaccurate or, as experts term it, “hallucinatory.” And because generative AI tools draw from multiple sources, incorporating their output into an organization’s content can result in copyright infringement.
An additional danger comes from apps that ship with AI built in. InvGate(3) reported that 92 percent of the top technology companies were engaged in AI integrations, arguably exposing the data of entire industries through the use of AI systems.
Under these circumstances, the mere use of AI tools – whether inadvertently or negligently – becomes a matter of grave concern for organizations.
When we consider Gartner’s other prediction – that 30% of organizations using AI for decision-making would have to contend with Shadow AI as their biggest risk – it’s easy to see exactly why alarm bells are sounding today in generative AI circles.
Managing Shadow AI
Smarter organizations(1) realize that, going forward, they need to concern themselves not merely with expanding AI models but also with growing Shadow AI. Well-defined policies and thoroughly thought-out governance strategies will be called for, especially as more business functions come to use these tools extensively.(3)
Along with ongoing scrutiny of new AI tools, fool-proofing and customizing tools to the extent possible, centralizing them, and IT-led monitoring and remediation of breaches, effective policies may consider, among other areas, issues such as:
- Access control covering AI user rights and user management
- Monitoring of AI usage by employees (a minimal sketch of such a check follows this list)
- AI awareness and training initiatives, supported by mandatory compliance certifications that employees must periodically complete
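To make the monitoring item concrete, here is a minimal, hypothetical sketch of the kind of check an egress proxy or DLP pipeline might run. The `EgressEvent` record, the `GENAI_DOMAINS` list, and the `flag_genai_usage` helper are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch: flag outbound requests to known generative AI
# services from egress proxy logs. The record format and domain list are
# illustrative assumptions, not an official blocklist or product API.

from typing import Iterable, NamedTuple

class EgressEvent(NamedTuple):
    """One outbound request as a proxy or CASB might record it (assumed)."""
    user: str
    destination: str  # hostname the request was sent to

# Illustrative set of public generative AI endpoints to watch.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
}

def flag_genai_usage(events: Iterable[EgressEvent]) -> dict[str, int]:
    """Count outbound requests per user that hit a watched AI domain."""
    counts: dict[str, int] = {}
    for event in events:
        host = event.destination.lower()
        if host in GENAI_DOMAINS or any(host.endswith("." + d) for d in GENAI_DOMAINS):
            counts[event.user] = counts.get(event.user, 0) + 1
    return counts

if __name__ == "__main__":
    sample = [
        EgressEvent("alice", "chat.openai.com"),
        EgressEvent("bob", "intranet.example.com"),
        EgressEvent("alice", "api.openai.com"),
    ]
    for user, hits in flag_genai_usage(sample).items():
        print(f"{user}: {hits} generative AI request(s) flagged for review")
```

In practice, such a check would sit behind an existing secure web gateway and feed alerts into a SIEM rather than print to a console; the point is simply that “monitoring of AI usage” can be reduced to a concrete, auditable control.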
Organizations dedicated to learning and development may more readily accept that strategically enabling AI serves their interests better than banning it outright. Accordingly, they would do well to form committees to address issues arising from their policies on AI tools. For obvious reasons, IT departments are best equipped to lead these committees, which should include representation from HR, Admin, Legal, and Communications.
Shadow AI in the future
It’s now clear that as workforces grow and opportunities multiply, the use of AI will only increase. Managing Shadow AI will therefore present another gigantic challenge for organizations. Imperva(4) suggests that as usage of AI tools intensifies, organizations without an insider risk strategy – as many organizations are today – will be hard-pressed going forward.
With a very high percentage of data breaches still likely to be caused by negligence and inadvertence, organizations face an uphill task in mitigating the risks involved. Imperva senior vice president Terry Ray believes it is too late in the day to simply ban generative AI outright. He puts it succinctly: “People don’t need to have malicious intent to cause a data breach. Most of the time, they are just trying to be more efficient in doing their jobs. But if companies are blind to LLMs accessing their back-end code or sensitive data stores, it’s just a matter of time before it blows up in their faces.”
Last words
How Shadow AI pans out for organizations depends on how effectively they deal with three key aspects: generative AI tools, Shadow IT policies, and insider risk management strategies. If Imperva’s finding that 33% of organizations don’t perceive insiders as a significant threat is anything to go by, the outlook is not encouraging.
There is, however, a light at the end of the tunnel. CIO.com(2) reports that CIOs are now leading the charge to address Shadow AI: in March 2023, 54% said they were not yet doing anything about generative AI, but by June 2023 only 23% made that admission.
As Plurilock notes in its blog(3), only time will tell whether the “AI revolution” ultimately proves as productive and beneficial for the companies of today as the “IT revolution” was for the companies of the past.
If the track record of IT and cybersecurity is anything to go by, however, chances are that the challenges posed by Shadow AI – even given the vagaries of generative AI and LLMs – will sooner rather than later be a thing of the past.
Discover the unstoppable power of DEFEND and PlurilockAI, the ultimate AI-driven tools that crush security threats.
Get in touch with sales@aurorait.com or call (888) 282-0696 to experience the unmatched protection that Aurora, a proud member of the Plurilock family, delivers through these groundbreaking solutions.
References:
1. https://www.bmc.com/blogs/shadow-ai
2. https://www.cio.com/article/647725/it-leaders-grapple-with-shadow-ai.html
3. https://plurilock.com/blog/shadow-ai-is-becoming-a-problem-in-it-and-its-going-to-get-worse/
4. https://www.computerweekly.com/news/366542890/Shadow-AI-use-becoming-a-driver-of-insider-cyber-risk