Generative AI holds the promise of a transformative role in the modern workplace, offering an opportunity to change various aspects of business operations. With its ability to generate human-like text, it can streamline content creation, improve customer service and enhance communications.
Those are benefits many of us are aware of. What many may not realize is that employee use of generative AI can also create serious security risks.
ExtraHop, a player in cloud-native network detection and response, or NDR, has taken a deeper look into the challenges enterprises face in grappling with the security implications of employee generative AI use in its latest research report, released today: "The Generative AI Tipping Point."
The report examines how organizations secure and regulate the use of generative AI tools, and it lays bare a stark cognitive dissonance among security leaders as generative AI cements its presence in the workplace. According to the report, 73% of IT and security leaders admit their employees regularly or occasionally use generative AI tools, such as those built on large language models, or LLMs, yet they remain unsure of the best approach to the inherent security risks.
Interestingly, when asked about their primary concerns, IT and security leaders are more apprehensive about receiving inaccurate or nonsensical responses than about security-centric issues. Those issues include the inadvertent exposure of sensitive customer and employee personally identifiable information, disclosure of closely guarded trade secrets and financial losses.
The report casts a shadow on the prevailing security posture in many organizations. While a reassuring 82% claim to be very or somewhat confident in their current security stack's capacity to fend off threats stemming from generative AI tools, only a minority has invested in technology to monitor generative AI usage within their organizations.
Monitoring generative AI usage is essential for organizations as it helps protect sensitive data, maintain compliance and mitigate security risks. It enables the detection of anomalies and policy enforcement while ensuring employees use AI tools responsibly. It supports data loss prevention and provides valuable insights into usage patterns, aiding in informed decision-making.
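One concrete form such monitoring can take is a data loss prevention check that screens outbound prompts for sensitive data before they ever reach an AI tool. The following is a minimal illustrative sketch, not drawn from the report; the pattern names and regular expressions are simplified assumptions, and a production DLP system would use far more robust detection.

```python
import re

# Hypothetical patterns for a basic DLP-style check on outbound prompts.
# Real deployments would use broader, more accurate detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """Block prompts that appear to contain sensitive data."""
    return not scan_prompt(prompt)

# A prompt leaking an email address is flagged; a clean prompt passes.
print(scan_prompt("Summarize the complaint from jane.doe@example.com"))
print(is_allowed("Draft a polite out-of-office reply"))
```

Even a simple gate like this, placed in front of a generative AI tool, gives an organization a log of flagged attempts and a point of policy enforcement, supporting the insights into usage patterns described above.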
Another noteworthy finding is the inherent inefficacy of generative AI bans: 32% of respondents confirm their organizations have imposed such bans, a proportion that mirrors the percentage professing great confidence in their ability to thwart AI-related threats. Yet these prohibitions bear little fruit, as only 5% say their employees never use generative AI tools, underscoring the need for more effective approaches.
Although a majority surveyed have invested or are planning to invest in generative AI protections or security measures this year, IT and security leaders still want more guidance. They want the government involved in some way, with 60% favoring mandatory regulations and 30% supporting government standards that businesses can adopt at their own discretion, according to the report.
Mandatory regulations can set clear guidelines for responsible usage, data protection and security practices. Government-enforced standards also offer a framework for businesses to follow voluntarily, fostering a more secure and accountable environment for generative AI deployment while promoting innovation and ethical AI development.
“There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace,” said Raja Mukerji, co-founder and Chief Scientist, ExtraHop. “However, leaders need more guidance and education to understand how generative AI can be applied across their organization and the potential risks associated with it. By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”
The backdrop to these findings is the launch of ChatGPT in November 2022, which has afforded enterprises less than a year to fully assess the trade-offs associated with generative AI tools. It is imperative for business leaders to comprehend their employees' generative AI usage, thus enabling them to pinpoint potential weaknesses in their security armor and safeguard against unauthorized data or intellectual property dissemination.
Be part of the discussion about the latest trends and developments in the generative AI space at Generative AI Expo, taking place February 13-15, 2024, in Fort Lauderdale, Florida. Generative AI Expo discusses the evolution of GenAI and features conversations focused on the potential for GenAI across industries and how the technology is already being used to create new opportunities for businesses to improve operations, enhance customer experiences and create new growth opportunities.
Future of Work Contributor