Since the launch of ChatGPT in November 2022, generative AI (genAI) has seen rapid enterprise adoption. According to researchers in the Netskope Threat Labs, as of June 2024, an astonishing 96% of organizations are using various types of genAI apps. This widespread adoption is transforming how businesses operate, but with great power comes great responsibility—and risk.
The double-edged sword of genAI
GenAI offers numerous benefits. For example, genAI tools are revolutionizing productivity by assisting with coding, content creation, and data analysis. These tools can automate repetitive tasks, foster innovation, and provide employees with new capabilities that drive efficiency and creativity. However, such widespread genAI usage also brings significant risks that call for careful management.
The new Netskope Threat Labs AI report emphasizes several concerns around data security and the misuse of AI technologies. One of the most pressing issues is data leakage.
Data leakage occurs when users share sensitive information with genAI tools, unintentionally exposing valuable intellectual property and compromising the security of proprietary systems. The report reveals that sharing proprietary source code with genAI applications accounts for 46% of all genAI data policy violations identified by Netskope’s DLP platform.
Consider a scenario where a software engineer at a multinational technology firm uses a genAI app to optimize code. Without realizing it, the engineer shares a piece of proprietary code that contains sensitive algorithms. This code could be accessed by unauthorized parties, leading to potential data breaches and competitive disadvantage.
(You can learn more about some of these issues in this episode of the Security Visionaries podcast, where IP lawyer Suzanne Oliver, CISO Neil Thacker, and AI Labs lead Yihua Liao discuss the implications of AI.)
General recommendations:
To protect against these risks, organizations should:
→ Regularly review AI app activity, trends, behaviors and data sensitivity to identify potential threats
→ Block access to apps that do not serve legitimate business purposes or pose a disproportionate risk
→ Use Data Loss Prevention policies to detect content/prompts containing sensitive information, including source code, regulated data, and passwords
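To make the DLP recommendation above concrete, here is a minimal sketch of how a prompt could be screened for sensitive content before it reaches a genAI app. The patterns and function names are illustrative assumptions, not Netskope's implementation; production DLP engines use far richer techniques (exact-match hashing, file fingerprinting, ML classifiers).

```python
import re

# Illustrative patterns only -- real DLP policies are far more comprehensive.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Crude heuristic for source code in a prompt (keywords common to many languages)
    "source_code": re.compile(r"\b(?:def |class |#include |public static )"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

# A prompt containing proprietary code and an embedded credential
violations = scan_prompt("def decrypt(data): ...  key = 'AKIA1234567890ABCDEF'")
```

A policy engine would then block, redact, or flag the prompt depending on which patterns fire.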
Balancing innovation and security
While genAI brings about efficiency and innovation, there are ongoing debates about the right way to control the use of certain genAI applications. For instance, many are trying to determine whether blocking access to certain applications—such as GitHub Copilot—is really an appropriate security measure or if the cost of losing the productivity advantages is too great.
And these concerns are valid. Blocking such applications might not be the best approach. Rather, organizations should consult the Netskope Cloud Confidence Index (CCI) to help decide which applications to allow and which to restrict, based on their security and compliance profiles. Apps are evaluated against objective criteria adapted from Cloud Security Alliance guidance. These criteria measure an app's enterprise-readiness, taking into account security credentials, auditability, and business continuity.
At the time of writing, GitHub Copilot has a high CCI score of 80 out of 100 (note that this score is dynamic), indicating a higher than average level of trustworthiness. Nevertheless, 19% of organizations that use genAI applications have placed a complete ban on GitHub Copilot. Implementing more granular controls to monitor its usage and ensure data compliance, instead of imposing a blanket block, could enable organizations to benefit from its capabilities while mitigating risks.
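The trade-off described above can be sketched as a simple decision rule that maps an app's CCI score to a coarse access posture. The thresholds and policy names here are hypothetical assumptions for illustration; they are not Netskope's actual policy engine.

```python
def app_policy(cci_score: int, handles_sensitive_data: bool) -> str:
    """Map a CCI score (0-100) to a coarse access decision.

    Thresholds are illustrative assumptions, not official guidance.
    """
    if cci_score >= 75:
        # High enterprise-readiness: allow, but monitor sensitive flows with DLP
        return "allow_with_dlp_monitoring" if handles_sensitive_data else "allow"
    if cci_score >= 50:
        # Medium readiness: permit use but restrict uploads of sensitive data
        return "allow_restricted"
    # Low readiness: a blanket block may be justified
    return "block"

# GitHub Copilot's score of 80 would land in the "allow and monitor" tier
decision = app_policy(80, handles_sensitive_data=True)
```

The point of such a rule is that a high-scoring app like Copilot ends up monitored rather than banned outright, preserving its productivity benefits.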
Navigating the future
Generative AI holds incredible potential for driving innovation and efficiency in enterprises. However, it is crucial to be aware of, and work to reduce, the associated risks. By understanding threats such as data leakage and deepfakes, and by implementing robust security measures, organizations can safely harness the power of genAI.
Stay informed and stay secure. Read the complete Netskope Threat Labs report to arm your enterprise with the knowledge it needs to navigate the future of AI. Access the full Cloud and Threat Report: AI Apps in the Enterprise here.