Co-authored by James Robinson and Jason Clark
No sooner did ChatGPT and the topic of generative artificial intelligence (AI) go mainstream than every enterprise business technology leader started asking the same question.
Is it safe?
At Netskope, our answer is yes, provided we do the right things: protect sensitive data, use AI/ML responsibly in our own platforms and products, and clearly convey that use to our customers, prospects, partners, and third- and fourth-party suppliers so they can build governance-driven programs.
The managed allowance of ChatGPT and other generative AI tools is a necessity. Organizations that simply “shut off” access may initially feel more secure, but they are also denying themselves ChatGPT’s many productive uses and putting themselves, and their entire teams, behind the innovation curve.
Managed allowance of ChatGPT
Netskope has been deeply focused on the productive use of AI and machine learning (ML) since our founding in 2012, and many AI/ML innovations—dozens of them patented—are already part of our Intelligent SSE platform. Our Netskope AI Labs team routinely discusses AI/ML and data science innovation with Netskope customers and our internal community.
Like everyone, we’ve just observed an inflection point. Before November 2022, if you weren’t a security practitioner, developer, data scientist, futurist, or technology enthusiast, you likely weren’t doing much with generative AI. Since the public release of ChatGPT, however, these services and technologies are available to any layperson: anyone with a browser can, right now, see firsthand what ChatGPT can do.
When something is so pervasive that it becomes the dominant topic of conversation in business and technology this quickly—and ChatGPT definitely has—leaders have essentially two choices:
- Prohibit or severely limit its use
- Create a culture where they allow people to understand the use of this technology—and embrace its use—without putting the business at risk
At Netskope, we enable responsible access to ChatGPT today for those on our team who should have it. Here at the dawn of mainstream generative AI adoption, we’re going to see at least as much disruptive behavior as we did at the dawn of the online search engine decades ago, when we saw new kinds of threats and a lot of data made publicly available that really should not have been.
But remember: now, as then, the grand strategy of security is to keep sensitive data away from parties that should not have access to it. Today, with ChatGPT and other generative AI applications, this can be done with the right cultural orientation (allow it responsibly) combined with the right technology orientation: modern data loss prevention (DLP) controls that prevent the misuse and exfiltration of data, as part of an infrastructure that lets teams respond quickly if that data is misused.
A recent blog, “Modern Data Protection Safeguards for ChatGPT and Other Generative Applications,” explains how the Netskope platform specifically, and our modern DLP, helps prevent the cyber risks inherent in generative AI applications. Read it for a deeper dive, but to summarize: your DLP needs to be able to set policies at two levels. “This should never go out” covers data sets or cohorts that would be dangerous to the business if exposed; “this shouldn’t go out” covers data sets or cohorts you don’t want compromised, but whose exposure would not materially disrupt the business.
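To make that two-tier idea concrete, here is a minimal sketch in Python. The category names, actions, and policy mapping are all hypothetical illustrations, not Netskope’s actual API; they simply show how “never goes out” data might map to a hard block while “shouldn’t go out” data triggers an alert for review.

```python
# Illustrative only: hypothetical policy tiers, not Netskope's actual API.
from enum import Enum

class Action(Enum):
    BLOCK = "block"   # "this should never go out": hard stop
    ALERT = "alert"   # "this shouldn't go out": flag and review
    ALLOW = "allow"   # no sensitive data detected

# Hypothetical mapping of detected data categories to policy tiers.
POLICY = {
    "source_code": Action.BLOCK,     # exposure would be dangerous to the business
    "customer_pii": Action.BLOCK,
    "internal_memo": Action.ALERT,   # undesirable, but not business-disrupting
    "public_marketing": Action.ALLOW,
}

def evaluate(category: str) -> Action:
    """Return the enforcement action for a detected data category."""
    # Default to caution for categories the policy has not classified yet.
    return POLICY.get(category, Action.ALERT)
```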
Netskope advanced DLP can automatically identify flows of sensitive data, categorizing it with exacting precision. That includes AI/ML-based image classification and the ability to build custom ML-based classifiers, plus real-time enforcement, applicable to every user connection, that combines the selective blocking of sensitive posts with visual coaching messages that guide users toward better behavior.
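As a sketch of how that inline enforcement might look, the snippet below pairs a classifier with selective blocking and a coaching message. The `classify_text` function is a hypothetical stand-in for the kind of ML-based classification described above; every name here is an assumption for illustration, not the platform’s real interface.

```python
# Illustrative sketch of real-time inline enforcement with user coaching.
# classify_text() is a hypothetical stand-in for an ML-based DLP classifier.

SENSITIVE_LABELS = {"source_code", "credentials"}

def classify_text(text: str) -> str:
    """Hypothetical classifier: return a data category for outbound text."""
    if "BEGIN RSA PRIVATE KEY" in text:
        return "credentials"
    if "def " in text or "class " in text:
        return "source_code"
    return "public"

def inspect_outbound_post(user: str, text: str) -> bool:
    """Return True if the post may proceed; block and coach otherwise."""
    label = classify_text(text)
    if label in SENSITIVE_LABELS:
        # Block the post and show a coaching message instead of failing
        # silently, so the user learns why the action was stopped.
        print(f"[coaching] {user}: this post appears to contain {label}. "
              "Please remove sensitive data before sending it to a generative AI app.")
        return False
    return True

# Example: a developer pastes source code into a generative AI prompt.
allowed = inspect_outbound_post("alice", "def train_model(data): ...")
```

The coaching message is the design point: selectively stopping the post while telling the user why tends to change behavior, where a silent block only creates confusion.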
So, the big question again: is ChatGPT safe?
Yes: with the right mentality, the right education for users, and the right modern DLP technology to protect you.
A word on third- and fourth-party risk
As more companies seek to enable the productive business use of generative AI by appropriate employees, more technology leaders will be asking even more pointed questions about relevant third- and fourth-party risk.
Copilots, for example, will see rapid adoption, but how many companies today ask their third- and fourth-party suppliers, as part of the RFP process, for critical information on how their tools leverage copilots or other AI-associated tools? Could you identify, for example, which of your suppliers have disclosed how much of the code in their software was written by an AI application? Can you review that code? If your suppliers have AI applications generating content, do you know who owns the technology they are using? Who owns the content it produces, is copyleft licensing involved, and is that a problem?
Netskope now puts these questions into our governance process as standard, and all companies should. We are stronger as a security industry when we’re upfront and specific about what we’re looking to achieve and how we might achieve it.
To learn more, please review these additional resources:
- https://www.netskope.com/products/security-service-edge
- https://www.netskope.com/products/data-loss-prevention
- https://www.netskope.com/netskope-ai-labs
Additionally, if you’re interested in hearing more about generative AI and data security, please register for our LinkedIn Live event ChatGPT: Fortifying Data Security in the AI App-ocalypse, featuring Krishna Narayanaswamy, Naveen Palavalli, and Neil Thacker, on Thursday, June 22 at 9:30am PT.