Over the past year, AI innovation has swept through the workplace. Across industries and team functions, employees are using AI assistants to streamline tasks such as taking minutes, writing emails, developing code, crafting marketing strategies and even managing company finances. As a CISO, I’m already envisaging an AI assistant that will help me with compliance strategy by actively monitoring regulatory changes, evaluating an organisation’s compliance status, and identifying areas for improvement.
However, amidst all this enthusiasm, there is a very real challenge facing CISOs and DPOs: how to protect corporate data and IP from leaking through these generative AI platforms and out to third-party providers.
Curiosity won’t kill the CISO
While many enterprises have contemplated blocking these tools on their systems entirely, doing so could limit innovation, create a culture of distrust towards the workforce, or even drive “Shadow AI”: the unapproved use of third-party AI applications outside the corporate network. To a certain extent, the horse has already bolted. Data shows that within enterprises, AI assistants are already integrated into day-to-day tasks. Writing assistant Grammarly, the second most popular generative AI app, is currently used by 3.1% of employees, and I’ve noticed that around a third of the conference calls I attend now have an AI assistant on the guest list. With the increasing availability of AI assistants like Microsoft Copilot and Motion, researchers at Netskope Threat Labs expect AI assistants to grow in popularity in 2024.
Instead of blocking the tools outright, CISOs can deploy continuous protection policies, using intelligent Data Loss Prevention (DLP) tools to let employees use AI applications safely. DLP tools can ensure no sensitive information finds its way into input queries to AI applications, protecting critical data and preventing unauthorised access, leaks, or misuse.
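To make the idea concrete, here is a minimal sketch, in Python, of the kind of prompt screening a DLP policy performs before a query leaves the corporate network. It is not the implementation of any particular DLP product: the `SENSITIVE_PATTERNS` rules and the `screen_prompt` helper are hypothetical, and real tools rely on far richer detection (exact data matching, fingerprinting, ML classifiers) than a handful of regular expressions.

```python
import re

# Hypothetical detection rules; real DLP engines use far more sophisticated classifiers.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches from an outbound AI prompt and report which rules fired."""
    findings = []
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
    return redacted, findings

if __name__ == "__main__":
    prompt = "Summarise this contract for jane.doe@example.com, card 4111 1111 1111 1111."
    safe_prompt, findings = screen_prompt(prompt)
    print(findings)     # ['email_address', 'payment_card']
    print(safe_prompt)  # sensitive values are replaced before the query leaves the network
```

A policy built on this pattern can also block the query outright or coach the user in real time, rather than silently redacting.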
CISOs should also take an active role in evaluating the applications used by employees, restricting access to those that do not align with business needs or pose an undue risk.
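Where those evaluations feed an enforcement point, they are often expressed as a simple application register that access decisions can be checked against. The sketch below is purely illustrative, assuming a hypothetical APPROVED_AI_APPS register, risk tiers and an is_permitted helper; it is not how any specific product encodes policy.

```python
# Hypothetical register of AI apps reviewed by the business; names and tiers are illustrative.
APPROVED_AI_APPS = {
    "corporate-copilot": {"risk_tier": "low", "business_need": True},
    "grammar-assistant": {"risk_tier": "medium", "business_need": True},
    "free-chatbot-x": {"risk_tier": "high", "business_need": False},
}

RISK_RANKING = {"low": 0, "medium": 1, "high": 2}

def is_permitted(app_name: str, risk_appetite: str = "medium") -> bool:
    """Allow an app only if it has been reviewed, serves a business need,
    and sits within the organisation's stated risk appetite."""
    entry = APPROVED_AI_APPS.get(app_name)
    if entry is None:
        return False  # unreviewed apps default to blocked, curbing Shadow AI
    return entry["business_need"] and RISK_RANKING[entry["risk_tier"]] <= RISK_RANKING[risk_appetite]

print(is_permitted("corporate-copilot"))   # True
print(is_permitted("free-chatbot-x"))      # False: high risk, no business need
print(is_permitted("unvetted-notetaker"))  # False: not yet reviewed
```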
Once a CISO identifies an AI assistant as relevant to their organisation, the next step involves vetting the vendor and assessing its data policies. During this process, CISOs should equip themselves with an extensive list of questions, including:
- Data handling practices: What becomes of the data an employee inputs?
Understanding how the vendor manages and protects the data is crucial for ensuring data privacy and security. A study by the World Economic Forum found that a staggering 95% of cybersecurity incidents stem from human error, and entrusting sensitive data to a third-party AI assistant can exacerbate this risk.
There’s even greater cause for pause: by feeding data into these tools, organisations may be inadvertently contributing to the training of potentially competitive AI models. This can lead to a scenario where proprietary information or insights about the organisation’s operations are leveraged by competitors, posing significant risks to the organisation’s competitive advantage and market position.
- Model ownership and hosting: Is the underlying model hosted privately or on a public platform? Is it developed by the vendor itself or based upon a third-party solution?
AI assistant apps used by employees often depend on third-party and even fourth-party services. It’s common for employees to use an app without being aware that its backend infrastructure operates on a publicly accessible platform. As CISOs, we are particularly mindful of the significant costs associated with AI technology, so we know that free or inexpensive options make their money in other ways: by selling the data itself, or the AI intelligence that data has helped to build. In such cases, a thorough examination of the fine print becomes imperative for CISOs to ensure the protection and privacy of sensitive data.
- Output handling: What happens to the output? Are these outputs used to train subsequent models?
Many AI vendors do not just use the input data to train their models; they also use the output data. This loop creates ever more tangled ways in which the apps could inadvertently expose sensitive company information or lead to copyright infringement, and it can be hard to untangle in supply chain data protection planning.
Looking within
As private enterprises await stronger legislative guidance on AI, it falls to CISOs and DPOs to promote self-regulation and ethical AI practices within their organisations. With the proliferation of AI assistants, it is crucial they act now to evaluate the implications of AI tools in the workplace. Every employee will soon be performing many day-to-day tasks in conjunction with their AI assistant. This should motivate companies to set up internal governance committees, not just to evaluate tools and their applications, but also to discuss AI ethics, review processes, and agree strategy in advance of more widespread adoption and regulation. This is exactly how we are approaching the challenge within the security teams here at Netskope: an AI governance committee is responsible for our AI strategy and has built the mechanisms to properly inspect emerging vendors and their data processing approaches.
Employees across all industries and at all levels can benefit from an AI assistant, with Bill Gates predicting they “will utterly change how we live our lives, online and off.” For CISOs, the key to unlocking their potential starts with responsible governance.