
Cloud and Threat Report:
AI Apps in the Enterprise

This report examines how organizations are balancing the benefits of generative AI apps against the associated risks, highlighting an increasingly popular strategy that combines DLP and interactive user coaching.

Executive summary

This year’s Cloud and Threat Report on AI apps focuses specifically on genAI application trends and risks, as genAI use has been growing rapidly, with broad reach and wide impact on enterprise users. 96% of organizations surveyed have users using genAI apps, with the number of users tripling over the past 12 months. Real-world use of genAI apps includes help with coding, writing assistance, creating presentations, and video and image generation. These use cases present data security challenges, specifically how to prevent sensitive data, such as source code, regulated data, and intellectual property, from being sent to unapproved apps.

We start this report by looking at usage trends of genAI applications. We then analyze the broad-based risks introduced by genAI use, discuss specific controls that are effective and can help organizations improve in the face of incomplete data or new threat areas, and end with a look at future trends and implications.

Based on millions of anonymized user activities, genAI app usage has gone through significant changes from June 2023 to June 2024:

  • Virtually all organizations now use genAI applications, with use increasing from 74% to 96% of organizations over the past year.
  • GenAI adoption is growing rapidly and has yet to reach a steady state. The average organization uses more than three times as many genAI apps, and has nearly three times as many users actively using those apps, as one year ago.
  • Data risk is top of mind for early adopters of genAI apps, with sharing of proprietary source code with genAI apps accounting for 46% of all data policy violations.
  • Adoption of security controls to safely enable genAI apps is on the rise, with more than three-quarters of organizations using block/allow policies, DLP, live user coaching, and other controls to enable genAI apps while safeguarding data.

AI in general has been popular and has attracted considerable investment, with funding totaling over $28B across 240+ equity deals from 2020 through March 22, 2024.[1]

[Figure: AI 100 top companies by equity funding]

With OpenAI and Anthropic accounting for nearly two-thirds (64%) of total funding, AI investment is dominated and driven by genAI. This reflects the growing genAI excitement since OpenAI’s ChatGPT release in November 2022. In addition to startups, multiple AI-focused ETFs and mutual funds have been created, indicating another level of funding from public market investors. The large amount of investment will provide a tailwind for research and development, product releases, and the associated risks and abuses.

Outsized price-to-sales ratios indicate that execution is lagging lofty investor expectations: Hugging Face carries a 150x multiple ($4.5B valuation on $30M revenues) and Perplexity a 65x multiple ($520M valuation on $8M revenues)[1]:

[Figure: AI 100 revenue multiple by company]

Although real revenue is lagging, product release activity is high, indicating that we are still early in the AI innovation cycle with heavy R&D investment. For example, there have been 34 feature releases (minor and major) of ChatGPT since November 2022[2], approximately two per month.

It’s clear that genAI will be the driver of AI investment in the short term and will introduce the broadest risk and impact to enterprise users. It is, or will be, bundled by default on major application, search, and device platforms, with use cases such as search, copy-editing, style/tone adjustment, and content creation. The primary risk stems from the data users send to the apps, including data loss, unintentional sharing of confidential information, and inappropriate use of information (legal rights) from genAI services. Currently, text-based apps (LLMs) see the most use, given their broader use cases, although genAI apps for video, images, and other media are also a factor.

This report summarizes genAI usage and trends based on anonymized customer data over the past 12 months, detailing application use, user actions, risk areas, and early controls while providing prescriptive guidance for the next 12 months.

 


About this report

Netskope provides threat and data protection to millions of users worldwide. Information presented in this report is based on anonymized usage data collected by the Netskope Security Cloud platform relating to a subset of Netskope customers with prior authorization. The statistics in this report are based on the thirteen-month period from June 1, 2023, through June 30, 2024.

This report includes millions of users in hundreds of organizations worldwide in multiple sectors, including financial services, healthcare, manufacturing, telecom, and retail. Organizations included in this report each have more than 1,000 active users.

 


Netskope Threat Labs

Staffed by the industry’s foremost cloud threat and malware researchers, Netskope Threat Labs discovers, analyzes, and designs defenses against the latest web, cloud, and data threats affecting enterprises. Our researchers are regular presenters and volunteers at top security conferences, including DEF CON, Black Hat, and RSAC.

 

Trends

Nearly all organizations use genAI apps

In the approximately 18 months since the public release of ChatGPT in November 2022, the vast majority of organizations have adopted some type of genAI application. Usage has risen steadily, from 74% of organizations in June 2023 to 96% as of June 2024. Nearly every organization uses genAI apps today.

 

Organizations are using more genAI apps

The number of genAI apps used in each organization is increasing significantly, more than tripling from a median of 3 different genAI apps in June 2023 to a median of 9.6 apps in June 2024. The growth is even more pronounced at the upper extremes: the top 25% of organizations grew from 6 apps to 24 apps, and the top 1% (not pictured) grew from 14 to 80 apps.

This trend is understandable given the proliferation of genAI offerings early in a technology innovation cycle, fueled by significant investment and excitement about the opportunities they offer for increasing organizational efficiency. The implication for organizations managing user risk is that the number of genAI offerings in use continues to grow, presenting a moving target for risk controls, which we discuss later in this report.

 

Top genAI apps

The top AI apps in use have changed over the past year. In June 2023, ChatGPT, Grammarly, and Google Bard (now Gemini) were the only genAI apps with significant numbers of enterprise users. As of June 2024, many more genAI apps see significant use; Netskope Threat Labs now tracks nearly 200 different apps. ChatGPT retains its dominance in popularity, with 80% of organizations using it, while Microsoft Copilot, which became generally available in January 2024, is third at 57% of organizations. Grammarly and Google Gemini (formerly Bard) retain high rankings.

The growth over time shows mostly steady increases for all applications, with the notable exception of Microsoft Copilot, which has grown to use in 57% of all organizations surveyed in the six months since its release. This shows, in part, the high adoption rate of new Microsoft offerings across the Microsoft enterprise installed base.

The list of most popular genAI apps includes a variety of newcomers, which will fluctuate over the coming year. The categorization of each of these apps is also interesting, as it indicates the most popular enterprise use cases for genAI apps.

Most Popular GenAI App Categories

GenAI Application     Category
ChatGPT               Search, General
Grammarly             Writing
Gemini                Search, General
Microsoft Copilot     Search, General
Perplexity AI         Search, General
QuillBot              Writing
VEED                  Research
Chatbase              General, Search
Writesonic            Writing
Gamma                 Presentations

We expect that the top applications will shift significantly in the coming year and look very different in next year’s report. There will also be additional consolidation, as well as original equipment manufacturer (OEM) relationships. For example, Chatbase offers ChatGPT and Gemini models as choices. From a market share perspective, we may want to group application activity by underlying technology. However, from an organization risk viewpoint, grouping by user-facing application is more important because security controls often differ by application and incorporate domains/URLs at some level to distinguish applications. That is, there could very well be a policy to ban ChatGPT proper but allow Chatbase using ChatGPT underneath. Because their use cases are different, the risk management and controls differ.
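
To make the distinction concrete, here is a minimal sketch of why the user-facing app, not the underlying model, is the natural unit for policy: controls key off the app’s domain, so two apps built on the same model can carry different policies. The domains, app names, and actions below are illustrative assumptions, not Netskope configuration.

```python
# Minimal sketch: policy keyed by the user-facing app (its domain), not the
# underlying model. Domains, app names, and actions are illustrative only.

GENAI_APP_POLICIES = {
    "chat.openai.com": {"app": "ChatGPT", "action": "block"},
    "www.chatbase.co": {"app": "Chatbase", "action": "allow"},  # ChatGPT model underneath
    "gemini.google.com": {"app": "Gemini", "action": "allow"},
}

def policy_for(host: str) -> str:
    """Return the action for a request host; unknown genAI hosts default to 'coach'."""
    entry = GENAI_APP_POLICIES.get(host)
    return entry["action"] if entry else "coach"

# Two apps sharing the same underlying model can carry different policies:
print(policy_for("chat.openai.com"))   # block
print(policy_for("www.chatbase.co"))   # allow
```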

 

User activity is increasing

Not only are organizations using more genAI apps, but the amount of user activity with those apps is also increasing. While the overall percentage of users using genAI apps is still relatively low, the rate of increase is significant, going from 1.7% in June 2023 to over 5% in June 2024, nearly tripling in 12 months for the average organization. Even organizations with an above-average number of users per month saw significant year-over-year genAI adoption: the top 25% of organizations grew from 7% to 15% using genAI apps. Regardless of organization size, we continue to see growth in genAI adoption that will continue over the next year, as we have not yet seen signs of flattening growth rates.

 

Risks

Data is still the most critical asset to protect when genAI apps are in use. Users remain the key actors in both causing and preventing data risk, and today the most urgent security risks for genAI users are all data-related.
It is helpful to consider data risk from the user viewpoint along two dimensions:

  • What data do users send into genAI services?
  • What data do users receive and use from genAI services?

Input: Data Submission
What data do users send into genAI services?

Risks:
  • Unknown/suspicious apps
  • Data leakage: PII, credentials, copyright, trade secrets, HIPAA/GDPR/PCI

Output: Data Results
What data do users receive and use from genAI services?

Risks:
  • Correctness: hallucinations, misinformation
  • Legal: copyright violations
  • Economic: job efficiency, replacement
  • Social engineering: phishing, deepfakes
  • Blackmail
  • Objectionable content

Among the organizations included in this study, the recognized risks and threats over the past 12 months center on application usage and data risk, which is often the case in the early stages of markets focused on applications or services. Furthermore, the risks being acted upon are on the input side of the table: the data risks of users submitting prompts to genAI apps, as opposed to the output side, the risks of using data that comes (or appears to come) from genAI services. This prioritization makes sense for most organizations: the first priority tends to be protecting the organization’s information assets; liability and correctness issues with content from genAI apps tend to follow later.

By further enumerating and prioritizing risks at this level, organizations will not only understand their genAI app-specific risks better but, more importantly, determine which controls and policies are required to address those risks.

 

Controls

Since early risks in the genAI market have centered on users’ submission of data, the input-side controls in the table below have also been the priority for organizations. These controls are discussed in more detail below.

Input: Data Submission
What data do users send into genAI services?

Risks:
  • Unknown/suspicious apps
  • Data leakage: PII, credentials, copyright, trade secrets, HIPAA/GDPR/PCI

Controls:
  • AUP: restrict which apps are being used
  • DLP: prevention/blocking
  • User training/coaching
  • Advanced detection of suspicious data movement

Output: Data Results
What data do users receive and use from genAI services?

Risks:
  • Correctness: hallucinations, misinformation
  • Legal: copyright violations
  • Economic: job efficiency, replacement
  • Social engineering: phishing, deepfakes
  • Blackmail
  • Objectionable content

Controls:
  • AUP: which data from which apps, for what purpose
  • Data reference/source policies
  • Job role clarifications/tools/processes
  • Anti-phishing
  • Deepfake/hallucination detection (data auditing)
  • Data traceability/fingerprinting

 

Applications

The starting point for genAI app risk is the applications themselves. Practically, application controls are also the starting point for controlling that risk, typically implemented as allow or block lists within an inline secure web gateway (SWG) or proxy.

Most organizations have restricted the use of genAI apps to protect their data, with 77% of organizations blocking at least one genAI app as of June 2024, which is a 45% increase from 53% of organizations in June 2023.

This trend indicates good maturity and adoption of basic controls around genAI application usage. Controlling which apps are used in an organization is a necessary starting point for reducing risk. However, more granular controls are required to be effective: the specific use of an application often determines whether the activity should be allowed. For example, a general search in ChatGPT should be allowed, while the submission of source code should not.
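
As a minimal sketch of what such granularity looks like, the rules below allow or block by (app, activity) pair rather than by app alone, failing closed for unlisted apps. The app names, activity labels, and rule set are illustrative assumptions, not a vendor policy language.

```python
# Minimal sketch of activity-level control: the same app may be allowed for
# one activity and blocked for another. Names and rules are illustrative.

RULES = {
    ("ChatGPT", "search"): "allow",
    ("ChatGPT", "upload_source_code"): "block",
    ("ChatGPT", None): "allow",          # app-level default
}

def evaluate(app: str, activity: str) -> str:
    """Most specific rule wins; apps with no rules at all fail closed."""
    if (app, activity) in RULES:
        return RULES[(app, activity)]
    return RULES.get((app, None), "block")

assert evaluate("ChatGPT", "search") == "allow"
assert evaluate("ChatGPT", "upload_source_code") == "block"
assert evaluate("UnvettedApp", "search") == "block"   # unlisted app fails closed
```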

Looking in more detail at blocked applications, the median number of applications blocked for all users in an organization has also been increasing, from 0.6 apps in June 2023 to over 2.6 apps in June 2024. Having so few organization-wide bans on genAI apps compared to the hundreds of genAI apps on the market points toward the popularity of other more nuanced controls, which we will discuss in more detail later in this report.

The most blocked genAI applications track somewhat with popularity, but a fair number of less popular apps are among the most blocked. Of those organizations blocking genAI apps, 28% block Beautiful.ai (making it the most commonly blocked genAI app), and 19% block Perplexity AI, the 10th most commonly blocked app. Blocks can often be a temporary measure while new apps are evaluated to determine whether they serve legitimate business purposes and are safe for certain use cases.

The specific applications that are blocked will vary by organizational policy, but when percentages are as high as those in the top 10 list below, it’s worthwhile for all organizations to review whether the specific applications are used within their own environment, and whether to adjust controls around that category of applications. The following table shows that the most blocked apps span a variety of use cases in the genAI space.

Most Blocked GenAI App Categories

GenAI Application     Category
Beautiful.ai          Presentations
Writesonic            Writing
Craiyon               Images
Tactiq                Business Meeting Transcription (Zoom, Meet)
AIChatting            General, Search, Writing, PDF Summarization
GitHub Copilot        Coding
DeepAI                General, Search, Chat, Image, Video
scite                 Research
Poe AI                General, Search
Perplexity AI         General, Search

 

Data loss prevention (DLP)

As organizations mature beyond allowed-application lists, they tend to enact more fine-grained controls around the usage of allowed applications. Unsurprisingly, DLP controls are growing in popularity as a genAI data risk control: the share of organizations using DLP to control the types of data sent to genAI apps rose from 24% in June 2023 to over 42% in June 2024, more than 75% year-over-year growth.

The increase in DLP controls reflects a growing understanding across organizations of how to effectively mitigate data risk amid the broader trend of increasing genAI app use. Within DLP policies, organizations are looking to control specific data flows, especially to block genAI prompts containing sensitive or confidential information. In organizations with data protection policies, source code accounts for nearly half (46%) of all DLP violations, followed by regulated data, driven by industry regulations or compliance requirements, at 35%, and intellectual property at 15%. Regulated data was a top violation area before genAI, reinforcing the limits of relying on manual user training to prevent improper sharing of regulated data.


Posting sensitive data to genAI apps not only reflects the current priorities among organizations, but also shows the utility of various genAI apps. For example, GitHub Copilot’s popularity, coupled with Microsoft’s market share as a software development platform, may drive higher use of genAI for code generation in the future.
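
Below is a minimal sketch of the kind of check a DLP policy applies to an outbound prompt, flagging the two most common violation types above (source code and regulated data). Real DLP engines use trained classifiers, exact-match hashes, and many more identifiers; these two regexes are illustrative assumptions only.

```python
import re

# Minimal sketch of a DLP-style check on an outbound genAI prompt.
# The two patterns are illustrative stand-ins, not a real rule set.

PATTERNS = {
    "source_code": re.compile(r"(\bdef |\bclass |#include\s*<|\bimport java\.)"),
    "regulated_data_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # stand-in example
}

def dlp_violations(prompt: str) -> list[str]:
    """Return the names of the DLP rules that the prompt text matches."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Please refactor this:\ndef lookup(ssn='078-05-1120'): ..."
print(dlp_violations(prompt))  # ['source_code', 'regulated_data_ssn']
```

A match would then trigger a block, an alert, or a coaching dialog, depending on policy.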

 

Coaching

While obviously malicious applications or activities are served well by controls that block and alert, data risk is often more of a gray area. Coaching controls can be a highly effective technique to deal with gray areas, especially in early fast-moving technology cycles like genAI. Furthermore, organizations can use coaching to inform and refine security controls without blocking productivity with false positives and slow approval processes.

Coaching controls present a warning dialog to the user while they interact with genAI apps, allowing them to cancel or proceed with the action, much like the safe browsing features built into browsers. The advantage is a friendlier experience: users are not blocked from working but are informed and enabled to improve their behavior.

For organizations with policies to control genAI application usage, 31% used coaching dialogs as of June 2024, compared to 20% in June 2023, an increase in adoption of over 50%.

This tracks with organizations’ growing sophistication in applying coaching controls from other security domains to newer ones such as genAI app risk. While growth has flattened, coaching controls are relatively new compared to outright blocks and alerts based on DLP or application. We expect adoption to keep growing as more organizations learn how to use coaching to manage the grayer risks associated with data.

When we break down actual user responses to coaching dialog alerts, we see a measure of efficacy. Of all users who received coaching dialog alerts for genAI apps, 57% chose to stop the action they were performing, reducing risk by avoiding sending sensitive data in genAI app prompts or by avoiding unknown or new genAI apps. A 57% stop rate is high enough to bolster the case that coaching can be an effective complement to explicit application blocks and DLP policies. Furthermore, coaching enables feedback: when users decide to proceed, most organizations have policies in place that require them to justify their actions during the coaching interaction.

This is not to suggest that user decision-making should be the basis for security policy. Rather, it indicates that for organizations using coaching, approximately half of the genAI app usage that is not outright blocked may be further reduced by user decisions. Just as important, user decisions in response to coaching dialogs can and should inform security policy review and adjustment. While user decisions can be faulty and risky, the two outcomes (stop or proceed) form the basis for further review: applications a user declined to use after a coaching dialog should be analyzed for an outright block list, while applications a user chose to use should be reviewed for an allowed corporate standards list, or perhaps blocked if a better or more acceptable app exists. User responses to coaching prompts can also be used to refine more nuanced policies, such as DLP policies, to be more targeted in their application.
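
The sketch below illustrates this feedback loop under stated assumptions: the coaching dialog records each stop or proceed decision (with justification), producing review queues for the block list and the allow list. All names and the logging shape are hypothetical.

```python
from dataclasses import dataclass, field

# Minimal sketch of the coaching feedback loop: record each stop/proceed
# decision so policy owners can review block-list and allow-list candidates.

@dataclass
class CoachingLog:
    stopped: list = field(default_factory=list)    # candidates for block-list review
    proceeded: list = field(default_factory=list)  # candidates for allow-list review

def coach(app: str, choice: str, justification: str, log: CoachingLog) -> bool:
    """Record the user's decision; return True if the action may proceed."""
    if choice == "stop":
        log.stopped.append(app)
        return False
    log.proceeded.append((app, justification))     # justification required to proceed
    return True

log = CoachingLog()
coach("UnvettedGenAIApp", "stop", "", log)
coach("Gamma", "proceed", "building a customer presentation", log)
print(log.stopped)    # review for outright block
print(log.proceeded)  # review for the corporate allow list
```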

 

Behavioral analysis

We are seeing early signs of behavioral detection engines flagging suspicious user behavior around data movement. Suspicious data movement typically comprises multiple indicators of suspicious or unexpected behavior relative to the user’s or organization’s normal activity baseline. The indicators might include anomalous download or upload activity, new or suspicious data sources or targets such as new genAI apps, and other suspicious behavior such as use of an unexpected or obfuscated IP address or user agent.
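
As a minimal sketch of how such indicators might combine, the function below scores one action against a per-user baseline using the three indicators just described. The weights and alert threshold are illustrative assumptions; production engines learn baselines and weights rather than hard-coding them.

```python
# Minimal sketch: score one user action against a per-user baseline by
# combining the indicators described above. Weights/threshold are illustrative.

ALERT_THRESHOLD = 0.7

def anomaly_score(upload_mb: float, baseline_mb: float,
                  app_seen_before: bool, ip_expected: bool) -> float:
    score = 0.0
    if baseline_mb > 0 and upload_mb > 3 * baseline_mb:
        score += 0.5   # upload volume far above this user's normal
    if not app_seen_before:
        score += 0.3   # new/unknown genAI app as the data target
    if not ip_expected:
        score += 0.2   # unexpected or obfuscated IP address / user agent
    return score

s = anomaly_score(upload_mb=250, baseline_mb=20, app_seen_before=False, ip_expected=True)
print(s, s >= ALERT_THRESHOLD)  # 0.8 True -> raise a suspicious-data-movement alert
```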

One example of an advanced behavioral alert for suspicious data movement involves data taken from an organization-managed application and uploaded to an unapproved, personal genAI app.

Similar to coaching, the list of apps appearing in these alerts can be used to prioritize control efforts for genAI application usage. For example, two commonly used apps may overlap in functionality, and that alone might justify efforts to reduce the number of apps in use.

The use of behavioral alerts shows organizations’ early awareness and adoption of tools to find harder-to-detect threats, which have traditionally included compromised credentials, insider threats, lateral movement, and data exfiltration by malicious actors.

When we look closer at the sensitive data movements, we find that the top applications where the sensitive data originated reflect the popularity of corporate cloud apps, with OneDrive (34%) and Google Drive (29%) at the top, followed by SharePoint (21%), Outlook (8%), and Gmail (6%).

The top three apps are all cloud storage and collaboration apps and were the source of the sensitive data 84% of the time, while the major cloud email apps were the source 14% of the time. By knowing which specific applications are most frequently involved in sensitive data movements, security teams can adjust controls appropriately, for example by placing additional DLP controls around file-sharing applications. Organizations should prioritize security risk assessment and control implementation around data movement between apps, especially from managed apps to unmanaged genAI apps, as it is both increasingly common and a potential source of large data loss.

 

Guidance

Based on the trends of the past twelve months, we recommend reviewing current security operations and risk assessments with a specific focus on the changes that AI and genAI require.

The framework for understanding and managing genAI risk involves reviewing security operations in five major areas with specific customization to genAI-specific risks:

  • Analysis, particularly risk assessment, of the current state of genAI app usage and user behavior.
  • Planning for risk analysis and implementation of controls.
  • Prevention controls including allowed genAI applications and DLP policies.
  • Detection controls, such as coaching and behavioral analysis.
  • Remediation/mitigation controls, including blocking new, inappropriate applications.

The first step in analysis is to inventory the genAI apps being used, forming a baseline from which to plan. Prevention and detection controls can then be implemented by restricting the allowed apps (acceptable application lists) and restricting the data sent to those apps (DLP controls).

 

Create app usage baseline

By utilizing logs from secure web gateways or proxies and local DNS resolution, gather key metrics on:

  • App Popularity: which apps, and how many times per week/month
  • Number of Users: with a minimum volume per week/month
  • User Volume: how much use (i.e., user transactions) per day/week/month, and how much data (prompt/response size/volume)

Simple spreadsheet analysis, or a short script like the sketch below, can generate graphs similar to those shown in this report. With some simple automation, the baseline can then be reviewed weekly or monthly to identify changes over time and outliers at any point in time.
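
A minimal sketch of such a script follows, assuming a CSV export with user, app, and bytes_sent columns; adjust these names to whatever your SWG or proxy actually emits.

```python
import csv
from collections import Counter, defaultdict

# Minimal sketch: compute the three baseline metrics from an exported proxy
# log. The CSV column names (user, app, bytes_sent) are assumptions.

def baseline(path: str):
    transactions = Counter()       # app popularity: transactions per app
    users = defaultdict(set)       # distinct users per app
    bytes_sent = Counter()         # rough proxy for prompt volume per app
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app"]
            transactions[app] += 1
            users[app].add(row["user"])
            bytes_sent[app] += int(row.get("bytes_sent") or 0)
    return transactions, {a: len(u) for a, u in users.items()}, bytes_sent

# With a real export:
# transactions, user_counts, volume = baseline("proxy_log.csv")
# print(transactions.most_common(10))   # top 10 genAI apps by activity
```

Re-running this weekly and diffing the top-10 list is one simple way to surface the new apps and outliers described above.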

 

Reduce unknowns

From the application and usage baseline, the next step is to remove unknown or suspicious apps and enforce an acceptable app list, reducing both attack surface and attack vectors. Specific measures include identifying an acceptable app list based on user work requirements, blocking unknown or suspicious apps, and training users to self-monitor via coaching techniques within your secure web gateways and proxies.

 

Reduce sensitive data loss

Detecting advanced threats is non-trivial, but since insider threats and compromised credentials remain common yet difficult challenges, planning should extend to genAI threats as well. Although behavioral detection alerts are not typically part of genAI vendor offerings, they are part of existing cybersecurity platforms and solutions. These behavioral detection capabilities should be evaluated with a focus on genAI-specific threats; the number and type of specific alerts are the starting point.

Product evaluations should also include risk scoring and tracking in addition to alert detection. Behavioral detection is generally more effective with granular risk tracking of users and apps over time, so that root cause analysis can be done at the user and app level. For example, certain apps or users may cause more risk, and targeted remediation can help reduce it.

 

Refine risk framework

Risk frameworks need to be reviewed and adapted or tailored specifically to AI and genAI. Frameworks such as the NIST AI Risk Management Framework[4] can inform internal efforts.

Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (U.S. Department of the Treasury, March 2024)[5] is an excellent example of managing AI risk within the financial services sector, and it can be adapted to other sectors and organizations.

A good risk management framework will support future planning for up-and-coming threat areas, such as the liability and legal risk from the use of information from genAI apps. This can be captured in an acceptable use policy (AUP) along with the guidelines for data submitted to genAI apps. This policy could be implemented as a combination of manual controls (training) and technical controls in a SWG/proxy.

Much of the improvement in long-term risk management will come from a consistent and regular iteration of gap analysis, prioritization, planning, and execution.

 

Outlook

Beyond specific technical measures to address genAI risk, organizations need to devote some time to tracking major trends in order to get ahead of probable changes in the next 12 months. We recommend tracking trends in five major areas: best practices, the big three enterprise vendors, genAI investment, infrastructure company growth and investment, and chat/LLM service provider adoption.

 

Best practices

Best practices and compliance frameworks tend to be lagging indicators, in the sense that they are often created after a product or technology has widespread adoption and a large, established user base. Tracking these best practices is nonetheless worthwhile for prioritizing common threat areas, performing gap analysis, and implementing concrete technical controls and risk management initiatives.

Two useful guideline documents are the NIST AI Risk Management Framework[4] and Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (U.S. Department of the Treasury, March 2024)[5].
In addition, track the ISAC for your industry, as best practices and shared indicators for AI-related risks will likely be discussed in those forums.

 

Big three enterprise impact

Although there is a lot of discussion about startups, including their funding, valuations, revenues, and product releases, ultimately enterprises and their users will be most affected by what the major application, platform, and device vendors do.

The big three reach the highest number (100M to 1B+) of both consumer and enterprise users with their aggregate apps/platforms/devices:

  • Microsoft: Apps/AD/Azure/Surface/AI PCs
  • Google: Apps/Workspace/GCP/Android
  • Apple: iPhone/iPad/Mac

They have made, or are likely to make, genAI free and built into existing services:

  • Microsoft: Copilot as part of Windows, GitHub, and Visual Studio
  • Google: Gemini as part of Google Search
  • Apple: Apple Intelligence and ChatGPT as part of iOS/macOS

Organizations should track what Microsoft, Google, and Apple do in genAI. By focusing on the use cases and functionality their users adopt, organizations will be better prepared for both the expected and the probable over the next 12 months. For example, as engineering departments standardize on GitHub Copilot for coding assistance, security policies should proactively implement app, DLP, and advanced detection controls around Copilot and similar apps.

 

More investment, innovation, and threats

Venture capital does not sit on the sidelines (or in the bank) very long, and the $10B+ from Microsoft will assuredly be spent by OpenAI sooner rather than later. Companies with the most cash will ultimately drive the most R&D and product releases. These startups and their genAI services should be prioritized, as they represent the most probable source of risk, simply by virtue of their focus on rapid growth of market share and user count.

Some of the investment has flowed into companies offering new genAI services based on domain-specific datasets, whether by profession (e.g., legal or medical information), by language (e.g., Japanese-to-English professional text translation), or by other areas of expertise. In the short term, this creates more challenges for security teams due to the sheer number of apps to control. Ultimately, though, it will help: greater specificity in the purpose of genAI apps makes app-level controls, and the associated per-app DLP controls, more effective and targeted. It is much more challenging to manage risk with a do-it-all general genAI app such as ChatGPT, since the use cases and datasets could be almost anything and policies are likely to be too general.

Other companies, often in an effort to compete, will release fast, release often, and “test in production.” Because of competitive pressures or market dynamics, large companies, not just startups, can also prioritize feature development and release over testing:

“It is important that we don’t hold back features just because there might be occasional problems, but more as we find the problems, we address them,” Liz Reid, who was promoted to the role of vice president of search at Google in March, said in a company-wide meeting, according to audio obtained by CNBC.[3]

This is not a judgment on the philosophy of moving fast to market; rather, it is an observation of how vendors think early in an innovation cycle. By watching the product and functionality releases of genAI vendors, one can anticipate risk areas before they become obvious. At a minimum, this should set expectations for rapid release cycles, trigger internal triage and assessment of new functionality, and raise questions about organizational control over the “version” of a genAI app in use.

 

Infrastructure Companies

Many enterprises focus on genAI apps and services because that is what their users touch. However, the underlying infrastructure companies, often public, and the hardware providers should be tracked to identify future macro trends, especially investment. Much like the router business during the internet boom of the 1990s, infrastructure companies and their financial performance will be leading indicators of application and software investment, and will identify coming threat areas to analyze and plan for.

For example, by looking at NVIDIA (NVDA) investment (e.g., data centers, SoCs, PCs) and its revenue and customer-base expansion, one can see trends in the genAI application and services markets.

Tracking public market investment from mutual funds and ETFs, as well as metrics such as market cap vs. revenues/earnings (price/sales ratios) and the divergence between infrastructure revenues and startup/software revenues, can also reveal trends in R&D. R&D typically leads to product releases, which lead to usage and risks.

 

Chat/LLM Services

Clearly, if an organization deploys its own chat service or search facility using genAI technology, there is a risk of threats not discussed in this report, namely prompt attacks that bypass guardrails and produce hallucinations or biased results, and other data injection attacks such as data poisoning. Organizations should spend time on these risks as they plan such projects.

As more organizations implement search or chat services on their websites that use LLMs or other genAI apps, attacks against all genAI services will increase, because attackers generally follow increasing usage and the associated money.

These threat trends may then prompt adjustment or review of an organization’s own genAI services, such as a sales chat service or a support knowledge base search facility. Organizations that are, or plan to be, service providers should periodically review their risk profiles against this broader macro trend to see whether their risk posture has changed and whether that affects their control framework for protection, detection, or mitigation of their exposed genAI services.

 

A CISO Perspective

While genAI holds immense promise for innovation and efficiency, it also introduces substantial risks that organizations must address proactively. Governance, technology, processes, and people should all be applied, leveraging a framework that provides a strong backstop for your initiatives. This research held some surprises, such as regulated information being shared with genAI services. While many organizations are adopting genAI solutions, the use of genAI as a shadow IT service, combined with regulated information and sensitive data such as secrets and passwords, is an exposure no organization wants to find itself in.

The only workable approach is a programmatic plan of action that addresses both tactical and strategic use and adoption. The research showed services being adopted, followed by fast responses in valuation and funding. It is good to remember that Newton’s second law also applies to accelerating adoption cycles: your organization can quickly find itself managing a threat that changed overnight. While the landscape will keep changing, the fast-paced adoption trends can still serve as a “forecast,” using the “genAI weather ahead” to drive conversations with industry peers and as a lens for reading other threat reports and research.

 

Conclusion

The proliferation of artificial intelligence (AI) technologies, particularly those driven by generative AI (genAI), has significantly impacted risk management across enterprises. GenAI, while promising innovation and efficiency gains, also introduces substantial risks that organizations must address proactively.

GenAI, with its ability to autonomously generate content, poses unique challenges. Enterprises must recognize that genAI-generated outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. As such, it becomes crucial to assess and mitigate these risks comprehensively.

The focus should be on data risk from genAI app usage, as data is at the core of genAI systems. Here are some specific tactical steps to address genAI risk:

  • Know your current state: Begin by assessing your existing AI infrastructure, data pipelines, and genAI applications. Identify vulnerabilities and gaps in security controls.
  • Implement core controls: Establish fundamental security measures, such as access controls, authentication mechanisms, and encryption. These foundational controls form the basis for a secure AI environment.
  • Plan for advanced controls: Beyond the basics, develop a roadmap for advanced security controls. Consider threat modeling, anomaly detection, and continuous monitoring.
  • Measure, start, revise, iterate: Regularly evaluate the effectiveness of your security measures. Adapt and refine them based on real-world experiences and emerging threats.

When engaging with AI/genAI vendors, organizations should inquire about their data protection measures, encryption protocols, and compliance with privacy regulations.
Organizations must also consider broader risk management issues, including legal, ethical, and liability implications. Collaborate with existing internal risk management teams, including legal, compliance, and risk stakeholders, to review your risk management framework and adapt it to AI and genAI risk areas. Leverage the NIST AI Risk Management Framework[4] to guide your organization’s own efforts.

Finally, stay informed about macro trends in AI and cybersecurity. Monitor developments in AI ethics, regulatory changes, and adversarial attacks. By anticipating emerging pain points and threat areas, enterprises can proactively adjust their risk mitigation strategies.
In summary, enterprises must navigate the evolving landscape of AI-related risk by combining technical expertise, strategic planning, and vigilance. GenAI, while transformative, demands a robust risk management approach to safeguard data, reputation, and business continuity.

 

References

[1] The most promising artificial intelligence startups of 2024, CB Insights. https://www.cbinsights.com/research/report/artificial-intelligence-top-startups-2024/
[2] ChatGPT — Release Notes, OpenAI. https://help.openai.com/en/articles/6825453-chatgpt-release-notes
[3] Google search head says ‘we won’t always find’ mistakes with AI products and have to take risks rolling them, CNBC. https://www.cnbc.com/2024/06/13/google-wont-always-find-mistakes-in-ai-search-vp-reid-tells-staff.html
[4] NIST AI Risk Management Framework, NIST. https://www.nist.gov/itl/ai-risk-management-framework
[5] Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, U.S. Department of the Treasury, March 2024. https://home.treasury.gov/system/files/136/Managing-Artificial-Intelligence-Specific-Cybersecurity-Risks-In-The-Financial-Services-Sector.pdf

 
