Combating Misinformation and Deep Fakes in Elections and Business: Q&A with David Fairman & Shamla Naidoo

Jul 26 2024

Technological advances in how we create and consume media have repeatedly transformed how election campaigns are fought: social media, TV, and radio were all revolutions in their time. There have always been concerns about the impact these new technologies would have on democracy: after the first televised presidential debate, in 1960, the Milwaukee Journal worried that “American Presidential campaigning will never be the same again.” Perhaps they were right…

It’s clear 2024 will be remembered as a big year for democracy, with elections in 64 countries as well as in the European Union, and the latest disruptive technology is AI. This year we are seeing increased use of generative AI to create deep fakes: videos that look and sound real but have in fact been artificially created, often to spread misinformation or disinformation. Deep fakes are powerful tools, and with AI technology rapidly evolving and access to it expanding, there are clear potential dangers not just for democratic decision making, but for consumers and businesses too.

With that in mind, I (a Brit) sat down with our in-house experts David Fairman (an Australian and our APAC CIO) and Shamla Naidoo (a South African American and CXO Advisor) to hear their thoughts on the potential security issues driven by technology during these global elections, what these technological developments could mean for people and enterprises, and how we can protect ourselves as individuals.

Emily: David, kick us off, what are deep fakes and how are they being deployed?

David: Deep fakes are images, video, and audio created by generative artificial intelligence that manipulate a person’s likeness to make them appear to do or say something they never did. Not only do they spread misinformation and lies, they are also increasingly easy and cheap to make. All you need is the right software and source material, such as publicly available images and videos of a person, to feed the AI tool.

In the case of politicians, this material is very easy to source, but increasingly business leaders, celebrities, and frankly anyone who uses social media could be a deep fake victim. Today, as we mostly consume media via short, quickly absorbed videos, deep fakes can be very convincing, especially to the untrained eye. As the technology continues to evolve, it will become even more difficult to distinguish which content is real and which isn’t.

Emily: Shamla, how are bad actors using them during elections?

Shamla: We are seeing deep fakes across many democracies in the run-up to elections, in particular during the often emotionally charged campaign periods, and the ultimate goal is to influence our voting decisions. All of our decisions, whether we are conscious of it or not, are influenced by what we see and hear on a daily basis. In an election campaign, a piece of deep fake content relating to a key topic can affect our decisions about who we vote for and, ultimately, who ends up in power.

Because deep fakes are cheap and fairly simple to make, they present a huge opportunity for bad actors who have an interest in a population voting (or not voting) for a particular candidate.

Emily: How are deep fakes being handled by the victims in the political world?

Shamla: Although they are increasingly common, deep fakes are being handled case by case in most democracies, viewed fundamentally as a reputation management issue for candidates, because the misinformation is often personal and scandalous in nature. This perhaps downplays the power of these campaigns, because each one is created with a wider goal in mind.

The more prolific deep fakes become (and in my eyes it is inevitable they will become more common), the more we will have to educate and put systems in place to ensure people and platforms double-check information and its sources.

Emily: What about in business, David? How are deep fakes posing a threat to businesses, and how are organisations handling the early incidents that are being reported?

David: Well, this is a very new threat for businesses, but a challenge that will grow in significance. I believe businesses will have to guard against the use of deep fakes to impersonate their senior executives. Social engineering attacks, in which criminals impersonate someone, often an authority or trusted figure, in order to trick people into transferring money, granting access, or handing over information, are already a danger. As AI technology develops, it will become even harder to tell whether a call from a senior executive is real or fake, and so the potential for bad actors to dupe unsuspecting victims will be much greater.

Emily: This all sounds pretty doom and gloom. But hopefully it isn’t. Is it? Please tell me it isn’t…

Shamla: The fact that it is so difficult to know what has been AI generated and what hasn’t leaves individual consumers with little to do beyond proceeding with caution!

But thankfully, it’s not all as bad as it seems. We’re starting to see social media platforms labelling content that has been artificially created, so we can tell whether content is real before we share or ‘like’ it.

That said, we can always aim to do more. I think we need to start seeing governments take serious next steps in implementing AI legislation, similar to the approach the European Union has taken with its Artificial Intelligence Act, and introduce legally binding requirements that mitigate AI risks. Only then will we be able to control the use of AI in this context and the risks associated with it.

Emily: David, and what about in businesses? 

David: As I mentioned, one particularly dangerous use of deep fakes is in social engineering attacks. If employees cannot be 100% sure whether a request is genuine, companies should establish a so-called call-back procedure that enables employees to double-check, through a separate trusted channel, whether the request really came from the trusted figure it appears to come from or whether there is a criminal behind it.

The advantage of this procedure is that it doesn’t rely on being able to spot that the audio or video is fake, only on a sense that the message is unusual or is asking you to do something out of the ordinary.
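To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how such a call-back rule might be expressed. The action names, directory structure, and the confirm callback are hypothetical and not part of any Netskope product; the only point it demonstrates is that verification always goes out of band, using contact details from an internal directory rather than anything supplied in the suspicious request itself.

```python
# Hypothetical sketch of a call-back procedure for high-risk requests.
# Field names, the directory, and the confirm callback are illustrative only.

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

def requires_callback(action: str) -> bool:
    """High-risk actions are never approved on a voice or video message alone."""
    return action in HIGH_RISK_ACTIONS

def verify_out_of_band(claimed_sender: str, directory: dict, confirm) -> bool:
    """Call back using contact details from the internal directory,
    never the number or link supplied in the suspicious request itself."""
    contact = directory.get(claimed_sender)
    if contact is None:
        return False  # unknown sender: escalate rather than act
    return confirm(contact["phone"])  # the actual out-of-band confirmation step

def handle_request(action: str, claimed_sender: str, directory: dict, confirm) -> str:
    if requires_callback(action) and not verify_out_of_band(claimed_sender, directory, confirm):
        return "held: could not verify the request out of band"
    return "approved"

# Example: a convincing "CFO" video message asks for an urgent wire transfer.
directory = {"cfo@example.com": {"phone": "+1-555-0100"}}
result = handle_request(
    action="wire_transfer",
    claimed_sender="cfo@example.com",
    directory=directory,
    confirm=lambda phone: False,  # simulated call-back: the real CFO says "that wasn't me"
)
print(result)  # held: could not verify the request out of band
```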

Emily: Ah, this is all incredibly useful! Thank you both for your time and for the valuable insight. 


While deep fakes are just the latest technological innovation to challenge election processes and fairness (and potentially impact businesses), they feel different, both in the speed at which they are evolving and improving and in the extent to which they put potentially significant influence in the hands of anonymous bad actors. With elections around the world underway, let’s make sure we double-check where our information comes from and whether we can trust it.

For more information on elections, disinformation, and security, check out our podcast with Shamla here, or wherever you listen to your podcasts.

Emily Wearmouth
Emily Wearmouth is a technology communicator who helps engineers, specialists and tech organisations to communicate more effectively.
