“Where’s your app? Where’s your data?”
For a long time, if you needed to know where your applications or data were, the answer was clear: on-premises or in a branch. Almost regardless of organization size, infrastructures were contained and visible within a defined boundary: a data center, a network, a branch, a user. Even if a few users occasionally connected by VPN while traveling or working from home, it didn't meaningfully impact network performance or introduce undue risk. Life was pretty good.
Then the cloud happened, and the answer to "Where's your app? Where's your data?" became hazier. An application might still be in the data center, but it might also be in the cloud, delivered as software-as-a-service (SaaS) or running on infrastructure-as-a-service (IaaS). Still, the mystery didn't get truly scary for networking teams until about a year ago.
When COVID-19 hit in March 2020, virtually every user left the office overnight to work from home. With the majority of users no longer inside an on-premises network boundary, the question of where apps and data reside suddenly became even harder to answer.
The COVID user exodus was like a bomb going off—wherever each of those users landed (like shrapnel) essentially became an edge of the new network perimeter. And networking teams immediately had to solve a whole new world of problems—from connectivity, to performance, to security—within what we might call their new Network Bermuda Triangle of Uncertainty: data center, cloud, and user. A lot seems to go missing in that triangle!
Triage in the triangle
Stop me if you’ve heard this joke (or one like it) before:
A user, a developer, and a network engineer walk into a bar and someone yells, "Hey, things are slow!" And the network engineer is left holding the check.
The phrase "It's slow" is the bane of every networking person. People have very little patience when it comes to technology: if a web page takes more than a few seconds to load, they move on. Part of the divide comes from differing expectations of what counts as slow vs. fast. But in the Network Bermuda Triangle that COVID created, the problem is worse than user subjectivity and the perception of speed. The network team can't really know what a particular user's performance is like, because it now depends on each individual user's ISP.
How do you quantify network performance issues when you have geography to contend with, when you have load balancers to contend with, when every redundant path exponentially increases the footprint of the network—never mind the internet as a backbone and the great unknown of SaaS providers? And that doesn’t even account for individual remote worker circumstances. A Wi-Fi router using 2.4GHz spectrum gets placed too close to the microwave at home. Someone’s teenager unknowingly hooks an Xbox up to the corporate network for a higher-speed connection. As a network person, what are you going to do—put a probe in every person’s home? Even a diehard Wireshark guy like me knows that is an impossible ask.
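To see why, consider what even a minimal client-side probe would have to do, and remember that this covers one user, one app, one moment in time. This is a sketch, not a product; the endpoints below are hypothetical placeholders:

```python
import time
import urllib.request

# Hypothetical endpoints a remote user actually depends on:
# one on-prem app behind the VPN, one SaaS app on the open internet.
ENDPOINTS = {
    "on-prem-app": "https://intranet.example.com/health",
    "saas-app": "https://status.example-saas.com/",
}

def measure_once(name: str, url: str) -> None:
    """Time a single HTTP round trip to one endpoint."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            print(f"{name}: HTTP {resp.status} in {elapsed_ms:.0f} ms")
    except Exception as exc:  # DNS failure, timeout, TLS error...
        print(f"{name}: unreachable ({exc})")

for name, url in ENDPOINTS.items():
    measure_once(name, url)
```

Even this toy measurement says nothing about the user's Wi-Fi, their ISP's peering, or the path the packets actually took, which is exactly the point.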
The Network Bermuda Triangle isn’t just about the unknown, but also the uncontrollable. Without a defined boundary, the network becomes amorphous—it can spread everywhere. And in this world, security becomes top of mind. Imagine a giant colander where every hole represents a possible route for the exfiltration of data. Now multiply that by (at least) 100!
From chaos to secure connectivity
In any crisis, survival is the first objective. So, in the spring of 2020, networking teams in triage mode turned to the tool they had at hand to manage the mass decentralization of their workforces. VPN would be their first line of defense for secure connections that could keep businesses running.
But a VPN's job is like a vacuum cleaner: it sucks everything back to the data center and runs it through the on-premises security stack. That usually means all the firewalls, proxies, intrusion prevention systems (IPS), intrusion detection systems (IDS), and other solutions that filter network traffic for threats. Unfortunately, VPN wasn't designed for this kind of scale. It was a great solution for serving mobile users, or when inclement weather forced people to work from home: very much an exception rather than the rule. Backhauling all traffic through the data center doesn't work when you have 10,000 or 200,000 endpoints. It creates huge congestion at the VPN concentrator, and security becomes an impossible bottleneck. And when your company selects a cloud security vendor, you surely don't want to recreate this nightmare of traffic backhauling and bottlenecking inside the cloud. Unfortunately, as discussed in another recent blog, "Hairpinning: The Dirty Little Secret of Most Cloud Security Vendors," this practice is commonplace among many vendors.
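Some rough arithmetic shows why the backhaul model collapses at remote-work scale. The endpoint counts are the ones above; the per-user bandwidth figure is an assumption, roughly one HD video call:

```python
# Back-of-envelope: aggregate demand at a VPN concentrator when every
# remote user is backhauled. The 3 Mbps per-user figure is an assumption
# (roughly a single HD video call); adjust to taste.
PER_USER_MBPS = 3.0

for endpoints in (10_000, 200_000):
    demand_gbps = endpoints * PER_USER_MBPS / 1000
    print(f"{endpoints:>7,} endpoints -> ~{demand_gbps:,.0f} Gbps of backhauled traffic")

# Output:
#  10,000 endpoints -> ~30 Gbps
# 200,000 endpoints -> ~600 Gbps, all funneled through one security stack
```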
From a security standpoint, VPN use has also been a longstanding battle. Security teams want everyone on the VPN so they can see everything users are doing and route them through the central security stack. But as COVID-19 hit and employees went remote, teams quickly realized that the high volumes of Zoom and WebEx traffic in particular, sometimes passing through multiple security stacks, made the VPN almost unusable. The network was completely congested at key entry and exit points. It didn't take long for companies, even big financial organizations, to decide they needed a compromise that could relieve the congestion.
Enter the split tunnel
“Split tunneling” was the compromise companies made. VPN would still be used for on-premises business access, but Zoom traffic would go out to the public internet to relieve VPN congestion. Out of necessity, though, they threw a bit of caution to the wind, because the decision to split tunnel instantly did two things (a sketch of the routing logic follows the list below):
- It bypassed the protection of the security stack, potentially exposing some parts of the organization to outside cyber threats or data leakage.
- It also opened a Pandora's box: using split tunneling for other applications. Once Zoom was approved for direct internet connection, every head of business was probably asking: What about Office 365?
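In code terms, the split-tunnel compromise boils down to a per-flow routing decision like the sketch below. The subnets and the approved app list are illustrative values, not a recommendation:

```python
import ipaddress

# Illustrative policy: corporate subnets ride the VPN; named SaaS apps
# go straight to the internet. All values here are example placeholders.
TUNNEL_SUBNETS = [ipaddress.ip_network("10.0.0.0/8"),
                  ipaddress.ip_network("172.16.0.0/12")]
DIRECT_APPS = {"zoom.us", "webex.com"}  # approved for direct internet

def route_for(dest_host: str, dest_ip: str) -> str:
    """Decide whether a flow goes through the VPN or direct to internet."""
    if any(dest_host.endswith(app) for app in DIRECT_APPS):
        return "direct"  # bypasses the on-prem security stack entirely
    if any(ipaddress.ip_address(dest_ip) in net for net in TUNNEL_SUBNETS):
        return "vpn"     # backhauled through the data center
    return "vpn"         # default: when in doubt, tunnel it

print(route_for("us04web.zoom.us", "203.0.113.10"))   # -> direct
print(route_for("intranet.corp.example", "10.1.2.3")) # -> vpn
```

Every name added to DIRECT_APPS widens the hole in the security stack, which is precisely the Pandora's box the second bullet describes.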
Let’s say you’re using Microsoft OneDrive—go back to that fundamental question: Where’s your data? Out there, somewhere. It’s not in the data center or within the boundary of the security stack. Where’s your user? Out there, also.
So why are you having them come in just to go right back out again, especially when we all agree that the transport is secure? The encrypted channels that TLS opens between endpoints are well proven: the military uses TLS, the Pentagon uses it. So if the pipe between the data and the user is secure, why worry about letting users go direct-to-internet for Office 365? It's only a problem if the data itself is infected to begin with.
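To make "the transport is secure" a little more concrete, here is a minimal check, using only Python's standard library, that a direct connection really is negotiating certificate-validated, modern TLS (the hostname is just an example endpoint):

```python
import socket
import ssl

host = "outlook.office365.com"  # example Office 365 endpoint

# create_default_context() enables certificate validation and hostname
# checking by default; the connection fails if either check fails.
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())          # e.g. TLSv1.3
        print("cipher:  ", tls.cipher()[0])
        print("issuer:  ", dict(x[0] for x in tls.getpeercert()["issuer"]))
```

If the handshake succeeds, the pipe is encrypted and the server's identity has been verified. What TLS can't tell you is whether the file traveling inside that pipe is clean, and that's exactly where the next section picks up.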
Reassessing the situation with security in mind
The opportunities presented by makeshift networking systems haven’t slipped past savvy cybercriminals. As it turns out, at least one old enemy is resurfacing to take advantage of our current compromises: Visual Basic macro-based threats. Since the beginning of COVID, our research team here at Netskope has found that Microsoft Office documents infected with macro-based viruses and Trojans have increased by as much as 9x.
How do we assess these challenges and relate them to the modern security stack?
The first thing users often say is, "That's what I have antivirus for." But antivirus (AV) only protects the user's device. It does nothing for the data they're collaborating on in the cloud, even when a file is opened in a browser. If data isn't stored locally on the device itself (e.g., files in OneDrive or Google Drive), AV can't help. And again, our reports show that 61% of malware is cloud-delivered.
The next step in taming this Bermuda Triangle is making sure that our conduit, our secure pipeline, is also scanning for threats. We need to take that direct connection between the user and the data and use the in-line opportunity to scan for threats. You're going there anyway to access your data; you might as well let someone scrub it in the process. Think of this as built-in, network-based malware scanning. Clearly, this means the network plays an active and integral role in the overall security posture.
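None of this requires exotic tooling, and it helps to see how simple a first-pass check can be. Modern Office files (.docm, .xlsm, and friends) are ZIP archives, and VBA macros live in a part named vbaProject.bin, so a scanner sitting in line with that secure pipe can flag a macro-carrying document before it ever reaches a device. Here is a deliberately naive sketch; the quarantine step in the comment is hypothetical:

```python
import io
import zipfile

def contains_vba_macros(file_bytes: bytes) -> bool:
    """Naive check: OOXML files (.docx/.docm/.xlsx/...) are ZIP archives,
    and VBA macros live in a part named vbaProject.bin."""
    try:
        with zipfile.ZipFile(io.BytesIO(file_bytes)) as doc:
            return any(name.endswith("vbaProject.bin") for name in doc.namelist())
    except zipfile.BadZipFile:
        return False  # not an OOXML document; legacy formats need other checks

# e.g., called by an in-line proxy before handing the file to the user:
# if contains_vba_macros(downloaded_bytes): quarantine(downloaded_bytes)
```

A macro isn't automatically malicious, of course; a real in-line engine would go on to analyze the code itself. But even this toy check does something the device-bound AV above never gets the chance to do for cloud-hosted files.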
Offload with confidence
In 2020, networking teams offloaded Zoom traffic onto direct split tunnel connections out of sheer necessity for survival. But what if you could offload with confidence—knowing that you were getting the benefits of a direct connection without sacrificing security? Not only is it better for the user in terms of performance, but it’s also something that every network person intrinsically understands—the closer you are to where you’re going, the faster you’re going to get there.
Things are more stable than they were a year ago when all hell broke loose. So now, let’s put some security back in place without compromising the value of what’s been gained. We can’t bring back the old regime and have everyone backhauled through the security stack with a massive performance bottleneck degrading the user experience. Your users have tasted freedom and they’re never going back.
The Bermuda Triangle reality that every networking team has to face right now is that your well-defined network boundary exploded. The old world is gone, and it isn't coming back. So, in this new world, what are you going to do to protect your core? You need a solution that is close, fast, and secure, providing in-line protection between data and users wherever they may be. After all, the network is the glue that holds everything together. So why not use the network to reduce risk, reduce cost, and, most importantly, reduce friction? Friction-free security … who wouldn't want to sign up for that?