Co-authored by Neil Thacker and Nathan Smolenski
A framework and strategy review for managing network & security transformation is much needed. Every CIO, CISO, and CTO today will be assessing their ongoing costs to run and operate a secure network and security programme for 2021 and beyond. In parts 1 & 2 of this three-part series, I explained what numbers should feed these calculations and measurements, and how performance, flexibility, and scalability are all key to this transformation. We are now at a critical stage in deciding what our networks and security programmes will look like in the near future…and we only have one chance to get it right.
Driving top-line growth while improving the bottom line with operational cost efficiencies
Ask a board what their ultimate goal for digital transformation is and the answer will be improving top-line growth whilst applying operational cost efficiencies to maintain a healthy bottom line. Transformation does come with new costs, but as project teams become more experienced with digital transformation, so come the economic efficiencies.
This same approach applies to network and security transformation. We now have organisations that have followed the same design principles and have moved, or are moving, their security technologies and controls to the cloud. The skill sets built through this work are in high demand as more and more organisations realise the value of this transformation. This move also allows the organisation to simplify its budget projections and focus on expense management, reducing unpredictable CAPEX and moving to a predictable OPEX subscription-based model that supports operational cost efficiencies. More on this later. In summary, a win-win. Not only is this simpler to forecast, but as security becomes a services-based industry, it will support cost avoidance and allow for additional consolidation opportunities.
Gone are the days of routing traffic through the public internet and a myriad of appliances, each attempting to inspect and decode traffic, while the team performs regular reviews of every appliance to assess ROI/TCO and asks the obvious question: “Do we still need this, and is there a better option?” Today, all organisations have the opportunity to use cloud-based microservices when the need arises, without expensive design and architectural reviews. As an analogy, it’s similar to booking an international trip that routes you through a dozen airlines and airports to reach your destination. Every connection requires another security check where you and your baggage need to be scanned. Now consider paying a huge premium for this. Given a choice, everyone would instead choose a cost-effective direct option with the same or better security applied on demand. This is what network and security transformation should be: simple, fast, and secure, without unnecessary delays.
Flexibility outside the bounds of IT
As we transform our networks and security and move our security controls to the cloud, we must assess how we think about forecasting and budgeting. Securing a user (I much prefer to refer to users as employees) in our environment is an expense typically assessed for each budgetary year. If we have 20,000 employees, it’s obviously going to be more expensive and require more resources than securing 2,000 employees. The issue is that organisations cannot accurately predict what their employee count will look like three to five years ahead. The challenge is mergers and acquisitions (M&A), a change agent that most organisations will encounter and that will shake up any IT and security strategy. With M&A, predicting onboarding costs usually involves thinking about new hardware, or even replacement hardware, to scale to the organisation’s new requirements. These challenges can take months to plan for and apply, and will typically slow an organisation down at a critical time. However, as organisations embrace and use the cloud, we can systematically use its flexible benefits to scale when necessary without compromise. Adding another 5,000 employees to a cloud-based Next Gen Secure Web Gateway (NG SWG) is as simple as updating the licence. No new hardware, no shipping hardware to new locations, no racking and stacking, no procuring cabinet space. This flexibility outside the legacy bounds of IT should not be underestimated.
As we have now overcome some of the more difficult challenges of the past and simplified onboarding, we need to think about other opportunities to consolidate. I think we can all agree that the average organisation has acquired many technologies and solutions over the years that are ready for replacement in a cloud-first world. The first statement I hear from most CISOs when discussing security transformation is, “I need to consolidate.” Consolidating technologies is not an easy task, but it can be made easier by using concepts such as Secure Access Service Edge (SASE) to identify the key capabilities required to support the organisation’s future ecosystem. A staple of most organisations’ future architecture is a focus on the following capabilities, ideally delivered on as few platforms as possible, with API integrations and a fast, performant global network providing access to business applications and infrastructure:
- Identity & Zero Trust Network Access (IAM & ZTNA)
- Web & Cloud Security (SWG & CASB)
- Data Protection (Data Classification & DLP)
- Threat Protection (Anti-Malware, Sandboxing, Browser Isolation)
- Endpoint Protection (NG-AV, EDR)
- Automation & Orchestration (SIEM & SOAR)
As we assess cost reduction and this new model, we must continue to ensure we see value, benefit, and overall risk reduction for our organisations whilst providing the best connectivity and flexibility to our workforce. After all, a security budget should always be appropriate to the risk appetite of the organisation.
Business value, benefit, and risk reduction
As we look at value, benefits, and ultimately risk reduction opportunities and better control efficacy, it is often difficult to arrive at a realistic value at risk. There are various flavours of assessing such risk postures, and there are certainly many debates around this topic. Bruce Schneier wrote a great article on this same topic for CSO back in September 2008 that has aged relatively well. As it pertains to the traditional approach of putting a dollar value on risk, he posits, “The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles, like time and money, and intangibles, like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk.”
This “probability x impact” approach is the method we have all tried to implement in one way, shape, or form to get some semblance of a financial indicator for the cost of the risks we have identified and are attempting to manage. The problem, as Bruce also points out, is that the resulting outputs from these calculations essentially work against us when talking to business leaders, and they are clouded by the lack of good data we have as inputs.
For example, if the calculated cost of a given risk is $40,000 annually and the total cost of ownership of the people, process, and technology intended to better manage or reduce that risk is $65,000 annually, imagine what the CFO is going to want to know. How accurate is our data on the factors that go into measuring impact (actual loss, reputation, etc.), and how accurate is our data for determining the actual probability? And even if we all agree on those numbers, how the CFO interprets them, and ultimately chooses to let you invest, can obviously be influenced by these and many other factors. In speaking to many in the industry, as well as from our own experiences as practitioners, the challenge is often one of bridging a gap in understanding. If you do not understand your organisation’s true risk tolerance levels and perspectives, you could really be fighting an uphill battle.
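To make that CFO conversation concrete, here is a minimal sketch of the ALE calculation Schneier describes, using the $40,000 and $65,000 figures above. The split into single-loss impact and annual rate of occurrence is an illustrative assumption (only their $40,000 product appears in the example), and the function name is ours, not part of any standard library:

```python
# A minimal sketch of annualized loss expectancy (ALE).
# ALE = single loss expectancy (impact) x annual rate of occurrence (probability).

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected annual loss from one risk, per the classic ALE formula."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Illustrative split: a $200,000 incident expected once every five years.
ale = annualized_loss_expectancy(200_000, 0.2)  # = $40,000/year
control_tco = 65_000                            # people + process + technology

print(f"ALE:         ${ale:,.0f}/year")
print(f"Control TCO: ${control_tco:,.0f}/year")
print(f"Shortfall:   ${control_tco - ale:,.0f}/year more than the expected loss")
```

On these inputs the control costs $25,000 a year more than the loss it is meant to mitigate, and the quality of the two input estimates is exactly what the CFO will probe.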
When making considerations for risk management, it is critical to determine how effectively risks are managed. From a cybersecurity perspective, we often see organisations align policies and controls to standardised frameworks that are audited by third parties annually to determine maturity, alignment, and overall progress. Security teams are then often forced to reactively prioritise many of their efforts post-assessment to address the findings.
From a risk management perspective, where an organisation lands in terms of risk then becomes purely an output of the effort put into responding to the assessment findings. Value-at-risk frameworks like FAIR (Factor Analysis of Information Risk) call this an implicit method of managing risk, due to its reactive nature and the lack of a consistent feedback loop. The result is often less control over the risk management outcome from a loss-exposure perspective, as the probability and impact elements are not natively included in these frameworks. A proactive risk management posture, in contrast, has a very explicit risk target that is constantly managed as a result of feedback and inputs into the risk management process.
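To illustrate the contrast, here is a hedged sketch of an explicit, target-driven loop set against the audit-driven implicit cycle. The risk target, control names, and exposure-reduction figures are entirely hypothetical and do not come from the FAIR standard itself:

```python
# Hypothetical contrast between implicit (audit-driven) and explicit
# (target-driven) risk management. Names and numbers are illustrative only.

RISK_TARGET = 50_000  # explicit annual loss-exposure target, in dollars

def implicit_cycle(audit_findings: list) -> list:
    # Reactive: next year's priorities are whatever the last audit surfaced.
    return sorted(audit_findings)

def explicit_cycle(current_exposure: float, proposed_controls: dict) -> list:
    # Proactive: apply the controls that most reduce measured exposure
    # until exposure falls below the explicit target, then stop investing.
    plan = []
    for name, reduction in sorted(proposed_controls.items(),
                                  key=lambda kv: -kv[1]):
        if current_exposure <= RISK_TARGET:
            break
        plan.append(name)
        current_exposure -= reduction
    return plan

# Example: $90,000 of measured exposure against a $50,000 target.
print(explicit_cycle(90_000, {"dlp_policy": 25_000, "mfa_rollout": 30_000}))
# -> ['mfa_rollout', 'dlp_policy']  (exposure ends at $35,000, under target)
```

The point is not the arithmetic but the feedback loop: exposure is measured, compared against an explicit target, and controls are adjusted continuously rather than once a year after the audit.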
As we assess operational cost efficiencies, flexibility, cost reduction, business value, and better risk management as a practice, we aim to work within a model that continuously informs and supports proactive adjustment of our controls to address an ever-changing cost, business benefit, and risk landscape.