Shamla Naidoo:
This job is really hard and it continues to get harder. But at this point, there's very little in the way of mental health support for security leaders and for security teams. So I really think that CEOs are going to start to double down on not just innovating for the business, but also helping CISOs create innovation for security, giving them the tools, the technology, and the solutions to help them do their jobs better, and supporting that with mental health and wellness support programs.
Producer:
Hello, and welcome to Security Visionaries hosted by Jason Clark, CSO at Netskope. You just heard from one of today's guests, Shamla Naidoo, Head of Cloud Strategy and Innovation at Netskope. In this episode, Shamla is also joined by Steve Riley, Field CTO at Netskope, Mike Anderson, Chief Digital and Information Officer at Netskope and last but certainly not least, David Fairman, APAC CSO at Netskope.
Producer:
As we welcome the New Year with open arms, security leaders around the world are continuing to try and stay five steps ahead of bad actors in the space. To kick off 2022, we brought together some of the sharpest leaders in the industry to share what predictions are top of mind on their risk radars. We hope you enjoy this round table discussion and from everyone at Netskope, we want to wish you a happy and healthy New Year.
Sponsor:
The Security Visionaries podcast is powered by the team at Netskope. Netskope is the SASE leader, offering everything you need to provide a fast, data-centric, and cloud-smart user experience at the speed of business today. Learn more at N-E-T-S-K-O-P-E.com.
Producer:
Without further ado, please enjoy episode seven of Security Visionaries with your host, Jason Clark.
Jason Clark:
Welcome to Security Visionaries. I am your host, Jason Clark, CSO at Netskope. Today I'm joined by some of the best experts in the industry, and we're going to be talking about predictions, always a big topic this time of year. But we're going to try to bring to light the ones we all need to be paying attention to for 2022 and beyond. First guest is Steve Riley. Great to have you here. How are you doing?
Steve Riley:
Thanks, Jason. How about yourself?
Jason Clark:
Doing super fantastic. And Dave Fairman, how are you?
David Fairman:
Hey, Jason, good to be here. Thanks for including me in your conversation this week. I'm doing well, mate. I'm doing really well. I'm looking forward to the Christmas and New Year break.
Jason Clark:
What time is it in Australia right now?
David Fairman:
2:00 AM. So I'm hoping my responses to this conversation will be eloquent considering the time.
Jason Clark:
Yeah. Thanks for staying up for us. It'll be awesome.
David Fairman:
Oh, good man.
Jason Clark:
And Shamla, how are you?
Shamla Naidoo:
Hey, Jason, thank you so much for including me in this fantastic conversation. I'm looking forward to it.
Jason Clark:
Awesome. And Mike?
Mike Anderson:
Hey, good morning. It's great to be here, looking forward to hearing some great predictions this morning on this podcast.
Jason Clark:
Well, perfect. Well, let's keep it lively. Feel free to bring anything up and comment on any of these as we go through, just to make it fun for the audience. Everybody here, as you'll see and can look up, is an amazing expert in the industry that I've known for a very long time. So the first thing I wanted to start off with is kind of a prediction, but it's also very obvious. I call it a bit of a softball, but I bring it up because I'm worried not everybody's thinking about it. And that is the return to work, meaning everybody was working from home and now your company says, "It's time to come back in the office three days a week or five days a week." We're already starting to see this. A significant number of people have either, A, already moved but didn't tell their employer, or B, are deciding, you know what? I like working from home and don't want to go back to the office. With that, we're going to see a lot of attrition and turnover, and that comes with insider threat. When somebody decides to change jobs, they see their work product as their own, and we're seeing over a 10X increase in downloads of information they've touched or worked on. It could be anything from somebody on the sales team downloading all their customer lists so they can take them to their next place. So it's something every security team should be thinking about, not just the external threats. Anybody have any thoughts on that one?
David Fairman:
No, I think that's a fair prediction. We're talking about this being the new era of resignation, with people leaving their organizations. I think we're probably going to see a bit more of a rise in that activity at the moment. I know we're certainly talking about it here in the region.
Jason Clark:
Yeah, I think the place that catches people a little blind is really the use of all the personal apps, the storage apps, et cetera. A lot of organizations aren't feeding that information, that traffic, into their existing insider threat processes. So that's the place I really recommend people look into. So Shamla, I wanted to start with you. You have a long shot prediction that technology-specific security vendors are going to redefine and rebrand themselves as SSE vendors. Can you unpack that prediction and share your thoughts?
Shamla Naidoo:
Yes, absolutely. If you look out there today, Jason, most of the cybersecurity vendors who provide products or services or tools are rebranding themselves as Security Service Edge vendors, and they're really pushing this idea of zero trust. So what you have is everyone who's doing things like securing or protecting files, protecting servers, protecting networks, acting as gateways, acting as data leak prevention tools. Everyone is branding themselves as a zero trust vendor securing the edge. And I think that is going to create an enormous, growing burden for the CISO, because now we've shifted what we actually do onto this very generic term. We're leaving it up to the consumer or the decision makers to determine whether or not these solutions address strategic gaps. Which gaps do they address? What are the pros and cons? We're also leaving it up to the consumer to decide which ones they need versus which ones they can do without. And I feel like that is unfair to the industry, because if everyone says they're a zero trust vendor and there's no strategic or industry definition of what's included in zero trust and where the edge is, that just makes the job of the CISO much, much harder. Because really, when you think about it, where is this edge that these SSE vendors are going to be addressing? It's everywhere. It's wherever we conduct transactions, it's wherever we conduct business. So the edge basically is everywhere. And we know from experience that not every provider, not every vendor, can actually address all of the issues in those environments. That's why I think that as companies rebrand themselves, it's just going to increase the burden on the CISO.
Jason Clark:
You know what? I think it's the same thing with SASE. As soon as SASE came out, you started seeing 50 SASE companies. Everybody's just calling themselves SASE, or they started buying companies with no integration and then saying, "Hey, we have all the parts and they all work together." And they really don't. But Steve, you recently published something around zero trust, and Dave, you as well, some good articles. Any additional thoughts on Shamla's prediction?
Steve Riley:
Yeah, I think it's important to remember what these terms mean. SASE is intended to be an architecture, and zero trust is intended to be a new way of thinking about assessing the trustworthiness of an interaction. Zero is a starting point, but ultimately there has to be some level of trust for any two entities to interact. And we don't want to just assume that you have full access to everything because of what your IP address is. And I really love the way that vendors who get this are moving more toward a continuous adaptive trust approach, where you look at all these contextual signals and determine just how much access to grant, for just that interaction, for just that amount of time.
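To make that idea concrete, here's a minimal Python sketch of a continuous adaptive trust decision. Every signal name, weight, and threshold below is hypothetical, invented purely for illustration; a real deployment would draw these signals from identity, device posture, and behavior analytics systems:

```python
# A minimal sketch of continuous adaptive trust: contextual signals are
# combined into a score, and the score determines how much access to grant
# for this one interaction. All signals and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    device_managed: bool      # is the device enrolled and compliant?
    location_familiar: bool   # has this user been seen here before?
    mfa_passed: bool          # strong authentication this session?
    anomaly_score: float      # 0.0 (normal) .. 1.0 (highly unusual behavior)

def trust_score(ctx: Context) -> float:
    score = 0.0
    score += 0.3 if ctx.device_managed else 0.0
    score += 0.2 if ctx.location_familiar else 0.0
    score += 0.3 if ctx.mfa_passed else 0.0
    score += 0.2 * (1.0 - ctx.anomaly_score)
    return score

def access_level(ctx: Context) -> str:
    # Trust is not binary: grant more or less access per interaction.
    s = trust_score(ctx)
    if s >= 0.8:
        return "full"       # e.g. download and edit
    if s >= 0.5:
        return "read-only"  # e.g. view in browser, no download
    return "deny"

print(access_level(Context(True, True, True, 0.1)))    # full
print(access_level(Context(False, True, False, 0.6)))  # deny
```

The point of the sketch is that the decision is recomputed per interaction from current context, rather than granted once at login.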
Jason Clark:
Yeah, I agree, and trust isn't binary. It's not on or off. I think a lot of those vendors Shamla mentioned still view zero trust like an ACL, that trust is something you either have or you don't. And you can ask a lot of vendors to define what zero trust even means, and you're going to get a different answer from most of them. So there it is, we're hurting the industry, there's a lot of confusion. David, you had an interesting prediction around deepfakes, voice cloning, and misinformation. Why is this a prediction you're considering, and how should users and companies be thinking about protecting themselves?
David Fairman:
Well, look, I think even this year, and probably the preceding year, we started to see the rise of deepfakes as a tool for various nefarious purposes. Whether it was political influence, or increasing fraud and scams, or supporting social engineering attacks over social media, et cetera. There were a couple of really notable incidents this year: an energy company where a fraud was committed and deepfake was the tool used to support it, and an unsuccessful attack on a technology company using the same capability. I think what we're starting to see now is the fraud element, whether it's identity fraud or, sorry, not business email compromise, but scams, impersonation of executives, and pressure put on employees to act under pressure. Deepfakes are something that help adversaries, threat actors, and fraudsters execute those attacks, particularly when you think about scams and those executive pressure techniques we see. We're also starting to see deepfakes emerge as a vector to increase the success of application fraud. We hear of things like ghost fraud, which is effectively taking over the identities of people who are deceased, and that has started; we're seeing a rise in it in the financial crime world. So look, I think we've seen the start of this in the past couple of years, and I think it's only going to increase as we move into 2022 and 2023. Deepfake technology is becoming more and more sophisticated, more and more accurate, and I think it's hard for organizations to combat. You asked how organizations should be thinking about this, what they should do to try and prevent it. Funnily enough, if you think about the fraud angle, it comes back to some of the basics of fraud prevention: validating that you know who you're talking to or transacting with, and using mechanisms so that you do that authentication and validation out of band. Don't just trust an audio file telling you that you're talking to the person you think you're talking to. How do you validate that? Training and awareness of your people. There are certain cues and signals you can identify when you look at deepfake videos to spot fraudulent video or audio. So there are a number of different things there. I mean, we basically saw it in the US elections with political influence and fake news. We've seen all of that, and I think it's only going to grow. I think adversaries are going to start to really embrace this more than they have today as a social engineering capability, which again is going to lead to substantive cyber attacks.
Jason Clark:
A good example of the executive pressure, we actually just saw this. I got alerted a couple of weeks ago that a person in Europe got a text message and a phone call from someone acting like they were the CEO, telling this person to take an action: "Immediately, urgent, I need you to take this action." And the person's kind of like, "Okay, well, I'll jump on a call." "Hey, I don't have time to jump on a call." And his response was, "Well, what's my favorite soccer team?" Or what's my favorite hockey team? And immediately the conversation went dead. But at first, admittedly, this person was like, "Oh, I really thought he needed me to go do that. I thought it was a weird request, but I was ready to step into action." So the thinking and the awareness training kicked in and worked, and we need to continue to do a better job of that.
David Fairman:
That's exactly what happened in the energy company attack, and that one was successful. I think it led to about a 236,000 US dollar fraud. So it will be there, we'll see more of it, and I think it will feed those social engineering attacks. And think about what political influence means, not just from a political angle, but from an influencing perspective generally. Think about how you can drive not just society or a subset of our community down a certain path, but how that could sow division in an organization, in a private institution. If deepfakes were used to send messages from executives around culture or around activity we're doing, it could be done on a much smaller scale, and it could start to influence and destabilize organizations. So maybe destabilization of organizations becomes the motivation for threat adversaries. I think we'll see a rise in that, because it's just a tool to drive that lack of trust in the environment.
Jason Clark:
I think it's a great one, spot on. Steve? Another prediction you've been talking about is organizations across the globe starting to measure their carbon footprint in relation to IT and their data centers. How likely is this? What percentage of organizations do you think will do this?
Steve Riley:
I think it's totally likely, and it's going to affect 100% of organizations out there. We know that priorities from investors and stakeholders, and forthcoming regulatory requirements, are going to push organizations to improve the methods and processes they use to account for their carbon footprint. And I see two dimensions here: a security dimension and an infrastructure dimension. In the infrastructure dimension, folks are going to think, "What do I do in my data centers? How about moving them somewhere you can get free air cooling, like Iceland?" Open the windows, get free air cooling in your data center. If you can't pick up your data center and move it, you could look at things like renewable energy credits, or onsite generation technologies like cogeneration that combine cooling and power. Making sure you're getting good utilization out of the hardware you have is another way of reducing overall power consumption. And for water specifically, I've seen instances where orgs have moved from potable mains water to gray water sources as a way of reducing costs. Now, that's for people who still want to have on-premises data centers. I would argue that migrating to the cloud is another way of getting close to net zero emissions. The cloud providers have pretty strong financial incentives to develop energy-efficient data centers. They run with an effectiveness well beyond what a typical organization might be able to achieve, and they've got programs in place to make their operations carbon neutral. Even more so now, Azure and GCP and AWS publish the emissions from their data centers, so organizations moving to the cloud can say, "Hey, we've reduced our emissions by going this way." Now, I said there's a security dimension too. And it's interesting, because I've seen some instances where security risks are acquiring climate change dimensions. I'll give you a couple of examples. Hacktivists, attackers motivated by issues, are targeting enterprises with large carbon footprints. So if you're a huge emitter, someone's going to come after you. And this is a brand new risk, because this is just an enterprise doing business as usual, but now they're getting attacked for a reason they may find very difficult to understand. Also think about how many of us work now. We're all distributed, we're at home, and we don't have business continuity capabilities in our home offices. So a weather event could actually take a lot of home workers offline. Climate change and sustainability are part of boardroom and shareholder conversations now, so it's time to start thinking about this.
Jason Clark:
I think it's a good one, Steve. I think a lot of listeners are probably like, "Oh, you know what? I hadn't really put that on my risk radar yet, but let me write that one down, because it is something that's probably going to come up at some point and we need to already be thinking about it." So Mike, I know you had some thoughts around autonomous cybersecurity, where you remove the human delay in policy management. Why is this a top of mind prediction going forward?
Mike Anderson:
When you think about any kind of technology, if you think about traditional plan, build, run concepts, a lot of focus has traditionally been put on how you reduce the run cost of any IT solution. When we think about cybersecurity, the run is really security operations. And there are two reasons why I feel this prediction will start to play out. One is that the actors are getting more and more sophisticated. Our risk increases with the delay it takes us to implement policy in our environments, whether that policy is keeping people from going the wrong places or coaching people to make other decisions in the moment. The faster we can get those policies instrumented in our environments, the quicker we can respond to the threats that are happening. The other challenge is the skills gap, especially in cybersecurity. Trying to hire and retain people in security operations is going to be problematic, and we've already seen that. So what happens in a lot of organizations, especially medium enterprises, is you've got one person maintaining the platform and doing the policy management, and then that person leaves. Now there's this gap in policy creation before the new person comes in, gets trained up, and starts administering the solution. Just like we've seen in IT operations, where AIOps has become kind of the new buzzword, that's going to move its way over into cybersecurity. It's going to help address both that skills gap problem and the problem of what happens when people leave, who picks up the keys to the car and keeps driving, while at the same time helping us respond to threats more quickly. And I think what we're going to see first is more of your traditional approach, like the sales automation vendors such as Salesforce, where they predict the next best action you should take from a sales standpoint. We have to first establish trust. If I'm going to turn on autonomous cybersecurity, I have to first trust the decision-making process that's going into it. So I think the first step will be models that essentially suggest, "Here are the policies we should create," and get a person comfortable enough to say, "You know what? 99% of the time I just click okay and approve it." That builds the trust level where someone says, "I'll go ahead and turn on that autonomous mode." So I think we're going to see a multi-step journey, but in the next three to five years, we're really going to start to see autonomous cybersecurity being used as a way to reduce run costs. The other aspect is just financial. I talk to a lot of my peers in the industry and they say, "Look, my CFO is asking me, when do we get to a point where we can say a certain percentage of our revenue is going to go into cybersecurity?" We're going to see the same pressures we've seen on IT and technology, and honestly, every function in the company. We're going to start seeing those same pressures on security to say, "What is good enough?" And when we get to that point, we're going to see optimization of our security stack and the tools we use.
Automation is a way to, again, reduce that run cost so I can reinvest in more of my strategic priorities to head off new threats, like how AI may be used by some of the bad actors.
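As a sketch of the "suggest, then approve, then automate" path Mike describes, here's a hypothetical outline in Python. The class, thresholds, and policy format are all invented for this example, not any vendor's API:

```python
# Hypothetical human-in-the-loop policy assistant: a model proposes
# policies, a human approves or rejects each one, and autonomous mode
# only unlocks once the approval record earns that trust.
APPROVAL_THRESHOLD = 0.99   # "99% of the time I just click okay"
MIN_DECISIONS = 500         # don't trust a model on a handful of samples

class PolicyAssistant:
    """Suggests security policies; goes autonomous only after earning trust."""

    def __init__(self) -> None:
        self.approved = 0
        self.total = 0

    def suggest_policy(self, threat_signal: str) -> str:
        # Placeholder for a real recommendation model.
        return f"block traffic matching: {threat_signal}"

    def record_review(self, human_approved: bool) -> None:
        # Every human okay/reject builds (or erodes) the trust record.
        self.total += 1
        if human_approved:
            self.approved += 1

    def autonomous_ready(self) -> bool:
        # Flip to autonomous mode only after enough decisions at a high
        # enough approval rate: "verify then trust", applied to the model.
        if self.total < MIN_DECISIONS:
            return False
        return self.approved / self.total >= APPROVAL_THRESHOLD
```

The design choice mirrors the conversation: autonomy is a threshold crossed by accumulated evidence of good decisions, not a switch flipped on day one.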
Jason Clark:
Yeah, I'm curious, from the team, have any of you already seen autonomous cybersecurity in the software supply chain, or in infrastructure security, or cloud security in any way?
Shamla Naidoo:
So Jason, let me try to address that. There's a reason why you're seeing all these heads shaking, with people saying, "We haven't seen this." It's because so much of the responsibility for making those kinds of decisions falls on the security leaders. And given how personal the failure and the outcomes are, it's hardly a surprise that we're not willing to let machines make these decisions. We don't just put one layer of humans in to help make the decisions, we put multiple layers, because of the fear of failure. And the fact is, with machine learning and artificial intelligence algorithms, if you don't have some appetite for failure at the beginning, to allow self-learning to get better and improve, you're just not going to be successful. So the point is, we are not going to see autonomous security until we give security leaders a little bit of leeway to make mistakes, and allow the machines to make some mistakes early on, while we teach, learn, and make better automated decisions.
Mike Anderson:
I'd add on to that. Earlier we were talking about zero trust, and Steve, I think you and I have had this conversation before, about the move from implicit trust to explicit trust, and from trust-but-verify to verify-then-trust. I think the same thing happens with autonomy. That verification is, how do I know it's going to make the right decision? And once I know, then I can trust it. So I think it becomes pervasive as we think about the mindset. When people talk about zero trust, the first thing I tell them is that it's not a product or a destination, it's a journey. It's just like an agile mindset. It's a zero trust mindset, which is really shifting to verify-then-trust in the things I do. And the more I know about you, the more I'm going to trust you. Going back to that earlier conversation, zero trust requires, as they say, teamwork to make the dream work, right? So how do we get all the solutions in our environments working together to help us understand as much as we can about a person or entity, so that we can make the right decisions? I think that mindset becomes part of the AI journey as well, for how we move toward that autonomous cybersecurity model.
Jason Clark:
Yeah, you kind of have to bring it all together. You have to have a brain for your cybersecurity program before you can start applying that, and right now there are so many disparate types of systems. I always use the analogy that we have a disconnected nervous system. The sense of smell, the sense of touch, and the sense of sight are not connected, other than maybe in a SIEM, which is memory versus the frontal lobe. So I think we have a lot of work to do to connect all these things together, which I think is the intent with SSE and with zero trust. It's a journey to get these things connected so we can react and defend faster. Another topic you all were talking about is APIs. I see a lot of conversation around SaaS being the fastest growing risk for organizations, and a lot of conversation around mobile; everybody is familiar with the mobile risks. But APIs don't get talked about enough, and they're a fast growing risk, with everybody wanting to connect everything together. So when we think about the future attack surface, it's probably one of the fastest growing attack surfaces. I'm curious, what are a couple of thoughts on what the risks are there? And then we'll talk about what people should be doing.
Steve Riley:
So let's talk about some data that supports your idea that API attacks are growing. According to Akamai, API requests comprise 83% of all their traffic now. They think it's going to grow 30% year over year, and they're expecting 42 trillion API requests by 2024. Cloudflare says that API traffic grew 300% faster than web traffic in 2020. And a research note at Gartner showed that client inquiries related to APIs, including security management, are increasing 3% year over year. So yeah, this stuff is on people's minds, that's true. Now, what do you do about it? JSON and XML payload processing and monitoring API usage thresholds are the main things you can look at, but only if you can find the APIs in the first place. A lot of applications are plagued with shadow and zombie APIs, and these can create vulnerabilities that may result in huge security incidents. Automation can help with this. Automated API discovery mechanisms are beginning to appear on the horizon. They rely on traffic pattern learning, and they can also integrate with API definitions like Swagger and the OpenAPI Specification. These tools can often help provide a positive security model for APIs. But the cataloging, validation, testing, and access control are only part of the solution here. It's also necessary to manage the consumption of internal and third-party APIs. Now, one thing I'll note is that because APIs are becoming more and more prevalent, we're seeing web application firewall providers add API protection and other API gateway features. In fact, 2020 was the final year Gartner published a Magic Quadrant for WAFs. The new MQ covering this space has expanded into DDoS protection, bot management, and API protection.
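To illustrate the spec-comparison half of the discovery idea Steve mentions, here's a simplified Python sketch. Real products also learn traffic patterns; the spec contents and observed paths below are made up for this example:

```python
# A simplified sketch of shadow-API discovery: compare endpoints observed
# in traffic against the paths declared in an OpenAPI document. This shows
# only the spec-diff step; the spec and traffic data here are illustrative.
openapi_spec = {
    "paths": {
        "/v1/users": {},
        "/v1/orders": {},
    }
}

observed_traffic = [
    "/v1/users",
    "/v1/orders",
    "/v1/admin/export",   # never declared: a shadow API
    "/v0/orders",         # old version still answering: a zombie API
]

declared = set(openapi_spec["paths"])
shadow = sorted(set(observed_traffic) - declared)
for path in shadow:
    print(f"undocumented endpoint seen in live traffic: {path}")
```

Running this flags the two undeclared paths, which is the "you can't protect what you don't know about" point in miniature.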
Jason Clark:
So now it's called, I don't know if I've got this right, W-A-A-F?
Steve Riley:
W-A-A-P.
Jason Clark:
A-A-P, okay. So how do you say that?
Steve Riley:
Web Application and API Protection, WAAP.
David Fairman:
WAAP, WAAP. I think you're spot on, Steve. I think it was Gartner that predicted that, for 2022, APIs were going to be the leading attack vector, and I think we're starting to see that too. You've rattled off some really, really good stats there. And let's think about it. We keep talking about digital transformation, and as part of enabling digital transformation over a number of years now, we've been talking about the API economy. So it's only natural that we're seeing this rise in APIs, and that is really what's driving that traffic. To your point, it's funny how some of these fundamentals come back. I mentioned it when we were talking about deepfakes, going back to some of the fundamentals of controls, checks and balances, and verification in fraud. Well, you mentioned it yourself around discovery and understanding your inventory of APIs in your environment. That's still a basic, fundamental control, isn't it? You can't protect what you don't know about. You can't secure what you don't know about. So I think some of that asset inventory matters, particularly on the API side. And you're right, there are some great capabilities emerging on the market nowadays in terms of API discovery and security. But I think there's another piece we need to talk about: when we talk about API security, what do we actually mean? APIs are really here to facilitate business logic. So I think there are two elements. There's the piece around, is this API secure? Or have we seen a change in this API, such as, it was never calling or delivering personal information or some form of data type before, and that has now changed? Let's make sure we've got the right controls around that.
David Fairman:
But what about the behavior of that API, and how does it support the overall business logic? Now we start to see a change in the way the API is behaving. What does that mean, how has it drifted from the business purpose of that API? Those things themselves become indicators of a potential attack on the API itself. And we're seeing some of the companies you alluded to really looking at the business logic side to model API behavior. I think that's going to be really, really critical.
Shamla Naidoo:
Hey, and Jason, one thing I would add to this conversation: a few years ago, there was a rush to free. And what we saw is that the cost of free was you giving up privacy and seclusion. Right now, we're seeing a rush to open. This whole idea of a rush to open APIs is great for the ecosystem: it invites more and more people to build on your capability and gives you opportunities to add revenue. But the question really is, what's the cost of APIs? What's the cost of that open API? And I feel like, as David said, we're coming right back to it: the cost of this open API economy is going to be authentication and access control. Because what we're really doing by creating open APIs is allowing more people to do business with us, but embedded in that community are going to be the criminals. So we're back to this whole idea that you still have to go back and double down on access control and authentication, even in this API economy, in this open economy.
Mike Anderson:
Yeah, there's a thing I would add on to this. There's been this move, if you look at IT in general. First off, let's talk about internal APIs. A lot of companies look at it as, "I've got external APIs that I publish and make accessible to my trading partners, and then I've got internal APIs that are the building blocks for applications." The buzzword two or three years ago was microservices architecture, where I basically take an entire application and expose it via an API as a self-contained application. Now, there are purists out there, and then there's reality: in a lot of companies, what they call microservices still have layers that are not fully decoupled from other applications. Now you have composable architectures, where I'm taking business applications and making them consumable as APIs as well. So I think we have to make sure we discover both the external APIs and those internal APIs, and it starts with education around the fundamentals of how I build those APIs. Now, shifting to the external side, there's the consumption element. A lot of times what I hear is people concerned about, "Can I trust that API I'm consuming from that third party? What do I know about them? Because there are so many I can leverage." Some of them you may think are provided through a hyperscaler's marketplace, one of the cloud providers. But is it really provided by them, or is it simply a marketplace application? And can I trust that marketplace application and the API that's been created? So I think we have to start thinking about APIs provided by external parties in our third-party risk conversations. And not only can I trust the party, but can I trust the data that's coming in? Because often what we're seeing is people consuming APIs and putting that data into data lakes, and then building machine learning algorithms on top of those data lakes to make decisions. And if I don't know enough about the third party, is the data I'm ingesting trustworthy? Is it now going to impact the machine learning I'm using to make decisions across my organization? So I think we really have to think not just about our own environments, but about the APIs we're consuming from other parties, and start to think about that from a risk management standpoint.
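As a rough sketch of Mike's point about vetting third-party API data before it feeds a data lake, here's a minimal Python example. The field names and rules are hypothetical; real pipelines would use proper schema and contract validation:

```python
# A minimal gate on third-party API data: validate each record against an
# expected shape before it reaches the data lake that feeds ML models, and
# quarantine anything that fails. Fields and rules are made up for the sketch.
EXPECTED_FIELDS = {"customer_id": str, "amount": float, "currency": str}

def validate_record(record: dict) -> bool:
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record or not isinstance(record[field], ftype):
            return False
    return record["amount"] >= 0  # sanity-check values, not just shape

def ingest(records: list[dict]) -> tuple[list[dict], list[dict]]:
    accepted = [r for r in records if validate_record(r)]
    quarantined = [r for r in records if not validate_record(r)]
    return accepted, quarantined

good, bad = ingest([
    {"customer_id": "c1", "amount": 12.5, "currency": "USD"},
    {"customer_id": "c2", "amount": -9999.0, "currency": "USD"},  # suspicious
])
print(len(good), "accepted;", len(bad), "quarantined")
```

The quarantine step is the risk-management point: untrusted upstream data never silently becomes training data.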
Jason Clark:
You just raised another point. I mean, a big part of the future of security lies in models, in leveraging AI to help us do our jobs. When Dave talked about API behavior, we touched on the autonomous side of cybersecurity. If you think about everything we've just discussed, ML is the enabler for all of it. But Mike, you alluded to it, can we trust everything coming in? And yet you're building models around things you don't trust. What do you see ahead, for maybe another minute or two, in terms of ML risks and how to manage them?
David Fairman:
I'm with you, because this is a topic that's been close to my heart for a few years. I've been involved in building some very innovative companies trying to solve this problem. But if we talk about some of the risks, we think about data poisoning, bias, and model robustness. So let's unpack those. If you think about the basics, and I won't explain this as well as any qualified data scientist, our models are built on data. There's a training data set whose outcome we know, which we use to train the model to predict the future. That training data can be manipulated in multiple ways. It could be poisoned. The integrity of the training data may be suspect. Bad data can be inserted into the training data sets. Shamla also mentioned the rush to open. Many training data sets are open source or freely available, like everything open. What about the security or the integrity of those training sets? They can themselves be compromised, or their integrity can be, and that will skew the outcomes of the learning model. Then there's robustness and bias. Models are trained by humans, who themselves have biases. How do we start to identify and predict that? And then think about the pervasiveness of models in automation. The whole point of handing activities to machines is that we can scale and do things very, very quickly. If that model doesn't behave the way we want, it can have a very significant impact on an organization in terms of outcomes, activities, and decisions. So how do we put governance and controls around that model to make sure it behaves the way we expect?
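As a toy illustration of one control against the training-data tampering David describes, here's a Python sketch that screens a training feature for injected outliers using a robust median/MAD z-score. It would not stop a careful poisoning attack; it just shows treating training-set integrity as an explicit check before fitting:

```python
# Screen training data for anomalous samples before fitting a model,
# using a robust z-score (median and median absolute deviation, which
# are far less distorted by injected points than mean and std dev).
import numpy as np

def robust_z(values: np.ndarray) -> np.ndarray:
    med = np.median(values)
    mad = np.median(np.abs(values - med)) or 1e-9  # avoid divide-by-zero
    return 0.6745 * (values - med) / mad

# Five plausible samples plus one injected (poisoned) value.
feature = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 8.0])
keep = np.abs(robust_z(feature)) < 3.5  # common cutoff for this statistic
print(feature[keep])  # the injected point is dropped before training
```

The broader governance point stands apart from the specific test: data entering a training set deserves the same integrity controls as data entering production.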
Mike Anderson:
Yes, I'd add to Dave, and I'll come back to some things you said about deepfakes. If you really think about where we are today, what we call AI/ML is really just machine learning. It's still human programming, an algorithm built on data. We haven't yet reached true artificial intelligence, where computers make their own decisions and create their own algorithms based on what they observe in the environment. And when we get there, I wonder how we protect against fake data. Think about what you mentioned earlier about the political elections and how fake news fundamentally drove a different decision. How does a computer with true artificial intelligence, learning from what's in its environment, keep fake news and fake information from influencing the decisions it's going to make? Does it truly become autonomous in its intelligence? I think that's where a lot of the conversations about the ethics of artificial intelligence will happen, and about how we make sure we weed out that fake data. So I think the prediction you made earlier about deepfakes is extremely important, because it's going to influence the direction we take in the future from an AI/ML standpoint.
David Fairman:
Yes, it's funny. Those two are very connected, very intertwined, and one will influence the other.
Jason Clark:
To wrap up, Shamla, you had one final thought for the listeners, and then we'll end here.
Shamla Naidoo:
Yes, my final prediction, Jason, is that we're going to see CEOs in particular create programs to support the mental health and wellness of security leaders. Let me add a little color to that comment. In 2019, very early in 2019 and before the pandemic, Forbes ran a survey, and one in six CISOs said they had turned to medication and alcohol to cope with the stress of the job. Think about the impact of that. One in six people voluntarily disclosed that they turned to medication and alcohol to cope with the stress of the job, which suggests the number is much higher among those who didn't disclose. The point is, this job is really hard and it keeps getting harder. But at this point, there's very little in the way of mental health support for security leaders and security teams. So I really think CEOs are going to start to double down, not just on innovating for the business, but on helping CISOs create innovation for security, giving them the tools, the technology, and the solutions to help them do their jobs better, and also supporting that with mental health and wellness programs, because I don't know that you can get through this alone. Think about it: most of us didn't sign up for this industry with the goal of protecting national security. We signed up for these industries and these jobs to protect our companies. We're now seeing the CISO's remit extend to protecting the national security of our countries, without the appropriate support and training. We have to be on the front lines of national security and of the economies of our countries and the world, and that creates enormous pressure for the CISO. I think that has to come with not just the funding and support we see today, but also mental health and wellness support, because this job is probably the most stressful one in the C-suite today.
Jason Clark:
Absolutely, without a doubt. It's an extremely difficult, very stressful job. And I think the light shining here is that this is such an amazing community. We share, we're connected. So I think part of it is talking, talking about the stress with people who understand. But I think we should spend a lot more time thinking about how to help everyone from a mental health standpoint. That's maybe a topic Shamla could take up with the Security Advisor Alliance. I know the focus there has been on bringing talent into the industry and helping young people get into it.
Shamla Naidoo:
I agree. I think this topic deserves a lot more attention, a lot more research, a lot more discussion, because we have to recognize that the stress is built into the job. It's not because we're falling short. So we need to surface these conversations if we want to retain the talent we have in the industry and attract more.
Mike Anderson:
Yes, I'd just add one comment on that, because mental health is an area close to my heart. I think in general we need to make sure people feel comfortable talking about mental health in the workplace, because it's something that affects all of us; especially during the pandemic, we've all felt it at different levels. We need to make mental health in general a topic that's okay to talk about, and we need to make it more human. We've spent so much time trying to take the human element out of the workplace, but I think we need to bring some of that human element back, make sure we have empathy for our fellow humans, and let people talk about their mental health and where they are from that standpoint. Just like physical health, mental health deserves the same kind of attention.
Jason Clark:
That's all the time we have; this was great. Thank you each for your brilliant insights and for opening up the conversation and sharing your thoughts with all the listeners. Again, thank you to all the listeners for continuing to download, and thank you to everyone who joined and talked with us. And again, great community, great industry. I wish you all a great rest of your day. Thank you.
David Fairman:
Thank you too, Jason.
Steve Riley:
Thanks, everyone.
Sponsor:
The Security Visionaries podcast is powered by the team at Netskope. Looking for the right cloud security platform to enable your digital transformation journey? The Netskope Security Cloud helps you safely and quickly connect users directly to the internet, from any device to any application. Learn more at N-E-T-S-K-O-P-E.com.
Producer:
Thank you for listening to Security Visionaries. Please take a moment to rate and review the show, and share it with someone you know who might enjoy it. Stay tuned for new episodes every two weeks, and we'll see you in the next one.