For CISOs, the growing adoption of chatbots presents a challenge that needs to be addressed this year. Chatbots offer businesses huge potential for efficiency, faster customer service and improved customer engagement. But, like every emerging technology, they also open up new attack surfaces.
We’ve already seen several attacks carried out via chatbots, so it’s reasonable to expect that, as businesses deploy them in ever greater numbers this year, the number of attacks will grow too. What can enterprises learn from the chatbot attacks that have already been publicised?
Re-examine your approach to securing enterprise apps
Cast your mind back to the spring of last year and you may remember news stories about a breach of customer data affecting Delta Air Lines and Sears. A few months earlier, [24]7.ai, a provider of AI-based online customer support used by both companies, had been compromised. The result was the theft of hundreds of thousands of customer records from clients including Delta, Sears, Kmart and Best Buy.
Essentially, chatbot hacks are attacks against enterprise applications. As such, one of the first steps CISOs should take to protect their organisations is to review current security provisions, specifically checking two things.
First, that every strategy currently in place for enterprise applications is also applied to chatbots. Regular software updates, security patches and multi-factor authentication are some of the most important first steps. It’s also necessary to encrypt data at rest and in transit, enforce access control, and validate every input before it reaches backend data.
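To make the last of those controls concrete, here is a minimal sketch, in Python, of validating chatbot input before it touches backend data. The “orders” table, the order-ID format and the function name are illustrative assumptions, not details from any of the systems discussed in this article:

```python
import re
import sqlite3

# Hypothetical format for an order ID a customer might type into the chatbot.
ORDER_ID_PATTERN = re.compile(r"^[A-Z0-9]{6,12}$")

def lookup_order(conn: sqlite3.Connection, user_input: str):
    """Validate chatbot-supplied input before it reaches the database."""
    order_id = user_input.strip().upper()
    if not ORDER_ID_PATTERN.fullmatch(order_id):
        raise ValueError("Input does not look like a valid order ID")
    # Parameterised query: user input is never interpolated into the SQL,
    # closing off injection attempts made through the chat interface.
    cursor = conn.execute(
        "SELECT status FROM orders WHERE order_id = ?", (order_id,)
    )
    return cursor.fetchone()

# Quick demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('AB1234', 'shipped')")
print(lookup_order(conn, "ab1234"))  # ('shipped',)
```

The principle, rejecting anything that doesn’t match an expected shape and keeping user input out of query strings, applies whatever the language or datastore.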
Second, that current security approaches are appropriate for cloud-based apps. Cloud infrastructure can significantly increase the attack surface of any enterprise, leaving it wide open to new threat vectors. Businesses deploying chatbots on cloud-based infrastructure should therefore rethink their legacy security solutions. One option is a Cloud Access Security Broker (CASB). Enterprises are increasingly turning to CASBs to address cloud service risks, as they provide visibility, compliance, granular access control, threat protection, data leakage prevention and encryption, even when cloud services sit beyond the perimeter and outside direct control.
Revisit your ecosystem
Perhaps the most publicised chatbot attack was the one on Ticketmaster last year. In June 2018, Ticketmaster UK disclosed a breach of personal and payment card data belonging to 40,000 customers, carried out through compromised chatbot software. Inbenta, the company Ticketmaster partnered with to develop the chatbot, was compromised by the Magecart criminal group, which implanted (or replaced) malicious JavaScript tailored to collect personal information and payment card data from the payment pages of Ticketmaster websites. In this case the chatbot itself was not exploited; rather, the platform was used to distribute malware. It wasn’t a particularly complicated attack, but it serves as an important warning to all major organisations. Unfortunately, it wasn’t an isolated event either, but the first strike in a massive campaign by the same group.
Attacks on the supply chain are becoming increasingly common, and chatbots are no exception. This breach, like [24]7.ai’s, highlights the importance of securing your vendor ecosystem. Enterprises should not assume that a provider maintains the same security standards as they do, so if you’re using chatbots through third-party platforms, it’s vital to assess each vendor’s security posture to understand what further protection you need.
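For Magecart-style script tampering specifically, two browser-side controls are worth knowing about: a Content-Security-Policy restricting where scripts may load from, and Subresource Integrity (SRI), which pins a third-party script to a known hash. The sketch below, in Python using Flask and a hypothetical vendor URL and hash, shows both; note that CSP alone would not have stopped the Ticketmaster attack, since the tampered file was served from the vendor’s legitimate address, whereas an SRI hash would have made the browser refuse it:

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical vendor script URL and integrity hash, for illustration only.
CHATBOT_SCRIPT = "https://chat.vendor-example.com/widget.js"
CHATBOT_SCRIPT_HASH = "sha384-..."  # hash of the audited script version

@app.after_request
def set_csp(response):
    # Allow scripts only from our own origin and the vetted vendor URL.
    response.headers["Content-Security-Policy"] = (
        f"default-src 'self'; script-src 'self' {CHATBOT_SCRIPT}"
    )
    return response

@app.route("/checkout")
def checkout():
    # The integrity attribute makes the browser verify the script against
    # the recorded hash; a modified file, as in the Magecart breach, would
    # simply not execute.
    return f"""<html><body>
      <script src="{CHATBOT_SCRIPT}"
              integrity="{CHATBOT_SCRIPT_HASH}"
              crossorigin="anonymous"></script>
    </body></html>"""
```

The trade-off with SRI is operational: every legitimate vendor update changes the hash, so it works best where script versions are pinned and reviewed as part of the vendor relationship.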
Don’t forget the human factor
In 2016, Microsoft Research inadvertently gave us an early example of potential attacks against AI-based chatbots, when its Tay bot started to tweet racist and inflammatory messages. According to Microsoft, these were the consequence of internet trolls “poisoning” its AI with offensive tweets. Unsurprisingly, the chatbot was shut down after just 16 hours.
As chatbots themselves become more advanced, with more in-built AI, we can expect attacks against them to become more sophisticated too, acting in subtler ways. The example of Microsoft’s Tay suggests not only that social engineering attacks against chatbots are possible, but also that such attacks, aimed at leaking private information, could be carried out by weaponised chatbots themselves. It’s another example of how AI can be used to help enterprises, but also weaponised against them.
Today, attacks via chatbots seem to be confined to text-based applications. In most cases the platforms have been compromised to inject malware that steals data from customers. However, I expect that very soon we will see threat actors creating malicious chatbots to dupe customers into clicking links that lead to phishing pages or trigger the delivery of a hostile payload.
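One simple guard on the customer-facing side is to allowlist the domains a chatbot may present links for, and drop or defang anything else before it reaches the user. A minimal sketch in Python, with hypothetical domain names:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the bot is permitted to link to.
ALLOWED_LINK_DOMAINS = {"example.com", "support.example.com"}

def is_safe_link(url: str) -> bool:
    """Allow only HTTPS links to explicitly approved domains."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_LINK_DOMAINS

print(is_safe_link("https://support.example.com/reset"))  # True
print(is_safe_link("http://support.example.com/reset"))   # False: not HTTPS
print(is_safe_link("https://evil-example.net/phish"))     # False: unknown domain
```

Exact hostname matching matters here: it defeats the common lookalike trick of burying a trusted name inside a longer, attacker-controlled domain.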
It’s vital, therefore, that CISOs dedicate enough resources to educating employees and customers. Alongside initial training on spotting suspicious activity, organisations should run regular awareness campaigns to keep staff vigilant to inconsistencies in chatbot behaviour.