Artificial Intelligence (AI) is rapidly becoming ubiquitous in supporting key business decisions, and for many organisations it is critical to their digital transformation and new business models. With organisations driving forward quickly to extract competitive value from their data, regulators are preparing to step in. Eager not to fall too far behind the curve, or to let a “Wild West” develop, EU lawmakers are already working to regulate and impose guidelines on the use of AI. While the new AI Act will not come into force for a few years, staying ahead of incoming legislation will give IT and security teams a huge advantage over the competition, ensuring that active AI programmes are not curtailed or slowed down to implement last-minute compliance and ethics requirements.
So, what is the EU AI Act all about?
In essence, regulators have recognised the potential impact AI could have on society, and on EU citizens’ rights specifically, and are planning legislation to prevent the misuse of AI and protect individuals. The Act gives oversight bodies powers to require the withdrawal of an AI system, or to require an AI model to be retrained, if it is deemed high risk. Think of it as following the same pattern the EU took with the GDPR. Of course, the GDPR’s focus is on personal data and its responsible use and handling. The proposed legislation will add another layer on top of the GDPR, ensuring that AI systems used in the EU market comply with its requirements and with existing legislation on fundamental rights.
Is it essentially a GDPR upgrade?
This Act and its requirements will sit alongside the GDPR, so organisations will need to comply with both, and can potentially face the repercussions of non-compliance with both. The Act is also likely to have the same extraterritorial reach.
GDPR can impose fines of up to 4% of company revenue—surely this isn’t going to be as severe?
Current drafts of the AI Act contemplate fines of up to 6% of global revenue for non-compliance, so if the GDPR’s fines of up to 4% of revenue caught your board’s attention, you can be sure the AI Act will appear on their radar too!
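To put those percentages in perspective, here is a minimal worked example in Python. The revenue figure is purely hypothetical, and the caps reflect the GDPR’s 4% and the 6% discussed in current drafts:

```python
# Hypothetical worked example of maximum fine exposure.
# The revenue figure is illustrative only; the percentages reflect the
# GDPR's 4% cap and the 6% cap being discussed in current AI Act drafts.

annual_global_revenue_eur = 500_000_000  # hypothetical: €500m turnover

gdpr_max_fine = annual_global_revenue_eur * 0.04    # €20m
ai_act_max_fine = annual_global_revenue_eur * 0.06  # €30m

print(f"GDPR cap:   €{gdpr_max_fine:,.0f}")
print(f"AI Act cap: €{ai_act_max_fine:,.0f}")
```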
So what will the Act require?
Currently, the expectation is that the new legislation will require risk management systems and processes to be established, implemented, documented, and maintained. Organisations will need to determine whether their AI systems fall into the “high risk” category, and may need to undergo regular assessment. As with the GDPR, expectations include the identification of known and foreseeable risks, the evaluation of risks arising from reasonably foreseeable misuse, and the adoption of suitable controls and countermeasures. The Act is also expected to require organisations to tell data subjects when and how AI was used to make decisions about them. Details are not yet available on whether this communication will need to happen proactively or only on request.
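There is no prescribed schema yet, but as a sketch of what such a risk-management record might capture (all field names and the example system are entirely hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """Hypothetical record for an AI risk-management process.

    The AI Act does not (yet) prescribe a schema; these fields simply
    mirror the expectations described above: known and foreseeable
    risks, reasonably foreseeable misuse, and the controls adopted.
    """
    system_name: str
    high_risk: bool                 # does the system fall in the "high risk" category?
    known_risks: list[str] = field(default_factory=list)
    foreseeable_misuse: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    last_reviewed: str = ""         # ISO date of the most recent assessment

# Example entry for a (fictional) insurance scoring system.
assessment = AIRiskAssessment(
    system_name="insurance-risk-scoring",
    high_risk=True,
    known_risks=["historic bias in claims data"],
    foreseeable_misuse=["scoring applicants outside the intended market"],
    controls=["bias audit on each retrain", "human review of declined applications"],
    last_reviewed="2022-06-01",
)
```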
This legislation naturally sits adjacent to the existing GDPR, and both could apply if, for instance, poor data protection hygiene leads to negligence in an AI system that feeds on the dataset in question.
What does negligence look like in AI?
This is one of the big questions EU lawmakers are currently considering. It is likely to range from poor data protection hygiene to knowingly allowing discriminatory decisions to be made using datasets with known bias.
So if I am good for the GDPR am I halfway there?
Yes and no.
Yes: the lessons we all learned from the GDPR—including data hygiene, standards, and processes—will be hugely helpful in complying with the AI Act.
No: the central premise of the GDPR is that an organisation holds data that needs protecting, but with AI you are regulating the methodology of decision making based on that data, and that involves much more than the raw constituent data itself. The decision-making algorithms, of course, but consider this too: data exfiltration is no longer the only concern. You may find your AI system is using a data set that nefarious actors want to infiltrate with “rogue” data. While data integrity compromise may seem an unlikely threat, if those manipulations guide an AI to a different decision (increasing the risk score for individuals purchasing life insurance, for instance), it could genuinely be a key action-on-objective for a threat actor seeking to damage the organisation.
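As a concrete illustration of that integrity concern, the sketch below assumes a static training file and a digest recorded at approval time (both placeholders) and shows the simplest possible tamper check. Real poisoning defences also need record-level provenance and statistical drift monitoring:

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a dataset file.

    A digest recorded when the dataset was approved can later be
    compared against the live copy: any silent insertion of "rogue"
    records changes the hash. This catches tampering with a static
    file only; pipelines that update continuously also need
    record-level provenance and statistical drift checks.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Both the filename and the stored digest are placeholders.
training_data = Path("training_data.csv")
approved_digest = "<digest recorded at approval time>"

if training_data.exists() and dataset_fingerprint(training_data) != approved_digest:
    print("WARNING: training data differs from the approved snapshot")
```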
OK, so I need to do more than copy/paste my GDPR policies. What can I do now to start to prepare?
- Ensure the right people in the organisation hold responsibility for AI and for compliance with future AI legislation. Form an AI ethics board.
- Ensure you document your AI use and policies, and give due thought within that documentation to how you will handle any issues that arise (a sketch of what an auditable record might look like follows this list). This documentation is a crucial component of responsible AI use, but, perhaps as importantly, it is also critical to an organisation’s defence against claims of gross negligence (just as with the GDPR, you are expected to be able to prove that when things go wrong it is despite due diligence and appropriate processes).
- Start to educate and have conversations about AI with employees across the business so that it does not become a behind-the-scenes or last-minute activity. Transparency garners trust. It is easy to assume that business leaders, especially those focused on data science and intelligence, will feel threatened, but in many situations a good ethics stance should be supported from top to bottom in the organisation.
- Negligence is the key determinant, so you should be able to demonstrate that you understand the use and potential impact of your AI programme and have assessed the ethical considerations.
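On the documentation point above, here is a minimal sketch of what an auditable AI decision log might look like; the format, field names, and filename are all assumptions, as the draft Act prescribes no schema:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, model_version: str, decision: str,
                    human_reviewed: bool, notes: str = "") -> dict:
    """Append one AI-assisted decision to an append-only audit log.

    Purely illustrative: the draft Act prescribes no format. The point
    is auditability: being able to show after the fact which system and
    model version made a decision, and whether a human reviewed it, is
    the raw material of a due-diligence defence.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "decision": decision,
        "human_reviewed": human_reviewed,
        "notes": notes,
    }
    with open("ai_decision_log.jsonl", "a") as f:  # filename is an assumption
        f.write(json.dumps(entry) + "\n")
    return entry

log_ai_decision(
    system="insurance-risk-scoring",   # fictional system from earlier
    model_version="2022-05-rc1",
    decision="premium-band-3",
    human_reviewed=True,
)
```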
Although the implementation of a new EU regulation on AI may be a few years away, there is no doubt that it is coming, and businesses that take action now will be better prepared and face less disruption when the Act arrives.