What does the European Union’s AI act mean for us?

Q: We want to target customers in France, with most of our customer interactions done via our website. We use several third-party AI tools to provide our service and wonder what the European Union’s artificial intelligence act means for us.

A: You are right to be thinking about the act now, ahead of your expansion into France. It has two threads that work together to determine exactly what you will need to do for each AI system that you use.

The first is your AI “role”: whether you are a provider, deployer, distributor or importer. The second is the “risk” attributed to the type of AI that you use: is it banned, high-risk, limited-risk or a general-purpose AI model?

From your description, it sounds like you are a deployer, that is, someone using an AI system “under their authority”, which you are doing by integrating it into your website.

What you have to do now is audit each AI tool individually and check three key things. Are any of the tools in one of the eight categories of banned AI? Are any in the eight categories of high-risk AI, or used as a safety component of a regulated product? And are any on this very specific list: AI that interacts with people; AI that generates synthetic audio, image, video or text; AI that involves any emotion recognition or biometric categorisation; or AI that generates deep fakes?

It’s likely that some of your retail-focused AI will be caught by that third category: for example, chatbots used for customer service, tools that generate content to show a customer a product in their own environment, and possibly some of the more playful functionality on websites and apps, such as face swaps. Depending on how some of these work, they could also be high-risk.

On the basis of what you discover, your obligations under the act are likely to be documenting that you’ve made a rational and legal decision about your use of that AI (for example, with respect to risks relating to personal data, intellectual property, bias and accuracy) and complying with transparency obligations (such as informing people about your use of AI in the same way that you do about personal data).

Should any of your website AI be categorised as high-risk, your compliance burden is more complex than what you may be used to under the GDPR. The act places a legal obligation on your business to use the AI in accordance with its provider’s instructions; to monitor its use; to ensure real human oversight; to ensure that input data is relevant and representative in the context of your business use; and to keep logs for at least six months.

If there is a serious incident or malfunction with the AI, or if it could be harmful, you must inform the provider and authorities and stop using it. Your business must also be able to prove that it is doing all of the above.

It will be important, too, to consider the act alongside any existing GDPR compliance in the business, because there are many crossovers. The most obvious example is the GDPR’s existing rules on profiling, an activity that is very common in retail marketing.

The act comes with a fines regime that is even harsher than (and sits on top of) the GDPR’s, maxing out at €35 million or 7 per cent of worldwide annual turnover, whichever is greater. However, as we know from the GDPR, it is not the fines that should focus the mind; it is the impact on revenue and reputation. That should be reason enough to get ahead of the game and not leave this to the last minute, as many did with the GDPR. There is some breathing room (the act’s provisions take effect in phases over the next 36 months), but the time to start is now.

Vanessa Barnett is a partner at Keystone Law
