Explaining AI: Technical and Legal Challenges

This blog was co-authored by Jolien Clemens.

We come into contact with AI more often than we realise. Think of applying to a bank for a loan: the bank assesses your creditworthiness using a new AI system. When the assessment is negative, you are informed that you do not fulfil all the requirements. You may then ask the bank why your application was denied and what you could change to receive a positive assessment next time. Providing all this information can be difficult for the bank, as most AI systems work like a Black Box.

In the Commission’s recent proposal for an AI regulation, explainability of AI decisions is put forward as a way to foster public trust. The European white paper on AI likewise devoted extensive attention to transparency as a key condition for the development of trustworthy AI. Even though a dedicated AI regulation is not yet in force in the EU, businesses can already be asked to explain AI decisions under the GDPR. This quest for transparency or explainability of AI does not come without challenges. First, we will examine the technical challenges of explaining AI decisions. Next, we will provide an overview of the legal obligations and challenges that come into play when trying to explain AI.

Technical challenges

We will look into two main technical challenges. The first is the Black Box problem of AI: users have trouble understanding how an AI system functions and reaches decisions. Second, we will look into a rather underexposed part of the research, namely the downsides of explaining AI.

The Black Box of AI explained

One of the issues raised by AI is its lack of transparency. The more complex AI systems (for example, deep neural networks) function like a Black Box: it is sometimes impossible to know how the system reached a conclusion, even for its developer. An AI system learns and creates rules by training on datasets; the rules created during this training phase are then used for later recommendations or predictions. Because such systems often learn without supervision, it can be difficult even for developers to understand exactly how a system functions, which is problematic because the GDPR requires an explanation of the “logic of the system”. In recent years there has been extensive research into ways of opening the Black Box, under the banner of Explainable Artificial Intelligence (XAI). XAI investigates whether AI can be made explainable while still upholding a high level of accuracy. This research is, however, still in its infancy and is mainly happening in the US. If Europe does not want to fall behind in the AI revolution, which China and the US are currently leading, it would be wise to invest in the research and development of XAI.
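To make the idea of post-hoc explanation more concrete, the toy sketch below probes an opaque scoring function by nudging each input and observing how the output moves — the perturbation idea behind local XAI methods such as LIME. The credit-scoring formula, feature names and approval threshold here are entirely invented for illustration; a real model would be a trained network rather than a hand-written function.

```python
# Toy sketch: explaining a "Black Box" scorer by input perturbation.
# The model, the features and the 40.0 cut-off are all hypothetical.

def black_box_score(income, debt, years_employed):
    """Stand-in for an opaque, trained credit model."""
    return income * 0.5 - debt * 1.2 + (years_employed ** 1.5) * 3.0

def local_sensitivities(features, step=1.0):
    """Estimate each feature's local influence by nudging it slightly
    and measuring how the score changes (the core idea behind
    LIME-style local explanations)."""
    base = black_box_score(**features)
    return {name: black_box_score(**{**features, name: value + step}) - base
            for name, value in features.items()}

applicant = {"income": 60.0, "debt": 30.0, "years_employed": 2.0}
approved = black_box_score(**applicant) >= 40.0  # hypothetical cut-off
print("approved:", approved)
print("local sensitivities:", local_sensitivities(applicant))
```

The sensitivities tell the applicant which inputs pushed the decision (here, debt pulls the score down and job tenure pushes it up) without revealing the model’s internals — which is roughly the kind of “meaningful information about the logic” an explanation could convey.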

The downside of explaining AI

However, despite the persistent call for explainability, one can wonder whether it is in fact desirable to make AI transparent. There is a downside to increasing transparency, known in the literature as the ‘AI transparency paradox’. When the user of an AI system knows exactly how it reaches a decision, they can strategically adapt their behaviour in advance and thereby change the outcome in their favour. This phenomenon is known as “gaming the AI”. Another obstacle is security: transparent AI systems are also more vulnerable to hacking and bugs. Both the gaming and the hacking problems pose a significant risk to the efficiency and reliability of an AI system. In other words, even if research can turn the Black Box of AI into a Glass Box, that is not necessarily a desirable outcome. A compromise must therefore be reached between the transparency of AI on the one hand and its reliability and accuracy on the other.
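As a thumbnail of the gaming risk: once the weights and cut-off of a scoring formula are fully public, a denied applicant can simply solve for the smallest change that forces an approval. All weights, numbers and the threshold below are invented for illustration.

```python
# Toy sketch of "gaming the AI": with full transparency, an applicant
# can compute the minimal change that flips a denial into an approval.
# The linear model, weights and threshold are hypothetical.

INCOME_WEIGHT = 0.5
DEBT_WEIGHT = -1.2
TENURE_WEIGHT = 3.0
THRESHOLD = 40.0

def score(income, debt, years_employed):
    return (INCOME_WEIGHT * income + DEBT_WEIGHT * debt
            + TENURE_WEIGHT * years_employed)

def minimal_income_boost(income, debt, years_employed):
    """Smallest extra declared income that pushes the score past the
    threshold -- trivial to compute once the formula is known."""
    gap = THRESHOLD - score(income, debt, years_employed)
    return max(0.0, gap / INCOME_WEIGHT)

boost = minimal_income_boost(60.0, 30.0, 2.0)
print("extra income to declare for approval:", boost)
```

The point of the sketch is that a fully transparent model leaks its own decision boundary: the applicant’s optimal strategy falls out of one line of arithmetic, which is exactly why full transparency can undermine a model’s reliability.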

Legal challenges

Explainability of AI faces at least two kinds of legal challenges: first, the explanation provided must be understandable to users; second, the information to be disclosed may be at odds with the confidentiality requirements of trade secret and intellectual property protection for businesses.

Explainability and the GDPR

In legal doctrine, there is currently some discussion on the existence of a right to an explanation under the GDPR. There are in total three legal bases under which such a right could fall. The first is the information obligations (Articles 13 and 14 GDPR). The second is the right of access (Article 15 GDPR). The final legal basis is the right not to be subject to a decision based solely on automated processing (Article 22 GDPR), with its accompanying recital 71.

Because the text of the GDPR contains no explicit mention of it, the very existence of a right to an explanation is debated. The only mention of an ex post right to an explanation can be found in recital 71, which states that the data subject has a right “to obtain an explanation of the decision reached”. It is unfortunate that the EU legislator chose to include this important right only in a recital. Because recitals are not legally binding, this opened the still ongoing debate about whether an individual right to an explanation exists. Some scholars interpret the articles narrowly and argue that there is only an obligation to provide information ex ante about how the AI system functions. Others defend the view that individual information about the decision reached should also be provided ex post to each data subject, because this would be the only way to make the rights provided by the GDPR effective.

Putting aside the question of whether a concrete individual right to an explanation exists, businesses can nonetheless be asked to explain how an AI system reached a decision and to provide meaningful information about the ‘logic’ of the system. Because of the AI’s Black Box, this can be a difficult task for businesses. Even if it were possible to explain the decision-making process of an AI system, providing that information in a concise, transparent, intelligible and easily accessible form, using clear and plain language, is another hurdle to overcome. Technical information about the source code of the AI system will certainly not be enough for the average data subject to comprehend, and such code is moreover protected against unlawful disclosure under IP law. The GDPR does not make clear what specific information needs to be provided; it will certainly depend on the circumstances of the case, the technology used, the knowledge of the data subject, and so on. Further guidance on this question would be welcome, as businesses that use AI systems are currently somewhat left in the dark.

Protecting IP and explaining AI

Not only is there a technical difficulty in explaining AI; there can also be a legal difficulty due to the protection of trade secrets. In 2016 the EU adopted a Directive that protects undisclosed know-how and business information against unlawful acquisition, use and disclosure. An AI system used by a business can be considered a trade secret under the broad definition in this Directive: information qualifies as a trade secret if it is confidential, has commercial value, and the lawful holder has taken reasonable steps to keep it secret.

If AI systems can be protected under the Trade Secrets Directive, there is a potential clash with the GDPR, which requires data controllers and/or processors to explain the logic of the AI systems they use. Neither legal instrument resolves this potential clash. Rather, each text provides that the other should prevail in case of conflict, which without a doubt leads to confusion. Recital 35 of the Trade Secrets Directive states that it “should not affect the rights and obligations laid down in the GDPR”, whereas the GDPR states that “the right of access for the data subject should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software”. In such a situation, an interpretation by the Court of Justice would certainly help settle the question.


As Margrethe Vestager put it: “On AI, trust is a must, not a nice to have”. The idea has always been that trust in AI will come when AI systems become more explainable. This blog post has tried to give an overview of the technical and legal challenges that come with explaining AI decisions. These challenges should definitely feed into the discussions currently taking place in the Council of the European Union on the adoption of the AI regulation. The main focus of those discussions should be finding a balance between AI transparency and AI accuracy, as well as the IP question: how can AI decisions be explained without violating IP laws?

Reaching consensus on a new regulation was always going to be challenging, and the discussions on the AI approach can be expected to be heated and lengthy. But as the technology keeps evolving without a clear regulatory framework to match it, the European AI regulation should come sooner rather than later.

You may also be interested in our article: What do you need to know about the AI Act

Do you have a specific question or would you like support in this matter? We are happy to help. In that case, please contact a Timelex attorney.