Smart manufacturing
The legal framework for artificial intelligence: paragraph jungle or smart solution?
Guest contribution from Joana Becker & Dr. Andreas Splitgerber *
Many companies want to invest in AI but shy away from the prospect of a thicket of legal provisions. How mature is the legal framework for using AI, and which regulations should companies keep in mind?
The use of artificial intelligence (AI) is now essential to the success of many businesses. In industry in particular, AI systems open up entirely new possibilities, helping to improve production processes, reduce machine failures, or develop smarter services. But which laws actually apply here? Lawyers and legislators in Germany and around the world are working continuously toward the goal of enabling companies to take as many innovative steps as possible while minimizing AI risks. Existing laws (such as the General Data Protection Regulation or copyright law) already apply in some cases, and new regulations are being created specifically for this area (such as the proposed EU regulation on artificial intelligence).
AI – the European Commission’s definition
There is no single, generally recognized definition of AI. Instead, the term artificial intelligence is used in different contexts to describe different technologies. In its proposal for a regulation on artificial intelligence, the European Commission defines an AI system as “software that is developed with one or more of the techniques and approaches listed in Annex I (to the Regulation) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
In German industrial companies, AI is mainly used to optimize production and manufacturing processes, increase productivity and quality, enable predictive maintenance, and reduce costs.
AI and the jungle of laws
While companies across all sectors now show a high willingness to invest in AI systems as a key future technology, many of them see legal uncertainty in particular as an obstacle to deploying AI in their businesses. Although there is currently no standalone legal framework for the use of AI systems, a wide range of national and international rules already apply that companies must consider when using AI. These include, for example, contract law governing cooperation agreements, copyright and patent law in connection with AI, data protection law, and the planned regulation on artificial intelligence (2021/0106 (COD)).
EU regulation on artificial intelligence – a new area of law
The EU Commission’s April 2021 proposal for a regulation on artificial intelligence is part of the effort to create a unified regulatory framework at the European level. It is flanked by the new EU Machinery Regulation (2021/0105 (COD)), which concerns the general safety (not only in relation to artificial intelligence) of the entire end product. The AI regulation and the new Machinery Regulation are intended to enter into force at the same time – which will likely not happen before 2023.
The draft AI regulation addresses both providers of AI and users who deploy it in a professional context. It aims to promote trustworthy AI in the EU by classifying AI systems into different risk categories and banning certain systems outright, in order to ensure that EU fundamental rights are protected. Low-risk AI systems, such as chatbots or spam filters, face no regulatory requirements beyond individual transparency obligations, so the core of the regulation concerns high-risk AI.
Under the draft regulation, high risk results from the expected negative impact on European fundamental rights. This is feared, for example, when AI is used in critical infrastructure (e.g. traffic), as a safety component of products, or in recruitment and human resources management. Providers and users must accordingly meet comprehensive obligations before and after placing such systems on the market, including conformity assessment, risk management, human oversight of the system, technical documentation, accuracy, robustness, cybersecurity, and quality management.
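The tiered logic of the draft – prohibited, high-risk, limited-risk, and minimal-risk systems, each with different obligations – can be illustrated with a simplified sketch. The tier names, example use cases, and their mapping are our own illustration loosely following the draft’s structure, not the Regulation’s text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, risk management, human oversight, documentation"
    LIMITED = "transparency obligations only (e.g. disclose that it is a bot)"
    MINIMAL = "no specific obligations"

# Illustrative mapping of use cases to tiers (simplified, not exhaustive).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "traffic_infrastructure_control": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

A compliance team could use such a mapping as a first screening step, before a lawyer assesses the concrete system against the Regulation’s annexes.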
Furthermore, providers and users are responsible for continuously monitoring their systems, reporting serious incidents, and implementing appropriate remedial measures. In the event of serious violations of the regulation, companies face fines of up to 30 million euros or 6 percent of global annual turnover.
Data protection must also be considered when using AI
When using AI, companies should always keep data protection risks and requirements in view. As soon as AI processes data that allows a natural person to be identified, those activities fall under the EU General Data Protection Regulation (GDPR), which in Germany is supplemented by the Federal Data Protection Act (BDSG). The technical implementation of the AI is crucial here, because decisions with legal effect or a similarly significant impact on fundamental rights must not be left to the machine alone. The general principles of the GDPR also apply, such as data minimization: personal data may only be processed for a specified purpose and only to the extent necessary for that purpose. The controller should implement these principles through the design of the technology (privacy by design) and appropriate data-protection-friendly default settings (privacy by default).
Anonymized data is not covered by the GDPR and BDSG
Beyond careful technical design, compliance effort from a data protection standpoint can be reduced by pseudonymizing personal data. Ideally, to avoid falling within the scope of the GDPR or BDSG at all, the relevant personal data should be anonymized before being processed by AI.
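What the difference between pseudonymization and anonymization can look like in practice is sketched below. This is a minimal illustration; the field names, the hypothetical machine log, and the salted-hash approach are our own assumptions, and robust anonymization in reality requires more than dropping or hashing direct identifiers (re-identification via the remaining fields must also be ruled out):

```python
import hashlib

# Fields in our hypothetical machine log that directly identify a person.
DIRECT_IDENTIFIERS = {"operator_name", "employee_id"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted hashes. The mapping remains
    reversible for whoever holds the salt, so the GDPR still applies."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

def anonymize(record: dict) -> dict:
    """Drop direct identifiers entirely before the data reaches the AI."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

log = {"operator_name": "J. Smith", "employee_id": "4711",
       "machine": "press-03", "temperature_c": 82.5}
```

The design point: pseudonymized data stays personal data under the GDPR, while properly anonymized data leaves its scope, which is why anonymization before AI processing is the preferable route where the use case allows it.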
Where the GDPR does remain applicable, companies must establish a sustainable data protection concept. This includes conducting a data protection impact assessment where required and observing the GDPR’s central principles: transparency, purpose limitation, and accountability in data processing.
To ensure ongoing legal compliance, it also makes sense for companies to appoint an AI officer in addition to the data protection officer, who acts as the central point of contact for the use of AI.
When dealing with artificial intelligence, companies should:
- Check whether data can be anonymized so that the GDPR and BDSG do not apply
- Develop a sustainable data protection concept
- Carry out a data protection impact assessment where required
- Observe core principles such as transparency, purpose limitation, and accountability
- Appoint an AI officer
Conclusion: The legal framework is promising
The good news for companies: although the regulations governing the use of AI are varied and not yet uniform, they can be implemented well with a suitable concept and appropriate legal advice, so that companies can benefit even more from technical advances in the future. It is encouraging that the EU legislator’s goal is to foster innovation in the EU. At the same time, considerable weight is rightly given to fundamental rights and human dignity. What we see today is a legal framework that is not yet final but is heading in a promising direction. Companies can look forward to a smart legal framework and need not fear a jungle of clauses.