How to implement the EU AI Act: 10 practical tips

The EU AI Act is the first global AI law to set standards for the safe and legally compliant use of Artificial Intelligence (AI), with strict rules for high-risk systems and bans on certain applications. It leaves companies only a short transition period to achieve compliance. This article offers concrete recommendations for digital, legal, compliance and risk officers on implementing an AI policy. The time to act is now to shape the digital transformation responsibly.



The EU AI Act as a milestone in global AI regulation

The Artificial Intelligence Act (AIA), based on the political agreement reached in early February 2024, is the first global AI law that guarantees safety, legality and fundamental rights in the use of AI. Accompanied by an AI Liability Directive and the proposal for a directive of the European Parliament and of the Council on liability for defective products, this comprehensive set of regulations is intended to close liability gaps within the scope of EU law, facilitate the handling of legal violations caused by AI and thus create a stable legal framework for companies that build, offer and adopt AI systems.

With a risk-based approach, mandatory impact assessments and strict rules for high-risk AI systems, the AIA sets high standards and imposes severe penalties for non-compliance. Violations can cost up to EUR 35 million or 7% of global annual turnover, whichever is higher. The strict regulations are intended to ensure that AI systems respect the safety and fundamental rights of European citizens.

Ban on AI systems with unacceptable risk

This category includes AI systems that adversely affect human behavior in a subliminal manner, enable the exploitation of vulnerabilities of vulnerable persons, are used to assess the trustworthiness of natural persons (“social scoring”) or are used for real-time biometric surveillance in public spaces for law enforcement purposes. Such AI systems are prohibited as they pose a significant risk to fundamental rights and freedoms. The strict bans emphasize the EU’s desire to protect citizens from the potentially harmful effects of AI.

Strict rules for high-risk AI systems

AI systems that pose a high risk to people’s health, safety or fundamental rights, such as human dignity, the protection of personal data or freedom of expression, are subject to strict regulations covering their design, commissioning and use, including requirements for data quality, security, documentation and human oversight. Proof of conformity with the AIA, including a quality management system and validation of the AI system, must be made visible through CE marking.

Code of conduct for AI systems with low/minimal risk

AI systems that do not fall into the first two categories are considered low/minimal risk and are subject to less stringent requirements. Providers of such systems are encouraged to draw up codes of conduct and to apply the regulations for high-risk AI systems voluntarily. The AIA nevertheless requires that these systems be safe in order to be placed on the market and operated; voluntary application of the high-risk rules is one way to ensure this.

Special regulation for “general purpose AI models” 

In addition to the three main risk categories, the AIA introduces separate regulations for general purpose AI (GPAI) models, which can be used as stand-alone systems or as an integral part of other systems. In principle, GPAI models are considered systems with limited risk and are subject to transparency obligations. However, a GPAI model is presumed to pose systemic risk if the computational effort for its training exceeds 10^25 floating-point operations (FLOPs), since models of that scale, in particular generative AI, must be assumed to have a high potential impact. For GPAI models, in addition to providing technical documentation and instructions for use, complying with copyright law and publishing a transparent summary of the content used for training, further obligations regarding risk management and cyber security must be observed.
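To make the 10^25 FLOPs threshold tangible, here is a minimal sketch that estimates training compute and checks it against the systemic-risk presumption. The 6·N·D approximation (6 FLOPs per parameter per training token) is a common rule of thumb from the ML literature, not part of the Act itself, and the function names are our own.

```python
# Hypothetical sketch: does a GPAI model cross the AIA's systemic-risk
# compute threshold of 1e25 floating-point operations (FLOPs)?
# The 6*N*D estimate (N parameters, D training tokens) is a rule of thumb,
# not a definition from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AIA presumption for GPAI models


def training_compute_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def is_presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return training_compute_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a 100B-parameter model trained on 20T tokens
compute = training_compute_flops(100e9, 20e12)
print(f"{compute:.2e} FLOPs -> systemic risk presumed: "
      f"{is_presumed_systemic_risk(100e9, 20e12)}")
```

Under this estimate, a 100B-parameter model trained on 20T tokens lands at roughly 1.2e25 FLOPs and would therefore be presumed to pose systemic risk, triggering the additional obligations described above.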

Create an AI policy for your own company 

The AIA defines strict regulations for AI systems and high penalties for non-compliance, which will entail substantial compliance costs for companies. These measures, but above all the rapid pace of AI development, especially the latest achievements in generative AI, underline the urgent need for internal policies on the legally compliant and ethically responsible use of AI (“Trustworthy AI”). The AIA offers initial guidance here, but it must be adapted to company practice.

We do not believe that a blanket ban on AI is appropriate or expedient. Instead, the digital transformation should be organized responsibly, in compliance with applicable law and in line with our European values in order to create a clear framework for employees.

10 tips and tricks on how to create an AI policy for companies of all sizes:

  1. Define clear roles and responsibilities by interlinking digital, IT, legal, information security and data protection officers from your company in an integral organizational structure and process organization.
  2. Involve the works council and co-determination committees at an early stage to ensure support and compliance.
  3. Develop a comprehensible definition of AI with practical examples to promote a common understanding. Ensure that the definition is technology-neutral. It is not the technology that is decisive, but the potential consequences.  
  4. Determine the scope of your AI policy, including geographical, procedural and legal implications as well as possible effects on suppliers and service providers.
  5. Define guidelines for the ethically responsible use of AI in line with your corporate values. Set clear boundaries (“red lines”).
  6. Involve subject matter experts and employees in assessing the criticality of AI systems, align the assessment with the risk classes of the AIA and dovetail the process with your risk management system (RMS) and internal control system (ICS).
  7. Provide specific templates for the criticality assessment of new AI systems by employees to simplify the application process.
  8. Formulate practical rules for the use of AI in day-to-day work: 
    • Restrict use strictly to professional purposes and only allow access via internal company accounts.
    • Always observe the terms and conditions of the tool providers, especially with regard to the processing of personal data. 
    • Carefully regulate the handling of sensitive information. 
    • Critically review the results from AI applications. 
    • Ensure compliance with copyright laws. 
    • Mark AI-generated content transparently and clearly. 
    • Address legal issues. These include liability issues, the prohibition on the collection and utilization of evidence in accordance with the German Code of Criminal Procedure and the clarification of property rights when AI systems generate intellectual property.
  9. Define the consequences of non-compliance and develop contingency plans for critical situations.
  10. Anchor your AI policy in your company through communication and training measures.
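Tips 6 and 7 call for a criticality assessment aligned with the AIA's risk classes. As a purely illustrative sketch, the template below sorts a use case into the Act's four buckets; the questionnaire fields, their names and the decision order are our own assumptions, not an official AIA mapping.

```python
# Illustrative criticality-assessment template (not an official AIA mapping):
# a minimal questionnaire that sorts an AI use case into the Act's four
# risk buckets. Field names and decision order are assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    manipulates_behavior: bool    # subliminal manipulation, exploits vulnerable persons
    social_scoring: bool          # trustworthiness scoring of natural persons
    realtime_biometric_id: bool   # real-time biometric ID in public spaces
    high_risk_domain: bool        # e.g. hiring, credit scoring, critical infrastructure
    general_purpose_model: bool   # built on a GPAI model, e.g. an LLM


def classify(uc: UseCase) -> str:
    """Map a use case to an AIA risk bucket, checking prohibitions first."""
    if uc.manipulates_behavior or uc.social_scoring or uc.realtime_biometric_id:
        return "prohibited"
    if uc.high_risk_domain:
        return "high-risk"
    if uc.general_purpose_model:
        return "GPAI (transparency obligations)"
    return "low/minimal risk"


chatbot = UseCase("internal support chatbot", False, False, False, False, True)
print(classify(chatbot))  # prints "GPAI (transparency obligations)"
```

In practice such a template would live in a form or workflow tool (tip 7), with each answer backed by guidance text, and the resulting classification would feed directly into the RMS/ICS control process (tip 6).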

Act now!

The EU negotiating partners have presented the final version of the AIA. A leaked document of around 900 pages sets out the agreement between the European Parliament and the EU member states.

What’s next? The final version will be officially published, followed by votes in the committees of the European Parliament and a plenary session and Council meeting at the end of April. Once formally adopted, the law will enter into force 20 days after publication in the Official Journal, around May 2024. Although certain provisions, particularly the bans on unacceptable-risk systems, will apply six months after entry into force, most other regulations for AI systems will “only” apply 12 or 24 months after entry into force. In parallel, the EU member states will adapt their national frameworks to the AIA.

Companies therefore do not have much time to analyze the implications and use the transition period for an effective AI framework. AI can improve working life, increase efficiency and offer strategic opportunities for new growth. Create a legally compliant and ethically responsible approach to AI with an AI policy. Seize this opportunity and don’t wait any longer. Act now!


Jan Hasse

Jan is Partner and Managing Director at hy Technologies. He has 10+ years of experience in applied Artificial Intelligence (AI) at Deloitte, PwC, Bertelsmann and Allianz. As a visionary leader and bold creator, he was instrumental in founding Germany's AI Park. As founder of a self-service marketplace for distributed AI solutions, he combines corporate experience with a lean startup mindset. Together with his team, Jan designs and develops AI-fueled business models and scalable products to increase efficiency and optimize costs for our customers in small and medium businesses, multinational corporations and public administrations. With a strong network in the European AI ecosystem, he transforms AI into RoI and focuses on human-centered AI products. As a thought leader, he is also a sought-after speaker and panelist at leading events on the topics of Machine Learning, Deep Learning and Foundational Models.