The law enumerates examples of AI systems that pose unacceptable risks; systems in this category are prohibited. Examples include real-time remote biometric identification in publicly accessible spaces, social scoring systems, and the use of subliminal techniques to exploit the vulnerabilities of specific groups.
High-risk systems are allowed, but must meet multiple requirements and undergo a conformity assessment. This assessment must be completed before the system is placed on the market. These systems must also be registered in an EU database that will be set up for this purpose. Operating high-risk AI systems requires appropriate risk management, logging capabilities, and human oversight. Data used for training, validation, and testing must be subject to appropriate data governance and controls to ensure cybersecurity, robustness, and fairness.
Examples of high-risk systems include models involved in the operation of critical infrastructure, systems used in recruitment processes or employee evaluations, credit scoring systems, automated processing of insurance claims, and the setting of risk premiums for customers.
The remaining systems are considered to pose limited or minimal risk. In these cases, transparency is required: users must be informed that they are interacting with an AI or that the content they see was generated by one. Examples include chatbots and deepfakes. Although these are not considered high risk, it is important that users know there is an AI behind them.
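The tiered structure described above can be sketched as a simple lookup. This is purely an illustration: the mapping of use cases to tiers below is an assumption drawn from the examples in this article, and the tier names and obligation summaries are simplifications, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers as described in this article."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, registration, risk management, oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping of example use cases to tiers; the Act itself
# classifies systems by legal criteria, not by a lookup table like this.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
}

def obligations(use_case: str) -> str:
    # Default to MINIMAL for use cases the article does not single out.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("credit scoring"))
print(obligations("chatbot"))
```

A real classification exercise would of course be a legal assessment, but even a toy table like this can help an organization inventory its AI systems against the tiers.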
All operators of AI models are encouraged to implement an ethical AI code of conduct.
Step 3: Get ready
If you are a provider, user, importer, or distributor of an AI system, or a person affected by one, you must ensure that your AI practices comply with these new regulations. To begin the process of becoming fully compliant, take the following steps: (1) assess the risks associated with your AI systems, (2) raise awareness, (3) develop ethical systems, (4) assign responsibilities, (5) stay current, and (6) establish formal governance. By taking proactive steps now, your organization can avoid potentially significant sanctions when the law takes effect.
Please note that this article refers to an ongoing legislative process that may still lead to changes in the requirements described here.
What are the penalties for violation?
The penalties for violating the AI Act are significant and can seriously impact a provider's or adopter's business. Fines range from 10 million to 40 million euros, or from 2% to 7% of annual global turnover, depending on the severity of the infringement. It is therefore important that stakeholders fully understand the AI Act and comply with its provisions.
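To get a feel for what these numbers mean in practice, the sketch below computes a fine ceiling, assuming (as is common in EU legislation such as the GDPR, though the final AI Act text governs) that the applicable ceiling is the higher of the fixed amount and the turnover percentage. The specific figures are taken from the range cited in this article; they are illustrative, not legal advice.

```python
def fine_ceiling(fixed_cap_eur: float, turnover_pct: float,
                 annual_global_turnover_eur: float) -> float:
    """Maximum possible fine, assuming the ceiling is the higher of a
    fixed amount and a percentage of annual global turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_global_turnover_eur)

# Most severe tier cited in the article: EUR 40M or 7% of turnover.
# For a firm with EUR 1bn global turnover, the percentage dominates.
print(fine_ceiling(40_000_000, 0.07, 1_000_000_000))  # -> 70000000.0

# For a smaller firm with EUR 100M turnover, the fixed cap dominates.
print(fine_ceiling(40_000_000, 0.07, 100_000_000))  # -> 40000000.0
```

The takeaway: for large financial institutions, the turnover-based component, not the fixed amount, is what makes non-compliance expensive.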
How will the financial services sector be affected by this legislation?
Financial services has been identified as one of the sectors where AI can have the greatest impact. The EU AI Act includes a three-tier risk classification model that classifies AI systems by the level of risk they pose to fundamental rights and user safety. The financial sector uses a large number of models and data-driven processes, and will rely on AI even more heavily in the future. Processes and models used to assess creditworthiness and to set customer risk premiums are expected to fall into the high-risk category. Models used to operate and maintain critical financial infrastructure, as well as AI systems used for biometric identification and categorisation of natural persons or for employment and employee management, also fall into the high-risk category. So far, items that have not been included in the scope of this risk classification include, among others, AI systems used purely to improve the customer experience, fraud detection systems, customer lifetime value prediction, and pattern analysis, provided they do not directly influence decisions about individual customers.