17:13 Fri 04.10.24
AI regulation in Ukraine: what has already been done

As part of its three-year Roadmap for the Regulation of Artificial Intelligence in Ukraine, the Ministry of Digital Transformation of Ukraine has presented a White Paper that gives businesses specific tools for using AI.

Sergiy Barbashyn, Chairman of UNBA NextGen, shared Ukraine's experience of introducing artificial intelligence, and the state's work on developing future AI legislation, with foreign colleagues during the European Young Bar Association International Weekend 2024, held on September 26-29, 2024 in London (UK).

He recalled that last fall the Ministry of Digital Transformation of Ukraine presented the Roadmap for Artificial Intelligence Regulation in Ukraine, which is intended to help Ukrainian companies prepare for the adoption of a law similar to the European Union's Artificial Intelligence Act and to educate citizens on how to protect themselves from AI risks. In June of this year, as part of the Roadmap, the Ministry presented a White Paper detailing Ukraine's approach to artificial intelligence regulation. The document is meant to help companies understand how to prepare for future AI legislation and to create products that are safe for citizens.

S. Barbashyn also reminded his audience which groups are covered by the AI Act, adopted by the European Parliament in March 2024.
He also drew attention to the risks associated with the use of AI. The AI Act's classification of AI systems by risk level is central to protecting users and society from the potential dangers of AI. Establishing four risk groups determines the level of oversight and regulation applied to each category of system: systems posing an unacceptable risk are prohibited outright because they threaten fundamental human rights, while high-risk systems are subject to strict testing and compliance requirements. This minimizes the likelihood of negative consequences and ensures user safety.

The classification also lets regulators and businesses clearly delineate responsibilities according to a system's risk level. Companies operating high-risk systems should be prepared for additional transparency, risk-management, and certification requirements; this is important for maintaining user trust and avoiding legal issues. Systems with limited or minimal risk, by contrast, face no such strict requirements but must still provide a sufficient level of transparency and information.

«Knowing these four risk categories allows businesses to properly assess potential threats and plan the implementation of AI technologies accordingly. Correct classification of AI systems not only helps avoid fines and reputational risks but also promotes the ethical and safe use of artificial intelligence across various fields of activity», summarized S. Barbashyn.

© 2024 Unba.org.ua All rights reserved