AI in advocacy and justice: ethics, regulation, limits of application
The Ukrainian National Bar Association held a roundtable discussion entitled «Artificial Intelligence in the work of advocates: ethics, responsibility, legal process engineering». Participants discussed how artificial intelligence systems are already being used in the professional activities of advocates and where the ethical boundaries of what is permissible lie.
Project or subject?
Oleksiy Yushchenko, chairman of the Committee on Legislative Initiatives on Advocacy, explained that the roundtable was held at the Committee's premises so that issues related to artificial intelligence in advocates' work could be formalized as legislative proposals. He also called on those present to submit ideas, which the Committee would systematize into draft amendments.
Valentyn Gvozdiy, Vice President of the UNBA and the BCU, confirmed that useful initiatives in the organization of advocates' work could be implemented, in particular at the level of the UNBA. He outlined the roles of the Committees that organized the event. In his view, the Committee on Legislative Initiatives should determine where statutory regulation is needed and prepare proposals for parliament, while the Committee on Digitalization of the Advocacy should promptly develop a training course at the Higher School of Advocacy so that advocates understand the tools' capabilities and limitations and the correct scenarios for their use, taking into account the cost of individual products and paid tiers of services.
V. Gvozdiy noted that software products that work with large data sets help advocates do their work faster and better, but the problem of misuse of such tools is growing in importance. «It is important to remember that what ChatGPT offers you, for example, is just a draft, a guideline that does not replace your role. And it will be unfortunate if you believe text with references to non-existent documents and non-existent facts. The worst thing an advocate can do is to hand their work over to a machine», - he warned.
Andriy Prykhodko, chairman of the UNBA Committee on Digitalization of the Advocacy, supported the idea of preparing training materials for advocates explaining the forms, limits, and consequences of using AI in their work. He called on all interested parties to join the Committee's work in this area.
The UNBA roundtable was supported by the Pylyp Orlyk Foundation. Its chairman Artem Mykolaychuk argued that artificial intelligence has already gained subjectivity. «We have imperceptibly entered a new era. Unfortunately, we were not prepared for this and did not realize it», - he said. As examples, the expert cited a forecast of 10.2% unemployment in the USA by the end of 2027, as well as reports that Indian IT companies are cutting their staff by 80%.
In this context, he emphasized the need to think through the right model of interaction between artificial intelligence and humans: how to use it or how to partner with it, how to regulate it, and how to limit its rights. Without such an approach, there is a risk of ending up in a future that humans will not be able to control.
Three «layers» for advocacy
The topic of risks and opportunities raised by the previous speakers was continued by Oleh Chornobai, a member of the UNBA Working Group on legal regulation of artificial intelligence. The advocate assessed the pace of legal regulation of artificial intelligence at the international level and gave examples of technological development. Against this backdrop, he noted that Ukraine is lagging behind in legal regulation, even though technological aspects, including the defense sector and the use of drones, are developing actively.
Separately, O. Chornobai emphasized the place of advocacy and formulated three levels of its interaction with artificial intelligence. He described the first as utilitarian: the use of AI by advocates to prepare procedural documents and even to assist in forming a legal position, where the system analyzes the collected evidence and can make predictions about the position and outcome of the case. This necessitates instructional and advisory materials for advocates on the practical use of tools and the correct formulation of tasks for them.
The member of the relevant working group defined the second level as professional legal services for AI developers and users: advocates can assist them in preparing documents, representing them in court, and other legal procedures. He outlined the third level as possible services for AI itself—if the discussion about subjectivity leads to the recognition of a certain status for it. «So far, this is a futuristic level for Ukraine, but we will keep in mind that it can be developed», - O. Chornobai cautioned.
Moving on to methodology, he emphasized that the second and third levels are directly related to lawmaking and law enforcement, and therefore require a basic understanding of the law. The advocate described the problems of the positivist approach in the absence of specific regulations and noted that in Ukraine there are in fact only isolated norms in this area, particularly in legislation on copyright and academic integrity. At the same time, he pointed out that real relations regarding the use of AI are already developing faster than regulatory harmonization.
O. Chornobai considers the European approach a benchmark for rule-making and focused on EU Regulation 2024/1689 of 13 June 2024, which establishes harmonised rules on artificial intelligence. He also mentioned the Ukrainian approach, which the Ministry of Digital Transformation has defined as «soft law»: the 2023 Regulatory Roadmap and the 2024 White Paper, with a focus on recommendations, codes of conduct, a «regulatory sandbox», and preparation for the further implementation of European regulations.
European risk scale
Oleksiy Zhmerenetsky, MP and co-chair of the inter-factional deputy association «Strategic Foresight of Ukraine», said that together with his colleagues he is researching forward-looking changes in legislation related to technological challenges.
According to him, while the European AI Act was being drafted, they had the opportunity to communicate with the authors, secretariat, and scientists who participated in the development of the document. The act was adopted in the summer of 2024, and its implementation will be phased in until 2027. At the same time, 2026 is a key year due to the obligation of EU member states to introduce national regulations for high-risk AI systems.
The MP then outlined the logic behind the European approach, emphasizing that different regions of the world perceive risks and the balance between technology and human rights differently. He described the European Union as the jurisdiction with the most stringent and detailed AI regulation. The Chinese approach focuses on state interests, control, and security, while the USA is dominated by liberal, market-based approaches and the priority of business and corporate interests.
The AI Act introduces four levels of risk: unacceptable, high, medium, and low. Unacceptable risks include the use of AI for social scoring, emotion recognition, and psychological profiling, as well as for risk prediction and preventive influence on individuals. Such practices are completely prohibited in the EU.
High risk is associated with critical infrastructure and personnel selection procedures, where use is possible but subject to transparency of algorithms, clear criteria, and auditing, so that the system is not a «black box». O. Zhmerenetsky explained medium risk using the example of content generation and chatbots, where the key requirement is to inform people that they are interacting with a machine. As examples, he cited the labeling of AI content on social networks and the self-identification of chatbots in service communications. Low risk covers applied tasks such as spam filters in email, to which a more liberal approach applies.
The MP stressed that Ukraine will have to implement the provisions of the AI Act given its course towards EU membership, but, in his opinion, there is no need to get ahead of the curve because of the costs and complexity of adaptation for business. He said that the Ministry of Digital Transformation is already working in this direction; a bill has not yet been submitted to parliament, although discussions are ongoing.
O. Zhmerenetsky also touched on the judicial system: he suggested integrating AI to prepare draft decisions and reduce technical time costs, and linked automated decisions to mass «technical» cases, such as fines for traffic violations.
At the same time, according to the MP, it is unlikely that artificial intelligence systems will be introduced in criminal cases.
An assistant, not a replacement
Igor Lapin, a member of the Verkhovna Rada of the VIII convocation, outlined his position on the use of artificial intelligence, primarily in the justice system, describing himself as more of an opponent than a supporter of this direction.
He called the use of artificial intelligence acceptable for systematizing legislation, summarizing judicial practice in certain categories of cases, and forming case studies. At the same time, the lawyer linked the key problem to how AI can influence the formation of a judge's opinion. After all, judges are guided by the law and their inner convictions, which are formed under the influence of many factors. In this context, he asked whose side artificial intelligence would be on in such a model: the law, the individual, justice, or the mechanism of forming inner convictions.
He drew particular attention to the principle of adversarial proceedings and described the risk of a situation where the «artificial intelligence of the prosecutor» would compete with the «artificial intelligence of the advocate». In his opinion, human psychology in a specific situation cannot be reduced to a mathematical calculation, so he considered the role of AI to be merely auxiliary.
In the same vein, he raised the issue of responsibility for mistakes: who should be held responsible if the system «paints the wrong picture» from the data and this affects the perception of reality. He outlined the chain of risks that may arise if artificial intelligence is used by an advocate, a prosecutor, or even a system that helps a judge prepare a preliminary draft decision. «To be honest, I would be afraid to take such risks in the context of justice», - I. Lapin concluded.
He also addressed the issue of personal data and who grants the right to use it for calculations. The lawyer mentioned that modern systems sometimes refuse to give medical advice and, in this regard, asked who and on what basis considers it acceptable to give legal advice when it comes to a person's fate. Separately, he described the risk of information arrays influencing system responses, citing a scenario in which AI could process large data arrays, formed in particular by bot farms, in response to a hypothetical political question and produce a distorted picture of reality.
Touching on the military sphere, I. Lapin noted that the use of AI there may seem good until the question of responsibility for the consequences arises, especially if the technology changes behavior and this leads to losses. In civilian life, according to his logic, the problem is exacerbated by the fact that it is difficult to hold a transnational corporation accountable.
In conclusion, he returned to the distinction between roles: artificial intelligence can be useful as a tool for quickly systematizing large amounts of data and practice, but as a substitute for humans, it is unacceptable in matters involving humans and their perceptions.