12:57 Mon 10.02.25
Australian courts respond to the problems of AI use |
|
The extent to which artificial intelligence will be used in the legal profession is still difficult to assess, but courts are already concerned about its use in affidavits, witness statements, and citations of precedent. Australian judges worry about the reliability of evidence. The Guardian reports on this and cites several cases of improper use of the new technology.

In one instance, a lawyer short on time used ChatGPT to generate a summary of court decisions, which he included unchecked in a filing in an immigration case. The summary contained seventeen non-existent cases. When this came to light, the lawyer apologised to the federal court, which noted that the use of generative AI raises serious public concerns and that it is therefore important to set the right example by preventing its uncontrolled use by lawyers.

Another case arose in a family dispute. A lawyer used AI to compile a list of court decisions at the judge's request, but when the judge and her assistants checked the list, they could not find the cases in the registers. The lawyer later admitted the list had been generated with Leap's legal platform, which has integrated generative AI.

The problem is not limited to lawyers. Judges have also expressed concern about the use of AI by witnesses and by people representing themselves. In one case, a defendant submitted a character reference from his brother. The court suspected it had been written by AI because it contained an odd phrase: «I have known my brother both personally and professionally for a long time». The court noted that the phrase did not fit the context and cast doubt on whether the brother had actually written the reference himself.

Thomson Reuters, a media company that offers its own artificial intelligence software for lawyers, surveyed 869 Australian lawyers last year. It found that 40% of respondents worked in firms that were experimenting with AI but using it cautiously.
Nine per cent of lawyers had already used AI in their daily work, and almost one in three said they would like a generative AI assistant for legal work.

The Supreme Court of New South Wales recently issued new guidance restricting lawyers' use of generative AI to prepare affidavits, witness statements, character references, and any other material submitted as evidence or used in cross-examination.

Jeannie Paterson, professor of law and director of the Centre for Artificial Intelligence and Digital Ethics at the University of Melbourne, believes the main problem is not AI itself but the lack of training for lawyers. In her view, properly trained people would quickly realise that AI is not good at such tasks.

The Victorian Legal Services Commission has likewise identified lawyers' misuse of AI as a key risk. Lawyers should remember that their duty is to provide accurate legal information, not to rely on AI: unlike a professional lawyer, AI can neither provide a high-quality legal assessment nor guarantee confidentiality.
© 2025 Unba.org.ua All rights reserved