Ethical Governance of Artificial Intelligence Hallucinations in Legal Practice
Keywords:
Generative Artificial Intelligence, Hallucinated Legal Citations, Ethical Governance, Retrieval-Augmented Verification, AI Liability Frameworks, Comparative Legal Ethics

Abstract
This paper examines the ethical and legal challenges posed by “hallucinations” in generative-AI tools used for legal drafting—instances where language models fabricate case citations or statutory text with convincing authority. Drawing on a comprehensive review of professional-responsibility rules, civil-liability doctrines, and technical mitigation strategies, the study assesses how existing frameworks address, or fail to prevent, AI-induced errors in attorney filings. Empirical benchmarking data reveal that leading retrieval-augmented models still produce fabricated authorities in up to one-third of complex queries, while sanctions under traditional malpractice and negligence regimes remain retrospective and inconsistent. Comparative analysis of U.S. and EU liability proposals—the AI Liability Directive and the Revised Product Liability Directive—highlights gaps in coverage for bespoke legal services. In response, the paper proposes an integrated governance model combining binding bar-association standards (mandatory AI-literacy training, provenance logging, and human-in-the-loop review), statutory safe-harbor provisions granting rebuttable presumptions of compliance, and robust technical protocols. The study concludes by recommending targeted rule-making, pilot programs to evaluate framework efficacy, and incorporation of AI governance curricula in legal education, thereby safeguarding the integrity of legal practice in the AI era.
References
American Bar Association. (2024). Formal Opinion 512: The role of lawyers in the use of generative AI tools. American Bar Association.
Bashayreh, M. H., Tabbara, A., & Sibai, F. N. (2023). The need for a legal standard of care in the AI environment. Sriwijaya Law Review, 73–86.
Buchner, B. (2022). Artificial intelligence as a challenge for the law: the example of “Doctor Algorithm”. International Cybersecurity Law Review, 3(1), 181-190.
Chamberlain, J. (2023). The risk-based approach of the European Union’s proposed artificial intelligence regulation: Some comments from a tort law perspective. European Journal of Risk Regulation, 14(1), 1–13. https://doi.org/10.1017/err.2022.38
Epstein Becker & Green. (2024). AI and ethics in the legal profession. Retrieved from https://www.ebglaw.com/assets/htmldocuments/eltw/eltw385/AI-and-Ethics-in-the-Legal-Profession-Epstein-Becker-Green.pdf
European Commission. (2022a). Proposal for a directive on adapting non‐contractual civil liability rules to artificial intelligence (AI Liability Directive) (COM (2022) 496 final). European Commission.
European Commission. (2022b). Proposal for a directive on liability for defective products (Revised Product Liability Directive) (COM (2022) 495 final). European Commission.
Koch, B. A., Borghetti, J.-S., Machnikowski, P., Pichonnaz, P., Rodríguez de las Heras Ballell, T., Twigg-Flesner, C., & Wendehorst, C. (2022). Response of the European Law Institute to the public consultation on civil liability: Adapting liability rules to the digital age and artificial intelligence (ELI Response Paper). European Law Institute. https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_
Llorca, D. F., Charisi, V., Hamon, R., Sánchez, I., & Gómez, E. (2023). Liability regimes in the age of AI: A use-case-driven analysis of the burden of proof. Journal of Artificial Intelligence Research, 76, 613–644. https://doi.org/10.1613/jair.1.14565
Kharitonova, E. (2022). Legal means of providing the principle of transparency of AI: A comparative analysis. Journal of AI Policy, 15(2), 45–67.
Duffourc, M. N., & Gerke, S. (2023). The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI. NPJ Digital Medicine, 6(1), 77.
Pistilli, G., Muñoz Ferrandis, C., Jernite, Y., & Mitchell, M. (2023, June). Stronger together: On the articulation of ethical charters, legal tools, and technical documentation in ML. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 343–354).
Giannini, A., & Kwik, J. (2023, March). Negligence failures and negligence fixes: A comparative analysis of criminal regulation of AI and autonomous vehicles. Criminal Law Forum, 34(1), 43–85. Dordrecht: Springer Netherlands.
Hacker, P. (2023). The European AI liability directives–Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review, 51, 105871. https://doi.org/10.1016/j.clsr.2023.105871
Jacobs, M., & Simon, J. (2022). Assigning obligations in AI regulation: A discussion of two frameworks proposed by the European Commission. Digital Society, 1(1), 6.
Mata v. Avianca, No. 22-cv-1461 (PKC) (S.D.N.Y. 2023).
Prictor, M. (2023). Where does responsibility lie? Analysing legal and regulatory responses to flawed clinical decision support systems when patients suffer harm. Medical Law Review, 31(1), 1-24.
Rodríguez de las Heras Ballell, T. (2023, August). The revision of the product liability directive: A key piece in the artificial intelligence liability puzzle. ERA Forum, 24(2), 247–259. Berlin/Heidelberg: Springer Berlin Heidelberg.
Lawless, J. (2025, June 7). UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court. AP News. https://apnews.com/article/uk-courts-fake-ai-cases-46013a78d78dc869bdfd6b42579411cb
Stanford HAI. (2024, May 23). AI on trial: Legal models hallucinate in 1 out of 6 (or more) benchmarking queries. https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
License
Copyright (c) 2025 Khurram Shahzad Warraich, Hazrat Usman, Sidra Zakir, Mohaddas Mehboob (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.