
AI in Law: The Real Risks Beyond Hallucinated Cases
April 9, 2025 | Feature Article
Chatter about artificial intelligence (AI) echoes through the halls of law schools, whether it’s a professor disparaging AI and its capabilities in the classroom or a classmate whispering about a new way AI saved them time on some laborious task or assignment. Some professors permit AI use on exams, while others denounce AI in all its forms, expressly stating in their syllabi that any AI usage will be deemed academic dishonesty. These incongruities and mixed messages have made discussions of AI contentious and disconcerting. Regardless of one’s viewpoint, there are legitimate ethical considerations lawyers must acknowledge when using AI. Everyone has heard horror stories of attorneys being sanctioned for citing non-existent, hallucinated cases. Yet for diligent lawyers who use AI responsibly and knowledgeably, hallucinations and false information should be of minimal concern. The legal profession should not eschew AI in practice but rather recognize its legitimate issues so that it can be used sensibly and responsibly as a tool to improve advocacy and legal practice as a whole. In this article, I discuss the burgeoning issue of AI privacy within the practice of law and the ways in which a lawyer may evaluate whether an AI platform is trustworthy.
Legal research AI models such as Westlaw’s CoCounsel and Lexis+ AI have been developed and maintained with attorneys and law firms in mind. Both platforms implement significant security measures to protect and encrypt data, particularly because their intended users handle sensitive material spanning the full range of legal matters. When using these platforms, lawyers can thus be reasonably confident that adequate data protection and security measures are in place. If lawyers limited their AI use to platforms created with a lawyer’s ethical duties and responsibilities in mind, privacy and data security would be of significantly lesser concern.
However, some tasks are better suited to non-legal, general-use AI platforms, and not all law firms contract with legal AI providers. In these instances, lawyers must be acutely aware of how these platforms function and what they do with the information entered into them. Entering private or personal data into these unprotected AI programs can pose serious risks with potentially calamitous implications for both attorneys and their clients, implicating numerous professional responsibility issues such as attorney-client privilege, confidentiality requirements, and the duty to keep abreast of technological advancements in law practice.
Accordingly, it is imperative that attorneys understand the ways in which engaging with AI platforms may jeopardize their bar licenses and their duties of client confidentiality and safety. Personal data entered into AI programs is vulnerable to misuse, and one must understand what data can be entrusted to these programs. AI programs ingest user data to analyze interactions, refine their models, and generate more personalized responses. Furthermore, some AI providers integrate their services with third-party apps, meaning that external parties such as advertisers and business partners may collect, process, or access any data inputs.
Unfortunately, determining whether an AI platform is safe for legal use may require going beyond its terms and conditions. The case of Dinerstein v. Google (2019) demonstrates how seemingly redacted and anonymized data may not be sufficient to protect confidential information. Matt Dinerstein, a University of Chicago Medical Center (UCMC) patient, brought multiple claims against UCMC and Google, to which UCMC had granted access to countless patient medical records. UCMC had purportedly “de-identified” the records, redacting patients’ personal information from their files. Dinerstein claimed that even though identifiable information was removed from the data shared with Google, Google could easily re-identify patients by matching their files against the extensive demographic data it already possesses. While a federal judge in Illinois dismissed Dinerstein’s claims on the grounds that he failed to demonstrate damages, the case highlights how private data can be accessed and reconstituted, even when initially believed to be protected.
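To make the kind of re-identification Dinerstein alleged concrete, consider a minimal sketch of a linkage attack, using entirely hypothetical records and field names rather than anything drawn from the actual case. A “de-identified” record that retains quasi-identifiers such as a ZIP code and birth year can be matched against an auxiliary dataset that includes names:

```python
# Minimal illustration of a linkage (re-identification) attack.
# All records and field names here are hypothetical.

# "De-identified" medical records: names removed, but dates and
# demographics (quasi-identifiers) retained.
deidentified_records = [
    {"visit_date": "2017-03-14", "zip": "60637", "birth_year": 1985, "diagnosis": "asthma"},
    {"visit_date": "2017-06-02", "zip": "60615", "birth_year": 1962, "diagnosis": "diabetes"},
]

# Auxiliary demographic data an outside party might already hold.
auxiliary_data = [
    {"name": "Jane Doe", "zip": "60637", "birth_year": 1985},
    {"name": "John Roe", "zip": "60615", "birth_year": 1962},
]

# Re-identify by joining on the quasi-identifiers (ZIP + birth year).
for record in deidentified_records:
    for person in auxiliary_data:
        if (record["zip"], record["birth_year"]) == (person["zip"], person["birth_year"]):
            print(f"{person['name']} likely matches diagnosis: {record['diagnosis']}")
```

Even this toy example shows why stripping names alone offers little protection once a second dataset is available to match against.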
Notably, this is not a call to dismiss non-legal AI platforms. In fact, I encourage any technological innovation that enhances a lawyer’s ability to advocate for and represent their clients. However, attorneys must remain vigilant about these privacy risks and consult trustworthy experts and sources on how best to approach these platforms. One guide for law firms suggests, “As a precaution, refrain from inputting sensitive client or case information into your AI system. If you require AI assistance with document writing, anonymize the data whenever possible and ensure that the data shared with AI systems is limited to non-confidential information.” As demonstrated in Dinerstein v. Google, even this may be insufficient to adequately protect private information. Lawyers may also consider encrypting and securing client data using virtual private networks (VPNs), end-to-end encryption, and regular audits. Firms must likewise develop internal policies governing the use of non-legal AI programs to protect themselves and their attorneys from malpractice, and attorneys should seek and obtain clients’ consent when using AI to help reduce their liability. A sketch of what the guide’s anonymization advice might look like in practice follows below.
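By way of illustration, a firm might strip obvious identifiers from text before it ever reaches an external AI service. The patterns and sample text below are hypothetical, and, as Dinerstein suggests, pattern-based redaction alone is no guarantee against re-identification:

```python
import re

# Simple pattern-based redaction applied before text is sent to an
# external AI service. These patterns are illustrative only; real
# client data would need far more thorough treatment.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                  # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),          # email addresses
    (re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Client Jane Doe (SSN 123-45-6789, jane@example.com, (607) 555-0123) seeks advice."
print(redact(sample))
```

Note that the client’s name survives the redaction; reliably catching names, addresses, and case-specific details is far harder, which is why the safer course remains keeping confidential material out of these systems entirely.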
In addition to self-regulation by lawyers and law firms, the federal government and state regulators are beginning to create and implement guidelines for protecting consumer data within AI programs. Over the next four years, AI policy will primarily be shaped by the states, with any federal involvement under the Trump Administration likely focused on competition with China, national security, and energy policy. Whether there will be federal regulation of AI remains an open question; on his first day in office, President Trump repealed former President Biden’s executive order on AI, which had regulated AI development and testing. States have already begun regulating AI platforms, and the federal government’s hands-off approach will likely leave monitoring and enforcement to the states.
Numerous states have been regulating AI, especially in areas like consumer protection, sector-specific automated decision-making, and chatbots. Colorado’s AI Act requires developers and deployers of high-risk AI systems (systems that might impact rights, safety, or society) to provide transparency and complete annual impact assessments. Illinois and New York have also enacted transparency legislation aimed at preventing discrimination. Proposed legislation in Connecticut, Massachusetts, and Texas suggests similar regulations.
Even with state legislation, lawyers must consider their professional responsibilities and obligations. They must uphold the Model Rules of Professional Conduct when using AI, including but not limited to Rule 1.1 (competence) and Rule 1.6 (confidentiality), and exercise common sense, good faith, and sound judgment. As AI becomes more integrated into the world and professional spaces, lawyers must learn to navigate the specific challenges AI brings to legal practice. Law students should be warned of the dangers of AI beyond the stories of hallucinations. Lawyers and law students alike should be made aware of the risks as well as the advantages of non-legal AI programs and be prepared to consult experts when in doubt. We must consider all the potential issues the legal profession may face when integrating AI into practice, not just the crude errors resulting from an attorney’s negligence or failure to proofread. Above all, we must preserve the integrity of the attorney-client relationship and ensure it remains paramount.
Author’s Note: I wrote this article independently, but I did use AI tools to help with some light editing—things like grammar, flow, and clarity. All of the ideas, structure, and final decisions are my own. I included this note because I believe that, when used responsibly, AI can be a helpful resource—just as I discuss throughout the piece.
Suggested Citation: Jessica Rosberger, AI in Law: The Real Risks Beyond Hallucinated Cases, Cornell J.L. & Pub. Pol’y, The Issue Spotter (Apr. 9, 2025), https://jlpp.org/ai-in-law-the-real-risks-beyond-hallucinated-cases/.

Jessica Rosberger is a 2L at Cornell Law School. She graduated from Cornell University’s College of Arts and Sciences with a major in American Studies and minors in Inequality Studies and Law and Society, and was admitted as part of the 3+3 Pathway Program. In addition to her involvement with the Cornell Law Journal of Public Policy, she is the President and Founder of the 3+3 Student Association, Vice President of Internal Affairs of Lawyers Without Borders, and Social Chair of the Jewish Law Students Association.