
Evolving Ethical Guidelines for AI in the Legal System

Emerging technologies are reshaping rules and norms around legal ethics and accountability.

Everchron

In December 2024, the Illinois Supreme Court adopted a policy on artificial intelligence that some legal practitioners found surprising. Effective January 1, 2025, legal professionals, including attorneys, litigants, court officials, and judges, would be permitted to use AI in their work. Moreover, they would not be required to disclose their use of AI in filings.

Illinois was not the first state to permit legal teams to use AI. A few months before the Illinois policy, the Delaware Supreme Court adopted a similar policy allowing judicial officers to use GenAI. Earlier in 2024, the American Bar Association issued Formal Opinion 512, which emphasizes that the duty of competence now requires lawyers to understand AI tools and use them appropriately.

Courts are attempting to strike a delicate balance: permitting legal practitioners to use emerging tools while protecting clients and the public from the technology’s downstream risks. These guidelines revolve around seven core principles, according to legal professionals at Reed Smith.

Competence

The ABA’s Model Rule 1.1 requires lawyers to exercise the “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” If a legal professional elects to use AI or generative AI (GenAI) in their work, the professional must understand what the technology can and cannot do, according to recent guidance from the New York City Bar.

Crucially, evolving ethical guidelines emphasize that competence must remain human. The State Bar of California, for instance, has said that lawyers cannot delegate their professional judgment to GenAI. Moreover, the onus is on legal practitioners to double-check that their AI tools are not producing erroneous results, such as hallucinations.

Confidentiality

According to Model Rule 1.6, attorneys have a responsibility to protect their clients’ information from impermissible disclosure. GenAI and large language models (LLMs) have complicated this task.

Legal teams should ensure they have a robust security framework in place. When interviewing AI vendors, legal professionals should ask how client data is stored and protected, who can access it, and whether it is used to train the vendor’s models.

Consent

The ABA’s ethical guidelines emphasize transparency and professional responsibility. However, open questions remain about how these principles apply to client consent for the use of AI.

For instance, can a client revoke their consent after AI has already processed their data? Should clients be made aware of how and where their data is stored, processed, and shared? And should attorneys seek the explicit consent of their clients before using AI to perform any legal task, or can they assume implicit consent?

Firms should look for platforms that offer flexible AI controls, so that legal teams can enable or disable AI tools on a matter-by-matter basis. If a client is not comfortable with the use of AI, the firm can simply toggle off those features, ensuring the client’s data is never exposed to the AI tools.

Confirmation

The Illinois Supreme Court’s guidelines on AI state that the Rules of Professional Conduct and the Code of Judicial Conduct apply to the use of AI technologies. Legal practitioners—including attorneys, judges, and self-represented litigants—must review AI-generated content before submitting it in any court proceedings to confirm that the output is accurate.

Moreover, the Illinois Supreme Court urges legal practitioners to guard against unsubstantiated or deliberately misleading AI-generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making.

Conflicts

A major concern for legal practitioners and researchers is whether GenAI might create conflicts of interest. This risk arises whenever practitioners use software that stores or shares client data. GenAI is powerful precisely because it learns from the data it is given. But if a model retains information from prior interactions and draws on it to respond to later prompts, it could inadvertently disclose one client’s sensitive information in another client’s matter.

Both the New York City Bar and the Pennsylvania Bar have cautioned lawyers to ensure their GenAI tools do not expose client data. Again, firms should look for platforms that keep data siloed by matter.

Candor

GenAI tools are prone to errors and hallucinations, including fabricated legal cases, misrepresented facts, and incorrect citations. Under Model Rule 3.3, lawyers are expected to correct any false statement of material fact or law they have made to the court.

The New York City Bar has extended that responsibility to GenAI. According to the bar’s opinion, legal practitioners have a duty to investigate the authenticity of any evidence that may have been produced by GenAI.

Compliance

The regulatory landscape around AI is evolving quickly, and the onus is on legal practitioners to stay abreast of these changes. Some courts, for example, still restrict the use of GenAI or require legal teams to inform participants when they plan to use AI. Future regulatory changes may require practitioners to pay closer attention to how client data is being used and to vet vendors accordingly.

Conclusion

AI is fast becoming an indispensable part of the legal professional’s toolkit. As a result, courts are moving quickly to issue guidelines for the ethical and responsible use of AI, giving legal professionals the flexibility to use these new tools to do better, faster work for their clients. At the same time, courts are establishing guardrails to ensure AI does not compromise fundamental legal principles of fairness and transparency.

