Ohio Board of Professional Conduct Issues AI Ethics Guide for Lawyers and Judicial Officers

By Maia E. Jerin

The Ohio Board of Professional Conduct has published its first comprehensive Ethics Guide on the use of artificial intelligence in legal practice. Released in April 2026, the Guide provides nonbinding guidance to lawyers on ethically integrating AI into their practices.

Executive Summary

The Guide is not alarmist — it recognizes that AI, used properly, can meaningfully improve efficiency and client service. But the ethical obligations it highlights are real, and enforcement is accelerating. A few concrete steps worth taking now:

  • Audit AI tools. Review terms of service and privacy policies for every platform your firm uses. If you cannot confirm that the vendor does not train on your prompts, do not input client information.
  • Update engagement letters. Address AI use, disclosure, and billing in new engagements.
  • Draft a firm AI policy. Supervisory obligations under Rules 5.1 and 5.3 require it. The policy should specify approved tools, verification requirements, and confidentiality protocols.
  • Verify everything. Treat AI output the way you would treat work from a first-year law clerk: useful as a starting point, never reliable without independent verification.
  • Check local court rules. Several Ohio courts already require AI certification or disclosure. Others are likely to follow.

What the Guide Covers

  • Competence (Prof.Cond.R. 1.1). AI qualifies as “relevant technology” under the competence rule. You are not required to use AI, but you are required to understand its benefits and risks well enough to make informed decisions. At minimum, that means knowing how a tool functions, its hallucination potential, and what data it trains on. CLE, peer consultation, and self-study are all explicitly encouraged.
  • Client Confidentiality (Prof.Cond.R. 1.6). This is the most immediately actionable section. Many free or public AI platforms treat user prompts as fair game for model training and third-party disclosure. Entering client information into such tools can breach Rule 1.6. The Guide directs lawyers to review vendor terms of service, use only enterprise-grade tools that expressly prohibit data retention and training on user prompts (e.g., Westlaw AI, Lexis AI), and consult IT/cybersecurity professionals when evaluating new platforms. In some matters, client consent before AI use may be warranted.
  • Client Communication (Prof.Cond.R. 1.4). Routine AI-assisted research generally does not require disclosure. However, disclosure becomes more likely when AI materially affects how the representation is conducted, substantially shapes the final work product, involves client data, or triggers separate billing. When in doubt, consider proactive communication the safer path.
  • Independent Judgment (Prof.Cond.R. 2.1). Professional judgment cannot be delegated to AI. Lawyers remain fully responsible for the work product AI helps generate. The Guide specifically cautions against inexperienced lawyers relying on AI to guide critical analysis — a practice that impermissibly allows the tool to supplant, rather than assist, independent legal reasoning.
  • Candor to Tribunals (Prof.Cond.R. 3.1, 3.3, 8.4). AI hallucinations in court filings are a live disciplinary risk. Every citation, proposition of law, and factual assertion generated by AI must be independently verified against original source materials before filing. Local court AI disclosure requirements, which are proliferating across Ohio, must also be followed.
  • Fees (Prof.Cond.R. 1.5). If AI cuts your drafting time from 20 hours to 8, you bill 8. The time-savings belong to the client. Standard AI subscription costs are overhead, not a billable line item, though per-use specialized tools (e.g., jury analytics platforms) may be passed through if disclosed in the engagement letter.
  • Supervision (Prof.Cond.R. 5.1, 5.3). Supervising partners bear ethical responsibility for AI misuse by associates and staff, just as with any other nonlawyer assistant. Firms should consider implementing written AI policies, providing staff training, and vetting vendor contracts for confidentiality protections.

A Note for Judges

The Guide addresses judicial officers at length. Key takeaways: judges should not use AI to make ultimate case decisions (Jud.Cond.R. 2.7); general AI tools pose ex parte information risks (Jud.Cond.R. 2.9); and public AI platforms may compromise confidentiality of case information (Jud.Cond.R. 3.5). Legal research should be confined to AI-integrated commercial databases like Lexis and Westlaw. Judges are advised to avoid using AI to generate the first draft of any decision or order.

This News Flash summarizes the Ohio Board of Professional Conduct’s Ethics Guide on Artificial Intelligence for Lawyers and Judicial Officers (April 2026). The Guide is nonbinding staff guidance and does not constitute a formal Board opinion or legal advice. Questions regarding specific AI use in your practice should be directed to ethics counsel or the Board’s informal advice line.