LawPRO Insurance and AI Errors: A Disclosure Obligations Playbook for Ontario Lawyers (2026)

Written by Mike Pearlstein, CISSP, CEO of Fusion Computing Limited. Helping Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver.

Note: the partner, the associate, and the timing below are composites drawn from three Ontario law firm engagements between late 2025 and early 2026. The names are changed. The clauses, the rule citations, and the sequence in which disclosure obligations fired are real.

I took the call from a Bay Street partner on a Friday afternoon in March. Her senior associate had filed a factum overnight on a contested motion. One of the cited cases on page eleven did not exist.

Opposing counsel had read the factum on Friday morning, checked the citation against CanLII, and sent a one-line email back asking for the pinpoint. The associate had used a paid legal research platform with an AI assist feature. She had not run the citation against CanLII before signing the factum. The partner’s LawPRO file number opened by Monday.

I’m an MSP, not a lawyer. The clause language in this post belongs with qualified Ontario professional-responsibility counsel before you adopt any of it. What follows is the disclosure architecture I’ve seen surface in three engagements where AI-assisted research produced a fabricated citation, a privileged-data leak, or both.

The post maps the engagement beat by beat against LSO Rule 7.8-1, the LawPRO policy, the FLSC Model Code, the British Columbia and US precedent cases that frame how Canadian courts treat these errors, and the six-step incident response we walked the firm through.

Key Takeaways

  • LSO Rule 7.8-2 requires a lawyer to give LawPRO prompt notice of any error or omission that may reasonably be expected to give rise to a claim. An AI hallucination in a court filing meets that test the moment the filing leaves the office (Law Society of Ontario, Rule 7.8-2).
  • The standard LawPRO Professional Liability Policy provides CA$1 million per claim and CA$2 million aggregate coverage for most Ontario lawyers in private practice, with an excess insurance option available (LawPRO, 2025 program).
  • Zhang v. Chen 2024 BCSC 285 ordered costs personally against a lawyer who filed an application citing two fabricated cases generated by ChatGPT. The judge accepted that the error was unintentional, declined to award special costs, and still ordered the lawyer to bear the resulting costs personally.
  • Disclosure obligations run in four directions at once when an AI error surfaces: to the client, to the court, to opposing counsel, and to LawPRO. The disclosure window starts when a competent lawyer would have caught it.
  • A firm AI policy that includes a written incident response sequence is what insurance carriers, the LSO, and your own LawPRO claims handler will look for first.

Book a Consultation

The piece below builds on our AI deployment guide for Canadian law firms. Read that first if you have not landed on a sanctioned-tool architecture. This post assumes the tools are in the room and the question is what happens when one of them produces a fabricated citation that reaches a court file.

The LawPRO Policy: What It Covers and What It Does Not


The Lawyers’ Professional Indemnity Company (LawPRO) is the mandatory professional liability insurer for lawyers in private practice in Ontario. The standard program policy provides coverage for civil liability arising from the provision of professional services as a lawyer. Coverage is subject to a per-claim limit, an aggregate limit, and a defined set of exclusions (LawPRO, 2025 program documentation).

The policy responds to errors and omissions. It does not respond to intentional dishonesty, knowing assistance in a fraud, or losses outside the scope of professional services.

What the policy covers when an AI error is involved is the negligence theory. A lawyer signed a factum containing a fabricated citation. A lawyer relied on an AI-generated contract clause that misstated the limitation period. A lawyer pasted privileged material into a consumer AI tool and a confidentiality breach claim followed.

These are all errors in the rendering of legal services. They are not, on their face, intentional misconduct. They sit inside the policy.

What the policy will not do is rescue a lawyer who failed to disclose the error when Rule 7.8-1 required disclosure. It will also not rescue a lawyer who certified to LawPRO at renewal that no claims-triggering events had occurred when they had.

Both failure modes shift the file from a routine errors-and-omissions claim into a disclosure-failure file. The disclosure-failure file is where the lawyer’s personal exposure starts.

I drew this distinction for the Bay Street partner on the Friday call before we did anything else. The error was insurable. The disclosure question was the file. We spent the rest of the afternoon on the disclosure timeline, not on the factum.

When AI Errors Trigger a Reportable Claim Under LawPRO


LSO Rule 7.8-2 obligates a lawyer to give prompt notice to LawPRO of any circumstance that may reasonably be expected to give rise to a claim. According to LawPRO claims-handling guidance, the obligation fires the moment a lawyer becomes aware of an error in a filed document, a client communication, or an externally circulated draft, regardless of whether the client has yet noticed.

  • A fabricated case citation in a factum that opposing counsel has flagged is reportable.
  • A draft contract that referenced a repealed statute and was signed is reportable.
  • A privileged email pasted into consumer ChatGPT for summarization is reportable the moment the paste is confirmed.

The LSO and LawPRO both publish guidance that practitioners should err on the side of early disclosure (LawPRO AvoidAClaim, November 2025, on the new Ontario Superior Court AI practice directions).

The annual renewal questionnaire asks whether any circumstance arose during the period that could reasonably give rise to a claim. An honest yes at renewal preserves coverage. A no that turns out to be wrong creates a misrepresentation question on top of the original error.

The partner’s associate had used a paid legal research platform with an AI assist feature. The platform itself was sanctioned. The associate had skipped the citation-check step in the firm’s research protocol.

We treated the error as a reportable circumstance under Rule 7.8-2 from minute zero. We did not wait for opposing counsel to file a motion. We did not wait for the client to ask a question. The LawPRO file opened on Monday morning because the partner picked up the phone on Friday afternoon and started the disclosure clock voluntarily.
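
The citation-check step the associate skipped is partially automatable. The sketch below extracts Canadian neutral citations from a draft and emits a manual-verification checklist. It is illustrative only: the court-code list is a small assumed subset, the regex misses traditional report citations, and nothing here substitutes for a human pulling each case on CanLII before the factum is signed.

```python
import re

# Canadian neutral citation pattern: year, court code, decision number
# (e.g. "2024 BCSC 285"). The court-code list is an illustrative subset.
NEUTRAL = re.compile(
    r"\b(?:19|20)\d{2}\s+(?:SCC|ONCA|ONSC|BCSC|BCCA|ABKB|ABCA|FCA|FC)\s+\d+\b"
)

def extract_citations(factum_text: str) -> list[str]:
    """Return unique neutral citations found in a draft, in order of appearance."""
    seen, out = set(), []
    for m in NEUTRAL.finditer(factum_text):
        cite = m.group(0)
        if cite not in seen:
            seen.add(cite)
            out.append(cite)
    return out

def verification_checklist(factum_text: str) -> str:
    """Emit a checklist: every citation gets a human verification line."""
    lines = [f"[ ] verify on CanLII: {c}" for c in extract_citations(factum_text)]
    return "\n".join(lines) if lines else "no neutral citations detected"

draft = "As held in Zhang v. Chen, 2024 BCSC 285, and followed in 2023 ONCA 476 ..."
print(verification_checklist(draft))
```

A script like this catches a citation that was never verified; it cannot catch one that resolves to a real index number with a different style of cause, which is why the checklist output is a prompt for human review, not a clearance.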

I have sat across from a managing partner who first noticed her associate’s hallucinated citation only after opposing counsel flagged it. I have watched two other engagements take the opposite path. In both, the firm hoped the issue would resolve quietly. In both, opposing counsel raised it formally before the firm did, and the LawPRO file opened anyway.

In neither case did the eventual outcome differ on the underlying error. What differed was how much the firm spent on its own defence in the months between the event and the formal disclosure.

If you want a written incident response template that maps which event triggers Rule 7.8-1 disclosure in your shop, book a consultation and we will send you the one we use with our law firm clients.

Disclosure Obligations: Client, Court, Opposing Counsel, Regulator


An AI-generated error in a court filing triggers four parallel disclosure tracks under the Law Society of Ontario Rules of Professional Conduct. Each track has its own timing rule. They do not run on the same clock. According to the LSO commentary on candour and the duty to the court, a single delayed disclosure on any one track converts a recoverable error-and-omission file into a disclosure-failure file that follows the lawyer personally.

  • Track 1, Client: Rule 3.2-2 honesty and candour when advising clients, and Rule 7.8-1 informing the client of the error.
  • Track 2, Court: Rule 5.1-2 duty as advocate, prohibiting knowingly assisting the court to be misled.
  • Track 3, Opposing counsel: Rule 7.2-1 courtesy and good faith, and the duty not to mislead.
  • Track 4, LawPRO: Rule 7.8-2 notice of a reportable circumstance.

The client disclosure obligation is the one most lawyers attempt to delay. The instinct is to investigate first and report once the picture is clear. The professional-responsibility rule does not support that instinct.

Rule 3.2-2 requires the lawyer to be honest and candid when advising the client. The FLSC Model Code commentary treats the obligation as triggered when the lawyer becomes aware of the error. It does not wait until the lawyer has finished investigating it.

A holding email that says “we have identified an issue with the citation in the factum; I am investigating and will report by end of day” is consistent with the rule. Silence for a week is not.


The court disclosure obligation under Rule 5.1-2 is the most serious of the four. It implicates the lawyer’s duty to the administration of justice. Where a filing already with the court contains a fabricated citation, the lawyer has an obligation to correct the record.

The correction mechanism varies. A letter to the case management judge, a withdrawal of the offending paragraph, a corrected factum filed with leave, an oral correction on the record at the next appearance. The form is a matter of judgment. The fact of correction is mandatory.

Opposing counsel disclosure is the track the partner’s associate started involuntarily because opposing counsel had already noticed the fabricated citation. The candour obligation to opposing counsel is independent of the court disclosure obligation. Acknowledging the error in writing, withdrawing reliance on the citation, and confirming a corrected filing schedule discharges the obligation without admitting more than the facts require.

The LawPRO disclosure runs on the Rule 7.8-2 clock. It should generally be the first call, not the last. The partner’s firm followed that sequence: LawPRO first thing Monday morning, client by 10 a.m., opposing counsel by noon with a corrected factum following by Tuesday, court at the next case management call.

The sequence is not mandatory. The completion of all four tracks before the next status hearing is. If your firm wants a ready-made script for these four calls, book a 30-minute IT assessment and we will share the call sequence we use.

The LSO Rule 7.8-1 Framing: Errors and Omissions


Rule 7.8-1 of the Law Society of Ontario’s Rules of Professional Conduct reads, in operative part: a lawyer who discovers an error or omission that may be damaging to the client and that cannot be rectified readily shall promptly inform the client of the error or omission.

The rule also requires the lawyer to recommend that the client obtain legal advice from an independent lawyer concerning any rights the client may have arising from the error or omission. The lawyer must advise the client that professional indemnity insurance may apply to the matter (Law Society of Ontario, Rules of Professional Conduct, Rule 7.8).

The rule does three things at once. It defines the trigger as the moment the lawyer discovers the error, not when the client suffers harm. It requires recommending independent counsel, which means the lawyer cannot continue representing the client on the matter without an informed conflict waiver.

The third thing the rule does is expressly contemplate that LawPRO will be involved, which lines up with the lawyer’s separate Rule 7.8-2 obligation to give the insurer prompt notice of any reportable circumstance.

The ambiguity that AI errors introduce is around the phrase “cannot be rectified readily.” A fabricated citation in a draft factum that has not yet been filed is rectifiable readily. A fabricated citation in a factum already before the court, relied upon by opposing counsel, and now part of the case record is not.

The line between the two is a single business day in most matters. That is why my running rule, and the one I write into every firm AI policy I help draft, is to treat any AI-assisted output that has touched a client communication, a filed document, or an externally circulated draft as past the threshold from the moment it leaves the office.

The FLSC Model Code commentary on the equivalent rule reinforces the early-disclosure framing. Most provincial law societies have adopted Rule 7.8 from the Model Code with minor wording variation. The principle applies whether the lawyer is in Ontario, British Columbia, Alberta, or elsewhere in Canada.

The Zhang v. Chen and Mata v. Avianca Precedent Set


Two precedent cases now sit on every AI-and-law-firm panel I attend. Zhang v. Chen 2024 BCSC 285 is the Canadian anchor. Mata v. Avianca, 22-cv-1461 (S.D.N.Y. 2023) is the American one. They share a single underlying fact pattern, and they diverge on how disclosure unfolded.

In Zhang v. Chen, a Vancouver family lawyer filed an application that cited two cases generated by ChatGPT. Both were fabricated. Opposing counsel raised the issue. The lawyer acknowledged the error, apologized to the court, and explained that she had used ChatGPT to assist with research and had not verified the cases against CanLII.

Justice Masuhara of the British Columbia Supreme Court declined to award special costs but ordered the lawyer to personally bear the costs the fabricated citations had occasioned. He accepted that the error was unintentional and noted the importance of competence in technology use under the British Columbia equivalent of Rule 3.1-2. The decision became the Canadian reference point for AI hallucination in court filings (Zhang v. Chen, 2024 BCSC 285, CanLII).

In Mata v. Avianca, two New York lawyers filed a brief opposing a motion to dismiss that contained six fabricated cases. When opposing counsel reported being unable to find the citations, they did not concede the fabrication. Instead, they submitted affidavits to the court vouching for the citations, doubling down on the error.

Judge Castel ultimately sanctioned the lawyers personally, ordered a $5,000 USD fine, and required them to send written apologies to the judges whose names had been falsely attached to the fabricated decisions. The decision became the global cautionary case for what happens when the disclosure failure stacks on top of the original error.

The lesson the two cases teach together is that the original AI error is recoverable. The disclosure failure is what compounds. Justice Masuhara accepted that the British Columbia lawyer had not intended to mislead. Judge Castel did not accept the equivalent from the Avianca counsel because they had vouched for the fabricated citations in sworn affidavits after opposing counsel raised the question.

The Canadian outcome was a costs order payable personally. The American outcome was personal sanctions, a fine, and a referral that triggered disciplinary attention.

When the Bay Street partner’s associate filed her factum with the fabricated citation, the decision tree she faced was the same one those two lawyers faced. Acknowledge fast, correct fast, accept the costs implications, or hold the position and watch the file convert into something much worse.

The associate, with the partner’s direction, picked the Zhang v. Chen path on the morning the question landed. The Avianca path was always available, and the cost of taking it would have been an order of magnitude higher.

The Six-Step Incident Response When an AI Error Surfaces

An AI error in a legal work product follows a predictable incident response sequence: containment, scope, LawPRO notification, client disclosure, court and opposing counsel disclosure, and a post-incident policy review. The six steps below are the ones the Bay Street firm worked through between Friday afternoon and the following Wednesday morning. Run them in order. The order matters because each step constrains the next one and missing a step lets exposure compound.

  1. Containment within the hour. Identify which AI tool produced the error, who used it, what data went in, and what data came out. Lock further use of the tool by that lawyer pending review. Preserve the prompt and response history if the platform retains them. For consumer tools, the history sits in the user’s account and needs to be exported before any session deletion.
  2. Scope assessment by end of day. Determine whether the affected output has been externally circulated. Filed with a court, sent to a client, served on opposing counsel, included in a closing document. The answer to this question determines whether you are inside or outside the Rule 7.8-1 rectification threshold.
  3. LawPRO notification before the next business day closes. Open the file with LawPRO claims intake. The file number does not commit you to a claim. It documents that disclosure happened inside the window. The claims handler will walk you through the next steps and the conflict implications for ongoing representation.

  4. Client disclosure within 24 hours of LawPRO notification. Use a written communication. State the error, state that you have notified your insurer, recommend independent legal advice on the client’s rights, and confirm whether you are continuing to represent the client with their informed consent or stepping away. This is the Rule 7.8-1 obligation operating directly.
  5. Court and opposing counsel disclosure on the timetable each forum requires. The court disclosure form depends on the stage. A letter to the case management judge, a corrected filing, a withdrawal motion, or an oral correction at the next hearing. The opposing counsel communication can be a single written acknowledgment and a corrected document. Do not negotiate the correction; just deliver it.
  6. Post-incident policy review within 30 days. Identify the policy or training gap that allowed the error. Update the firm’s AI Acceptable Use Policy clause set to close it. Run the gap by counsel. Brief the rest of the firm. The post-incident review is what carriers, the LSO, and the LawPRO file will all ask about at the next renewal or audit.

The Bay Street firm completed steps one through five by Tuesday afternoon. Step six was a four-week project that surfaced two clause gaps in the existing policy, which is the case in every engagement I have run.

In my discovery support runbooks I watch for the specific pattern that surfaces these gaps: a citation that resolves to a different style of cause, or a clause whose limitation period reads a year past the operative statute. The error always reveals something the policy did not anticipate.

For the policy-side detail on what gets written into the clauses themselves, see the LSO AI policy template and our broader AI Acceptable Use Policy field guide.

What Goes Into Your Firm Policy to Make Disclosure Decisions Consistent

The disclosure clauses every firm AI policy should now include are the ones that remove judgment from associates working at midnight on a filing deadline. Three operational components matter most: a bright-line rectification threshold sentence, a named six-step incident response sequence, and a designated LawPRO contact by role rather than by name. The three together cover the most common gap surfaced in post-incident reviews.

  • Bright-line rectification threshold (the sentence set out in the Rule 7.8-1 section above).
  • Named incident response sequence (the six-step list above, adapted to your firm’s reporting lines).
  • Designated LawPRO contact, a clause that names who picks up the phone.

In most firms that designation should be the managing partner, the practice group leader, or general counsel. It should not be the associate who wrote the filing. Naming the role in advance removes the judgment call at the moment of the incident.

The clause set also needs an internal disclosure clause that runs alongside the external one. Associates need to know whom they tell first. Partners need to know the cascade for elevating to the managing partner, to outside counsel, and to LawPRO.

Most firm policies I have audited skip the internal cascade because they assume it is intuitive. It is not. Associates faced with a Friday-night error often do nothing until Monday because the policy does not tell them whom to call after hours.

Cyber insurance is a parallel disclosure track that opens when the AI error involved a confidentiality breach rather than (or in addition to) a factual error. If an associate pasted privileged material into consumer ChatGPT, the cyber policy is on notice. If the factum cited a fabricated case, LawPRO is on notice. If both happened, both are. The practicePRO resource library publishes intake checklists for both tracks that your in-house claims handler will recognize.

The policy clause should name the carrier contact in each case so the associate is not researching insurance brokers at 9pm. The architecture sits inside the same incident response framework we build for our cybersecurity services clients, which is why the clause set looks similar across both files.

The policy should also anticipate the LawPRO annual renewal certification. The questionnaire asks whether any reportable circumstance occurred during the year. The firm needs a process for canvassing partners and senior associates each renewal cycle so the certification is accurate. A single missed disclosure at renewal creates a misrepresentation question that compounds the original error.

If you want a baseline policy you can put in front of professional-responsibility counsel as a starting point, book a 30-minute IT assessment and we will send you the clause set we use with our Ontario law firm clients.

Further reading and primary sources

HOW THIS GUIDANCE WAS ASSEMBLED

This article draws on FC’s anonymized client data across multiple 2025-26 Ontario and British Columbia law-firm engagements, plus a named-client moment with the principal of a Toronto litigation boutique whose Copilot rollout we led through full LSO Rule 3.3-1 review.

It also draws on an original survey of 11 partners and 9 associates conducted during 2026 Q1 onboarding calls, plus an FC internal benchmark covering Copilot, Purview, and Entra ID deployment timelines across 18 small-firm rollouts.

Layered over all of it is first-person field observation from CEO Mike Pearlstein’s 12-year practice supporting regulated Canadian SMBs through privilege-sensitive technology change.

Frequently Asked Questions

Does the LawPRO standard policy cover AI hallucinations?

The standard LawPRO Professional Liability Policy covers civil liability arising from negligent errors and omissions in the provision of legal services. An AI hallucination that results in a filed citation that does not exist is a negligent error and sits inside the policy.

The policy does not cover intentional dishonesty, so a lawyer who knowingly vouched for a fabricated citation after being notified would face a coverage question. The safe path is early disclosure under Rule 7.8-1, which preserves coverage on the underlying error.

When does an AI error become reportable to LawPRO under Rule 7.8-2?

Rule 7.8-2 obligates a lawyer to give the insurer prompt notice of any circumstance that may reasonably be expected to give rise to a claim. For an AI error in a client communication or filed document, the threshold typically fires the moment the lawyer confirms the error exists.

LawPRO and the LSO both publish guidance recommending early disclosure. Opening a file does not commit the lawyer to a claim. It documents that disclosure happened inside the rule’s window.

What did Zhang v. Chen 2024 BCSC 285 establish for Canadian lawyers using AI?

Zhang v. Chen ordered costs personally against a British Columbia family lawyer who filed an application citing two fabricated cases generated by ChatGPT. The court accepted that the error was unintentional, declined to award special costs, and still made the costs order against the lawyer personally.

The decision emphasized the technology-competence obligation under the British Columbia equivalent of Rule 3.1-2. Ontario lawyers should expect similar treatment from Ontario courts under LSO Rule 3.1-2 and Rule 5.1-2.

How is Mata v. Avianca different from Zhang v. Chen?

Both cases involved fabricated AI citations in court filings. Zhang v. Chen turned on a single disclosure event handled with acknowledgment and apology, leading only to a personal costs order.

Mata v. Avianca involved two New York lawyers who vouched for fabricated citations in sworn affidavits after opposing counsel raised the question. The court imposed personal sanctions, a $5,000 USD fine, and required written apologies to the judges named in the fake decisions.

Who should call LawPRO when an AI error surfaces, the partner or the associate?

The firm’s AI policy should name the role that owns LawPRO notification. In most firms that is the managing partner, the practice group leader, or general counsel. It should not default to the associate who produced the work product.

Naming the role in advance removes the judgment call at the moment of the incident and ensures the disclosure happens inside the Rule 7.8-2 window. Associates need an after-hours contact path to elevate the issue immediately.

Does pasting privileged material into consumer ChatGPT trigger disclosure?

Yes, and the disclosure runs through both the firm’s cyber insurance carrier and potentially the client under Rule 3.2-2 candour. A paste of privileged material into a consumer tool is a confidentiality breach regardless of whether the receiving party has acted on the content.

The IPC of Ontario and the federal Privacy Commissioner both treat AI-tool ingestion of personal information as a disclosure event. Firms should treat any confirmed paste as triggering the incident response sequence and notify counsel before deciding the external communication plan.

What does the FLSC Model Code say about AI errors?

The Federation of Law Societies of Canada Model Code of Professional Conduct addresses technological competence under Rule 3.1-2, honesty and candour toward clients under Rule 3.2-2, the lawyer’s duty as advocate under Rule 5.1-2, and errors and omissions under Rule 7.8.

Each of these rules applies directly to AI-assisted work product even though the Model Code does not mention AI by name. Most provincial law societies have adopted the Model Code with minor wording variation. The disclosure principles apply across Ontario, British Columbia, Alberta, and elsewhere in Canada.

Should the firm AI policy include a rectification threshold sentence?

Yes. The single best clause in a firm AI policy is a bright-line sentence stating that any AI-generated output incorporated into a client communication, a filed document, an externally circulated draft, or a communication with opposing counsel is treated as past the Rule 7.8-1 rectification threshold.

The sentence removes judgment from the trigger decision and gives associates a clear line they can apply without calling a partner at midnight. It is the clause that most cleanly addresses what the courts in Zhang and Mata treated as the critical failure point.

Bottom Line

If the AI error reached a court filing, a client communication, or opposing counsel, the disclosure window started the moment a competent lawyer would have spotted it. The original error is recoverable through LawPRO and candour. The disclosure failure follows the lawyer personally. The Bay Street partner called LawPRO on a Friday. By Tuesday afternoon her four tracks were closed. Work through the full LSO-compliant AI playbook for the rest of the architecture.

Contact Us

Fusion Computing has provided managed IT, cybersecurity, and AI consulting to Canadian businesses since 2012. Led by a CISSP-certified team, Fusion supports organizations with 10 to 150 employees from Toronto, Hamilton, and Metro Vancouver.

93% of issues resolved on the first call. Named one of Canada’s 50 Best Managed IT Companies two years running.

100 King Street West, Suite 5700
Toronto, ON M5X 1C7
(416) 566-2845
1 888 541 1611