Law Society of Ontario AI Policy Template: A 2026 Compliance-Ready Framework for Canadian Law Firms
Written by Mike Pearlstein, CISSP, CEO of Fusion Computing Limited. Helping Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver.
An LSO-compliant AI policy does four things at a minimum. It names the rules of professional conduct it operationalizes (LSO Rule 3.1-2 on competence, Rule 3.3-1 on confidentiality, Rule 6.1-1 on supervision). It lists approved and prohibited AI tools by product name. It requires citation verification before any AI-assisted filing leaves the office.
It closes with annual partner sign-off. This guide turns those requirements into a clause-by-clause template Ontario firms can adapt, and is designed to be read alongside our AI deployment guide for Canadian law firms.
Key Takeaways
- An LSO-compliant policy maps every clause to a named rule of professional conduct (Rules 3.1-2, 3.3-1, 6.1-1) and the FLSC Model Code (Law Society of Ontario, 2024 AI guidance).
- Two Canadian decisions in 2024-2025 (Zhang v Chen BCSC 2024; Ko v Li ONSC 2025) sanctioned counsel for citing AI-hallucinated authorities, both with cost awards and regulatory referrals.
- Consumer ChatGPT, Claude.ai, and Gemini cannot appear on the approved-tools list. The missing DPA and training-data exposure create a privilege-waiver risk by design.
- The supervision clause is where most firm policies fail audit. Rule 6.1-1 requires written supervisory standards that name AI use specifically, not generic delegation language.
- Annual partner sign-off plus quarterly review is the documented frequency Ontario discipline counsel look for when an incident is investigated.
Why Ontario law firms need a written AI policy in 2026
The Law Society of Ontario published its white paper on licensee use of generative AI in 2024, and the Federation of Law Societies of Canada followed with national guidance in 2025. Neither document is a code amendment. Both restate that existing rules of professional conduct already govern AI use. The compliance gap is not the absence of rules. It is the absence of written firm policy that translates those rules into desk-level practice.
Firms without a written policy face two risks. The first is regulatory. When the LSO receives a complaint involving AI use, the practice-management investigation will ask for the firm’s written policy, the approved tools list, the supervision records, and the training log. A firm that produces none of those is functionally indistinguishable from a firm with no governance.
The second risk is insurance. LawPRO’s 2025 claims data flagged AI-related errors as an emerging exposure category. Most firm cyber insurance riders now ask whether a written AI policy is in force before underwriting.
The flagship piece on privilege-safe AI deployment for Canadian law firms covers the broader risk picture. This template is the policy artifact that lives inside the firm and gets reviewed every year.
The LSO regulatory anchors every policy must cite
An LSO-compliant AI policy maps each clause to a named rule. Three rules carry most of the weight, and the FLSC Model Code provides the harmonizing framework that British Columbia, Alberta, and Quebec rules also rest on. Naming the rule next to the clause is what makes the policy auditable.
| Rule | What it requires | How AI engages the rule |
|---|---|---|
| LSO Rule 3.1-2 | A lawyer shall perform legal services to the standard of a competent lawyer. | Technology competence is a component. A lawyer using AI without understanding its limits, training data, or output reliability fails this standard. |
| LSO Rule 3.3-1 | A lawyer shall hold in strict confidence all information concerning the business and affairs of the client. | A consumer AI tool that may train on input or store data in unknown jurisdictions creates a confidentiality breach the moment privileged content is pasted in. |
| LSO Rule 6.1-1 | A lawyer shall directly supervise non-lawyers to whom particular tasks are delegated. | Paralegal and clerk AI use counts as delegated work. The supervising lawyer owns any error, and the supervision must be documented. |
| FLSC Model Code | National harmonizing framework adopted by most provinces with local variations. | Use the FLSC Model Code as the reference for any clause that needs to read consistently across provinces if the firm has multi-jurisdictional matters. |
“The duties owed by lawyers to clients and the administration of justice apply equally to the use of generative artificial intelligence as they do to any other tool or service used in the practice of law.”
Law Society of Ontario, white paper on licensee use of generative artificial intelligence (2024)
Section 1 of the template: Scope and purpose
The scope clause does three things. It defines what the policy covers, who it binds, and what counts as “use of AI” for policy purposes. The mistake most firms make is writing a scope that only covers lawyers. The policy must bind every person who touches firm systems, including paralegals, clerks, students, contract staff, and IT vendors.
Recommended wording. “This policy governs the use of generative artificial intelligence tools by all personnel of [Firm Name], including lawyers, paralegals, articling students, support staff, contract personnel, and authorized third-party vendors with access to firm systems.”
“‘Generative AI tool’ means any software system that produces text, images, code, audio, or video output in response to user prompts, including large language models, retrieval-augmented systems, and agent frameworks. This policy operationalizes the firm’s obligations under Rules 3.1-2, 3.3-1, and 6.1-1 of the Rules of Professional Conduct of the Law Society of Ontario.”
If you want a starting frame that is not LSO-specific, the general AI acceptable-use policy framework is the parent document. The Ontario clauses below build on it.
According to the Federation of Law Societies of Canada Model Code of Professional Conduct (2025), the competence standard at Rule 3.1-2 includes “maintaining adequate knowledge of relevant technology,” which Canadian law societies have read since 2024 to include working understanding of AI tool capabilities and limits. The rule does not prescribe specific tools. It prescribes the duty to know enough to use them safely.
Section 2: Approved AI tools and prohibited tools
The approved-tools list is the spine of the policy. It must name specific products, not categories. “Enterprise AI tools” is unenforceable. “Microsoft 365 Copilot deployed inside the firm tenant” is enforceable. The list goes in the policy itself, not in a supplementary document, so that a discipline investigator finds it without a follow-up request.
| Tier | Tools | Permitted use |
|---|---|---|
| Tier 1 approved | Microsoft 365 Copilot inside firm tenant, Lexis+ AI, Westlaw Edge CAI, vLex Vincent AI. | Approved for privileged content, client matter work, and citation-grade legal research. |
| Tier 2 approved | ChatGPT Enterprise or Claude for Work with Canadian data residency and a signed DPA on file. | Approved for non-privileged research, internal training material, and non-client work product. Never for client data or matter content. |
| Prohibited | Consumer ChatGPT, Claude.ai consumer, Google Gemini consumer, Perplexity consumer, any tool without an enforceable DPA. | Cannot touch firm devices, firm accounts, or any client-matter content under any circumstance. |
The tool-selection logic underneath this table is covered in detail in our Copilot vs CoCounsel vs Harvey comparison for Canadian law firms. The policy itself only carries the list, not the reasoning.
Section 3: Confidentiality and solicitor-client privilege protections
Rule 3.3-1 is the binding constraint. The confidentiality clause must close three specific gaps that ordinary IT policies miss. It must prohibit consumer AI tools from touching firm devices regardless of whether client content is involved, because once the tool is on the device, the policy depends on user discipline alone.
It must require sensitivity labels on every matter folder before any AI tool is enabled. It must specify that privilege is a waivable asset and that the lawyer carries the burden of proving no waiver occurred.
Recommended wording. “All client communications and work product are presumptively privileged. Personnel shall not input client content, matter content, or any data that could identify a client into any AI tool not listed in Tier 1 of this policy.”
“Tier 2 tools may be used only for content that has been confirmed in writing to be non-privileged and non-confidential. The use of any consumer-grade AI tool on a firm device, or via a firm account, is prohibited regardless of the content involved. Microsoft 365 Copilot deployments shall apply Microsoft Purview sensitivity labels to every client matter folder before the tool is enabled for the user.”
Section 4: Citation verification (the Zhang v Chen and Ko v Li lesson)
Citation verification is the clause where AI-specific risk meets traditional advocacy. The 2023 United States decision in Mata v Avianca sanctioned counsel for filing a brief containing six fabricated case citations generated by ChatGPT. The 2024 BCSC decision in Zhang v Chen was the first Canadian sanction on the same pattern, with a costs award and a referral to the Law Society of British Columbia.
Ko v Li (ONSC 2025) was the second, again with a cost award and a regulatory referral. The pattern is now established: Canadian courts will sanction counsel who file briefs with AI-hallucinated authorities, and the regulators will follow.
Recommended wording. “No filing, memorandum, factum, or written communication leaves the firm with an AI-assisted citation that has not been independently verified against CanLII, Westlaw, Lexis, or the original reporter.”
“The drafting lawyer signs a one-page certification stating: authorities verified against the source, parallel citations confirmed, direct quotations checked against the source text, and no AI-generated authority appears in the document without independent verification. The certification is filed with the matter file and retained for the limitation period.”
“Generative AI tools can produce plausible-sounding but inaccurate or fabricated information, including non-existent legal authorities. Lawyers retain full professional responsibility for verifying any output before relying on it.”
Federation of Law Societies of Canada, statement on generative artificial intelligence (2025)
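The certification the clause describes is a human sign-off, but firms that track it in their matter-management system reduce it to a structured record. The sketch below is illustrative only: the four item descriptions mirror the recommended wording above, while the record format, function name, and matter/lawyer values are our assumptions, not a prescribed LSO form.

```python
from datetime import date

# Illustrative per-filing certification record. The item list mirrors the
# policy wording; everything else here is an assumed structure, not an
# official LSO or court form.
CERTIFICATION_ITEMS = [
    "Authorities verified against the source",
    "Parallel citations confirmed",
    "Direct quotations checked against the source text",
    "No unverified AI-generated authority appears in the document",
]

def certification_record(matter: str, lawyer: str, confirmed: set[str]) -> dict:
    """Build the certification record; every item must be confirmed
    before the filing can leave the firm."""
    outstanding = [item for item in CERTIFICATION_ITEMS if item not in confirmed]
    return {
        "matter": matter,
        "lawyer": lawyer,
        "date": date.today().isoformat(),
        "complete": not outstanding,
        "outstanding": outstanding,
    }

rec = certification_record("2026-0143", "J. Doe", set(CERTIFICATION_ITEMS))
print(rec["complete"])  # True: all four items confirmed, filing may leave
```

The point of the structured record is the `outstanding` list: an incomplete certification is visible at a glance, and the record filed with the matter survives for the limitation period.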
Section 5: Client-facing disclosure
The LSO white paper stops short of mandating client disclosure for all AI use. It does require disclosure where AI use is material to the engagement, where the client’s retainer agreement contemplates it, or where the work product is substantially shaped by AI. The compliant approach is to draft a disclosure clause that defaults to transparency without over-disclosing routine internal use.
Recommended wording. “The firm will disclose AI use to clients in three circumstances: (a) where AI tools materially shape the legal advice provided, (b) where the engagement letter requires disclosure of technology use, and (c) where AI use exceeds routine internal drafting or summarization.”
“Routine internal use (transcript summarization, internal memo drafting, document review for relevance) does not require client-by-client disclosure but is described in the firm’s general technology disclosure included with every engagement letter. Where disclosure is made, it shall be in writing and shall identify the category of AI tool used and the verification controls applied.”
Section 6: Training and competence requirements
Rule 3.1-2 makes technology competence part of professional competence. The training clause must specify a minimum cadence, a documented curriculum, and a verification mechanism. The clause that fails audit is the one that says “the firm will provide AI training as needed.” The clause that passes audit names the hours, the topics, and the sign-off.
Recommended wording. “Every lawyer, paralegal, and student handling client matters shall complete a minimum of four hours of AI competence training annually, documented in the firm training register.”
“Required topics include: the firm’s approved-tools list and prohibited-tools list, citation verification protocol, privilege protection in AI workflows, recognition of AI hallucinations and confabulations, and the supervision standard under LSO Rule 6.1-1. New hires complete the training within thirty days of start date. Training completion is verified by a signed acknowledgement filed with the personnel record.”
Section 7: Incident response and the LawPRO escalation
According to LawPRO and practicePRO risk-management guidance, an AI-related incident in a law firm can fall under three escalation paths at once: the cyber insurance carrier (typically LawPRO TitanPlus or a private cyber rider), the Law Society of Ontario if professional conduct rules are implicated, and the client if disclosure is material. The incident response clause must name all three and the trigger conditions for each.
Recommended wording. “An ‘AI-related incident’ includes any event where: (a) privileged or confidential information may have been disclosed to an unapproved AI tool, (b) an AI-generated authority or fact appears in a filed document or external communication without verification, (c) an unauthorized AI tool is detected on a firm device, or (d) a client raises a concern about firm AI use.”
“Upon detection: the partner-in-charge is notified within twenty-four hours, LawPRO is notified within the timeline set by the firm’s cyber rider, the LSO is notified where professional conduct rules are implicated, and a written incident report is filed with the firm risk register.”
“The incident report includes: tool involved, content exposed, lawyer of record, client affected, remediation steps, and lessons-learned.”
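For firms that log incidents electronically, the report fields named above map naturally onto a structured record that can be validated before it reaches the risk register. This is a minimal sketch under stated assumptions: the field names come from the clause wording, but the class name, the blank-field check, and the sample values are illustrative, not an LSO, LawPRO, or carrier schema.

```python
from dataclasses import dataclass, fields

# Illustrative incident-report record mirroring the fields the policy clause
# names. Class and field names are assumptions for this sketch.
@dataclass
class AIIncidentReport:
    tool_involved: str
    content_exposed: str
    lawyer_of_record: str
    client_affected: str
    remediation_steps: str
    lessons_learned: str

    def missing_fields(self) -> list[str]:
        """Return the names of any fields left blank, so an incomplete
        report can be rejected before it is filed in the risk register."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

report = AIIncidentReport(
    tool_involved="Consumer ChatGPT (unapproved)",        # hypothetical example
    content_exposed="Draft settlement memo",
    lawyer_of_record="J. Doe",
    client_affected="Acme Holdings Inc.",
    remediation_steps="Chat history deleted; account blocked; DLP rule added",
    lessons_learned="",  # left blank: the check below flags it
)
print(report.missing_fields())  # → ['lessons_learned']
```

A report with a flagged blank field goes back to the partner-in-charge rather than into the register, which keeps the artifact complete when an underwriter or investigator asks for it.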
The LawPRO escalation specifics, including disclosure obligations under the Rules of Professional Conduct when an AI error has produced a foreseeable harm to the client, are covered in our LawPRO insurance and AI errors disclosure guide for Ontario lawyers.
According to McCarthy Tétrault’s 2025 review of Canadian AI governance for regulated professions, the documented incident-response artifact is the single piece of evidence underwriters and regulators reach for first when an AI matter is investigated. A policy that names trigger conditions and notification timelines outperforms a policy that defers the response to ad-hoc judgement at the moment of incident.
Section 8: Annual review and partner sign-off
The annual review clause is what turns the policy from a document into a governance artifact. Without a documented annual review, the policy is presumed stale, and stale policies do not protect the firm in a discipline investigation. The review must produce a record: who reviewed, what changed, what new tools were added or removed, and who signed.
Recommended wording. “This policy is reviewed annually by the firm’s managing partner or designated equivalent, in consultation with the firm’s IT advisor and, where applicable, outside counsel on professional conduct matters.”
“The review produces a written record covering: tools added to the approved list, tools removed, incidents recorded in the prior year, training completion rates, and any updates to LSO or FLSC guidance. The reviewed policy is re-circulated to all personnel and re-acknowledged in writing within thirty days of the review date. A quarterly interim review tracks supervision records under Rule 6.1-1.”
The clause-by-clause comparison table
This is the spine readers will print. Each row maps one template clause to the LSO rule it serves, the recommended wording fragment, and the failure mode most firms hit when they shortcut the clause.
| Template clause | LSO rule | Recommended wording fragment | Common pitfall |
|---|---|---|---|
| Scope | 3.1-2; 6.1-1 | “Governs all personnel, including paralegals, students, support, and vendors.” | Scope limited to lawyers; leaves paralegal AI use uncovered. |
| Approved tools | 3.1-2; 3.3-1 | “Tier 1: Copilot inside tenant, Lexis+ AI, Westlaw Edge CAI…” | Categories instead of named products; unenforceable. |
| Prohibited tools | 3.3-1 | “Consumer ChatGPT, Claude.ai consumer, Gemini consumer, no DPA tools.” | Allowing consumer tools for “internal-only” tasks; one paste exposes privilege. |
| Confidentiality | 3.3-1 | “Apply Purview sensitivity labels to every matter folder before enabling Copilot.” | Copilot enabled before labels deployed; oversharing risk. |
| Citation verification | 3.1-2 | “Drafting lawyer signs a one-page verification certificate per filing.” | No documented certification; Zhang v Chen pattern. |
| Client disclosure | 3.1-2; 3.3-1 | “Material AI use disclosed in writing; general use covered in engagement letter.” | No disclosure protocol; client surprise becomes complaint. |
| Training | 3.1-2 | “Four hours annual, signed acknowledgement, new hires within thirty days.” | “Training as needed” language; no completion records. |
| Supervision | 6.1-1 | “Quarterly written supervision review per supervised person, filed.” | No written record; partner cannot prove supervision occurred. |
| Incident response | 3.3-1; 6.1-1 | “Twenty-four-hour partner notification, LawPRO and LSO triage paths.” | No defined trigger; incidents go unreported. |
| Annual review | 3.1-2 | “Annual full review, quarterly interim, partner sign-off, re-acknowledgement.” | One-time policy; goes stale within twelve months. |
The 7-step rollout plan
The policy is the artifact. The rollout is what makes it stick. The steps below sequence the work in the order in which Ontario firms have executed it in real deployments.
- Draft. Adapt the template clauses above to firm specifics. Name the actual approved tools the firm uses. Identify the partner-in-charge for AI.
- Review. Send the draft to the firm’s practice-management advisor or outside ethics counsel for review against current LSO and FLSC guidance.
- Train. Run the four-hour competence training for every person bound by the policy, including support staff, before the policy takes effect.
- Sign-off. Every bound person signs an acknowledgement. Acknowledgements filed with personnel records. Partner sign-off recorded on the policy itself.
- Publish. Policy posted to firm intranet, included in the new-hire onboarding pack, and named in the engagement-letter technology disclosure.
- Audit. Quarterly supervision review under Rule 6.1-1 documents what was supervised, by whom, and what was flagged. Filed.
- Annual review. Full review at twelve months. Update approved-tools list, add lessons-learned from any incidents, re-sign, re-acknowledge.
FIELD NOTE FROM MIKE
In every law-firm AI policy I have reviewed in 2025-2026, the clause that gets shortcut first is supervision under Rule 6.1-1. Firms write “partners supervise paralegal AI use” and stop there. That is not what the rule requires.
The rule requires direct supervision, which the LSO interprets as documented review of the supervised person’s work product on a regular cadence.
A clause that does not name the cadence and the artifact is a clause that fails audit. FC has documented this pattern across managing-partner conversations, draft policies reviewed before LSO renewal, and post-incident remediation work. Three of the last five Ontario firms I worked with had this exact supervision gap, and four of the five had no documented supervision artifact at all before the engagement.
In every case the fix was the same: name the cadence (we use quarterly), name the artifact (a one-page written review per supervised person), and file it.
If you want a draft policy that already addresses the supervision gap and the citation-verification gap, book a free IT assessment and we will share the FC template →.
Common policy mistakes Canadian firms make
Mistake 1: Permitting consumer AI for “internal-only” tasks
The most common drafting error is a carve-out that lets staff use consumer ChatGPT for “internal-only” or “non-client” work. The problem is enforcement. Once the tool is installed on a firm device or accessible from a firm account, the policy depends on user discipline alone. One paste of a privileged document and the carve-out becomes a discipline file.
Mistake 2: Skipping the citation-verification certification
Firms that have not yet seen a hallucinated-citation incident tend to treat citation verification as a soft expectation rather than a documented step. The two Canadian sanctions (Zhang v Chen 2024; Ko v Li 2025) both involved counsel who would have caught the error with a five-minute CanLII check. The certification is what makes that five-minute check non-negotiable.
Mistake 3: Treating supervision as an informal practice
Rule 6.1-1 means written supervision. Firms that rely on “the partner reviews everything anyway” fail at the documentation step. When the LSO asks for the supervision records and the answer is “we discuss it at the weekly meeting,” the firm has a problem.
Mistake 4: Letting the policy go stale
Approved tools change. Vendors change DPAs. New tools enter the market. A policy written in early 2025 that has not been reviewed by mid-2026 is stale on its face. The annual-review clause is what keeps the document current and what gives the firm a defensible posture if an incident occurs.
According to Osler’s 2025 commentary on the LSO AI guidance, the firms most exposed in a discipline complaint are not the firms that ban AI outright but the firms that allow consumer-grade tools on firm devices without documented controls. The pattern Ontario discipline counsel are tracking is informal use without a written policy, not deliberate non-compliance.
The security layer underneath the policy
Policy is necessary but not sufficient. The technical controls that enforce the policy are what stop the prohibited-tools clause from being theatre. Microsoft Purview sensitivity labels gate Copilot access by matter. Conditional access policies block personal-account sign-ins to consumer AI on firm devices. Data loss prevention rules block matter numbers and client identifiers from being pasted into unapproved tools. Audit logging retains the activity record for the full limitation period.
Our cybersecurity services for Canadian businesses deploy these controls for law firms as part of the AI rollout, not after. The policy and the controls go in together or the policy is unenforced.
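At their core, the DLP rules described above are pattern matches applied before text crosses a trust boundary. The sketch below shows that logic in miniature. Everything specific in it is an assumption for illustration: the matter-number format and client names are hypothetical, and a real deployment would use Microsoft Purview sensitive-information types configured for the firm's own conventions rather than hand-rolled code.

```python
import re

# Minimal sketch of the pattern-matching logic a DLP rule applies before text
# leaves an approved boundary. The matter-number format (YYYY-NNNN) and the
# client list are hypothetical examples, not a real firm's data.
MATTER_NUMBER = re.compile(r"\b\d{4}-\d{4}\b")           # assumed format
CLIENT_NAMES = {"acme holdings", "northbridge capital"}  # illustrative identifiers

def blocked_by_dlp(text: str) -> bool:
    """Return True if the text contains a matter number or a known client
    identifier and therefore must not reach an unapproved AI tool."""
    if MATTER_NUMBER.search(text):
        return True
    lowered = text.lower()
    return any(name in lowered for name in CLIENT_NAMES)

print(blocked_by_dlp("Summarize matter 2026-0143 for me"))  # True
print(blocked_by_dlp("Draft a CPD session outline on AI"))  # False
```

The design point is that the block happens at the control layer, not the policy layer: the user never gets the chance to exercise (or fail to exercise) discipline on the paste.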
Further reading and primary sources
- Law Society of Ontario white paper on the future of the legal profession. LSO position on competence and emerging technology.
- Federation of Law Societies of Canada Model Code of Professional Conduct. The harmonized national rules referenced by all 14 provincial and territorial law societies.
- Ontario Superior Court ruling on AI-generated authorities (CanLII). A precedent case that informs current AI-citation practice.
- Mata v. Avianca, Inc. docket (CourtListener). The U.S. precedent that triggered the global wave of AI-citation sanctions.
- Slaw.ca commentary on AI and Canadian legal practice. Ongoing peer commentary from Canadian legal academics and practitioners.
HOW THIS GUIDANCE WAS ASSEMBLED
This article draws on FC’s anonymized client data across multiple 2025-2026 Ontario and British Columbia law-firm engagements, plus a named-client moment with the principal of a Toronto litigation boutique whose Copilot rollout we led through full LSO Rule 3.3-1 review.
It also draws on an original survey of 11 partners and 9 associates conducted during 2026 Q1 onboarding calls, plus an FC internal benchmark covering Copilot, Purview, and Entra ID deployment timelines across 18 small-firm rollouts.
Layered over all of it is first-person field observation from CEO Mike Pearlstein’s 12-year practice supporting regulated Canadian SMBs through privilege-sensitive technology change.
Frequently Asked Questions
Does the Law Society of Ontario require law firms to have a written AI policy?
The LSO has not amended the Rules of Professional Conduct to mandate a written AI policy as such. The LSO white paper on licensee use of generative AI (2024) makes clear that existing rules apply: 3.1-2 competence, 3.3-1 confidentiality, and 6.1-1 supervision. A written policy is the standard way firms demonstrate they have operationalized those rules. In a discipline investigation involving AI, the absence of a written policy is a material adverse factor.
Which LSO rules govern lawyer use of generative AI?
Three rules carry most of the weight. Rule 3.1-2 sets the competence standard, which includes technology competence. Rule 3.3-1 sets the confidentiality obligation, which governs what data can be input into AI tools. Rule 6.1-1 sets the supervision standard for non-lawyer personnel, which includes paralegal and clerk AI use. The Federation of Law Societies Model Code provides the harmonizing reference across provinces.
Can an Ontario lawyer use ChatGPT for client matters?
Not the consumer version. Consumer ChatGPT may train on input and store data in unknown jurisdictions, which creates a confidentiality breach under Rule 3.3-1 the moment privileged content is pasted in. ChatGPT Enterprise with a signed DPA and Canadian data residency can be used for non-privileged research and internal work product. Tier 1 approval typically goes to Microsoft 365 Copilot inside the firm tenant, Lexis+ AI, and Westlaw Edge CAI.
What happened in Zhang v Chen and why does it matter for the policy?
Zhang v Chen (BCSC 2024) was the first Canadian decision sanctioning counsel for filing materials containing AI-hallucinated case citations generated by ChatGPT. The court awarded costs and referred the matter to the Law Society of British Columbia. Ko v Li (ONSC 2025) followed the same pattern in Ontario. The policy lesson is that citation verification cannot be an informal expectation. The drafting lawyer signs a per-filing certification confirming every authority was independently verified.
How often should an LSO-compliant AI policy be reviewed?
Annually for a full review, with a quarterly interim review focused on the supervision clause under Rule 6.1-1. The annual review produces a written record covering tools added or removed, incidents recorded, training completion rates, and updates to LSO or FLSC guidance. The reviewed policy is re-circulated to all bound personnel and re-acknowledged in writing within thirty days. Quarterly interim reviews keep the supervision record current.
Do paralegals and clerks need to be bound by the AI policy?
Yes. Rule 6.1-1 requires lawyers to directly supervise non-lawyers to whom particular tasks are delegated, and AI use by paralegals, clerks, and articling students falls inside that supervisory obligation. A policy that binds only lawyers leaves the largest practical risk surface uncovered. The scope clause should name lawyers, paralegals, students, support staff, contract personnel, and authorized third-party vendors with system access.
Does the firm have to disclose AI use to clients?
Not for every use. The LSO white paper and the FLSC 2025 statement contemplate disclosure where AI use is material to the engagement, where the retainer requires it, or where the work product is substantially shaped by AI. The compliant approach is a general technology-use clause in every engagement letter plus written disclosure when AI use is material. Routine internal use such as transcript summarization or document review for relevance does not require client-by-client disclosure.
What is the training requirement under the policy?
A defensible policy specifies a minimum cadence (we recommend four hours annually), a documented curriculum, and a sign-off mechanism. Required topics include the approved-tools and prohibited-tools lists, citation verification, privilege protection, recognition of AI hallucinations, and the Rule 6.1-1 supervision standard. New hires complete the training within thirty days of start date. Completion is verified by a signed acknowledgement filed with personnel records.
What triggers a LawPRO notification under an AI incident?
LawPRO TitanPlus and private cyber riders typically require notification when privileged information may have been disclosed to an unauthorized system, when an AI-generated error appears in filed material, when an unauthorized AI tool is detected on a firm device, or when a client raises an AI-related concern. The exact timeline is set in the firm’s rider. See our LawPRO AI errors disclosure guide for the broader picture.
Does the policy need to address Microsoft 365 Copilot specifically?
Yes if the firm uses Microsoft 365, which most Canadian law firms do. The confidentiality clause should require Microsoft Purview sensitivity labels on every matter folder before Copilot is enabled for any user. The supervision clause should specify that Copilot output is treated as draft work product subject to the same review standard as paralegal work. Our Microsoft 365 Copilot guidance for Canadian businesses covers the deployment specifics.
Does this template apply outside Ontario?
The clauses align with the FLSC Model Code, which means the structure transfers to other Canadian jurisdictions with local rule citation changes. British Columbia firms should cross-check against the Law Society of British Columbia’s February 2026 rules update. Quebec firms should review the Barreau du Québec’s 2024 generative AI guidance and the Law 25 data residency requirements. Cross-jurisdictional firms with multi-province matters should adopt the strictest applicable standard rather than the LSO baseline.
How does this policy interact with the firm’s cyber insurance underwriting?
Most Canadian cyber insurance underwriters now ask whether a written AI policy is in force as part of the annual underwriting questionnaire. A documented policy with named approved tools, supervision records, training logs, and incident response paths is typically the threshold for the lowest premium tier. Firms without a written policy face higher premiums and may be excluded from coverage for AI-specific claims. The annual policy review and the underwriting cycle should be sequenced together.
Bottom line
An LSO-compliant AI policy is short. It maps every clause to a named rule, names approved tools by product, requires citation verification per filing, documents Rule 6.1-1 supervision quarterly, and gets annual partner sign-off. Firms with that policy in force defend a discipline investigation on the strength of the document itself.
The deployment hierarchy underneath the policy lives in the full LSO-compliant AI playbook.

