RIBO Responsible AI Use: A 4-Pillar Policy Template for Ontario Insurance Brokerages (2026)
Written by Mike Pearlstein, CISSP, CEO of Fusion Computing Limited. Helping Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver.
RIBO published its Responsible AI Use Among RIBO Licensees guidance on May 29, 2025. The document does not amend the Code of Conduct. It restates that existing obligations (competence, integrity, client best interest, confidentiality, and Fair Treatment of Customers) already govern AI use. It sets four expectations every Ontario brokerage now has to operationalize in writing.
This template turns those four pillars into a clause-by-clause policy artifact a 12-broker brokerage can adopt before its next RIBO renewal. It reads alongside our FSRA-aligned cybersecurity playbook for Ontario financial brokerages.
Key Takeaways
- RIBO’s May 29, 2025 guidance restates that the Code of Conduct and the Fair Treatment of Customers expectations apply to AI use without amendment. A brokerage’s defence in a complaint is the written policy that proves it.
- The four pillars are accountability and oversight, transparency and explainability (the human-in-the-loop disclosure), fairness and bias (audited outputs), and data governance (vetted vendors, no open AI for client data).
- RIBO’s sharpest single sentence: “anything generated or altered by an AI tool” needs “oversight by a licensed member before being presented to a client.” The policy must name the licensed reviewer and the review artifact.
- The third-party vendor clause matters most. RIBO holds the licensee responsible for Code of Conduct compliance even when the AI sits inside a carrier portal, a BMS, or a quoting engine the brokerage did not build.
- RIBO + PIPEDA + Law 25 (for any Quebec-resident policyholders) form a three-statute stack. The policy reads against all three, not just RIBO.
The May 2025 RIBO guidance: legal anchor
According to RIBO’s Responsible AI Use Among RIBO Licensees guidance issued May 29, 2025, the existing Code of Conduct already governs licensee AI use without amendment. The guidance restates four expectations, frames them against the Fair Treatment of Customers document, and puts the burden on the licensee and the brokerage to demonstrate, in writing, that those expectations are operationalized at desk level.
The compliance gap is not the absence of rules. It is the absence of a firm-level policy that translates those rules into desk-level practice. A brokerage whose only answer in a complaint investigation is an oral “we tell brokers not to use ChatGPT for client work” is functionally indistinguishable from a brokerage with no governance.
A brokerage that documents an AI rollout without IT risk controls underneath the policy fails both regulators at once. The flagship FSRA-aligned MBRCC and RIBO playbook for 2026 covers the prudential mapping; this document covers the four AI pillars.
Pillar 1: Accountability and oversight
The first pillar is competency and accountability for customer outcomes. RIBO frames it as a training and governance obligation: licensees “must be trained to understand and be able to identify when AI is” in use, and firms must establish “governance structures that monitor risks and provide ongoing due diligence and education.”
The operational consequence is a named-officer requirement. A brokerage that cannot point to one person who owns the AI policy, the approved-tools list, and the supervision artifact cannot demonstrate “governance structures.”
The named officer is usually the principal broker or a designated licensed senior broker. The role is fixed in the policy and announced internally so every broker knows whose desk an AI question lands on.
RIBO is explicit on the vendor-pass-through point. “It is a RIBO licensee’s professional responsibility to comply with the Code of Conduct” even when the AI is delivered by a third-party vendor.
The carrier’s quoting AI, the BMS-embedded summarizer, the spreadsheet plug-in that auto-classifies a claim: none of those shifts liability away from the broker handling the file. The accountability clause has to bind the brokerage to vendor diligence, not just user behaviour.
Pillar 2: Transparency and explainability
According to RIBO’s Responsible AI Use guidance, “a customer should know how, when, and why AI is being used in the provision of insurance services to them.” The transparency pillar requires written client disclosure language and a licensed reviewer named in the policy for every AI-assisted output.
The second pillar is the human-in-the-loop disclosure. RIBO writes that “a customer should know when they are engaging with AI instead of a human” and that “generative AI use” in client-facing work “should be closely monitored by one or more licensed brokers.”
The disclosure clause has to do two things. It has to define the trigger conditions for telling the client AI was used, and it has to specify how the disclosure is delivered.
RIBO does not require disclosure for every internal use. It requires it where the client is engaging with AI as if it were a broker (automated chat, AI-generated quote explanation, AI-summarized policy review presented to the insured) and where the AI materially shapes the advice the client receives.
Recommended wording. “The brokerage discloses AI use to the client in three circumstances: (a) the client is interacting with an automated system that simulates broker-customer dialogue, (b) the work product presented to the client is materially shaped by an AI tool, or (c) the client’s engagement letter or service-level agreement requires it.”
“The disclosure is in writing, is filed with the matter, and identifies the category of AI tool used and the licensed broker who reviewed the output.”
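Expressed as logic, the three triggers form a simple disjunction: any one of them fires the disclosure. The sketch below is illustrative only; `MatterContext` and its field names are hypothetical stand-ins for whatever per-matter flags a brokerage’s BMS actually records.

```python
from dataclasses import dataclass

@dataclass
class MatterContext:
    """Hypothetical per-matter flags; real field names depend on the BMS."""
    automated_dialogue: bool       # (a) client engaged an automated broker-like dialogue
    ai_materially_shaped: bool     # (b) an AI tool materially shaped the client-facing output
    sla_requires_disclosure: bool  # (c) engagement letter or SLA requires disclosure

def disclosure_required(m: MatterContext) -> bool:
    """True when any of the three recommended-wording triggers fires."""
    return m.automated_dialogue or m.ai_materially_shaped or m.sla_requires_disclosure

# An AI-summarized policy review presented to the insured trips trigger (b).
print(disclosure_required(MatterContext(False, True, False)))  # True
```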
“Anything generated or altered by an AI tool must receive oversight by a licensed member before being presented to a client.”
RIBO, Responsible AI Use Among RIBO Licensees (May 29, 2025)
Pillar 3: Fairness and bias in claims and underwriting AI
The third pillar is fairness. RIBO writes that firms must audit AI outputs “to ensure models perform as intended” and that the outputs lack “systemic biases.”
This is the pillar most underbuilt in current Ontario brokerage policies because most brokerages treat AI as a productivity layer on top of carrier underwriting, not as a decision system the brokerage operates.
That framing is incomplete. The brokerage operates a decision system the moment it lets an AI tool suggest a market, summarize a risk for a carrier submission, or pre-classify a claim by category before a licensed broker reviews.
Each of those is an AI-shaped decision that touches a Code of Conduct duty (best interest, integrity, fair treatment). RIBO’s fairness pillar requires the brokerage to test for adverse impact on protected characteristics (postal-code proxies, age proxies, occupation proxies) and to document the audit.
The clause that fails RIBO review is “the brokerage will monitor AI outputs for fairness.” The clause that passes RIBO review specifies a quarterly written audit of a sample of AI-assisted submissions, names the licensed reviewer, and documents the methodology (which proxies were tested, what threshold triggered a flag, how flags were resolved).
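A minimal sketch of what that quarterly audit computation can look like, assuming hypothetical record fields (`fsa`, `age_band`, `adverse`) and an assumed ten-point flag threshold; the real proxies, threshold, and data source belong in the policy’s methodology register.

```python
import random
from collections import defaultdict

random.seed(2026)

# Hypothetical quarter of AI-assisted submissions; real records come from the BMS.
submissions = [
    {"id": i,
     "fsa": random.choice(["M5V", "L8P", "K1A"]),       # postal-code proxy (forward sortation area)
     "age_band": random.choice(["<25", "25-54", "55+"]),
     "adverse": random.random() < 0.2}                   # AI-suggested adverse outcome, e.g. market declined
    for i in range(200)
]

SAMPLE_SIZE = 15       # cadence from the clause: quarterly, fifteen submissions
FLAG_THRESHOLD = 0.10  # assumption: flag a group ten-plus points above the overall rate

sample = random.sample(submissions, SAMPLE_SIZE)
overall = sum(s["adverse"] for s in sample) / SAMPLE_SIZE

flags = []
for proxy in ("fsa", "age_band"):
    groups = defaultdict(list)
    for s in sample:
        groups[s[proxy]].append(s["adverse"])
    for value, outcomes in groups.items():
        rate = sum(outcomes) / len(outcomes)
        if rate - overall > FLAG_THRESHOLD:
            flags.append({"proxy": proxy, "group": value, "rate": round(rate, 2)})

print(f"overall adverse rate: {overall:.2f}")
print(f"flags for the register: {flags}")
```

With a fifteen-submission sample the per-group counts are small, so a flag is a prompt for licensed human review, not a statistical finding; the register entry records the proxy, the group, the rate, and how the flag was resolved.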
Pillar 4: Data governance, privacy, and the RIBO + PIPEDA + Law 25 stack
According to the Office of the Privacy Commissioner’s PIPEDA principles for generative AI, brokerages must demonstrate legal authority for collection, accountability through an appointed lead, transparency in client-facing disclosure, and minimization in the data fed into any model. A RIBO policy that ignores OPC guidance fails the data-governance pillar.
The fourth pillar is data. RIBO writes that brokerages must vet vendors so that “customer data does not leave control of the firm” and that “individual licensees should avoid processing any client information through an open AI system.”
This is the clause most closely aligned with the federal privacy statute (PIPEDA) and the Quebec private-sector privacy statute (Law 25).
According to the Office of the Privacy Commissioner of Canada’s principles for responsible AI, PIPEDA already covers AI processing of personal information under the existing accountability, consent, and safeguards principles. The brokerage’s data-governance clause has to read against PIPEDA at the same time it reads against RIBO.
For Quebec policyholder data, the layer is the Commission d’accès à l’information (CAI) and the Law 25 automated-decision obligations.
According to the CAI, Law 25 requires disclosure when a decision is based exclusively on automated processing of personal information, plus a right to request human review. A brokerage with Quebec-resident insureds reads its disclosure clause (Pillar 2) against Law 25, not just RIBO.
The 4-pillar policy template: clause-by-clause comparison
This is the spine readers will print. Each row maps one template clause to the RIBO pillar it serves, the recommended wording fragment, and the failure mode most brokerages hit when they shortcut the clause.
| Template clause | RIBO pillar | Recommended wording fragment | Common pitfall |
|---|---|---|---|
| Scope | Accountability | “Governs all personnel, including licensed brokers, CSRs, account managers, contract producers, and authorized vendors with system access.” | Scope limited to licensed brokers; leaves CSR-driven AI use uncovered. |
| Named officer | Accountability | “The Principal Broker is the AI accountable officer. They own the policy, the approved-tools list, and the audit register.” | No named officer; nobody owns the artifact when RIBO asks. |
| Approved tools | Accountability + Data | “Tier 1: Microsoft 365 Copilot inside brokerage tenant, vendor AI inside BMS, named carrier-portal AI features.” | Categories instead of named products; unenforceable. |
| Prohibited tools | Data | “Consumer ChatGPT, Claude.ai consumer, Gemini consumer, Perplexity consumer, any tool without signed DPA.” | Allowing “internal-only” consumer AI; one paste is a breach. |
| Licensed-member oversight | Accountability | “Any AI-generated or AI-altered output is reviewed by a licensed broker before client delivery; review logged.” | AI output goes to client without licensed review; RIBO pillar 1 + 2 fail. |
| Client disclosure | Transparency | “Disclose AI use where client engages automated dialogue, output materially shapes advice, or SLA requires.” | No disclosure protocol; client surprise becomes a complaint. |
| Bias audit | Fairness | “Quarterly written sample audit of AI-assisted submissions; named reviewer; methodology documented.” | “Will monitor for fairness” with no methodology; fails Pillar 3. |
| Vendor diligence | Data + Accountability | “Every AI vendor signs a DPA; Canadian or contractually controlled data residency; vendor list reviewed quarterly.” | Vendor onboarded by another department without policy review; data leaves firm control. |
| Privacy and Law 25 | Data | “PIPEDA safeguards apply; Quebec-resident insureds receive Law 25 automated-decision disclosure where applicable.” | Single-statute drafting; misses Law 25 trigger on Quebec files. |
| Training | Accountability | “Four hours AI competence training annually, signed acknowledgement, new hires within thirty days.” | “Training as needed” language; no completion records when RIBO asks. |
| Incident response | All four | “Principal-broker notification 24 hours; FSRA IT Risk Incident form; RIBO notified where conduct rules implicated.” | No defined trigger; incidents go unreported; FSRA window missed. |
| Annual review | Accountability | “Annual full review by Principal Broker; quarterly interim on oversight log; re-acknowledged in writing.” | One-time policy; stale within twelve months. |
The deployment hierarchy underneath this template, including the carrier-portal AI mapping and the BMS-embedded AI inventory, lives in our AI for Ontario insurance brokerages roadmap. The policy itself only carries the clauses.
“We had an AI policy on paper before the RIBO guidance landed. What we did not have was the audit trail. The hardest part of the May 2025 update was building the documentation that proved a licensed broker reviewed every AI-assisted communication before it left our office. The clause that designated the principal broker as the named officer was the single sentence that unblocked our E&O renewal.”
The 6-step adoption rollout
According to RIBO’s Responsible AI Use guidance (May 29, 2025), brokerages must establish “governance structures that monitor risks and provide ongoing due diligence and education.” The six-step rollout below converts that wording into a calendar a Principal Broker can run in twenty-one days.
The policy is the artifact. The rollout is what makes it stick. The steps below sequence the work in the order Ontario brokerages have executed it in real deployments.
Steps 1-3: Draft, review, and sign
- Draft. Adapt the template clauses to brokerage specifics. Name the approved tools the brokerage actually uses. Identify the Principal Broker as the AI accountable officer. Inventory carrier-portal AI features and BMS-embedded AI in the same pass.
- Review. Send the draft to the brokerage’s E&O carrier (typically through the IBAO program), to outside counsel on professional-conduct matters where the brokerage has one, and to the IT advisor responsible for FSRA IT Risk controls. The review gates the bias audit methodology and the vendor diligence clause.
- Sign. Principal Broker signs the policy. Every bound person signs an acknowledgement filed with personnel records. The signed master copy is filed with the RIBO renewal artifacts.
Steps 4-6: Train, publish, and annual review
- Train. Run the four-hour competence training for every bound person before the policy takes effect. Required topics: approved-tools list, prohibited-tools list, the four pillars, oversight artifact procedure, vendor diligence, incident triggers. Sign-off filed.
- Publish. Policy posted to brokerage intranet. Disclosure paragraph added to client engagement letters and to the brokerage website privacy notice. Vendor list shared with the principal’s office.
- Annual review. Full review at twelve months by the Principal Broker with the IT advisor. Update approved tools, document incidents recorded, refresh the bias audit methodology, re-sign, re-acknowledge. Quarterly interim review tracks the licensed-oversight log.
FIELD NOTE FROM MIKE
In a 12-broker Mississauga brokerage I worked with in Q1 2026, the pillar everyone underbuilt was Pillar 3 fairness. The brokerage already had a sensible approved-tools list and a Principal-Broker oversight cadence, but its draft fairness clause read “we monitor AI outputs for fairness as part of normal supervision.”
That clause does not pass a RIBO review of the policy. We rewrote it to specify a quarterly written sample audit of fifteen randomly selected AI-assisted submissions, naming the reviewer (a senior licensed broker who is not the Principal Broker, for independence).
The rewrite also named the proxies tested (postal code, age band, named-driver gender split), the threshold that triggers a flag, and the remediation path for any flag raised. The audit lives in a one-page register attached to the policy.
Three of the last four Ontario brokerages I have reviewed had the same Pillar 3 gap. In each case the fix was the same: name the cadence (quarterly), name the artifact (one-page audit per quarter), name the reviewer, and name the methodology. RIBO’s wording requires that the model “perform as intended” and that outputs are free of “systemic biases.” A clause without methodology cannot demonstrate either.
“We had ‘use AI responsibly’ in our staff manual for a year. RIBO’s May 2025 guidance forced us to actually write the four-pillar policy, name a Principal-Broker AI lead, and stand up the quarterly fairness audit. The Fusion template got us from oral policy to a binder our compliance officer can defend in twenty-one days.”
If you want a draft policy that already addresses the Pillar 3 fairness gap and the carrier-portal AI inventory, book a free IT assessment and we will share the FC template →.
Common policy mistakes Ontario brokerages make
Mistake 1: Treating carrier-portal AI as out of scope
The most common drafting error is a policy that covers tools the brokerage installed but exempts AI features inside carrier portals, the BMS, or the quoting engine. RIBO’s vendor-pass-through wording makes this gap material.
The brokerage is responsible for Code of Conduct compliance even when the AI was delivered by a third-party vendor. The scope clause and the vendor-diligence clause both have to bind those features explicitly.
Mistake 2: Skipping the licensed-member oversight log
RIBO is unusually direct on this pillar: anything AI-generated or AI-altered needs licensed oversight before client delivery. Brokerages without a documented oversight artifact fall back to “the licensed broker reviews everything anyway.”
In a complaint investigation that argument fails. The log is what makes the oversight visible. A one-line entry per matter, naming the licensed reviewer and the AI tool, is enough.
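A one-line entry needs nothing heavier than an append-only file. A minimal sketch, assuming a hypothetical CSV register and illustrative field values throughout:

```python
import csv
import datetime

LOG_PATH = "ai_oversight_log.csv"  # assumed location; any append-only store works

def log_oversight(matter_id: str, reviewer: str, ai_tool: str, artifact: str) -> None:
    """Append one line per matter: date, file, licensed reviewer, tool, output reviewed."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            matter_id, reviewer, ai_tool, artifact,
        ])

# Illustrative entry: a licensed broker signs off a Copilot-drafted renewal summary.
log_oversight("2026-0114", "J. Doe, RIBO licensee", "M365 Copilot (tenant)", "renewal summary v2")
```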
Mistake 3: One-statute drafting
Brokerages with Quebec-resident insureds that draft the data clause against RIBO and PIPEDA but not Law 25 miss the automated-decision-disclosure trigger. The fix is a single sentence in the privacy clause: “where automated decisions affect a Quebec-resident insured, Law 25 disclosure and human-review rights apply.”
Mistake 4: Letting the policy go stale
AI tools change quarterly. Vendors change DPAs. New BMS features ship. A policy written in mid-2025 that has not been reviewed by mid-2026 is stale on its face. The annual-review clause is what keeps the document current and what gives the brokerage a defensible posture if a complaint reaches RIBO.
According to Osler’s 2025 commentary on Canadian financial-services AI governance, the firms most exposed in a regulatory inquiry are not the firms that ban AI outright but the firms that allow third-party-vendor AI on regulated workflows without documented diligence.
The pattern Ontario regulators are tracking is informal vendor adoption without a written policy, not deliberate non-compliance.
The IT controls underneath the policy
According to the Financial Services Regulatory Authority of Ontario IT Risk Management Guidance, regulated insurance entities must demonstrate IT controls, change management, and incident notification posture. The guidance overlaps the RIBO AI policy at the data governance pillar, where the four-pillar template pairs a controls register with each approved AI tool.
Policy is necessary but not sufficient. The technical controls that enforce the policy are what stop the prohibited-tools clause from being theatre. Microsoft Purview sensitivity labels gate Copilot access by client matter.
Conditional access policies block personal-account sign-ins to consumer AI on brokerage devices. Data loss prevention rules block policy numbers, SINs, and named-driver identifiers from being pasted into unapproved tools. Audit logging retains the activity record for the FSRA-aligned retention period.
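Purview’s DLP rules are configured in the Microsoft 365 compliance tooling, not in code, and ship with built-in sensitive-information types for identifiers like the Canadian SIN. The Python below is not the Purview configuration; it only illustrates the pattern-plus-checksum logic such a rule encodes, since nine-digit SINs validate under the Luhn checksum.

```python
import re

SIN_RE = re.compile(r"\b(\d{3})[ -]?(\d{3})[ -]?(\d{3})\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, check mod 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

def contains_sin(text: str) -> bool:
    """Flag text containing a nine-digit group that passes the SIN checksum."""
    return any(luhn_valid("".join(m.groups())) for m in SIN_RE.finditer(text))

print(contains_sin("Insured SIN: 046 454 286"))  # True: a Luhn-valid sample number
```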
Our cybersecurity services for Canadian businesses deploy these controls for insurance brokerages as part of the AI rollout, not after. The policy and the controls go in together or the policy is unenforced.
The 15-minute FSRA IT Risk incident-notification path that sits alongside the AI incident clause is documented in our FSRA IT Risk Incident Notification SOP for brokerages.
Get a Custom IT Assessment for Your Brokerage
Further reading and primary sources
- FSRA mortgage brokering regulatory framework: the canonical FSRA index for mortgage brokerage supervisory documents.
- FSRA general insurance regulatory framework: FSRA supervisory expectations for the insurance brokerage sector.
- OSFI B-13 Technology and Cyber Risk Management: the federally regulated reference frame that FSRA, MBRCC, and RIBO expectations track against.
- PIPEDA statute (Justice Canada): the federal privacy statute governing commercial-activity brokerages across all provinces.
- Canadian Centre for Cyber Security guidance library: ITSAP and ITSG documents referenced by FSRA, OSFI, and provincial regulators.
HOW THIS GUIDANCE WAS ASSEMBLED
This article draws on FC’s anonymized client data across multiple 2025-26 Ontario mortgage and insurance brokerage engagements, plus a named-client engagement with the principal broker of a Hamilton mortgage brokerage whose FSRA cyber-readiness review we led under MBRCC principles.
It also draws on an original survey of broker-of-record and IT lead respondents conducted during 2026 Q1 onboarding calls, plus an FC internal benchmark covering 90-day cyber-hygiene sprints, Filogix hardening, and AI policy adoption across Ontario brokerage clients.
Layered over all of it is first-person field observation from CEO Mike Pearlstein’s 12-year practice supporting Ontario brokerages through FSRA-graded technology change.
Frequently Asked Questions
Does RIBO require Ontario insurance brokerages to have a written AI policy?
RIBO has not amended the Code of Conduct to mandate a written AI policy as such. The May 29, 2025 Responsible AI Use Among RIBO Licensees guidance makes clear that the existing Code of Conduct, the Fair Treatment of Customers expectations, and the confidentiality and competence duties already apply to AI use.
A written policy is the standard way a brokerage demonstrates it has operationalized those duties. In an investigation involving AI, the absence of a written policy is a material adverse factor.
What are the four RIBO Responsible AI Use pillars?
The four pillars are: (1) accountability and oversight, including the requirement that licensees be trained to identify when AI is in use and that the brokerage establish governance structures; (2) transparency, including the requirement that a client know when they are engaging with AI instead of a broker.
The remaining pillars are: (3) fairness and bias, requiring that AI outputs be audited for systemic bias and that models perform as intended; and (4) data governance, requiring vendor vetting and prohibiting open AI systems for client data.
Can an Ontario broker use ChatGPT for client work?
Not the consumer version. RIBO’s guidance specifies that individual licensees should avoid processing any client information through an open AI system, and consumer ChatGPT is the canonical example.
ChatGPT Enterprise with a signed DPA and controlled data residency can be considered for non-client research and internal work product. Tier 1 approval inside a brokerage typically goes to Microsoft 365 Copilot inside the brokerage tenant, vendor AI delivered inside a BMS under a written DPA, and named carrier-portal AI features.
Does the brokerage have to disclose AI use to clients every time?
Not for every use. RIBO requires disclosure where the client is engaging with an automated system that simulates broker-customer dialogue, where AI materially shapes the work product presented to the client, or where the service-level agreement or engagement letter requires it. Routine internal use such as summarizing a renewal email or pre-classifying a claim category for licensed review does not require client-by-client disclosure but is covered by the brokerage’s general technology notice.
Is the brokerage responsible when the AI is inside a carrier portal it did not build?
Yes. RIBO is explicit that it is the licensee’s professional responsibility to comply with the Code of Conduct even when using third-party vendors. The brokerage cannot pass the duty back to the carrier or the BMS provider. The policy must bind vendor AI features the same way it binds brokerage-installed tools, and the vendor-diligence clause is what makes the binding traceable in a regulatory review.
How does RIBO interact with FSRA on AI matters?
RIBO regulates broker conduct under the Registered Insurance Brokers Act. FSRA sits above RIBO on prudential matters and publishes the IT Risk Management Guidance that covers the brokerage’s IT controls, change management, and incident notification posture. An AI policy that meets RIBO conduct expectations but ignores FSRA IT Risk controls fails the second regulator. The two read together: RIBO governs broker behaviour, FSRA governs the IT environment that broker behaviour runs in.
What about Quebec-resident insureds and Law 25?
Brokerages with Quebec-resident insureds add the Loi 25 (Law 25) automated-decision-disclosure layer to the disclosure clause. According to the Commission d’accès à l’information, Law 25 requires written disclosure when a decision is based exclusively on automated processing of personal information and a right to request human review. A multi-province brokerage adopts the strictest applicable standard rather than the RIBO baseline.
What is the bias audit requirement under Pillar 3?
RIBO requires that brokerages audit AI outputs to ensure models perform as intended and that the outputs lack systemic biases.
The defensible operational form is a quarterly written sample audit of fifteen to twenty AI-assisted submissions, naming a licensed reviewer (ideally not the Principal Broker, for independence), the proxies tested for adverse impact (postal code, age band, named-driver split, occupation), the threshold that triggers a flag, and the remediation path. The audit is logged and reviewed annually with the policy.
What triggers a RIBO notification under an AI incident?
The policy should name the trigger conditions explicitly. Typical triggers: confidential or privileged client information may have been disclosed to an unapproved AI tool, an AI-generated material misstatement reached a client, an unauthorized AI tool is detected on a brokerage device, or a client raises an AI-related concern.
The Principal Broker is notified within twenty-four hours. FSRA is notified per the IT Risk Management Guidance timeline where applicable. RIBO is notified where Code of Conduct rules are implicated.
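Which trigger maps to which notification is a policy decision each brokerage makes for itself; the branching below is an illustrative sketch of the chain described above, with hypothetical names throughout.

```python
from enum import Enum, auto

class Trigger(Enum):
    DATA_TO_UNAPPROVED_TOOL = auto()     # client info may have reached an unapproved AI tool
    MATERIAL_MISSTATEMENT = auto()       # AI-generated misstatement reached a client
    UNAUTHORIZED_TOOL_DETECTED = auto()  # unapproved AI tool found on a brokerage device
    CLIENT_AI_CONCERN = auto()           # client raises an AI-related concern

def notification_chain(trigger: Trigger, conduct_rules_implicated: bool) -> list[str]:
    """Illustrative mapping from incident trigger to the policy's notification chain."""
    chain = ["Principal Broker within 24 hours"]  # every trigger starts here
    if trigger in (Trigger.DATA_TO_UNAPPROVED_TOOL, Trigger.UNAUTHORIZED_TOOL_DETECTED):
        chain.append("FSRA per the IT Risk Management Guidance timeline")
    if conduct_rules_implicated:
        chain.append("RIBO, where Code of Conduct rules are implicated")
    return chain

print(notification_chain(Trigger.DATA_TO_UNAPPROVED_TOOL, conduct_rules_implicated=True))
```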
Does the policy need to address Microsoft 365 Copilot specifically?
Yes if the brokerage runs on Microsoft 365, which most Canadian brokerages do. The data-governance clause should require Microsoft Purview sensitivity labels on every client matter folder before Copilot is enabled.
The licensed-oversight clause should specify that Copilot output is treated as draft work product subject to the same review standard as any other AI-generated output before client delivery. Our Microsoft 365 Copilot guidance for Canadian businesses covers the deployment specifics, and the Copilot oversharing prevention guide covers the Purview labelling sequence.
How often should the AI policy be reviewed?
Annually for a full review, with a quarterly interim review focused on the licensed-oversight log and the bias audit. The annual review produces a written record covering tools added or removed, incidents recorded, training completion rates, and updates to RIBO, FSRA, or OPC guidance.
The reviewed policy is re-circulated to all bound personnel and re-acknowledged in writing within thirty days. The cycle aligns with the brokerage’s RIBO renewal so the policy artifact is current when the renewal questionnaire arrives.
Does this template apply to mortgage brokerages too?
Not directly. Mortgage brokerages are regulated by FSRA through the Mortgage Brokerages, Lenders and Administrators Act and supervised against the Mortgage Broker Regulators’ Council of Canada (MBRCC) cybersecurity and conduct principles. The data-governance and vendor-diligence pillars transfer, but the conduct-rule citations are different. Mortgage brokerages should adapt this template against the MBRCC framework. Our MBRCC 9 cybersecurity principles annotation for mortgage brokerages covers the mortgage equivalent.
How we tested this template
We verified this four-pillar template against three live Ontario brokerage policy reviews in Q1 and Q2 2026: a 12-broker Mississauga personal-lines firm, a 14-licensee Halton Region commercial brokerage, and a 7-broker Ottawa hybrid.
In each case the policy was reviewed against the RIBO May 2025 guidance, the FSRA IT Risk Management Guidance, the OPC PIPEDA AI principles, and (for the Ottawa firm with one Quebec-resident book) Loi 25.
The Fusion compliance lead walked each clause against the regulator language and signed off on the artifact register before the brokerage adopted it.
Every clause in this template has been operationalized inside at least one Ontario brokerage.
We tested the Pillar 3 fairness cadence (a quarterly fifteen-submission audit) for one full quarter and confirmed the audit register satisfied an internal RIBO mock review.
The Pillar 1 Principal-Broker oversight log has been running for six months across the three sites. No clause in this article is theoretical.
Bottom line
A RIBO-compliant AI policy is short. It names the Principal Broker as accountable officer, lists approved and prohibited tools by product, documents licensed-member oversight per matter, runs a quarterly bias audit with a named reviewer, vets every AI vendor under a signed DPA, reads against PIPEDA and Law 25 alongside the RIBO Code, and gets annual Principal-Broker sign-off. Brokerages with that policy in force defend a RIBO inquiry on the strength of the document itself.
The carrier-portal AI inventory and the BMS-embedded AI mapping that sit underneath the policy live in the full AI for Ontario insurance brokerages roadmap, and the cross-cluster context is in the MBRCC + RIBO + FSRA brokerage cybersecurity guide.

