AI for Ontario Insurance Brokerages: A Pre-FSRA-Rules Compliance Roadmap for 2026
Written by Mike Pearlstein, CISSP, CEO of Fusion Computing Limited. Helping Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver.
AI use inside Ontario insurance brokerages is now governed by the Registered Insurance Brokers of Ontario’s May 2025 Responsible Use of AI guidance, the Financial Services Regulatory Authority’s IT Risk Management Guidance, and federal privacy expectations published by the Office of the Privacy Commissioner of Canada. None of these documents prescribe a specific stack. All of them put the obligation on the brokerage. The compliance gap is operational, not legal.
This roadmap sits underneath our FSRA-aligned cybersecurity playbook for Ontario financial brokerages and turns the regulator’s expectations into a vendor decision matrix, a six-step rollout, and a policy posture that survives an FSRA examination. It pairs with the RIBO Responsible AI Use policy template that operationalizes the four pillars described in section three.
Key Takeaways
- RIBO’s May 2025 Responsible Use of AI guidance puts four pillars on every licensed brokerage: governance, fair treatment of customers, transparency, and oversight of third-party AI vendors (Registered Insurance Brokers of Ontario, 2025).
- FSRA has signalled rules-based supervision is the next step. The 2025-2026 enforcement priorities document treats AI use as an extension of IT risk and operational risk, both already in scope under existing FSRA IT Risk Management Guidance.
- Canadian data residency is not optional for policyholder records. Personal information collected during quoting, binding, and claims sits inside PIPEDA and, for Quebec policyholders, Law 25. Any AI tool that routes through US cloud regions without a signed DPA fails this test on day one.
- Applied Epic, Vertafore, EZLynx, and Power Broker have each shipped AI add-ons in 2024-2025. Microsoft 365 Copilot is the realistic horizontal layer for a small brokerage. Dedicated insurance AI tools sit on top.
- The disclosure rule that catches brokerages off-guard is the producer-of-record duty under RIBO Code of Conduct sections 13 and 14. If AI shaped the recommendation, the client gets told.
The five AI use cases a brokerage actually runs
AI inside a 12-broker Ontario brokerage clusters into five repeatable workflows, and according to the May 2025 RIBO Responsible Use of AI guidance, each one creates a separate regulatory surface the policy has to treat on its own terms. Lumping every workflow under one clause is the drafting mistake the regulator wrote the guidance to correct.
The first is intake. A submission lands by email, the AI parses the body, extracts the named insured, the policy number, the line of business, and the broker of record, and writes the record into the BMS.
Second is quote preparation. The AI populates a market submission package, reads the underwriting questions, and pre-fills answers from the client file. Third is claims triage. A first notice of loss arrives, the AI categorizes the claim, scores severity, and routes the file to the right adjuster intake queue.
Fourth is renewal automation. Sixty days before expiry, the AI surfaces the renewal worklist, flags premium changes, drafts the client-facing summary, and queues the e-signature package.
Fifth is marketing. Newsletter copy, blog drafts for the brokerage website, lead-form follow-ups, and social-channel posts. Each of these workflows touches different fields of personal information, and each lands at a different RIBO Code obligation. If you want a baseline policy frame that pre-dates the RIBO guidance, our general AI acceptable-use policy framework is the parent document the brokerage clauses build on.
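The intake workflow (workflow one) reduces to labeled-field extraction from an email body. The following is a minimal sketch under the assumption that submissions arrive as plain text with label-style lines; real carrier emails vary widely, and a production integration would write through the BMS vendor's own intake API rather than regex. The field labels and record shape here are illustrative, not any vendor's schema.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntakeRecord:
    named_insured: Optional[str]
    policy_number: Optional[str]
    line_of_business: Optional[str]
    broker_of_record: Optional[str]

# Illustrative label patterns; real submissions differ by carrier.
FIELD_PATTERNS = {
    "named_insured": r"(?im)^named insured:\s*(.+)$",
    "policy_number": r"(?im)^policy (?:number|no\.?):\s*([A-Z0-9-]+)\s*$",
    "line_of_business": r"(?im)^line of business:\s*(.+)$",
    "broker_of_record": r"(?im)^broker of record:\s*(.+)$",
}

def parse_submission(body: str) -> IntakeRecord:
    """Extract the four core intake fields; missing fields stay None."""
    found = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, body)
        found[field] = match.group(1).strip() if match else None
    return IntakeRecord(**found)
```

The point of the sketch is the shape of the control, not the regex: every extracted field lands in a typed record that a producer can review before it is written to the BMS, which is what the RIBO oversight pillar asks for.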
RIBO governance: the four pillars in summary
The May 2025 Responsible Use of AI guidance from the Registered Insurance Brokers of Ontario sets four pillars every licensed brokerage is expected to operationalize. The guidance is principles-based, not prescriptive, which means the regulator measures the brokerage on whether the four pillars are visible in the policy, the records, and the day-to-day practice. A brokerage that cannot point to documented evidence under each pillar is the brokerage RIBO investigators flag first.
Pillar one is governance. A named principal broker accountable for AI use across the brokerage. A written policy. A documented inventory of AI tools in use, including ones embedded inside the BMS.
Pillar two is fair treatment of customers. AI cannot produce a recommendation that disadvantages the client compared to a non-AI broker, and the brokerage owns the test for that. Pillar three is transparency. Where AI materially shapes the recommendation, the client is told, in writing, before the binder.
Pillar four is oversight of third-party AI vendors. The brokerage is responsible for what its BMS-embedded AI does, what its claims-triage AI does, and what its marketing AI does, including outputs the brokerage never sees in real time. According to RIBO’s May 2025 announcement of the Responsible Use of AI guidance, the regulator’s starting position is that AI use does not transfer accountability away from the licensed brokerage.
“The use of artificial intelligence by registered brokers does not displace the obligations owed to clients under the Code of Conduct. A registered broker remains responsible for the suitability of the insurance recommended and for the fair treatment of every customer.”
Registered Insurance Brokers of Ontario, Responsible Use of Artificial Intelligence guidance (May 2025)
The full clause-by-clause template that maps each pillar to a draft policy section lives in our RIBO Responsible AI Use policy template for Ontario brokerages. This page covers the deployment roadmap that sits underneath the policy.
The vendor landscape in 2026
The realistic vendor field for an Ontario brokerage in 2026 has three tiers, and the boundary between them sets the compliance ceiling. Tier one is the broker management system you already run, plus its native AI: Applied Intelligence, Vertafore AI Studio, EZLynx, or Power Broker.
Tier two is the horizontal Microsoft 365 layer. Microsoft 365 Copilot inside the brokerage’s tenant handles email drafting, Word document generation, Excel quote-comparison sheets, and Teams meeting summaries.
The Copilot data boundary keeps brokerage content inside the Microsoft Canada Central or Canada East region when the tenant is configured that way. The oversharing trap that hits law firms hits brokerages identically, and the fix path is the same. The Copilot oversharing guide for Canadian SMBs covers the SharePoint permission cleanup that has to happen before Copilot turns on.
Tier three is dedicated insurance AI. Indico Data, Roots Automation, and Quandri target submission processing and renewal automation as standalone platforms. CCC Intelligent Solutions runs in the claims-triage lane on the carrier side and increasingly through carrier-facing broker portals. These tools layer on top of the BMS rather than replacing it, and they introduce a third-party data-processor relationship that the policy has to name explicitly.
The vendor decision matrix
This is the table worth printing. Each row maps one vendor option to the four dimensions Ontario brokerages have to test before procurement. Pricing bands are 2025-2026 published figures or vendor-confirmed bands; brokerages should treat them as starting points for a quote.
| Vendor | Canadian residency | RIBO fit | FSRA IT risk overlap | Monthly cost (per seat) | Claims-handling ready |
|---|---|---|---|---|---|
| Applied Epic + Applied Intelligence | Canadian hosting available; confirm in MSA. | Strong. Native BMS integration; audit trail per producer. | Within scope of existing BMS controls. | $190-$260 BMS + AI add-on. | Partial. Strong on intake; carrier-side for adjusting. |
| Vertafore AMS360 / QQCatalyst + AI Studio | US-hosted by default; Canadian option for AMS360. | Strong. Quote and renewal automation focus. | Verify DPA covers AI processing. | $140-$220 BMS + AI tier. | Limited. Quote and renewal lean. |
| EZLynx (Applied) | US-hosted; confirm regional option. | Strong for personal lines; lighter on commercial. | Within BMS scope; SOC 2 documented. | $120-$180 per seat. | No. Front-end focus. |
| Power Broker | Canadian-built; Canadian hosting. | Strong. Canadian-broker first; FSRA familiarity. | Within BMS scope; smaller footprint. | $110-$170 per seat. | Partial. Integrations available. |
| Microsoft 365 Copilot (in-tenant) | Canada Central / East tenant supported. | Horizontal. Strong for drafting, weak for matter-specific reasoning. | Inside M365 control plane; Purview required. | $45 CAD per user (Copilot add-on). | No. Drafting only. |
| Indico Data / Roots Automation / Quandri | US-hosted by default; Canadian option requires negotiation. | Strong for submission processing; thin on regulatory fit. | Third-party processor; DPA mandatory. | Pricing not published; quote-driven. | Partial. Submission and renewal lean. |
| CCC Intelligent Solutions | Carrier-routed; brokerage rarely controls residency. | Limited. Carrier-side adjudication tool. | Outside direct brokerage control; oversight clause needed. | Carrier-bundled; not brokerage-priced. | Yes. Claims-triage focus. |
The matrix is a starting point, not a recommendation. Brokerages with a strong existing Applied or Vertafore relationship usually find the native AI add-on is the lowest-friction path, and the Microsoft 365 Copilot layer covers everything outside the BMS.
The dedicated insurance AI tools earn their seat only when the BMS native features fall short on a specific workflow the brokerage has volume in. If you want a tool-selection conversation grounded in your brokerage’s actual workflows, book a free IT assessment and we will map the matrix to your stack.
Privilege and disclosure: when the broker has to tell the client
Ontario brokerages do not hold solicitor-client privilege, but under sections 13 and 14 of the RIBO Code of Conduct they carry a producer-of-record fiduciary duty to every client on the file. That duty interacts with AI use in three places where the broker has to disclose, document, and stand behind the recommendation under examination.
First, where AI materially shapes the recommendation the broker makes. Second, where AI is used in claims advocacy on the client’s behalf. Third, where AI ingests policyholder personal information in a way the client did not anticipate at the time the binder was signed.
The compliant posture is to disclose AI use in writing where it touches recommendation or claims advocacy, and to cover routine internal use (intake parsing, renewal worklist generation, marketing) in the general technology-use clause of the brokerage’s service agreement.
According to the Office of the Privacy Commissioner of Canada’s guidance on AI under PIPEDA, meaningful consent for AI processing of personal information requires the data subject to understand the purpose, the nature of the AI involvement, and the consequences of the processing. A generic privacy notice does not meet the standard.
The Quebec wrinkle is Law 25, whose final provisions came into force in September 2024. A Quebec policyholder whose data flows through a US-hosted AI processor without an enforceable contractual safeguard creates a notification obligation under Law 25 that PIPEDA alone does not match. Brokerages writing into Quebec should treat Canadian residency as a hard floor for the AI stack.
The six-step AI adoption rollout
This is the sequence Ontario brokerages follow when they roll out AI cleanly. The order matters. Brokerages that go to pilot before governance always come back to fix governance under examination pressure, and that costs more than doing it once.
1. Governance. Name the principal broker accountable for AI. Draft the policy aligned to the four RIBO pillars. Inventory every AI feature already active inside the BMS, Microsoft 365 tenant, and marketing stack. The inventory line item is the one most brokerages miss because BMS-embedded AI runs by default.
2. Vendor select. Run the matrix above against the brokerage’s workflows. Negotiate the data processing agreement before procurement, not after. Confirm Canadian residency in writing. Confirm what the vendor does with brokerage inputs (training opt-out is the clause to watch).
3. DPIA. Privacy impact assessment for each AI tool that touches policyholder personal information. The DPIA names the lawful basis, the data flows, the retention period, the sub-processors, and the breach-notification path. PIPEDA does not mandate the DPIA artifact by name; FSRA examiners and RIBO investigators ask for it anyway.
4. Pilot. One workflow, one team, ninety days. Measure quote turnaround, intake throughput, error rate, and any client-facing surprises. Document the supervisor cadence. The pilot exists to find the policy gaps before they become incidents at scale.
5. Train. Every licensed broker, every CSR, every clerk gets a documented training session covering the approved-tools list, the prohibited-tools list, the disclosure rules, and the supervision standard. New hires inside thirty days of start. Signed acknowledgement filed.
6. Audit. Quarterly review of the AI inventory, the supervision records, the disclosure log, and the incident register. Annual full review with principal-broker sign-off and re-acknowledgement. This is the step examinations read for.
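The audit step turns on whether the supervision records exist in a reviewable form. Here is a minimal sketch of one supervision-review entry serialized as an append-only JSON line; the field names are illustrative assumptions, not a RIBO-mandated schema, and a brokerage could keep the same fields in a spreadsheet or the BMS activity log instead.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SupervisionReviewRecord:
    # Illustrative record shape for the quarterly review register.
    review_date: str              # ISO date of the quarterly review
    supervisor: str               # named supervisor (governance pillar)
    producer: str                 # supervised producer
    ai_workflows_reviewed: list   # e.g. ["intake", "renewal"]
    findings: str
    principal_broker_signoff: bool

def to_audit_line(record: SupervisionReviewRecord) -> str:
    """Serialize one review as a JSON line for an append-only register."""
    return json.dumps(asdict(record), sort_keys=True)
```

The design choice worth copying is append-only serialization with the supervisor, producer, and workflows named per entry, because those are exactly the fields the examination asks to see.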
FIELD NOTE FROM MIKE
In a Q1 2026 engagement with a 12-broker Mississauga property-and-casualty brokerage, the AI inventory step (governance, step one) surfaced four tools nobody on the leadership team knew were running.
The BMS had AI-assisted intake on by default for two of the three carrier portals. The Microsoft 365 tenant had Copilot in pilot with three users from an earlier IT-led test that never closed out. The marketing platform was using AI-generated subject lines on the renewal email campaign. None of these were in the draft policy.
The fix was the same fix every time: inventory first, document the tools you have, then decide which ones stay. FC has run this inventory exercise with four Ontario brokerages in 2025-2026, and every single one had at least two AI features active that the principal broker had not authorized.
“We were spending 40 minutes per submission cleaning up intake forms and re-keying applicant data into Applied Epic. After we wired Indico into the workflow and tied it to the RIBO governance log, that came down to 12 minutes, and the producer signs off on every AI-drafted summary before it leaves the file. The compliance side ended up being easier than the productivity side.”
Common adoption mistakes
Mistake 1: Buying the dedicated AI tool before fixing Microsoft 365 Copilot oversharing
Brokerages that buy Indico or Quandri before they fix the SharePoint and OneDrive permission inheritance inside their Microsoft 365 tenant get an AI tool that surfaces every policyholder file every employee has ever had access to. The fix is the same fix law firms use, and the order matters: clean the M365 oversharing first, then layer AI on top.
Mistake 2: Treating BMS-embedded AI as out of scope for the policy
Applied Intelligence and Vertafore AI Studio are AI tools under the RIBO May 2025 guidance regardless of whether the brokerage thinks of them as “features.” The inventory step has to include every BMS feature that uses machine learning to summarize, classify, or recommend. Brokerages that exempt the BMS from the policy lose the audit on the first principal-broker examination.
Mistake 3: Skipping the data processing agreement on US-hosted AI
A US-hosted AI vendor without a signed DPA covering Canadian personal information is a PIPEDA exposure and a Law 25 exposure simultaneously. Brokerages writing into Quebec face a notification obligation that the broker stack is not configured to meet. The DPA is the cheapest control. Skipping it is the most expensive mistake on this list.
Mistake 4: Disclosing AI use in the privacy notice and stopping there
A privacy notice update that says “we may use AI tools” does not meet the meaningful-consent standard the OPC has set for AI processing under PIPEDA. The compliant posture is a privacy-notice update plus a recommendation-level disclosure where AI shapes the advice the broker gives. The two-sentence add-on to renewal letters is the lift most brokerages need.
According to Insurance Brokers Association of Ontario member resources on responsible AI adoption, the most common gap in brokerage AI policies entering 2026 sits underneath the supervision clause: the documented audit trail.
The policy text usually reads fine. What goes missing is the evidence. The policy that survives a RIBO examination is the policy with quarterly review records that name the supervisor, the supervised producer, and the AI workflows reviewed.
The security layer underneath the policy
Policy is necessary but not sufficient. The technical controls that enforce the policy are what stop the prohibited-tools clause from being theatre. Microsoft Purview sensitivity labels gate Copilot access by policyholder matter. Conditional access policies block personal-account sign-ins to consumer AI on brokerage devices. Data loss prevention rules block policy numbers, SIN, and claim numbers from being pasted into unapproved tools. Audit logging retains the activity record for the FSRA-aligned retention period.
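To make the DLP point concrete, here is a minimal sketch of the pattern-plus-checksum logic a SIN rule applies. Canadian SINs are nine digits that satisfy the Luhn checksum, which is what separates a real DLP match from any nine-digit number. In a real deployment this is Microsoft Purview's built-in Canada Social Insurance Number sensitive-information type, not hand-rolled code; the sketch only shows why the checksum matters.

```python
import re

# Nine digits in groups of three, optionally separated by space or hyphen.
SIN_CANDIDATE = re.compile(r"\b(\d{3})[- ]?(\d{3})[- ]?(\d{3})\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, which valid Canadian SINs satisfy."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_probable_sin(text: str) -> bool:
    """Flag text containing a nine-digit group that passes the Luhn check."""
    for match in SIN_CANDIDATE.finditer(text):
        if luhn_valid("".join(match.groups())):
            return True
    return False
```

The checksum is why a DLP rule can block a pasted SIN without drowning the team in false positives on invoice numbers and phone fragments.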
Our cybersecurity services for Canadian businesses deploy these controls for brokerages as part of the AI rollout, not after. The natural cross-reference for regulated SMBs evaluating AI under a sectoral regulator is our AI deployment guide for Canadian law firms, which covers the same governance frame applied to LSO-regulated practices. The RIBO and LSO frames diverge on disclosure mechanics, but the underlying control stack (Purview, conditional access, DLP, logging) is the same.
Further reading and primary sources
- FSRA mortgage brokering regulatory framework: the canonical FSRA index for mortgage brokerage supervisory documents.
- FSRA general insurance regulatory framework: FSRA supervisory expectations for the insurance brokerage sector.
- OSFI B-13 Technology and Cyber Risk Management: the federally regulated reference frame that FSRA, MBRCC, and RIBO expectations track against.
- PIPEDA statute (Justice Canada): the federal privacy statute governing commercial-activity brokerages across all provinces.
- Canadian Centre for Cyber Security guidance library: ITSAP and ITSG documents referenced by FSRA, OSFI, and provincial regulators.
HOW THIS GUIDANCE WAS ASSEMBLED
This article draws on FC’s anonymized client data across multiple 2025-26 Ontario mortgage and insurance brokerage engagements, plus a named-client moment with the principal broker of a Hamilton mortgage brokerage whose FSRA cyber-readiness review we led under MBRCC principles.
It also draws on an original survey of broker-of-record and IT lead respondents conducted during 2026 Q1 onboarding calls, plus an FC internal benchmark covering 90-day cyber-hygiene sprints, Filogix hardening, and AI policy adoption across Ontario brokerage clients.
Layered over all of it is first-person field observation from CEO Mike Pearlstein’s 12-year practice supporting Ontario brokerages through FSRA-graded technology change.
Frequently Asked Questions
Does RIBO require Ontario brokerages to have a written AI policy?
The May 2025 Responsible Use of AI guidance from RIBO does not amend the Code of Conduct to mandate a written AI policy by name. It does set four pillars (governance, fair treatment, transparency, vendor oversight) that the regulator expects every brokerage to operationalize. A written policy is the standard way brokerages demonstrate they have operationalized those pillars, and the absence of one is a material adverse factor in any RIBO investigation involving AI use.
Which RIBO Code of Conduct sections apply to AI use?
Sections 13 and 14 of the RIBO Code carry most of the weight. Section 13 governs the duty to recommend suitable insurance products, which AI engages whenever it shapes the recommendation. Section 14 governs fair treatment of customers and disclosure, which AI engages whenever the client experience is changed by automation. The May 2025 Responsible Use of AI guidance reads as an interpretive overlay on the existing Code, not a replacement.
Can an Ontario brokerage use ChatGPT to write renewal letters?
Not the consumer version. Consumer ChatGPT may train on inputs and store data in jurisdictions the brokerage cannot verify, which creates a PIPEDA exposure the moment any policyholder name, policy number, or claim detail is pasted in. ChatGPT Enterprise or Microsoft 365 Copilot inside the brokerage tenant with Canadian data residency configured can handle renewal drafting safely. The approved-tools list in the policy should name the product and the deployment configuration, not the category.
What is the producer-of-record disclosure trigger for AI?
The disclosure obligation under sections 13 and 14 of the RIBO Code activates where AI materially shapes the recommendation, where AI is used in claims advocacy on behalf of the client, or where the engagement letter requires it. Routine internal use such as intake parsing or renewal worklist generation does not require client-by-client disclosure but should be covered in the general technology-use clause of the brokerage service agreement.
Does Applied Epic AI count as an AI tool under the RIBO guidance?
Yes. Any feature that uses machine learning to summarize, classify, recommend, or generate text on policyholder data is an AI tool under the May 2025 guidance, regardless of whether the vendor markets the feature as “AI” or as “automation.” Applied Intelligence, Vertafore AI Studio, EZLynx automation features, and Power Broker AI add-ons all fall inside the policy scope and have to appear on the brokerage AI inventory.
How does FSRA examine AI use during a brokerage examination?
FSRA’s IT Risk Management Guidance treats AI as an extension of operational risk and third-party processor risk, both already inside the examination scope. Examiners ask for the AI inventory, the policy, the supervision records, the DPIA artifacts for each AI tool processing personal information, and the disclosure log. A brokerage that produces none of these is treated as having no AI governance, which is a material finding.
Do brokerages need a privacy impact assessment for each AI tool?
PIPEDA does not mandate a privacy impact assessment artifact by name. The Office of the Privacy Commissioner’s AI guidance and FSRA’s IT risk expectations both read as requiring a documented assessment of the lawful basis, data flows, retention, sub-processors, and breach-notification path for each AI tool. The DPIA is the standard artifact. Quebec brokerages writing under Law 25 face a stricter version of this requirement.
What is the supervision standard for AI use under the RIBO Code?
Pillar one of the May 2025 RIBO guidance puts governance accountability on the principal broker. Documented supervision means a named supervisor for each producer using AI on client matters, a written cadence for reviewing AI-influenced recommendations, and a filed record. Quarterly review is the cadence Ontario brokerages have adopted in practice. The supervision artifact is the piece RIBO investigators look for first when an incident is reviewed.
How does Microsoft 365 Copilot interact with the brokerage AI policy?
Microsoft 365 Copilot deployed inside the brokerage tenant is a horizontal AI layer that touches email, Word, Excel, and Teams. The policy should require Microsoft Purview sensitivity labels on every policyholder matter folder before Copilot is enabled, conditional access blocking personal-account sign-ins to consumer AI, and DLP rules covering policy numbers and SIN. The deployment configuration belongs in the approved-tools list, not in a separate IT document.
What happens if a US-hosted AI tool processes Quebec policyholder data?
Quebec Law 25 requires a privacy impact assessment for any communication of personal information outside Quebec, and that assessment must conclude the receiving jurisdiction offers adequate protection. A US-hosted AI vendor without a signed data processing agreement that meets Law 25 standards triggers a notification obligation and, on the facts, can be ruled non-compliant. Canadian residency is the cleanest path for brokerages writing into Quebec.
Do brokerages have to tell clients which AI tool was used?
Not the product name. The compliant disclosure names the category of AI involvement (intake summarization, quote preparation, claims triage, renewal automation) and the verification controls applied. Naming the specific vendor is not required by RIBO or PIPEDA, but the brokerage should be able to produce the vendor name on request and should document it in the internal record. Vendor confidentiality clauses do not override the producer-of-record disclosure duty.
Does this roadmap apply to life and health brokerages?
The four-pillar frame translates but the rule citations change. Life and health brokerages in Ontario fall under FSRA and the Canadian Council of Insurance Regulators rather than RIBO, with different licensing supervision. The governance, fair treatment, transparency, and vendor oversight pillars still apply in substance. Life and health brokerages should cross-check against CCIR’s 2024 statement on the use of AI by insurers and intermediaries before adapting this roadmap.
Bottom line
Ontario brokerages that get AI right in 2026 do five things. They run the inventory before the policy. They map the four RIBO pillars to documented controls. They keep policyholder data inside Canadian residency wherever the workflow allows. They disclose AI use at the recommendation level, in writing. They review the supervision records quarterly.
The deployment frame underneath this roadmap lives in the full MBRCC, RIBO, and FSRA brokerage cybersecurity guide. The policy artifact that operationalizes RIBO’s four pillars lives in the RIBO Responsible AI Use template.

