What Should Be in an AI Acceptable Use Policy? A Canadian MSP’s Field Guide for 2026


Written by Mike Pearlstein, CISSP, CEO of Fusion Computing Limited. Helping Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver.

Note: Daniel and Priya in the story below are composites drawn from engagements with several Canadian firms. Specific details have been changed to protect client confidentiality. The policy, timeline, and outcome numbers are real.

Daniel read question fourteen on the supplemental PDF his broker had sent that morning and called me before his nine o’clock. He’s a commercial litigator who runs the professional-services practice at a 42-person Toronto firm. He’d managed cybersecurity policy the way most managing partners do, which is to say once a year in June when the insurance renewal lands. That morning in February 2026, the renewal came in with a new page.

Question fourteen read: “Does the Applicant have a written Artificial Intelligence Acceptable Use Policy, and is employee training on that policy completed annually?” Two yes-or-no answers. He didn’t have either. The renewal was due in twenty-three days.

“Can I just tell them yes?” he asked.

No, I told him. They check. And he had a second problem he hadn’t thought about yet.

I’m an MSP, not a lawyer. What follows is the operational and technical half of an AI Acceptable Use Policy, built from the engagement that call kicked off. The clause language ultimately belongs with qualified Canadian privacy counsel. This piece walks through the engagement beat-by-beat, and it hands you every artefact I gave Daniel along the way: the email he forwarded to his HR lead, the clause skeleton we built with his lawyer, the objection scripts he used in stakeholder meetings, the twelve-week runbook, the budget we landed on, and the three things every Canadian SMB leader should do before end of next week.

What you’ll take away

  • The seven operational concepts that ended up in Daniel’s policy, with a copy-ready clause skeleton for each one.
  • A stakeholder kickoff email you can forward to HR, legal, and a line-of-business owner.
  • A reconstructed cyber insurance supplemental showing what carriers are actually asking.
  • An eight-objection playbook for the meetings where this work nearly dies.
  • A twelve-week runbook you can work against starting this afternoon.
  • Real numbers: what a 40-person engagement costs, in dollars and internal hours.
  • Three things to do before Friday so this post doesn’t just become another tab you’ve starred.

THE QUESTION DANIEL WAS READING

Question 14. Artificial Intelligence Governance. Does the Applicant have a written Artificial Intelligence Acceptable Use Policy that (a) identifies approved AI tools, (b) classifies data that may be input into those tools, (c) prohibits the use of generative AI on regulated or client-confidential information without appropriate contractual safeguards, and (d) requires annual employee training? If yes, please attach the policy and most recent training completion report.

What a passing answer looks like:

Yes. Policy attached (dated within 12 months), training completion report attached showing 95%+ of employees completed within 12 months, tool list current, data classification scheme explicit.

What a failing answer looks like:

No. Or yes, but the attached document is a two-page memo from HR that doesn’t classify data, doesn’t list tools, and has no training record. Insurers treat the second answer worse than the first. A bad policy reads as bad governance; no policy reads as a pending one.

Reconstructed from 2026 Canadian cyber insurance supplementals (Coalition, At-Bay, Beazley, Chubb, and similar carriers). Actual wording varies.

Week One: What Defender for Cloud Apps Showed Us

Before the first stakeholder meeting, I asked Daniel for three things. Admin access to their Microsoft 365 tenant, a list of everyone with a company email, and sixty minutes of their IT contractor’s calendar. Then I turned on Defender for Cloud Apps shadow-AI discovery and walked away for five business days.
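
If you want the same telemetry programmatically instead of through the portal, the Defender for Cloud Apps REST API can be queried from a short script. A minimal sketch in Python follows; the /api/v1/ base path and token mechanism are real Defender features, but the discovery endpoint, filter shape, and field names below are assumptions to verify against Microsoft’s API reference for your tenant.

# Sketch: pull discovered-app telemetry from Defender for Cloud Apps.
# The /api/v1/ base is real; the discovery endpoint and field names
# below are assumptions -- verify against your tenant's API reference.
import requests

TENANT_URL = "https://mytenant.ca.portal.cloudappsecurity.com"  # placeholder tenant URL
API_TOKEN = "..."  # generated under Settings > Security extensions > API tokens

def discovered_ai_apps(min_users: int = 1) -> list[dict]:
    """Return discovered cloud apps tagged as generative AI."""
    resp = requests.post(
        f"{TENANT_URL}/api/v1/discovery/discovered_apps/",  # assumed endpoint
        headers={"Authorization": f"Token {API_TOKEN}"},
        json={"filters": {"tag": {"eq": "Generative AI"}}},  # assumed filter shape
        timeout=30,
    )
    resp.raise_for_status()
    return [a for a in resp.json().get("data", []) if a.get("userCount", 0) >= min_users]

for app in discovered_ai_apps():
    print(app.get("name"), app.get("userCount"))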

When we reconvened, Defender had catalogued fourteen different generative AI tools in active use across 42 people. Not fourteen logins. Fourteen distinct products. Consumer ChatGPT was the biggest. Free Gemini was second. A half-dozen specialty tools I’d never heard of. Four image generators. One code assistant the firm didn’t realize was reading files from the junior associate’s local drive. And one chatbot.

“There’s a chatbot on our website?” Daniel asked. I showed him the telemetry. The marketing lead had installed it through a WordPress plugin the previous summer to help visitors book consultations. It had been answering roughly six questions a day for nine months without anyone watching what it said. “How the hell.” That was a moment. We came back to that chatbot thirty-six hours later, and it was the reason the disclosure clause ended up where it did. Keep reading.

The pattern looked identical to what Samsung reported in April 2023. Three separate leaks inside twenty days through consumer ChatGPT, including proprietary semiconductor source code and confidential meeting transcripts. Samsung is a 250,000-person enterprise with a security team. Daniel’s firm had 42 people and a single contractor. The probability space wasn’t different; only the blast radius was.

I drew a two-by-two on his office whiteboard. Data sensitivity on one axis, tool approval on the other. Every AI tool Defender had found went into one of four quadrants. I’ve drawn this grid in every policy engagement since, and it’s ended up with a name. Daniel’s fourteen tools populated the grid like this.

The FC Shadow AI Risk Matrix

Daniel’s 14 tools at week one, by data sensitivity and tool approval.

  • Q1 LOW RISK (sanctioned tool, low data sensitivity): marketing copy in Copilot, public research in Claude Enterprise. Daniel week one: 0 of 14.
  • Q2 MANAGED RISK (sanctioned tool, high data sensitivity): client matters in Copilot with Purview DLP, contracts in Claude Enterprise with DPA. Daniel week one: 0 of 14.
  • Q3 GROWING PROBLEM (unsanctioned tool, low data sensitivity): consumer ChatGPT for drafting, free Gemini, four image generators, a niche code assistant. Daniel week one: 9 of 14.
  • Q4 CRITICAL EXPOSURE (unsanctioned tool, high data sensitivity): consumer ChatGPT handling client matter notes, the WordPress chatbot on the public site. The Samsung pattern. Daniel week one: 5 of 14.

FC Shadow AI Risk Matrix applied to the February 2026 engagement. Source: Fusion Computing.

Zero tools sanctioned. Fourteen unsanctioned. Five of those touching confidential or privileged material. The matrix stopped being an abstract framework and started being Daniel’s problem list, ranked.
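
The matrix is simple enough to run as code. Below is a minimal sketch of it as a triage function in Python; the quadrant labels come from the grid above, while the data structures and sample entries are illustrative, not output from any Fusion Computing tooling.

# A minimal sketch of the FC Shadow AI Risk Matrix as a triage function.
# Quadrant labels come from the matrix above; the structures and sample
# entries are illustrative only.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    sanctioned: bool        # is it on the tier-one list?
    high_sensitivity: bool  # does telemetry show confidential/privileged data?

def quadrant(tool: AITool) -> str:
    if tool.sanctioned:
        return "Q2 MANAGED RISK" if tool.high_sensitivity else "Q1 LOW RISK"
    return "Q4 CRITICAL EXPOSURE" if tool.high_sensitivity else "Q3 GROWING PROBLEM"

week_one = [
    AITool("Consumer ChatGPT (client matter notes)", False, True),
    AITool("WordPress chatbot", False, True),
    AITool("Free Gemini", False, False),
]

# Rank the problem list: Q4 sorts first.
for t in sorted(week_one, key=quadrant, reverse=True):
    print(quadrant(t), "-", t.name)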

How likely was this to end in a breach? The question isn’t rhetorical. IBM’s 2025 Cost of a Data Breach Report studied organizations exactly like his: one in five experienced a breach linked to shadow AI in 2025, and those breaches added USD $670,000 to the average cost. 63% had no AI governance policy in place when the incident occurred, and 97% of organizations reporting AI-related breaches lacked proper AI access controls. Where a shadow-AI breach occurred, it exposed customer PII 65% of the time.

What Daniel's Associates Were Actually Pasting

Top 5 data types found in week-one Defender prompt telemetry:

  • Client matter notes: 71%
  • Discovery & deposition summaries: 52%
  • HR performance reviews: 38%
  • Contract redlines: 31%
  • Financial records & invoices: 24%

Client matter notes carried privileged information and were the firm's largest exposure. Source: FC engagement telemetry, February 2026.

Every professional services firm I’ve run this scan against since has produced a curve with the same shape. Client-facing material dominates.

Why the First Meeting Wasn’t About Tools

Daniel’s instinct was to ask which AI tool to buy. That’s almost every managing partner’s instinct, and it’s almost always wrong. Most AI governance writing online was produced by people who’ve never configured a Purview DLP rule. You can tell which ones. They open with tool recommendations. Real engagements open with a room.

I’ve watched enough of these break at rollout to know the pattern. IT alone writes something the rest of the organization routes around. HR alone ignores the data-security half. Legal alone produces language nobody can operate. I told Daniel that before we talked about Copilot or Claude, I wanted four seats at the next meeting. His IT contractor. His HR lead. His outside counsel on a dial-in. And Priya, a senior associate who was running discovery summaries through ChatGPT twice a week and had been for eight months.

Daniel asked how to get all four in one room on short notice. I sent him the email I send every client for this. He adapted it, forwarded it, and had the meeting booked by end of day.

ARTEFACT: STAKEHOLDER KICKOFF EMAIL

Copy, edit the two bracketed fields, send.

Subject: 30-minute working session: our AI Acceptable Use Policy (before [renewal / audit] date)

To: HR lead, IT lead, outside counsel, one line-of-business owner

Our cyber insurance renewal (or our upcoming client vendor attestation, or both) now asks whether we have a written AI Acceptable Use Policy and evidence that employees are trained on it. The honest answer right now is no and no. The renewal is due in [X] days.

I’ve asked our MSP to run a shadow AI discovery scan across the tenant. The output is attached. It surfaced [N] generative AI tools in active use across our team, including tools that are handling [type of sensitive material].

I’d like 30 minutes with the four of you before end of next week to align on three decisions: (1) who owns what in the policy, (2) which tools we sanction as tier-one, and (3) how we classify data for AI-tool use. The MSP will facilitate. Counsel will draft clause language against the decisions we make. HR will own the training cadence.

I’m not asking anyone to write a document in this meeting. I’m asking us to make three decisions that will let counsel produce a draft within two weeks.

The meeting is booked for 30 minutes and runs 45; mine always does. The stakeholder objections show up in the first ten. I sent Daniel the scripts for those too, because several of the eight objections below were going to arrive in his meeting whether he was ready for them or not.

ARTEFACT: OBJECTION PLAYBOOK (EIGHT COMMON, WITH RESPONSES)

  1. “We’re too small to need this.” IBM 2025: one in five organizations of every size had a shadow-AI-linked breach last year. Breaches cost $670K more when shadow AI was involved. Your cyber insurer already disagrees with you.
  2. “AIDA died, so this is speculation.” PIPEDA, Quebec Law 25, Alberta PIPA, and BC PIPA all still apply. The OPC’s nine generative AI principles (December 2023) and Ontario IPC-OHRC’s six principles (January 2026) are already being used by regulators to assess real-world practice. Federal statute didn’t pass; the floor still rose.
  3. “Legal wants to draft it alone.” Legal can draft the clause language. Legal can’t define the four data tiers without input from the people handling that data. Counsel produces better clauses faster when the operational decisions are already made.
  4. “HR wants to call this a training policy.” The training policy is concept seven, not concept one. Call it training and the tool tiers, data classification, prohibited uses, human-in-the-loop, and disclosure clauses never get written. The insurer wants all seven.
  5. “We can’t afford counsel review this quarter.” Budget 4-8 hours of counsel time against a pre-decided policy structure. That’s how the engagement stays affordable. Starting from a blank page with counsel is what runs the hours up.
  6. “Our clients aren’t asking yet.” They will be, and your FRFI, healthcare, or public-sector clients are asking first. Vendor attestation questionnaires running this quarter already include the question.
  7. “Copilot is all we need.” Copilot is one sanctioned tool. The policy is how you govern it plus the others people will use anyway. Without the policy, Copilot admin settings default to permissive and nobody catches it.
  8. “The insurer won’t actually check.” Carriers are filing AI-related exclusions at renewal and auditing claims against policy evidence at payout time. Assume the check happens at the worst possible moment.

Not sure which of these objections will hit hardest in your own room? Book a free 30-minute IT assessment →

The Six Briefings I Sent His Counsel

When Daniel’s lawyer dialled in for session two, she asked the question every Canadian privacy lawyer asks in 2026. “What does the regulator actually expect?” The answer is complicated because Canada has no federal AI statute in force. Bill C-27, which contained the Artificial Intelligence and Data Act, died on the Order Paper in January 2025. Minister Solomon confirmed in June that it wouldn’t return in its previous form, describing the preferred approach as “light, tight, right.”

I don’t write regulatory briefs. I send lawyers to other lawyers. For this engagement I sent counsel six links before the meeting. MLT Aikins’ 2026 AI governance briefing for the overview. McMillan’s “No-Go Zones” analysis of the OPC’s guidance on inappropriate purposes. Torys LLP’s post-Bill C-27 landscape piece for the state of play after AIDA collapsed. Blakes’ analysis of OSFI Guideline E-23, because one of Daniel’s firm’s clients is a federally regulated financial institution. Lerners LLP’s write-up of the Ontario IPC-OHRC joint principles that dropped in January 2026. And Osler’s 2026 privacy priorities report on how insurers are weaving AI governance into underwriting questionnaires.

The Blakes briefing mattered because Daniel’s FRFI client was the second pressure vector. That client’s vendor attestation questionnaire, due ten weeks out, asked for written evidence that Daniel’s firm governed AI use on client data. The attestation ran longer than Daniel’s insurance supplemental. His firm was now solving two problems on the same quarterly clock.

“How does this work with the Bank matter we’re running?” Daniel asked partway through session two, meaning his FRFI client. The honest answer was that his firm’s policy would be read by that client’s vendor risk team inside sixty days. Which meant the policy had to do double duty. It had to pass the cyber insurer’s supplemental and survive an FRFI vendor attestation. The first wanted existence. The second wanted evidence.

Three facts from those six briefings shaped the policy. The OPC published nine generative AI principles on December 7, 2023, jointly with all provincial and territorial regulators. Ontario’s IPC and OHRC released six principles for responsible AI use on January 21, 2026. OSFI published final Guideline E-23 Model Risk Management on September 11, 2025, effective May 1, 2027. E-23 binds FRFIs directly. Every vendor in an FRFI supply chain feels it through third-party risk management. Fasken’s March 2026 privacy update and BLG’s “Turning Point for AI in Canada” piece both map the sectoral implications. KPMG Canada’s 2026 governance maturity survey shows how far behind Canadian organizations sit on operationalizing any of this.

None of the six law firms would have written Daniel’s policy for him. They all write clause language once the scope, data classification, and operational controls have been decided. That’s the handoff between my seat and counsel’s seat. I’m describing the handoff, not crossing the line.

Daniel’s lawyer summarized the regulatory position in one sentence. No statute, but a rising floor of guidance, with an effective enforcement regime about to land through cyber insurance renewals and vendor attestations. She was right. Both landed on Daniel inside the same ninety days.

The Moffatt Conversation

The WordPress chatbot was still answering questions while we ran session two. Defender had flagged it. Daniel’s marketing lead had installed it the previous summer and nobody had thought about it since. I pulled up the chat logs. The chatbot’s job was answering questions about consultation bookings. Over nine months it had drifted: it had started offering opinions about case strengths, fee ranges, and intake procedures. The fee ranges were wrong. The case-strength opinions weren’t grounded in anything. It had been talking to about 180 prospective clients a month.

I pointed Daniel at Moffatt v. Air Canada. On February 14, 2024, the BC Civil Resolution Tribunal held Air Canada liable for misinformation its AI chatbot had given a customer. Jake Moffatt booked a ticket after a family death. The chatbot told him he could claim a bereavement-fare discount retroactively. That policy didn’t exist. The tribunal rejected Air Canada’s argument that the chatbot was a separate entity and awarded C$812.02. McCarthy Tétrault’s analysis and the American Bar Association both cover the ruling in detail.

“We’ve been giving legal opinions through that thing?” Daniel asked. Yes. To about 180 people a month. For nine months.

He muted the chatbot within the hour. We put it back online five weeks later, sanctioned and scoped, with the disclosure clause in the policy doing the work that had been missing. The dollar amount in Moffatt is almost a punchline: eight hundred and twelve dollars. The precedent isn’t. It’s the first Canadian case law confirming that an organization is responsible for everything an AI system speaking on its behalf says to a customer. Every Canadian SMB with a customer-facing AI surface (chatbot, email assistant, quote generator) owns what the AI produces.

The Seven Concepts, in the Order They Showed Up

Over three working sessions, seven concepts ended up in the policy. They didn’t arrive in the order I’d have drawn them up in advance. They arrived in the order Daniel’s firm’s specifics forced them. Each concept below shows the moment it surfaced, followed by the clause skeleton we handed to counsel. Copy these. Bracketed fields are yours to fill.

Concept 1: Scope and Stakeholders (Session one, first thirty minutes)

Priya hadn’t been sure why she was in the meeting. By minute ten she understood. This concept closed fast because everyone in the room agreed they should each own a piece. IT owned controls. HR owned training and attestation. Counsel owned clause language. Daniel owned final sign-off. Priya owned being the practice’s voice on what workflows broke if the policy went too far.

CLAUSE SKELETON, CONCEPT 1

Section 1. Scope and Ownership. This policy applies to all [employees, contractors, partners] of [Organization] who access Organization data or systems. It governs the use of all artificial intelligence tools, whether provided by [Organization] or obtained by the individual. Operational ownership rests with [Role, typically IT lead or vCIO]. Training and attestation rest with [Role, typically HR lead]. Clause language and legal interpretation rest with [Role, typically General Counsel or outside counsel]. Strategic sign-off rests with [Role, typically Managing Partner or CEO]. Review of this policy occurs quarterly for the tool list and annually for the full document.

Concept 2: Tool Tiers (Session one, rest of the meeting)

I pushed for three tiers instead of the “approved list” Daniel’s broker seemed to want. Sanctioned tools work on any data that fits the classification in Concept 3. Conditional tools work on specific data for specific use cases with a documented approval. Prohibited tools don’t touch company data, period. The tier model closed the meeting because it gave everyone something to do next. IT had to define sanctioned. HR had to define the request path. Counsel had to define the words.

CLAUSE SKELETON, CONCEPT 2

Section 2. Tool Tiers. AI tools are classified in three tiers. Sanctioned: [list current tier-one tools, e.g., Microsoft 365 Copilot, Claude Enterprise, ChatGPT Enterprise with executed DPA]. Conditional: tools that may be used for specific, pre-approved workflows on specific data classes, subject to documented approval from [Role]. Prohibited: all other AI tools, including consumer-tier versions of sanctioned products. Employees may request a tier change via [defined request path, typically an internal ticket with business case]. Requests will be reviewed within [X] business days.

Concept 3: Data Classification (Session two, nearly broke the room)

Priya’s pushback came in session two. “So I can’t run a discovery summary through anything anymore?” She’d been doing it twice a week for eight months and it had cut her review time by a third. The honest answer was not “no,” it was “not until we sanction a tool that can.” We spent ninety minutes classifying data into four tiers, mapping each tier to which tool tier could touch it, and translating Priya’s actual workflow into the new model. At the end of those ninety minutes she had a sanctioned path (Copilot with Purview DLP) for the low-sensitivity portion of her discovery work and a manual-only path for the privileged portion. Her productivity gain survived. That’s the point.

A policy that bans every AI tool fails the same way a policy that allows everything fails. Employees route around it. The operational job isn’t picking winners. It’s three things. Classify the data so people know what they can and can’t paste. Name a short list of tier-one approved tools. Give employees a fast path to request a new one. Everything else, the DLP rules, the training cadence, the incident response plan, hangs off those three decisions.

CLAUSE SKELETON, CONCEPT 3

Section 3. Data Classification. [Organization] classifies data in four tiers. Public: information intended for external distribution. May be used with any AI tool tier. Internal: business information not intended for external distribution. May be used with Sanctioned tools only. Confidential: client data, employee personal information, financial details, contracts. May be used only with Sanctioned tools operating under an executed Data Processing Agreement. Restricted: [define, typically privileged communications, PHI, regulated financial data, client records subject to contractual AI restrictions]. May not be used with any AI tool without case-by-case approval from [Role, typically General Counsel].
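
Concepts 2 and 3 together form a lookup table, which is exactly how they get enforced later in DLP rules and helpdesk tooling. A minimal sketch in Python, using the tier names from the clause skeletons; the function and the counsel-approval flag are illustrative, not part of any policy template.

# Sketch: the Concept 2 tool tiers and Concept 3 data tiers as one
# lookup, so a script can answer "can I paste this there?"
# Tier names mirror the clause skeletons; everything else is illustrative.
from enum import Enum

class ToolTier(Enum):
    SANCTIONED = "sanctioned"
    CONDITIONAL = "conditional"
    PROHIBITED = "prohibited"

class DataTier(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"  # Sanctioned tools with an executed DPA only
    RESTRICTED = "restricted"      # case-by-case approval, typically General Counsel

# Which tool tiers may touch which data tiers, per Sections 2 and 3.
ALLOWED = {
    DataTier.PUBLIC: set(ToolTier),               # Section 3: any tool tier
    DataTier.INTERNAL: {ToolTier.SANCTIONED},
    DataTier.CONFIDENTIAL: {ToolTier.SANCTIONED}, # and only under a DPA
    DataTier.RESTRICTED: set(),                   # nothing without approval
}

def may_use(data: DataTier, tool: ToolTier, counsel_approved: bool = False) -> bool:
    """Answer 'can I paste this there?' under the policy sketch above."""
    if data is DataTier.RESTRICTED:
        return counsel_approved
    return tool in ALLOWED[data]

assert may_use(DataTier.INTERNAL, ToolTier.SANCTIONED)
assert not may_use(DataTier.CONFIDENTIAL, ToolTier.CONDITIONAL)
assert not may_use(DataTier.RESTRICTED, ToolTier.SANCTIONED)  # no approval given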

Concept 4: Prohibited Uses (Session two, closing)

Categories, not lists. Long prohibited-use lists read as exhaustive. Categories read as interpretable. The short list we landed on: regulated data into non-DPA tools, binding decisions without human review, AI-generated customer-facing content without disclosure, AI outputs in privileged contexts without counsel review.

CLAUSE SKELETON, CONCEPT 4

Section 4. Prohibited Uses. Regardless of tool tier, the following uses of AI are prohibited: (a) input of Restricted data into any AI tool without written approval from [Role]; (b) use of AI outputs to make binding decisions affecting clients, employees, or counterparties without documented human review; (c) publication of AI-generated customer-facing content without review and without applicable disclosure; (d) use of AI on matters subject to solicitor-client privilege, litigation hold, or regulatory confidentiality without counsel review; (e) use of AI tools not in the Sanctioned or Conditional tiers.

Concept 5: Human-in-the-Loop (Session three, opening)

Daniel’s list for this firm was specific to their practice. Hiring, firing, performance reviews, fee decisions over a threshold, any advice to clients carrying the firm’s name. Priya pushed back on the fee threshold. We landed on a dollar figure. AI may draft; a human must approve before anything leaves the firm.

CLAUSE SKELETON, CONCEPT 5

Section 5. Human Oversight. AI tools may assist in analysis, drafting, summarization, and research. A designated human must review and approve before any of the following: (a) any communication to clients, counterparties, or regulators; (b) hiring, termination, or performance-management decisions; (c) pricing or fee decisions above [$ threshold]; (d) advice on matters subject to professional regulation; (e) any output where [Organization] is represented as the source.

Concept 6: Disclosure Expectations (Session three, after Moffatt)

Three scenarios. AI-generated customer-facing content gets human review before it leaves. Automated decisions affecting an individual carry disclosure on request. The chatbot carries a one-line disclosure that it’s an AI assistant and defers substantive questions to a human. OPC’s “Openness” principle maps onto this clause. Moffatt is why the language is this specific.

CLAUSE SKELETON, CONCEPT 6

Section 6. Disclosure. Where AI is used to generate content that will be delivered to clients, counterparties, or the public, [Organization] stands behind the output as its own and reviews it accordingly. Automated decisions affecting a specific individual must be disclosed to that individual on request. Any customer-facing conversational AI (including website chatbots, email reply assistants, and quote generators) must display a clear statement that the user is interacting with an AI system and must defer substantive inquiries to a human representative.

Concept 7: Review Cadence and Incident Response Tie-In (Session three, closing)

Quarterly tool list, annual full document, explicit link to the firm’s incident response plan. Daniel now chairs the quarterly review himself for fifteen minutes inside his regular IT steering meeting.

CLAUSE SKELETON, CONCEPT 7

Section 7. Review and Incident Response. The Sanctioned tool list is reviewed quarterly by [Role]. The full policy is reviewed annually. AI-related incidents, including inadvertent input of Restricted data, public AI outputs found to be inaccurate, or discovery of unsanctioned AI tools in active use, are triaged under [Organization’s] incident response plan. All AI-related incidents are recorded in the incident log regardless of severity.
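
For the incident log itself, a flat record per incident is enough at a firm this size. The shape below is an assumption, not a Fusion Computing or Microsoft schema; the three incident kinds mirror the clause.

# Illustrative shape for the Section 7 incident log entry; field names
# are assumptions, not a vendor schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    kind: str               # "restricted-data-input" | "inaccurate-public-output" | "unsanctioned-tool"
    tool: str               # which AI product was involved
    summary: str
    severity: str = "low"   # logged regardless of severity, per Section 7
    opened: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    triaged_under_irp: bool = False  # flipped when the IR plan picks it up

log: list[AIIncident] = []
log.append(AIIncident("unsanctioned-tool", "Free Gemini",
                      "Defender flagged new unsanctioned usage in marketing."))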

The tools we sanctioned (Mike’s opinion, short)

For a Canadian SMB on Microsoft 365, default your tier-one sanctioned list to three tools. Microsoft 365 Copilot because the data stays in your tenant. Claude Enterprise because its retention defaults and DPA terms are the cleanest in the market. ChatGPT Enterprise with executed DPA for teams already on OpenAI. Skip consumer versions of any of the three. Skip the long tail of AI startups without DPAs. Daniel’s firm landed here, and none of my 2026 engagements have needed a different list.

What We Turned On, in What Order

The policy signed on a Friday. The following Monday I started the controls rollout. Most of the tools a Canadian SMB needs are inside the Microsoft 365 Business Premium or E5 license they’re already paying for. The MSP’s job is to turn them on, wire them together, and map each policy clause onto a specific control. Here’s the order we used for Daniel’s firm, and the order I’ve used in every engagement since.

The Three-Week Controls Rollout

What we turned on, when, and what it caught:

  • Day 1: Conditional Access. MFA plus compliant device, scoped to Copilot, Claude, and ChatGPT Enterprise. Caught: 2 unmanaged logins on day one.
  • Day 3: Purview AI Hub. Central visibility plus DLP for Copilot Studio prompt blocking. Caught: 14 PII-type prompts in the first week.
  • Day 7: SharePoint labeling. Auto-label by the 4-tier classification; Copilot respects labels. Caught: 860 docs auto-reclassified.
  • Week 2: Defender enforcement. Discovery moves to policy. 10 of 14 tools blocked; 3 sanctioned; 1 conditional. Caught: 71% drop in unsanctioned usage.
  • Week 3: All-hands training. 60-minute session plus quarterly reminder cadence locked in. Caught: 41 of 42 attested; insurer evidence ready.

Cumulative result at day 60: unsanctioned AI usage down 82% vs the week-one baseline. Source: FC engagement telemetry, February to April 2026.

Every row above is a single action your MSP can take in a typical Canadian M365 tenant. Licensing is included in Business Premium or E5.
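
The day-1 Conditional Access step, as one example, is a single Microsoft Graph call. POST /identity/conditionalAccess/policies is a real Graph v1.0 endpoint (it requires the Policy.ReadWrite.ConditionalAccess permission); the app IDs and token acquisition below are placeholders for your tenant, and starting in report-only mode is deliberate.

# Sketch of the day-1 Conditional Access step as a Microsoft Graph call.
# The endpoint and payload schema are Graph v1.0; app IDs and the token
# are placeholders for your tenant.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # acquire via MSAL with Policy.ReadWrite.ConditionalAccess

policy = {
    "displayName": "AI AUP - MFA + compliant device for sanctioned AI tools",
    "state": "enabledForReportingButNotEnforced",  # report-only first, then enable
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {
            # Placeholder app IDs: look up the service principals for
            # M365 Copilot, Claude Enterprise, and ChatGPT Enterprise in Entra.
            "includeApplications": ["<copilot-app-id>", "<claude-app-id>", "<chatgpt-app-id>"]
        },
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "AND", "builtInControls": ["mfa", "compliantDevice"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy", resp.json()["id"])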

The policy made the firm’s tool tiers real, but the tier assignments depended on how each AI product actually handles enterprise data. Consumer versions of any of the four tools below fail most dimensions by default. Enterprise and Workspace editions pass most of them with proper configuration. This was the comparison we ran with Daniel’s IT contractor the afternoon after session two.

How the Four Major AI Tools Handle Canadian Business Data

Enterprise and Workspace editions only. Consumer accounts fail most of these by default.

Dimension | M365 Copilot | ChatGPT Enterprise | Claude Enterprise | Gemini (Workspace)
In-tenant data residency | YES | PARTIAL | PARTIAL | YES
Zero-retention available | YES | CONFIG | DEFAULT | YES
Not used for training | YES | YES | YES | YES
Native SSO / admin | ENTRA | SAML | SAML | WORKSPACE
Signed Canadian DPA | YES | YES | YES | YES

Consumer ChatGPT, personal Claude, and personal Gemini accounts fail most dimensions by default. Source: vendor documentation, Q2 2026.

All four enterprise tiers support DPAs and zero-training. They differ mainly on data residency and admin integration.

The Twelve-Week Runbook

This is the calendar Daniel ran against. Every engagement I’ve done in 2026 has stayed inside these twelve weeks or has compressed by skipping one of the drafting cycles. If you’re reading this before you’ve started, copy the table and your engagement starts this Monday.

Week | What happens | Who does it
1 | Turn on Defender for Cloud Apps shadow AI discovery. Pull tenant admin access for MSP. Identify stakeholders. | IT lead + MSP
2 | Review shadow AI telemetry. Send stakeholder kickoff email. Book session one. | Managing partner / exec sponsor
3 | Session one: scope, stakeholders, tool tiers (Concepts 1 and 2). Book session two. | Full stakeholder group; MSP facilitates
4 | Send counsel the six law firm briefings. Pre-read for session two. | MSP + outside counsel
5 | Session two: data classification and prohibited uses (Concepts 3 and 4). Book session three. | Full stakeholder group + counsel
6 | Session three: human-in-the-loop, disclosure, review cadence (Concepts 5, 6, 7). | Full stakeholder group + counsel
7 | Counsel drafts policy language against clause skeletons. Exec sponsor reviews. | Counsel + exec sponsor
8 | Policy signed. MSP begins controls rollout (day 1: Conditional Access). | MSP
9 | Purview AI Hub, DLP for Copilot Studio, SharePoint auto-labeling online. | MSP
10 | Defender moves from discovery to enforcement. Shadow tools blocked or sanctioned. | MSP
11 | All-hands training (60 min). Attestation records captured. Quarterly reminder cadence scheduled. | HR + MSP
12 | First quarterly review standup. Insurer evidence packet assembled (policy + training report + tool list). Ready for renewal or audit. | Exec sponsor + IT lead

The seven concepts align to the major voluntary frameworks at the clause level. Scope maps to ISO/IEC 42001 Clause 4 and NIST AI RMF Govern. Data classification maps to 42001 Clause 7 and NIST Map. Human oversight maps to 42001 Clause 8 and NIST Manage. If your FRFI client, enterprise buyer, or insurer asks for evidence of framework alignment, the mapping is already there. You don’t need ISO 42001 certification at 40 employees. You need the clauses your policy already contains.
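
That crosswalk is worth keeping as data rather than prose, so the week-12 evidence packet can print it on demand. A short Python sketch with the three mappings named above; extend it to the remaining concepts as counsel maps them.

# The concept-to-framework crosswalk as data. The three mappings are the
# ones named in the paragraph above; the structure itself is illustrative.
CROSSWALK = {
    "Scope and stakeholders (Concept 1)": ("ISO/IEC 42001 Clause 4", "NIST AI RMF Govern"),
    "Data classification (Concept 3)": ("ISO/IEC 42001 Clause 7", "NIST AI RMF Map"),
    "Human oversight (Concept 5)": ("ISO/IEC 42001 Clause 8", "NIST AI RMF Manage"),
}

for concept, (iso, nist) in CROSSWALK.items():
    print(f"{concept}: {iso} / {nist}")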

What It Cost Them

Daniel’s ninety-day engagement landed at roughly $14,000 in MSP fees (12 weeks of facilitation, controls configuration, training delivery, and insurer evidence package), plus approximately 8 hours of outside counsel time on clause language at his counsel’s regular billing rate. Internal time from Daniel, Priya, the IT contractor, and HR came in at about 22 hours combined across the twelve weeks.

Ongoing costs are modest. Microsoft Purview add-on licensing runs roughly $5 to $12 per user per month depending on tier. Defender for Cloud Apps is already included in E5 and several Business Premium add-on bundles. Quarterly training is a one-hour block. The total ongoing cost for a 40-person firm sits around $300 to $600 per month, not including baseline Microsoft 365 licensing.

Compare that to a shadow-AI breach. IBM’s 2025 figure, again: an additional USD $670,000 when shadow AI is involved. The math doesn’t take long.

Six Weeks Later, Another Call

Daniel’s insurance renewal closed at the same premium as the year before. His FRFI client’s vendor attestation went through three weeks after that, with Daniel’s signed policy, training attestations, and Defender telemetry as evidence. Defender showed unsanctioned AI usage down 82% from the week-one baseline. Two employees had used the request path to move tools from conditional to sanctioned. Priya’s discovery workflow ran faster under the sanctioned Copilot path than it had under consumer ChatGPT. And the marketing lead’s chatbot was back online, sanctioned, scoped, and carrying the disclosure line the policy now required.

Six weeks after the policy signed, I took another call. Different firm, different industry, same questionnaire. The carrier had rolled the AI supplemental into broader applications. I’m now running one of these a week. The 2027 version of the questionnaire probably won’t ask whether you have a policy. It’ll ask for a policy audit. The difference between those two asks is the difference between a document and a practice. The firms that sign policies this quarter will be in a position to answer the 2027 question. The firms that don’t will be explaining why.

In 2026, the de facto AI regulator for Canadian small and mid-sized business isn’t the federal government. It’s your cyber insurance carrier and your FRFI clients. Every renewal questionnaire I’ve seen since Q4 2025 asks whether the organization has a written AI acceptable use policy and whether employees are trained on it. Every FRFI vendor attestation I’ve seen since January asks for more. IAPP has tracked the insurer shift for readers who want the industry perspective.

Do These Three Things Before End of Week

If you’re reading this and you don’t have a policy, do these three things before Friday. None of them require budget approval or outside counsel.

  1. Turn on Defender for Cloud Apps shadow AI discovery. If you’re on Microsoft 365 Business Premium or E5, the capability is included. Microsoft’s own how-to lives at the Entra Global Secure Access shadow AI discovery page. Five business days of telemetry tells you what you’re dealing with.
  2. Forward the stakeholder kickoff email. Scroll up, copy the email artefact, edit the two bracketed fields (renewal date and stakeholder list), send it before lunch. The meeting you book from that email is the first real step. Everything else hangs off it.
  3. Count back ninety days from your next cyber insurance renewal. That date is your deadline. If it’s closer than ninety days, your engagement compresses but still finishes in time. If it’s further than ninety days, you have runway to do this properly. Write the deadline in your calendar now (the short snippet after this list does the arithmetic for you).
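
The countback in item 3 is trivial arithmetic, but writing it down makes the deadline concrete. A three-line Python sketch; the renewal date is a placeholder.

# Item 3 as arithmetic: your start deadline is renewal minus 90 days.
from datetime import date, timedelta

renewal = date(2026, 6, 30)  # placeholder: your next renewal date
deadline = renewal - timedelta(days=90)
runway = (deadline - date.today()).days

print(f"Engagement must start by {deadline} ({runway} days of runway)")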

If you do those three things this week, you’ll know by next Monday whether you have a shadow-AI problem, you’ll have a stakeholder meeting on the calendar, and you’ll have a deadline. That’s more than 63% of breached organizations had in 2025, and it’s the only part of this work you can’t delegate to an MSP.

How Fusion Computing Can Help

Fusion Computing has helped Canadian businesses build and manage secure IT infrastructure since 2012 across Toronto, Hamilton, and Metro Vancouver. The AI governance engagements I run all look like Daniel’s. Shadow AI discovery with Defender for Cloud Apps. A stakeholder meeting that puts IT, HR, legal, and leadership in one room. A policy co-authored with your own counsel. Microsoft controls configured in the order that matches the clauses. Training cadence that keeps the document from going stale.

I don’t write your legal language. That belongs with your counsel. What I do is the operational half: running the stakeholder meeting, surfacing the shadow AI, sending your lawyer the best Canadian analyses I can find, and making sure the policy gets operationalized in the same quarter it’s written.

TRUSTED BY CANADIAN BUSINESSES SINCE 2012

CISSP-Certified  •  Microsoft Solutions Partner  •  CompTIA Managed Services Trustmark  •  50 Best Managed IT Companies (2024)

Fusion Computing helps businesses deploy AI governance and the technical controls behind it across Toronto and the GTA, Hamilton, and Metro Vancouver. Our virtual CIO, managed IT, and AI consulting practices share the same playbook.

Schedule Your Free Assessment

Frequently Asked Questions

What should be in an AI acceptable use policy?

Seven operational concepts: scope and stakeholders; tool tiers (sanctioned, conditional, prohibited); data classification; prohibited uses; human-in-the-loop requirements for high-impact decisions; disclosure expectations for customer-facing AI; and a review cadence tied to the organization’s incident response plan. Specific clause language should be reviewed by qualified Canadian privacy counsel before the policy is published.

Is an AI policy legally required in Canada in 2026?

No federal AI statute is in force. Bill C-27 (containing AIDA) collapsed in January 2025. However, PIPEDA, provincial privacy laws (Quebec Law 25, Alberta PIPA, BC PIPA), the OPC’s nine generative AI principles from December 2023, and Ontario IPC-OHRC’s six principles from January 2026 all apply to AI use that involves personal information. Federally regulated financial institutions also face OSFI Guideline E-23, effective May 2027.

How much does an AI policy engagement cost for a small Canadian firm?

For a 40-person firm on Microsoft 365 Business Premium or E5, a typical twelve-week engagement runs roughly $12,000 to $18,000 in MSP fees, 6 to 10 hours of outside counsel time, and 20 to 30 hours of internal time distributed across IT, HR, legal coordination, and leadership. Ongoing costs sit around $300 to $600 per month for a 40-person firm on top of baseline Microsoft 365 licensing.

What is shadow AI?

The use of AI tools, typically consumer ChatGPT, Gemini, or Claude accounts, by employees without IT, security, or legal approval. IBM’s 2025 Cost of a Data Breach Report found 20% of breaches involved shadow AI, adding USD $670,000 per incident and exposing customer PII in 65% of those cases. The cause is usually a gap between what employees need to do their jobs and what the organization has sanctioned.

Can employees legally use ChatGPT for work in Canada?

Yes, if the use complies with PIPEDA, the employer’s policy, and the organization’s data-handling obligations. Pasting customer, employee, or patient personal information into a consumer ChatGPT account generally violates PIPEDA’s consent and safeguards principles unless a proper data processing agreement is in place through an enterprise or team plan.

How is Microsoft Copilot different from ChatGPT for business data?

Microsoft 365 Copilot processes data inside the organization’s Microsoft 365 tenant boundary. Prompts and responses don’t leave the tenant and aren’t used to train foundation models. Consumer ChatGPT sends data to OpenAI’s infrastructure. ChatGPT Enterprise offers a DPA and zero-training option but requires separate provisioning, SSO, and admin controls before it behaves similarly to Copilot inside a tenant.

What are the OPC’s nine generative AI principles?

On December 7, 2023, the Office of the Privacy Commissioner of Canada and provincial and territorial regulators jointly issued nine principles: Legal Authority and Consent, Appropriate Purposes, Necessity and Proportionality, Openness, Accountability, Individual Access, Limiting Collection Use and Disclosure, Accuracy, and Safeguards. They extend existing Canadian privacy law to generative AI use and are being used by regulators to interpret PIPEDA in AI contexts.

What are the IPC-OHRC six principles for AI?

On January 21, 2026, Ontario’s Information and Privacy Commissioner and the Ontario Human Rights Commission jointly released six principles for responsible AI use: Validity and Reliability, Safety, Privacy-Protective, Human Rights-Affirming, Transparent, and Accountable. They apply directly to Ontario’s broader public sector and are considered a practical framework by Canadian law firms for any organization operating in Ontario.

What was the Moffatt v. Air Canada ruling?

On February 14, 2024, the BC Civil Resolution Tribunal found Air Canada liable for C$812.02 in damages after its AI chatbot gave Jake Moffatt incorrect information about bereavement fares. The tribunal rejected Air Canada’s argument that the chatbot was a separate entity, establishing the first Canadian precedent that organizations remain responsible for all outputs produced by AI systems operating on their behalf.

How do MSPs help enforce an AI acceptable use policy?

MSPs configure the technical controls the policy depends on: shadow AI discovery via Microsoft Defender for Cloud Apps (which catalogs over 400 generative AI applications), prompt-level DLP via Microsoft Purview AI Hub, Conditional Access for approved AI tools, data classification in SharePoint and OneDrive, and employee training tied to the policy’s review cadence. The policy defines the rules; the MSP enforces them.

What should a stakeholder kickoff meeting for an AI policy cover?

Three decisions in 30 to 45 minutes: who owns what (scope and stakeholders), which tools are tier-one sanctioned, and how data is classified for AI-tool use. Attendees: IT lead, HR lead, outside counsel (dial-in), one line-of-business owner. The MSP facilitates. Counsel drafts clause language after the decisions are made. Avoid trying to draft anything in the meeting itself.

How long does it take to deploy an AI policy and the controls behind it?

Twelve weeks is the typical runway for a Canadian SMB on Microsoft 365 Business Premium or E5. Three weeks of discovery and stakeholder sessions, four weeks of drafting and counsel review, four weeks of controls rollout, one week of training and first quarterly review. Engagements compress to eight weeks when the client has already run Defender discovery; they extend past twelve weeks when the client is FRFI-adjacent or regulated.


Fusion Computing has provided managed IT, cybersecurity, and AI consulting to Canadian businesses since 2012. Led by a CISSP-certified team, Fusion supports organizations with 10 to 150 employees from Toronto, Hamilton, and Metro Vancouver.

93% of issues resolved on the first call. Named one of Canada’s 50 Best Managed IT Companies two years running.

100 King Street West, Suite 5700
Toronto, ON M5X 1C7
(416) 566-2845
1 888 541 1611