AI Knowledge Management for Canadian SMBs: A 90-Day Playbook


The first time a 60-person Canadian professional services firm called me about “an AI knowledge management project,” the operations lead opened with a sentence I have heard four other ways since: our best knowledge is locked in three people’s heads and a SharePoint nobody curates, and we are about to lose one of those three people to retirement.

That is the actual problem. Not whether to buy Copilot. Not whether to fine-tune a model. Whether the firm can answer its own questions when the senior people are not in the room.

Over the last eighteen months Fusion has scoped, deployed, or remediated AI knowledge platforms for Canadian companies with between 30 and 200 employees in professional services, construction, light manufacturing, and healthcare-adjacent verticals. The pattern is consistent enough that I can hand you the playbook we now run, in roughly the order it actually works. Read this as a 90-day plan, not a vision deck.

Key takeaways

  • Permissions before knowledge architecture: Copilot inherits SharePoint permissions before it inherits anything you build in the knowledge layer. The Pre-Copilot SharePoint Audit is the prerequisite cleanup.

  • The 30 to 200 employee Canadian SMB has 3 to 4 high-value knowledge sources. The mistake is indexing all of SharePoint on day one.
  • A 90-day pilot is enough to prove value if you scope to one workflow, one user group, and one source set.
  • Permission-aware retrieval, audit logging, and Canadian data residency are non-negotiable under PIPEDA and Quebec Law 25. Build them in week one, not month six.
  • Adoption is an operations problem, not a product problem. The post-pilot owner matters more than the model choice.
  • The right next step is a structured 90-minute readiness review, not a tool demo.

What “AI knowledge management” actually means for a 30 to 200 person company

The operations lead at that 60-person firm did not use the phrase “retrieval-augmented generation” once during our first call, and she should not have. Industry vendors describe AI knowledge management as a stack: connectors, an embedding model, a vector index, an LLM, a chat surface. That description is accurate and useless, because nobody runs a project that way.

For a 30 to 200 person Canadian company, the working definition I use on engagements is shorter: a system that lets a new hire ask a question and get the same answer one of your three senior people would give, with the source cited, and without that senior person getting interrupted. If a tool does that, it is doing knowledge management. If it does not, the architecture diagram does not matter.
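
To make that contract concrete, here is a minimal Python sketch of what "an answer with the source cited" means at the code level. The Passage type and search_index helper are hypothetical placeholders, not a real platform API; the retriever and LLM call are platform-specific.

```python
# A minimal sketch of the "answer with the source cited" contract.
# Passage and search_index are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str        # e.g. a SharePoint URL the user can open
    last_modified: str

def search_index(question: str, user_id: str) -> list[Passage]:
    """Placeholder: return top passages this user is permitted to see."""
    raise NotImplementedError

def answer_with_citation(question: str, user_id: str) -> str:
    passages = search_index(question, user_id)
    if not passages:
        return "No grounded answer found. Ask the source owner."
    best = passages[0]
    # The LLM synthesis step is elided; the contract is answer text
    # plus a clickable source the senior partner would recognize.
    return f"{best.text}\n\nSource: {best.source} (updated {best.last_modified})"
```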

Where the knowledge actually lives (and the trap of indexing everything)

When I sit down with the operations team in week one, I ask the same question I asked at the 60-person firm: where does someone go today, in priority order, when they cannot find an answer? The list never comes out as “SharePoint.” It comes out as a person, a Slack channel, a folder inside SharePoint that one person owns, and a spreadsheet on someone’s desktop.

I have watched a 110-person engineering firm spend six weeks indexing every byte of their SharePoint estate and end up with an assistant that hallucinated quotes from policies that had been rescinded in 2019. The fix was not a better model. The fix was deleting two thirds of the index and pointing it at the four folders the partners actually maintain. Adoption tripled inside ten days.
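
The fix generalizes to a default-deny allowlist. Here is a sketch, with hypothetical folder paths, of the rule that keeps everything outside the 3 to 4 curated sources out of the index.

```python
# A sketch of the default-deny indexing rule, with hypothetical
# folder paths; connector configuration is platform-specific.
INDEX_ALLOWLIST = [
    "sharepoint://firm/Policies-Current",
    "sharepoint://firm/Client-Templates",
    "sharepoint://firm/Engagement-Summaries",
    "sharepoint://firm/Partner-Precedents",
]

def should_index(document_path: str) -> bool:
    # Anything outside the curated allowlist never enters the index,
    # including policies rescinded in 2019 and stray spreadsheets.
    return any(document_path.startswith(prefix) for prefix in INDEX_ALLOWLIST)
```

Expanding the index later means adding a line here, deliberately, not flipping a crawl-everything switch.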

The 90-day playbook (Days 0 to 90)

Below is the timeline Fusion now runs for a 30 to 200 employee Canadian deployment. It is deliberately compressed. Anything longer becomes a procurement project that never ships.

Phase 1 (days 0 to 14): Audit & scope
  Deliverable: Source inventory, permission map, use-case shortlist, residency & compliance scope, success criteria
  Who is involved: Ops lead, IT lead, one senior SME, MSP architect
  Success metric: Signed scope: 1 user group, 3 to 4 sources, 5 priority questions to answer

Phase 2 (days 15 to 30): Connect high-value sources
  Deliverable: Permission-aware connectors live, embeddings indexed, audit logging on, Canadian region confirmed
  Who is involved: MSP, IT lead, source owners (one per source)
  Success metric: 5 priority questions answered correctly with sources cited; 0 permission leaks in red-team test

Phase 3 (days 31 to 60): First workflows live
  Deliverable: Pilot group (5 to 15 users) onboarded, 2 named workflows in daily use, weekly tuning loop
  Who is involved: Pilot user group, MSP, internal champion
  Success metric: ≥60% weekly active users in pilot group; ≥65% of repeat questions answered without escalation

Phase 4 (days 61 to 90): Tune & expand
  Deliverable: Retrieval tuning, prompt library v1, second user group onboarded, governance & review cadence written
  Who is involved: Internal champion, MSP, HR or compliance lead
  Success metric: Time-saved baseline measured; documented owner; 12-month roadmap signed

The two phases that get cut on bad deployments are days 0 to 14 and days 61 to 90. Skip the audit and you index everything, including the salary spreadsheet a manager left in the wrong folder. Skip the tuning and you ship something that works for two weeks and then drifts. Run both phases anyway.
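
The "zero permission leaks" success metric from days 15 to 30 is testable, not aspirational. Below is a pytest-style sketch of the red-team check; the ask() client, response.citations, and readable_by() are hypothetical names, and the point is the assertion, not the API.

```python
# A sketch of the red-team permission check. The ask() client and
# its response shape are hypothetical.
RESTRICTED_PROBES = [
    ("salary bands", "junior_analyst"),
    ("partner compensation", "new_hire"),
    ("termination memo", "contractor"),
]

def test_no_permission_leaks(ask):
    for query, user in RESTRICTED_PROBES:
        response = ask(query, as_user=user)
        # Pass condition: the assistant returns nothing, or cites only
        # documents the probing user could open directly.
        leaked = [c for c in response.citations if not c.readable_by(user)]
        assert not leaked, f"Leak: {user} retrieved restricted content for {query!r}"
```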

Scope Your 90-Day Pilot →

What this actually looks like when Fusion delivers it

The architecture under the playbook is what we publish as a productized custom AI platform: a private workspace that ingests your sources, indexes them with permission awareness, runs retrieval through a Canadian-resident inference layer, and surfaces answers inside Microsoft 365.

“The 90-day playbook was the most disciplined IT engagement we’ve done. Day 1 we scoped 4 question types from real staff. Day 90 the platform was answering them with citations our compliance team accepted. Our average new-hire ramp time on policy questions dropped from 6 weeks to 2.”

VP Operations, 130-person Greater Toronto Area healthcare-services firm. Engagement scoped through the Fusion 90-day AI knowledge management playbook in Q1 2026.

The point of productizing it is that the 30 to 200 person company does not need to assemble five vendors and hope the integration holds.

I will say the practitioner version directly: most platforms on the market today were built for the United States enterprise market and treat Canadian data residency as a configuration toggle.

For a Canadian SMB selling into regulated buyers, residency is not a toggle. It is the contract clause your client’s lawyer will redline first. Our custom business AI platform is the version of the stack we now run for clients who need to answer that question on the first call.

The before and after, illustrated

The 60-person professional services firm I opened with is a useful illustration. The figures that follow are anonymized and represent the kind of outcome a properly scoped 90-day deployment can produce; treat them as a target, not a guarantee.

Before the engagement, three senior staff fielded an average of 200 plus internal questions per week from billable team members: where is the template, what did we do for client X last year, what is the policy on this, who handled the manufacturing engagement in 2023. Each interruption cost roughly 15 minutes of senior partner time.

By day 90 of the deployment, the AI assistant was answering an estimated 65% of repeat questions directly, with citations the partner could verify in two clicks. The partners reclaimed an estimated 8 plus hours per week of focus time. The retiring senior’s knowledge was captured in the index before her last day rather than after.
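
The back-of-envelope arithmetic behind those figures is simple enough to check. The inputs below are the estimates stated above, not measured data.

```python
# Rough arithmetic behind the illustrative before-and-after figures.
questions_per_week = 200
minutes_per_interruption = 15
deflection_rate = 0.65   # share of repeat questions the assistant answers
senior_staff = 3

hours_reclaimed = questions_per_week * minutes_per_interruption * deflection_rate / 60
per_person = hours_reclaimed / senior_staff
print(f"{hours_reclaimed:.1f} h/week total, ~{per_person:.1f} h per senior")
# 32.5 h/week across three seniors, ~10.8 h each; the "8 plus hours"
# in the text is the conservative end of that estimate.
```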

What 30 to 200 person Canadian companies get wrong

I have seen the same six mistakes often enough to name them. Avoiding any one of them protects the project; avoiding all six is the difference between a 90-day pilot that ships and an 18-month project that does not.

  1. Indexing the entire SharePoint estate on day one. The model returns confident answers from documents nobody has maintained since 2019. The fix is to scope to 3 to 4 curated sources and earn the right to expand.
  2. No permission-aware retrieval. If the AI can read everything every user can read, it will quote a salary memo to the wrong person on a Tuesday. Permission-aware retrieval is week-one work, not month-six work. A minimal filter sketch follows this list.
  3. No audit logging. When the first user asks the AI something embarrassing, you need to know it happened. When a regulator or insurer asks, you need a log. Build the log on day one.
  4. Treating it like a chatbot project. Chatbot projects ship a UI and stop. Knowledge management projects ship an owner. Without a named internal champion who runs a weekly review, retrieval quality drifts inside a quarter.
  5. No residency or compliance scoping. A Canadian SMB selling into regulated buyers (legal, finance, healthcare-adjacent, public sector) will get a vendor questionnaire that asks where the data sits. If the answer is “US-East,” the deal slows. Scope residency on day one.
  6. Skipping the readiness assessment. The most expensive deployments I have seen are the ones that skipped the 90-minute readiness conversation and started with a tool. The readiness step is what catches the four issues above before they cost a quarter.
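
For mistake 2, the core mechanism is small. Here is a minimal sketch of permission-aware retrieval as a post-retrieval filter; the acl_groups field and the retriever object are hypothetical names, and a production system should also filter at query time, not only afterward.

```python
# A minimal sketch of permission-aware retrieval as a filter step.
# acl_groups and retriever are hypothetical names.
def permission_filter(passages, user_groups: set[str]):
    # Keep only passages whose source document the user could open directly.
    return [p for p in passages if p.acl_groups & user_groups]

def retrieve(question: str, user_groups: set[str], retriever, k: int = 5):
    candidates = retriever.search(question, top_k=k * 4)  # over-fetch, then filter
    return permission_filter(candidates, user_groups)[:k]
```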

Where to start: the 90-minute readiness review

The right first step for a 30 to 200 person Canadian company is not a tool demo and not an RFP. It is a structured 90-minute readiness conversation that produces a one-page scope: which user group, which sources, which compliance constraints, which success metrics. We run that as our AI readiness assessment, and it is the input that every later phase of the 90-day playbook depends on.

From the practitioner

“In every Canadian SMB deployment between 30 and 200 employees, the knowledge worth capturing lives in three or four sources, owned by two or three people, and the most common mistake is treating the project as a tooling decision instead of an ownership decision. The companies that succeed pick a named internal owner before they pick a model. The ones that struggle pick a model and hope an owner emerges.”

Mike Pearlstein, CISSP, MSc AI · CEO, Fusion Computing

Microsoft Solutions Partner: Modern Work + Security
CISSP-Led Practice: Mike Pearlstein, MSc AI
4.9 ★ Rating: Verified Google reviews
Canada 50 Best 2024: MSP recognition
Serving Canadian SMBs from Toronto, Hamilton, and Metro Vancouver. Healthcare, professional services, manufacturing, and financial services since 2012.

Frequently asked questions

How is this different from a corporate wiki?

A wiki stores knowledge passively and waits for someone to search it. An AI knowledge platform actively retrieves, synthesizes, and answers in natural language with citations to the original source. The deeper difference is that wikis decay because nobody owns curation; an AI platform with retrieval tuning gets re-indexed continuously and surfaces stale or contradictory content in its weekly review.

Do we have to clean up our SharePoint first?

No, but you do have to scope what gets indexed. The mistake is indexing everything. The right move is to point the platform at the 3 to 4 folders or systems that real people actually maintain today, prove value in 90 days, and then expand. Wholesale SharePoint cleanup is a 12-month project that blocks every AI deployment that waits for it.

Who maintains it after the 90 days?

A named internal owner runs a weekly 30-minute review of low-confidence answers and source freshness; the MSP runs the technical retrieval tuning and quarterly governance review. Without a named owner the platform drifts inside a quarter. The owner needs to care about answer quality. Technical depth is optional.

Will employees actually use it?

Adoption is a function of three things: the assistant being embedded where they already work (Microsoft 365, Teams, the browser), the answers being fast and source-cited, and a defined workflow they tried in week one. Pilots scoped to a single workflow with 5 to 15 users routinely hit 60% weekly active users by day 60.

How does it stay current as policies change?

Two mechanisms: continuous re-indexing of the connected sources (so when a policy is updated in SharePoint, the platform sees it within hours), and a weekly review where the internal owner flags low-confidence or contradictory answers. The combination is what stops the platform from drifting into stale or rescinded content.
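
The re-indexing mechanism reduces to a staleness comparison. A sketch, with get_source_mtime and reindex as hypothetical callables; real connector APIs vary by source system.

```python
# A sketch of the staleness loop behind continuous re-indexing.
def refresh(index_entries, get_source_mtime, reindex):
    for entry in index_entries:
        # Source changed after we last embedded it: re-chunk and re-embed.
        if get_source_mtime(entry.source) > entry.indexed_at:
            reindex(entry)

# A scheduler runs refresh() on a cadence tight enough to meet the
# "sees it within hours" target, e.g. every few hours.
```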

What about Quebec Law 25 and PIPEDA?

Both apply to most 30 to 200 person Canadian companies. The platform must support Canadian data residency, permission-aware retrieval, audit logging, and a documented retention policy. These are not optional add-ons; they are scoped on day one of the assessment and validated before pilot users see the system.
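
The audit-logging requirement is concrete: one append-only record per question, written before the answer is returned. A minimal sketch; the field names are assumptions, and "ca-central-1" stands in for whichever Canadian region the deployment actually uses.

```python
# A minimal audit-log sketch: one append-only JSONL record per question.
import json, time, uuid

def write_audit_record(log_file, user_id, question, cited_sources,
                       region="ca-central-1"):
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "question": question,
        "cited_sources": cited_sources,  # exactly what the model was shown
        "storage_region": region,
    }
    log_file.write(json.dumps(record) + "\n")  # append-only JSONL
```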

How much does a 90-day deployment cost?

The full 90-day scope, including assessment, source connection, pilot group onboarding, and the first round of tuning, is fixed-fee for a defined user group and source set. The number depends on source-system complexity and user count, but the structure is predictable: assessment is a one-time fee, deployment is fixed-fee for the scoped pilot, and ongoing platform fees are per-user monthly. The readiness review is the input that lets us quote precisely.

Can we use this with Microsoft 365 Copilot, or is it a replacement?

It is complementary. Copilot answers from your tenant content with Microsoft Graph permissions. Our platform sits beside Copilot and answers from sources Copilot does not see well: Slack threads, structured policy databases, third-party PDFs outside the tenant, and content where Copilot oversharing risk has not been audited. The two systems coexist and we route questions to whichever produces a higher-quality grounded answer.
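
The routing logic can start as simply as comparing grounding signals. A sketch with hypothetical ask_copilot and ask_platform clients, using citation count as a crude grounding score; in practice most routing is rule-based by source type first, with grounding as the tiebreaker.

```python
# A sketch of the routing idea between Copilot and the platform.
# ask_copilot and ask_platform are hypothetical clients.
def route(question, ask_copilot, ask_platform):
    copilot = ask_copilot(question)    # tenant content, Graph permissions
    platform = ask_platform(question)  # Slack, external PDFs, policy DBs

    def grounding(answer):
        return len(answer.citations) if answer and answer.citations else 0

    return max((copilot, platform), key=grounding)
```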

What does the 90-day rollout cost a 50 to 150 person Canadian SMB?

The all-in scope band for a 50 to 150 person SMB is $35,000 to $65,000 CAD for the 90-day deployment, plus $1,200 to $2,800 CAD per month run-rate (inference, vector storage, Canadian-region hosting, change protocol). Anything below that range usually means the scope is missing the question-set discipline, the OCR layer, or the freshness pipeline. Anything above tends to mean the indexing surface was over-scoped on day one.
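
For budgeting, the first-year arithmetic from those bands, assuming twelve months of run-rate on top of the one-time deployment fee:

```python
# First-year cost arithmetic from the quoted bands (assumption:
# 12 months of run-rate on top of the deployment fee).
deploy_low, deploy_high = 35_000, 65_000    # CAD, 90-day deployment
monthly_low, monthly_high = 1_200, 2_800    # CAD per month run-rate

year_one_low = deploy_low + 12 * monthly_low      # 49,400
year_one_high = deploy_high + 12 * monthly_high   # 98,600
print(f"Year-one all-in: ${year_one_low:,} to ${year_one_high:,} CAD")
```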

What happens to our knowledge if we change the underlying LLM later?

The retrieval layer (vector index, chunker, source documents) is portable across LLMs. Switching the inference model is a one-line configuration change in our deployment pattern. We have moved clients from Azure OpenAI GPT-4o to Claude Sonnet to Llama-on-AWS-Bedrock without re-indexing source content. The lock-in risk is in the chunking decisions and the embedding model, not the LLM that consumes the retrieved context.
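
A sketch of the swap pattern: the generator is a config value, the embedding model is not. The model names here are illustrative and llm_client is a hypothetical wrapper, not a specific SDK.

```python
# A sketch of the "one-line model swap" pattern.
CONFIG = {
    "embedding_model": "text-embedding-3-large",  # changing this forces re-indexing
    "inference_model": "claude-sonnet",           # changing this does not
}

def generate_answer(question, retrieved_passages, llm_client):
    # The retrieved context is identical regardless of which LLM consumes it.
    prompt = f"Answer only from these sources:\n{retrieved_passages}\n\nQ: {question}"
    return llm_client.complete(model=CONFIG["inference_model"], prompt=prompt)
```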

How do we measure ROI on AI knowledge management at 90 days?

Three measurements at the Day-90 review: question-resolution time (baseline vs platform-resolved), helpdesk ticket reduction in the categories we indexed, and new-hire ramp time on covered topics. We baseline these in days 0 to 14 and the Day-90 readout reports the delta. Our typical client ranges: question-resolution time drops 60 to 80 percent on covered topics, helpdesk tickets drop 25 to 40 percent in indexed categories, new-hire ramp drops 50 to 70 percent on policy and procedure questions.
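
A sketch of the Day-90 readout arithmetic. The numbers below are placeholders chosen inside the ranges quoted above; they are illustrations, not client data.

```python
# Day-90 readout: deltas against the days 0 to 14 baseline.
baseline = {"resolution_min": 25.0, "tickets_per_wk": 40, "ramp_weeks": 6.0}
day_90   = {"resolution_min": 7.0,  "tickets_per_wk": 28, "ramp_weeks": 2.5}

for metric, before in baseline.items():
    after = day_90[metric]
    drop = 100 * (before - after) / before
    print(f"{metric}: {before} -> {after} ({drop:.0f}% drop)")
# resolution 72%, tickets 30%, ramp 58%: each inside the quoted band.
```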

Does the platform handle French content for our Quebec operations?

Yes. Modern multilingual embedding models (Cohere Embed Multilingual, OpenAI text-embedding-3, Azure AI Search built-in) handle French and English content in the same vector space, so a French question retrieves an English document and vice versa. The inference layer can be instructed to answer in the question language or the user’s preferred language. For Quebec operations under Law 25 we configure the inference layer to keep all content inside Canadian-region storage and respond by default in French.
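
The cross-lingual behavior reduces to one property: the French question and the English document embed near each other in the same vector space. A sketch with embed() as a placeholder for whichever multilingual embedding API is in use.

```python
# A sketch of cross-lingual retrieval in one multilingual vector space.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def embed(text: str) -> list[float]:
    """Placeholder: call a multilingual embedding model here."""
    raise NotImplementedError

question_fr = "Quelle est la politique de télétravail?"
doc_en = "Remote work policy: employees may work remotely up to three days per week."
# With a multilingual model, cosine(embed(question_fr), embed(doc_en))
# scores high enough that the English policy is retrieved for the
# French question, and vice versa.
```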

Ready to scope your 90 days?

If your operations team is already telling you that the best knowledge in the company is locked in three people’s heads, the right next step is the 90-minute readiness review, not a tool comparison spreadsheet. We run that conversation against the same playbook above, produce a one-page scope, and either deploy it on our custom business AI platform or hand the scope back to you to run yourself. If you are still deciding between custom AI and just buying more Copilot licenses, see Custom AI vs Microsoft 365 Copilot: when each one wins.

Book a 90-Minute AI Readiness Review

Fusion Computing has provided managed IT, cybersecurity, and AI consulting to Canadian businesses since 2012. Led by a CISSP-certified team, Fusion supports organizations with 10 to 150 employees from Toronto, Hamilton, and Metro Vancouver.

93% of issues resolved on the first call. Named one of Canada’s 50 Best Managed IT Companies two years running.

100 King Street West, Suite 5700
Toronto, ON M5X 1C7
(416) 566-2845
1 888 541 1611