Shane White

AI Enablement: Where to Actually Start

It All Begins Here

Every organization is somewhere on an AI adoption curve, whether they chose to be or not. This guide is for the people trying to make it intentional.


Why small orgs need their own approach

You can't copy the enterprise playbook. You don't have twenty people to run an AI program, and you don't need them. But you also can't let adoption happen by accident. In a small organization, what happens by accident sticks. One person's workflow becomes the team's workflow. One rushed tool choice gets locked in for two years.

The upside is speed. A small team can pilot, train, and adjust in weeks. The risk is moving fast in several directions at once.

What enablement actually means

AI enablement isn't an IT project. It's three jobs happening at the same time: picking and maintaining the right tools, helping every staff member build real skill, and turning that new capability into something the organization can use. Skip any one of them and you get a familiar result. Tools nobody opens. Trained people with nothing to work on. A strategy deck that never touches anyone's actual job.

[Figure: The three jobs of AI enablement — three columns, Tools (select and maintain), Training (teach and support), and Strategy (advise and align), resting on a wide foundation bar labeled Reliability and safety.]

Reliability and safety are the foundation

"Reliability and safety" can sound like a brake. It's the opposite. It's what turns a promising pilot into something an organization can run on.

In practice, that means a few specific things. No critical workflow depends on one person's account or password. Clear rules about what data goes into which tools. Documentation a colleague could actually use. A plan for when something breaks.

None of that is glamorous. It's also the difference between AI adoption that sticks and adoption that falls apart the first time the person running it takes a vacation.

What's ahead

The rest of the guide walks through a phased roadmap: the first 90 days, the first six months, the first year, and what comes after. Each phase covers what to deliver and what to avoid.


The Three Jobs of an AI Enablement Lead


AI enablement encompasses three responsibilities: tools, training, and strategy. If you don’t tackle all three, you aren’t getting the most out of the tools available to you and your team.


Here's what each of the three jobs actually looks like.

1. Tools: the technologist

This is the most visible job, which is why it absorbs more attention than it deserves. The work is picking a small number of AI systems that fit your team, setting them up properly, and keeping them running.

Good tool selection starts with the workflow, not the product page. One job listing for an AI enablement lead puts it plainly: help teams clearly define the problems they are trying to solve before recommending anything (Greenhouse).

The failure mode is buying breadth. A small organization doesn't need seven tools. It needs two or three that people actually open.

2. Training: the educator

Most of the adoption work is human, not technical. Oracle, citing Gallup data, notes that only 6% of workers feel very comfortable using AI in their roles (Oracle). Closing that gap is not a one-time workshop. It's baseline training, role-specific coaching, practice on real work, and a feedback loop for what's landing.

A sharp reframe from Moveworks is worth holding onto: what reads as cultural resistance is usually an information problem (Moveworks). People don't know what's changing, what's safe to try, or who to ask. Training is how you fix that.

What training usually gives people versus what they actually need:

  • A tool demo ("here are the features") vs. what good use looks like in my role ("show me someone like me using it")

  • A policy document ("read these twelve pages") vs. what is safe to try ("clear green light, clear red light")

  • A single training session ("one hour, then back to work") vs. someone to ask when stuck ("a person, not a ticket queue")

  • A login and a link ("you are on your own") vs. a reason it matters to me ("what problem of mine this solves")

3. Strategy: the advisor

The third job is turning AI capability into organizational direction. What should leadership invest in? Which workflows are worth automating? What's the line between a useful tool and one the team is dependent on?

This has become more urgent as agents move from novelty to standard. Google Cloud's 2026 report describes the shift as employees moving from routine execution to higher-level strategic direction (Google). That shift doesn't happen without someone at the table who can see the whole picture and advise accordingly.

Why balance matters

Each job fails without the others. Great tools and no training gives you shelf-ware. Strong training and weak tools gives you frustration. Good tools, good training, and no strategy gives you a year of effort in no particular direction.

The enablement lead's real job is keeping all three moving at the same time.


The First 90 Days: Foundations


The first 90 days are about learning the ground, picking the right tools, and getting every person started.


The first 90 days set the shape of everything that follows. The goal is not to deploy AI across the whole organization. It's to understand the ground, pick a small set of tools, and get every person started on a real learning path.

Four things happen in parallel.

  • Assess what you have

  • Select and set up tools

  • Train the first wave

  • Build relationships

Assess what you have

Before picking tools, audit the workflows and systems you already run. You are looking for two things: where people lose time today, and where AI could plausibly help without adding risk.

A simple audit loop works well.

How to audit your existing tools, in five steps:

  1. List the tools in use.

  2. Sit with the people using them and watch real work.

  3. Note where the hours actually go.

  4. Spot the patterns across teams.

  5. Rank candidates by effort and risk.

You repeat this for each team. It takes longer than you expect. It's also where most of the real insight comes from: the gap between how a tool is supposed to be used and how it is actually used is a reliable flag for good AI use cases. The smaller the gap, the easier AI will be to implement.

AI is an amplifier. A well-defined SOP can be reliably augmented with AI; a poorly defined one will simply go wrong faster.
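The last step of the audit loop, ranking by effort and risk, can be sketched in a few lines. Everything below is illustrative: the workflow names, the scores, and the payoff heuristic are made-up examples, not a prescribed formula.

```python
# Rank candidate AI use cases: favor high time savings, low effort, low risk.
# Scores run 1 (low) to 5 (high); hours are estimated time lost per week.
candidates = [
    {"workflow": "Friday status updates", "hours_per_week": 3.0, "effort": 2, "risk": 1},
    {"workflow": "Coach session notes",   "hours_per_week": 2.0, "effort": 2, "risk": 2},
    {"workflow": "Invoice processing",    "hours_per_week": 1.5, "effort": 4, "risk": 5},
]

def payoff(c):
    # Simple heuristic: time saved, discounted by effort and risk.
    return c["hours_per_week"] / (c["effort"] + c["risk"])

ranked = sorted(candidates, key=payoff, reverse=True)
for c in ranked:
    print(f'{c["workflow"]}: payoff {payoff(c):.2f}')
```

The exact formula matters less than the habit: making the trade-off explicit is what surfaces the high-risk, low-payoff ideas before anyone commits to them.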

Select and set up tools

Pick a small number of tools. For most small and medium organizations, that means one general-purpose AI assistant plus any domain-specific tools the work genuinely needs.

Set them up properly. That means real accounts (not personal ones), single sign-on if you have it, clear data rules about what can go into which tool, and a documented owner for each system.

Train the first wave

Every staff member gets a baseline session and a personal learning plan tied to their actual job. An accountant's plan should not look like a coach's plan.

The Axios job listing for this exact role puts it well: meet teams where they are, and provide tailored support based on their level of familiarity and confidence (Greenhouse). One-size-fits-all training ends up fitting no one.

Build relationships

Spend real time with the people whose workflows you are about to change. In this case that means coaches. You cannot prescribe AI use for work you don't understand, and the people doing the work know things you don't.


A worked example

Here's what this can look like in a workplace already using Claude Cowork, Monday.com, and the Microsoft Office suite including Teams.

Week 1 to 2: audit. You sit with two people from each team and watch them work. You notice the operations team spends several hours every Friday pulling project data out of Monday.com and writing status updates that get posted in Teams. Coaches keep their session notes in Word documents across scattered OneDrive folders. Nobody can find anything from last quarter.

Week 3 to 4: pick the first wins. You define two concrete use cases. First, a weekly status summary: Cowork reads the relevant Monday.com board and a set of Teams channels, then drafts the Friday update in a Word template. Second, notes cleanup: Cowork takes a folder of coach session notes and produces a structured summary by scholar. Both use Cowork's ability to read local files and move between applications without the person coordinating each step.

Week 5 to 6: set up and document. You configure Cowork with access to the specific OneDrive folders and Monday.com board that matter. You write a one-page runbook for each use case: what it does, what data it touches, who owns it, and what to do if it breaks. You set data rules, for example that scholar names are fine in Cowork but financial account numbers are not.
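Data rules like "scholar names are fine, financial account numbers are not" can live on the one-page runbook, but encoding them in a small checkable form keeps them unambiguous. A minimal sketch; the tool names and data categories are hypothetical stand-ins, not a real Cowork configuration:

```python
# What data categories each tool may receive. Anything not listed is denied.
DATA_RULES = {
    "claude_cowork": {"scholar_names", "session_notes", "project_status"},
    "monday_board":  {"project_status", "task_details"},
}

def allowed(tool, category):
    """True only if this data category is explicitly cleared for this tool."""
    return category in DATA_RULES.get(tool, set())

print(allowed("claude_cowork", "scholar_names"))       # allowed
print(allowed("claude_cowork", "financial_accounts"))  # denied
```

The design choice worth noting is the default: an unknown tool or an unlisted category is denied, so the safe path is the one that requires no thought.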

Week 7 to 10: train in waves. Operations team first, because they have the clearest problem and the most to gain. They see the Friday summary get drafted in ten minutes instead of three hours. Word spreads. Coaches come next, with a session focused on their specific notes workflow. Every staff member leaves with a personal learning plan: one thing to try this week, one thing to try next month.

Week 11 to 12: measure and hand off. You track two things: how many staff members are actively using the tools, and how much time the two use cases are actually saving. You write the first version of the strategy note for the CEO. You make sure both workflows could be picked up by someone else if you disappeared tomorrow.
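Those two day-90 numbers fit in a spreadsheet, but writing the calculation down forces a definition of "actively using." A sketch with invented figures for illustration:

```python
# Day-90 snapshot: adoption rate plus hours saved by the two pilot use cases.
staff_total = 14
staff_active = 11  # definition: used a tool on real work in the last two weeks

# (hours per week before, hours per week after) for each use case
use_cases = {
    "friday_status_update": (3.0, 0.5),
    "notes_cleanup":        (2.0, 0.75),
}

adoption_rate = staff_active / staff_total
hours_saved_weekly = sum(before - after for before, after in use_cases.values())

print(f"Adoption: {adoption_rate:.0%}")
print(f"Hours saved per week: {hours_saved_weekly:.2f}")
```

Two numbers are enough at this stage; the point of measuring is the handoff conversation, not a dashboard.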

What good looks like at day 90

  • You have finished a real assessment of the tech stack and team-by-team readiness.

  • Tools are selected, configured, and documented.

  • Every staff member has had a training session and a personal learning plan.

  • You have built working relationships with every team.


From Integration to Fluency


Once the foundations hold, the work shifts. Integration, scaling, and fluency are three distinct stages, each raising the bar on what the organization can do.


The first 90 days are about building the foundation. What comes after is the longer, less visible work: turning a working pilot into shared capability, and shared capability into something the organization just does.

Integration (Months 1–3)

Every staff member is using AI on real work, and a few routine tasks are now handled by agents they manage. The infrastructure — documentation, shared prompts, a written strategy note for the CEO — is solid enough that the work survives anyone's departure.

Scaling (Months 3–12)

Use spreads horizontally. Internal champions emerge, the prompt library sees real traffic, and governance moves from ad hoc rules to documented policy, now informed by months of actual usage data.

Fluency (Year 1+)

AI literacy is baked into onboarding, scholars graduate with meaningful AI skills, and the enablement role itself shifts — less teaching, more strategy. The organization no longer needs someone to "run AI adoption." It runs it as a matter of course.

The through-line

[Figure: The AI Enablement Cycle — Integration (AI implemented on tasks), Scaling (AI as the default approach for new processes), and Fluency (crystallizing strategy), arranged as a continuous clockwise cycle. Each new wave of AI technology restarts the cycle.]

Each stage is defined less by what you add than by what becomes automatic. Integration makes AI part of the daily workflow. Scaling makes it part of the team's operating rhythm. Fluency makes it part of the organization's identity.

The enablement lead's work tracks the same curve — from setting things up, to spreading what works, to eventually making the role itself less necessary.

As new technologies in the AI space emerge, the cycle must repeat, but as AI literacy on the team grows and the tools get better, the timeline becomes shorter.


Six Principles and Their Shadows

A small set of principles can steady the work. Each one has a shadow — the failure mode that shows up when the principle is missing.


Everything before this has been about what to do. This post is about what to hold onto while doing it.

The principles aren't abstract. Each has a shadow — a specific, repeatable failure mode that shows up in organizations that skip it. If a pilot is stalling, the stall is usually one of these.

[Figure: Six principles and their shadows — a two-column pairing of each guiding principle with the failure mode that shows up when it is missing.]

1. Start with workflows, not tools

Find where time is actually being lost before picking software. Watch people work. Name the friction. Then ask what could plausibly help.

The shadow: Buying tools before defining use cases. A year later, the team has licenses for four AI products and can't point to a single workflow that changed.

2. Enablement is change management first, technology second

The hard part is never the install. It's getting people to change how they work, which takes trust, time, and someone paying attention.

The shadow: Treating AI as an IT project. The rollout email lands, the logins get distributed, and nothing else happens. "We adopted AI" quietly becomes "we have accounts."

3. Train for the role, not the tool

A coach, an accountant, and a program manager need different skills. A single generic training leaves all three underserved.

The shadow: One-size-fits-all training. Attendance is high, the evaluations are polite, and nobody changes anything about how they work the next week.

4. Build for the second person in the role

No critical workflow should depend on one person's account, one person's prompts, or one person's memory. Write things down as you go, not at the end.

The shadow: Undocumented heroics. A single staff member becomes "the AI person." The moment they leave, change teams, or go on vacation, the capability disappears with them.

5. Personal productivity wins build organizational trust

Let people feel the value in their own work first. Only then is it reasonable to ask them to change how a team operates.

The shadow: Top-down mandates and login theater. Everyone has a ChatGPT account; usage numbers look fine on a dashboard; nothing has actually changed.

6. Guardrails are an enabler, not a brake

Clear rules about what data goes where, and what requires a human in the loop, let people use AI more confidently, not less.

The shadow: Shadow AI. Without guidance, people make their own judgments — often pasting work data into personal accounts, or avoiding AI entirely in places where it would have helped.

How the pairs work

Each shadow is what happens by default when the principle is absent. That's why lists of "things organizations get wrong" all look similar across industries: the wrongs are the gravity of the space. The principles are what it takes to push against it.

If something is stalling, the diagnostic is quick. Name the principle that's missing. Most of the time, it names itself.

A practical closing

High-level ideas like the ones laid out here are hard to keep in mind while you're actually working, so here are three practical stances about AI as a technology worth holding onto as you build this out.

  1. Prefer skills over general agents. Right now, building structured, well-scoped skills is a safer and more reliable approach than deploying open-ended general agents. Skills are testable, auditable, and their failure modes are smaller and more visible. A general agent drifts; a well-scoped skill doesn't.

  2. Build your own skills. Skills are buildable by anyone, and there's always organization-specific context worth encoding. It's also a security posture: depending on someone else's skill means trusting their prompts, their data handling, and their updates. Build your own. Review them. Own them.

  3. Remember what AI actually is. LLMs and related AI tools are transformers — at their best when turning one arrangement of information into another. If you design flows around the context you already have and the context you can reliably bring in, you'll get more accurate, more reliable output than by typing questions into a generic chat interface. Chat interfaces invite hallucination and random generation; context-rich flows push against it.
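The first stance can be made concrete. A well-scoped skill has a narrow contract you can test without a model in the loop. The sketch below is hypothetical: the function name, the scholar name, and the stub body are illustrative, and a real version would call your AI tool where marked.

```python
def summarize_session_notes(notes: str, scholar: str) -> str:
    """A scoped skill: turn raw session notes into a structured summary.

    Narrow contract: one scholar, one notes blob in, one summary out.
    The stub keeps the contract testable without a model in the loop.
    """
    if not notes.strip():
        raise ValueError("refusing to summarize empty notes")
    # --- the real AI tool call would go here ---
    summary = f"Summary for {scholar}: {len(notes.split())} words of notes reviewed."
    return summary

# The contract is auditable: you can assert on shape and on failure modes.
print(summarize_session_notes("Worked on fractions. Good progress.", "Jordan"))
```

A general agent has no contract this small, which is exactly why its failures are harder to see and harder to test for.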
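The third stance is the easiest to demonstrate. Instead of typing a question into a chat box, assemble the context you already have and ask the model to transform it. All names, fields, and the template below are illustrative:

```python
def build_status_prompt(board_rows, channel_messages, template):
    """Assemble a context-rich prompt: the model transforms supplied
    information instead of generating from memory."""
    context = ["## Project board"]
    context += [f"- {r['task']}: {r['status']}" for r in board_rows]
    context += ["## Team channel"]
    context += [f"- {m}" for m in channel_messages]
    context.append(f"## Task\nRewrite the above into this template:\n{template}")
    return "\n".join(context)

prompt = build_status_prompt(
    board_rows=[{"task": "Site launch", "status": "on track"}],
    channel_messages=["Launch moved to Friday"],
    template="One paragraph per project, flag any date changes.",
)
print(prompt)
```

Everything the model needs is in the prompt, so there is nothing for it to invent; that is the practical difference between a context-rich flow and a bare chat question.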

Taken together: the organization that wins with AI isn't the one with the most licenses or the flashiest agents. It's the one that builds structured skills on top of its real context — and keeps running the cycle as the technology shifts.
