Six Principles and Their Shadows
A small set of principles can steady the work. Each one has a shadow — the failure mode that shows up when the principle is missing.
Everything before this has been about what to do. This post is about what to hold onto while doing it.
The principles aren't abstract, and neither are the shadows: each one is a specific, repeatable failure mode that shows up in organizations that skip the corresponding principle. If a pilot is stalling, the stall is usually one of these.
1. Start with workflows, not tools
Find where time is actually being lost before picking software. Watch people work. Name the friction. Then ask what could plausibly help.
The shadow: Buying tools before defining use cases. A year later, the team has licenses for four AI products and can't point to a single workflow that changed.
2. Enablement is change management first, technology second
The hard part is never the install. It's getting people to change how they work, which takes trust, time, and someone paying attention.
The shadow: Treating AI as an IT project. The rollout email lands, the logins get distributed, and nothing else happens. "We adopted AI" quietly becomes "we have accounts."
3. Train for the role, not the tool
A coach, an accountant, and a program manager need different skills. A single generic training leaves all three underserved.
The shadow: One-size-fits-all training. Attendance is high, the evaluations are polite, and nobody changes anything about how they work the next week.
4. Build for the second person in the role
No critical workflow should depend on one person's account, one person's prompts, or one person's memory. Write things down as you go, not at the end.
The shadow: Undocumented heroics. A single staff member becomes "the AI person." The moment they leave, change teams, or go on vacation, the capability disappears with them.
5. Personal productivity wins build organizational trust
Let people feel the value in their own work first. Only then is it reasonable to ask them to change how a team operates.
The shadow: Top-down mandates and login theater. Everyone has a ChatGPT account; usage numbers look fine on a dashboard; nothing has actually changed.
6. Guardrails are an enabler, not a brake
Clear rules about what data goes where, and what requires a human in the loop, let people use AI more confidently, not less.
The shadow: Shadow AI. Without guidance, people make their own judgments — often pasting work data into personal accounts, or avoiding AI entirely in places where it would have helped.
How the pairs work
Each shadow is what happens by default when the principle is absent. That's why lists of "things organizations get wrong" all look similar across industries: the wrongs are the gravity of the space. The principles are what it takes to push against it.
If something is stalling, the diagnostic is quick. Name the principle that's missing. Most of the time, it names itself.
A practical closing
High-level principles like these don't stay in anyone's head during day-to-day work, so here are three practical stances about AI as a technology worth holding as you build this out.
Prefer skills over general agents. Right now, building structured, well-scoped skills is a safer and more reliable approach than deploying open-ended general agents. Skills are testable, auditable, and their failure modes are smaller and more visible. A general agent drifts; a well-scoped skill doesn't.
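To make "well-scoped skill" concrete, here is a minimal sketch. The names (`summarize_meeting_notes`, `MeetingNotes`, `MAX_WORDS`) are hypothetical and assume nothing about any particular AI product; the point is that a skill is just a fixed input shape and a fixed output contract, which makes it unit-testable in a way an open-ended agent is not.

```python
from dataclasses import dataclass

# Hypothetical illustration: a "skill" as a scoped, validated prompt
# template. The names are invented for this sketch, not from any product.

MAX_WORDS = 150  # the skill's one fixed output constraint


@dataclass
class MeetingNotes:
    attendees: list[str]
    raw_notes: str


def summarize_meeting_notes(notes: MeetingNotes) -> str:
    """Build the exact prompt this skill is allowed to send.

    The scope is fixed: one input shape, one output contract. Because the
    prompt is a pure function of its inputs, its failure modes are small
    and visible, and the prompt itself can be reviewed and tested.
    """
    if not notes.raw_notes.strip():
        raise ValueError("empty notes: nothing to summarize")
    return (
        f"Summarize the meeting notes below in under {MAX_WORDS} words.\n"
        f"Attendees: {', '.join(notes.attendees)}\n"
        f"Notes:\n{notes.raw_notes}"
    )


# A unit test can pin down exactly what the skill will ask for:
prompt = summarize_meeting_notes(
    MeetingNotes(attendees=["Ana", "Raj"], raw_notes="Agreed to ship Friday.")
)
```

Contrast this with a general agent, whose behavior is a moving target: there is no single prompt to review, so there is nothing this small to test or audit.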
Build your own skills. Skills are buildable by anyone, and there's always organization-specific context worth encoding. It's also a security posture: depending on someone else's skill means trusting their prompts, their data handling, and their updates. Build your own. Review them. Own them.
Remember what AI actually is. LLMs and related AI tools are, at bottom, transformers: at their best when turning one arrangement of information into another. If you design flows around the context you already have and the context you can reliably bring in, you'll get more accurate, more reliable output than by typing questions into a generic chat interface. Bare chat prompts invite hallucination and random generation; context-rich flows push against both.
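The contrast above can be sketched in a few lines. Everything here is hypothetical illustration (the ticket data and `build_grounded_prompt` are invented for this sketch): the difference is whether the model is asked to generate from nothing or to transform information you already hold.

```python
# A bare chat question gives the model nothing to ground on; it must
# guess or invent:
bare_question = "What were our top support issues last quarter?"

# A context-rich flow brings in data the organization already has, so the
# model is rearranging known information rather than generating from scratch.
support_tickets = [
    {"category": "billing", "count": 42},
    {"category": "login", "count": 17},
]


def build_grounded_prompt(tickets: list[dict]) -> str:
    """Assemble the request around real context, with an explicit
    instruction to use only that context."""
    lines = [f"- {t['category']}: {t['count']} tickets" for t in tickets]
    return (
        "Using ONLY the ticket counts below, rank last quarter's top "
        "support issues and note anything surprising.\n" + "\n".join(lines)
    )


grounded_prompt = build_grounded_prompt(support_tickets)
```

The two strings go to the same model, but only the second one constrains it to the facts you supplied, which is where the accuracy gain comes from.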
Taken together: the organization that wins with AI isn't the one with the most licenses or the flashiest agents. It's the one that builds structured skills on top of its real context — and keeps running the cycle as the technology shifts.