008 2026 · 04 · ai readiness · 6 min read

why most companies are not ready for ai yet

the problem is almost never the model. it is what sits underneath it.


every quarter, a new wave of companies decides this is the year they "do ai." most of them will not finish. not because the technology fails — because the system underneath was never ready in the first place.

i've sat in enough discovery sessions to recognize the pattern. the team picks a use case. they pick a tool. they pilot. the demo works. then six months later the project is quietly archived and a slide deck appears explaining what was learned.

what was learned is almost always the same thing. the data was not where it was supposed to be. the workflow was not actually documented. nobody owned the outcome. there was no measurement. the team did not adopt it.

so here is the test i now run before any model gets opened.

checklist: five preconditions

01 data: is the data captured, clean, and reachable inside one query?
02 process: is the workflow stable enough that you could write it on a napkin?
03 ownership: is there a single person whose name shows up next to the failure mode?
04 measurement: is success defined as a number, not a feeling?
05 adoption: will the team actually use this on monday morning, in their existing tools?
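the checklist above is binary on purpose: a precondition either holds or it doesn't. as a sketch only, here is one way to encode it as a gate. the `readiness_check` function and its field names are hypothetical, invented for illustration, not a real assessment tool:

```python
# the five preconditions, in the order they usually fail.
# names are illustrative, not a standard framework.
PRECONDITIONS = ["data", "process", "ownership", "measurement", "adoption"]

def readiness_check(answers: dict) -> list:
    """return the preconditions that fail; an empty list means proceed.
    any precondition not answered explicitly counts as a failure."""
    return [p for p in PRECONDITIONS if not answers.get(p, False)]

# example: clean data and a stable workflow, but no owner and no adoption plan
failures = readiness_check({
    "data": True,
    "process": True,
    "ownership": False,
    "measurement": True,
    "adoption": False,
})
print(failures)  # ['ownership', 'adoption']
```

the point of the default-to-failure behavior: if nobody can answer a question, that is itself the answer.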

if any of these fail, ai is the wrong starting point. you do not have an ai problem. you have a workflow problem dressed up as an ai problem.

the good news: every one of these is solvable without machine learning. and once they're solved, the question of "should we add intelligence here" becomes obvious — usually because the answer is no, automation is enough, and the team is already faster than they were a quarter ago.

takeaway

foundations before ai. fix the system. then — and only then — ask what should learn.