AI doesn't have a readiness problem. You might.
Thinking formed in practice, published as part of the Bearing & Course Points of View library.
The organisations moving fastest on AI are not the ones with the best models. They are the ones with the cleanest foundations. That distinction matters, and it is one that the current conversation around AI tends to obscure.
I was recently working with a client in the middle of a significant transformation. Modernising legacy platforms, untangling data, rebuilding the operational foundations that determine whether anything new actually holds. Unglamorous work. Important work. The kind that rarely appears in a board presentation but ultimately determines whether the more visible investments pay off.
Partway through, the conversation turned to AI. Not as a considered strategic question, but as an expectation. It is on every agenda at the moment. In this case, that expectation introduced a tension I see consistently in organisations at this stage of maturity: the conversation began moving toward what was possible while the underlying work was still very much in progress.
The question I asked was not what should we build. It was: are we in a position to make it work reliably once it is built? Because in practice, AI is less constrained by what it can do than by the environment it is introduced into. A capable model in an unreliable data environment does not produce capable outcomes. It produces confident-sounding outputs that cannot be traced, explained or trusted.
AI doesn't replace the foundational work. It inherits it.
A lens widely used at Thoughtworks tests six conditions that determine whether an organisation is genuinely ready to sustain AI, not just to demonstrate it. The six are findable, observable, reliable, explainable, secure and trusted. FOREST is the shorthand. It is not a framework. It is a set of questions that most teams can answer honestly without a maturity model or a consultant.
Findable: can your teams consistently locate and access the data they need without friction, or does it still depend on knowing the right system, the right person, the right workaround?
Observable: do you have clear visibility across how data flows and how decisions are made, or are you still piecing things together after the fact?
Reliable: does the environment behave consistently under real conditions, or does it start to break down as complexity increases?
Explainable: if a decision is made or an output changes, can someone explain why in terms the business is comfortable standing behind?
Secure: are controls, access and risks managed consistently across the environment, or does it vary by system, by data set, by team?
Trusted: do the people who would use the outputs of AI trust them enough to act on them, or are there still workarounds and quiet overrides built into the workflow?
Most teams know, without being asked formally, where they sit against these six. The honest answers tend to surface quickly. What the questions do is make it harder to proceed on optimism.
In the case of the client I mentioned, working through these questions produced something useful. The transformation already underway was directly addressing most of them. The work on data consistency, platform reliability and operational visibility was not pre-AI work. It was AI readiness work. It just was not labelled that way.
The organisations that will do well out of AI over the next decade are not the ones that moved first. They are the ones that understood early that the technology was not the constraint. The constraint was everything around it: the data it would rely on, the controls that would govern it, the people who would need to trust it, and the environment that would have to sustain it under real conditions.
AI is less constrained by what it can do than by the readiness of the organisation it is introduced into. That part is yours to solve.
