
Release is not the same as useful.

Thinking formed in practice, published as part of the Bearing & Course Points of View library.

Releasing more is assumed to mean more transparency, more innovation and better accountability. That assumption is doing a lot of work. It is also, in a significant number of cases, wrong.

Open data has become, in many organisations and in most governments, an objective in its own right. The act of release has displaced the question of what the release is for. Data gets published because publication has become the measure of progress, not because anyone has clearly defined what decisions the data should improve once it leaves the organisation that collected it.

That matters because availability and usability are not the same thing. A poorly structured spreadsheet does not become analytically useful because it is downloadable. A PDF containing fragmented contract information does not become comparable because it is accessible. In some cases, release creates false confidence. The data appears available, the system appears transparent, and meaningful comparison remains as difficult as it ever was.

Publication is not the same as infrastructure.

The strongest examples of open data working well are almost never the largest catalogues. They are narrow, highly maintained datasets with clear standards and obvious external demand. OpenStreetMap succeeded because people could immediately build navigation, logistics and humanitarian tools on top of it without first reconstructing the underlying data. GS1 became global supply chain infrastructure because inconsistent product identifiers create immediate commercial friction. SWIFT works because poor data prevents transactions from settling, and a transaction that does not settle has immediate, visible consequences for everyone involved.

In each case, quality is maintained because poor data has immediate costs for the organisation responsible for it. That is the discipline that makes these systems function. It is not better technology. It is consequences that are felt by the right party at the right time. That is precisely what most open data programmes lack. The cost of poor usability falls on whoever attempts to consume the data, not on whoever published it. That asymmetry changes behaviour in a predictable way. Release becomes compliance rather than service.

A researcher trying to understand hospital discharge patterns spends weeks reconciling definitions that shifted between reporting periods. A small business trying to identify government procurement opportunities works through thousands of records formatted differently across agencies. A journalist trying to hold an institution to account downloads data that requires specialist tools to open. In each case, the data exists. The friction is real. The external user carries a cost that the publishing institution never sees.

Every hour spent cleaning data is an hour not spent using it.

The private systems that share data successfully approach the problem differently. They invest in standards because inconsistency costs money immediately and visibly. Government data rarely faces equivalent discipline, because poor data rarely interrupts the administrative system that produced it. The institution continues. The programme continues. The external user absorbs the cost invisibly.

The question worth asking of any data release is not how much has been published. It is whether someone can reliably build on what has been released without first reconstructing it. Whether the release reduces friction for an external user or merely transfers friction outside the organisation.

Availability is not usability. More is not more. The test is whether someone can build on what you released without first fixing it.