UNIBROW
Operations Infrastructure | AI Strategy

Free the Tools.
Govern the Data.

Why your AI deployment model is architecturally backwards — and what to build instead.

4% of organizations generate significant AI returns (BCG, 2024)
80% report no material impact on earnings (McKinsey, 2025)
56% of CEOs report no cost savings or revenue gains from AI (PwC, Jan 2026)
78% of AI users bring their own tools to work (Microsoft, 2024)
The Idea in Brief

The problem: AI consistently delivers individual productivity gains of 15–40%, but almost no organization has translated those gains into organizational returns. The technology works. The deployment model doesn’t.

The diagnosis: Most AI strategies miss a layer. AI doesn’t just automate tasks. It gives the people closest to the work the ability to build tools that fit their specific needs. Those tools capture operational knowledge that previously evaporated into emails, spreadsheets, and workers’ heads. Miss this layer, and individual gains stay individual.

The proposal: A three-layer architecture that compounds what individuals produce into organizational intelligence. A Builder Tier where employees create tools that fit how they actually work. A Legibility Layer where IT governs shared schemas and data contracts. An Agent Layer where AI reads across the entire structured data surface, detecting patterns no single team could see.

Here is a thing that companies do. They spend a lot of money on AI. They roll it out. They measure adoption by counting how many people log in. Then they are disappointed, and they commission a study to find out why, and the study tells them to invest more and be patient. So they do. And then they are disappointed again.

This is not a niche phenomenon. Study after study, across thousands of executives and millions of workers, arrives at the same conclusion: AI delivers individually and flatlines organizationally.[1][2][3] The standard explanation is that the technology is immature. Be patient. Invest more. Iterate. This explanation has the advantage of being unfalsifiable. It has the disadvantage of being wrong.

The technology works fine. Controlled experiments and large-scale field studies consistently show individual productivity gains of 15 to 40 percent.[4][5] The gains are real, measurable, and replicated. They just don’t add up to anything at the organizational level. The technology works for people. It doesn’t work for companies. This is a strange outcome that deserves a better explanation than “be patient.”

Exhibit 1: AI Works for Individuals. It Doesn’t Compound for Organizations.
Individual gains (proven): task completion speed +25.1%; output quality +40%; novice improvement +34%; overall productivity +15%.
Organizational returns (flat): significant returns 4%; material EBIT impact 20%; cost + revenue gains 12.5%; costs went up 20%.
Sources: Harvard/BCG (2023), Stanford/MIT (QJE, 2025), BCG CxO Survey (2024), McKinsey (2025), PwC CEO Survey (2026)

The Playbook Everyone Runs

Every company deploys AI the same way. IT evaluates platforms. Leadership picks one. It rolls out broadly, usually a chatbot or copilot bolted onto existing software. There’s training. There’s an internal newsletter celebrating early wins: someone automated a report, someone drafted a client proposal in half the time. These wins are real. They are also completely isolated, living and dying inside individual workflows, never compounding into anything the organization can use.

This feels rational because it mirrors how every previous enterprise technology got deployed. It worked for email. It worked for ERP. It worked for CRM. But nothing is wrong with the AI. Something is wrong with the model.

The Old Bargain

For decades, if you wanted structured data across an organization, the kind that lets leadership make decisions and keeps operations visible, you had to force people through structured tools. You deployed SAP or Salesforce or ServiceNow, and everyone used the same interface with the same fields, because that was the only way to produce legible information at scale.

This was a real tradeoff: you sacrifice how well the tool fits any individual worker’s workflow in exchange for data that’s visible across the whole organization. Rigidity was the price of legibility.

The world changed.

The Three Framings

Most organizations think about AI in one of two ways, and both lead to the same deployment model that keeps failing.

Exhibit 2: Three Framings of AI. Only One Changes the Model.
AI as Endpoint (The Chatbot): a smarter search bar bolted onto existing software. This is where the vast majority of enterprise deployments live. If this is what AI is, the standard playbook makes perfect sense. You’re deploying a product. What you centralize: the platform.
AI as Agent (The Worker): executes tasks, orchestrates workflows, handles multi-step processes autonomously. Powerful, but still operating within a centrally managed platform. When the agent lives inside the enterprise system, the old deployment model still applies. What you centralize: the platform.
AI as Builder (The Factory): not a feature inside software, but the means of production for applications themselves. The people closest to the work build the tools they actually need. This reframing changes everything. What you centralize: the data.
AI made the cost of building applications approach the cost of creating documents. That changes what you centralize.

Once you see AI as a means of production rather than a product, the question shifts from which platform to buy to what data architecture to build underneath whatever people create.


The Inversion

If AI followed the same rules as traditional enterprise software, the old playbook would make sense. Evaluate vendors, pick a platform, roll it out, centralize control. That model works when tools are expensive, complex, and risky. But AI as builder is something new. It collapsed the cost of creating applications to roughly the cost of creating documents. And no company centralizes who’s allowed to write documents. Companies set standards for how documents are formatted and shared. Same logic.

Don’t restrict who can build tools. Set standards for how data flows out of them. Once you see it this way, the old deployment model isn’t suboptimal. It’s inverted.

Exhibit 3: The Inversion: What to Lock Down, What to Free
Current model: tool creation is LOCKED (centralized, months-long procurement) while data architecture is OPEN (ungoverned, fragmented; knowledge evaporates).
Proposed model: tool creation is OPEN (distributed to builders; built in days, not quarters) while data architecture is LOCKED (centrally governed schemas; intelligence compounds).

Most organizations have strong data architecture for what’s inside their enterprise systems. What they haven’t built is the architecture for the operational knowledge that still lives outside them. Meanwhile, tool creation stays centralized by legacy default, even as AI has made building applications nearly free.

The correct model flips this. Let employees build applications tailored to how they work. Govern the schemas, the structured outputs, the shared vocabulary underneath. Centralize the data layer, free the tools.

This sounds radical until you look at what employees are already doing. The data is stark: the majority of AI users are already bringing their own tools to work, and most of that activity bypasses IT entirely.

Exhibit 4: Shadow AI: A Market Signal, Not a Threat
78% bring their own AI tools to work
83.8% of enterprise AI data flows to high-risk platforms
~60% of Claude/Perplexity usage bypasses SSO
60% self-develop over off-the-shelf
Sources: Microsoft Work Trend Index (2024), Cyberhaven 7M-worker telemetry (2025), UBS Corporate Survey (2025). Samsung banned ChatGPT in 2023 after engineers uploaded proprietary chip designs; the ban failed; an internal platform reduced unauthorized usage by 80%.

The security-minded read these numbers as a threat. They are more usefully read as a market signal. These employees are not confused about company policy. They are telling you, through their revealed preferences, that the official tools don’t match how they work.[6][7][8] Companies are starting to act on that signal: outright bans on generative AI dropped 21 percentage points in a single year as organizations learned that prohibition simply pushes usage underground.[22]

But not every worker using unauthorized tools is prompting a chatbot for better emails. Some of them are building things. Work order apps, inspection tools, budget trackers, asset dashboards. Every organization already has these builders. They’re invisible, and their work is ungoverned, which means the organization gets the risk without the benefit. Bring them out of the shadows and something else happens: they find each other. Builders who can see each other’s work share templates, avoid duplicating effort, and get feedback from users outside their immediate team. The tools get better. The organization gets smarter.

This Has Been Tried Before

Someone in the audience is already objecting: “This is knowledge management. We tried this. It failed.”

They’re right that it was tried. In the late 1990s, an entire movement built knowledge bases, created documentation workflows, hired Chief Knowledge Officers. It failed. The databases became “where lessons learned went to die.”[9] The diagnosis was correct: tacit knowledge is valuable, and losing it is expensive. The mechanism was wrong: it asked everyone to document what they knew, on top of their real work, producing artifacts nobody consulted.

Exhibit 5: Knowledge Management vs. Builder-Led Capture
Who does the work: 1990s KM: everyone documents. This architecture: a small number of builders create tools.
Adoption: 1990s KM: extra work on top of real work. This architecture: knowledge captured as a byproduct of use.
Value to the user: 1990s KM: none to the documenter. This architecture: immediate; the tool matches their workflow.
Cost: 1990s KM: expensive to build and maintain. This architecture: nearly free; apps are disposable.
Durable asset: 1990s KM: the document (which nobody reads). This architecture: the template plus the data.

This architecture doesn’t ask people to document knowledge. It lets a small number of builders create tools that match actual workflows. The builder designs the schema. The worker fills it in by doing their job. The data accumulates without anyone being asked to “capture knowledge.” And if a tool breaks or a builder leaves, the template persists, the data persists, and someone else can rebuild on the same foundation. The template is the durable asset. The app is ephemeral.

Why This Compounds

If you’re skeptical that expanding an organization’s data surface produces compounding returns, it helps to note that this has happened before, in domains that had nothing to do with AI.

Exhibit 6: How Structured Data Surfaces Compound: Two Precedents
Healthcare (paper charts → EHR): Initial: paper → digital; faster retrieval, fewer lost files. Years 2-3: cross-patient data; population health analytics. Years 3-5: cross-institution data; drug interaction detection at scale. Year 5+: system-wide data; epidemiological surveillance, clinical decision support.
Retail (register totals → Clubcard): Initial: daily totals → transaction-level data; better inventory management. Phase 2: customer-level data; customer segmentation. Phase 3: behavioral patterns; predictive purchasing. Phase 4: individual pricing; personalized pricing at scale.
In both cases, the initial digitization benefit was modest. The compounding came from structured data accumulating across units, making previously invisible patterns detectable.

These cases share a mechanism: data that previously lived in informal channels got captured in structured form. The structured surface grew. Cross-unit patterns became detectable. Those patterns informed better data capture. The cycle compounded. Neither case involved AI. The compounding came from expanding the observable surface of the organization.

AI compresses this cycle, but not in the way most people assume. It doesn’t just make the analysis smarter. It collapses the cost of building the tools that capture the data in the first place. When tool creation takes months and costs six figures, you digitize only what’s worth the investment. When it takes hours and costs almost nothing, you digitize everything. The intelligence isn’t a new kind of analysis. It’s analysis applied to data that was previously too expensive to capture.

What the Agents Actually See

For most of enterprise history, roughly 80% of business-critical information has lived in formats that organizational systems couldn’t process: emails, call transcripts, meeting notes, the tacit knowledge in experienced workers’ heads.

Exhibit 7: The Invisible Knowledge Gap
68% of captured data is never leveraged; only 32% is actually used (IDC/Seagate, 2020).
42% of workplace knowledge is unique to the individual and entirely unshared (Panopto, 2018).
What walks out the door: David DeLong’s research at MIT and Harvard, spanning NASA, Siemens, Shell Chemical, and the World Bank, documented what happens when experienced workers leave: critical operational intelligence vanishes permanently.[11] The inspection findings that live in a text thread. The budget variances that only surface when someone manually cross-references three spreadsheets. The asset performance patterns that exist only in a property manager’s memory.

The tools employees build don’t need to be smart to change this. They need to exist. A work order app that captures issues in the field instead of losing them in email threads. An inspection tool that records QA findings in real time against actual infrastructure. Budget automation that connects work orders, invoices, and project trackers so variance reporting happens automatically. Because people use tools that fit their workflows, data that previously evaporated into informal channels now flows into structured formats.

The Stanford and MIT customer service study shows the mechanism at the micro level. The AI captured what top-performing agents knew (tacit patterns, conversational judgment, workflow intuitions) and made that knowledge accessible to novice workers.[12] Nobody asked the top performers to change how they worked. The AI handled the translation. That’s the macro argument in miniature: you don’t need everyone to work the same way to get data you can read. You need a translation layer.

Organizational legibility doesn’t require organizational uniformity. It requires a translation layer.

The Architecture

If the old constraint is gone, what does the new model look like? Three layers, and a genuine reckoning with what each one demands.

Exhibit 8: Three Layers of Organizational Intelligence
Agent Layer (compounds returns): reads across the entire structured data surface. Patterns emerge across teams: vendor anomalies visible only in aggregate, failure signatures detectable only across properties, cost drift no single site could identify.
Legibility Layer (extended data architecture): IT and builders co-design shared templates and data schemas. A work order app, an inspection tool, and a budget tracker at three different properties all emit the same structured data. The template is the Lego stud. The app is the model.
Builder Tier (distributed creation): operationally curious employees build lightweight tools tailored to their team’s workflows. Low switching costs. If one fails, you lose a weekend, not a fiscal year. The good ones spread through demonstrated utility.

The Builder Tier

In every department, a small number of people, not necessarily technical but operationally curious, build tools tailored to their team’s workflows. A communications manager at a Kenyan fintech built a team management tool by chatting with an AI.[23] A three-person IT team at a private equity portfolio company shipped a production compliance application across eight subsidiaries.[24] Moveworks found that 78% of IT executives have seen their most successful AI projects originate from non-leaders tackling persistent challenges.[15] Companies that back these builders score 33% higher on innovation measures.[16] MIT Sloan researchers, after more than a hundred interviews, reported “no disaster stories” from organizations that gave citizen builders proper governance.[17]

Not every department will have builders on day one, and some never will. The architecture doesn’t require every department to build from scratch. It requires the legibility layer to offer templates at varying levels of abstraction, the way you’d fill out a form rather than design one. Northern Trust’s “orange zone” serves exactly this function: a productivity team bridging builders and IT, standing in where organic builders haven’t emerged.[18] Peer-led adoption outperforms top-down rollouts because trust is local.[19]

The builders get better. The work order app someone builds in month one teaches them what data matters. The inspection tool they build next is sharper. By the time they’re building budget automation that connects to the work order data, they’re designing the data architecture for their department from the ground up, and IT can see what they’re doing, learn from it, and incorporate the patterns into shared templates. When builders across departments can see each other’s work, the learning compounds across the organization, not just within a single builder’s trajectory. The builder improves. The templates improve. The data surface expands. That’s a human-driven flywheel, not a data-driven one.

The Legibility Layer

This is where IT and the business work hand in hand, and it’s the critical piece that makes everything else work. Builders discover what data matters through actual use. IT works with the business to standardize those patterns, turning them into shared templates and data components that other builders plug into their tools. One property builds a work order app tailored to their specific needs. The app is built on a shared template that ensures every work order emits the same structured data: timestamp, category, status, assignor, resolution. A second property takes that same template and customizes it for their context. A third builds an inspection tool on a different template with the same data contract. Different tools, different workflows, identical data output. An agent can now read across all three without any team having changed how they work.

The template is the Lego stud. The app is the model. Different shapes, same connection point.
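To make the data-contract idea concrete, here is a minimal sketch in Python. The field names (timestamp, category, status, assignor, resolution) come from the example above; everything else (the validator, the adapter, the status vocabulary) is a hypothetical illustration, not a real platform API. Two differently built tools emit records that satisfy the same shared contract.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shared template: the contract every work-order tool must
# satisfy, regardless of how the tool itself is built.
REQUIRED_FIELDS = {"timestamp", "category", "status", "assignor", "resolution"}
ALLOWED_STATUSES = {"open", "in_progress", "closed"}

@dataclass
class WorkOrder:
    timestamp: str      # ISO 8601
    category: str       # e.g. "plumbing", "electrical"
    status: str         # one of ALLOWED_STATUSES
    assignor: str
    resolution: str     # empty until closed

def validate(record: dict) -> WorkOrder:
    """Reject any record that breaks the shared contract."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if record["status"] not in ALLOWED_STATUSES:
        raise ValueError(f"bad status: {record['status']}")
    return WorkOrder(**{k: record[k] for k in REQUIRED_FIELDS})

# Tool 1: one property's custom web form emits the contract shape directly.
web_form_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "category": "plumbing", "status": "open",
    "assignor": "site-mgr-04", "resolution": "",
}

# Tool 2: another property's spreadsheet export uses different local column
# names; a thin adapter maps them onto the same contract.
spreadsheet_row = {"Logged": "2026-01-15T09:30:00+00:00", "Type": "electrical",
                   "State": "closed", "By": "ops-11", "Fix": "replaced breaker"}
adapted = {"timestamp": spreadsheet_row["Logged"],
           "category": spreadsheet_row["Type"],
           "status": spreadsheet_row["State"],
           "assignor": spreadsheet_row["By"],
           "resolution": spreadsheet_row["Fix"]}

# Different tools, different workflows, identical data output.
orders = [validate(web_form_record), validate(adapted)]
```

The point of the sketch is where the effort goes: the tools are trivial and disposable, while the contract (the `REQUIRED_FIELDS` and their meanings) is the durable, governed asset.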

This doesn’t replace enterprise systems. ERP and CRM still do what they’ve always done. What changes is the scope. The legibility layer extends the same data architecture to the tools that capture what enterprise systems were never designed to reach. And it redefines the requirements for new enterprise procurement: not just what a platform does, but whether it plugs into the same shared schema.

Not every tool carries the same risk, and the architecture shouldn’t treat them as if they do. Edge tools (work order trackers, inspection apps, budget connectors) are built by the people closest to the work, for small teams with specific needs. If one breaks, you rebuild it in an afternoon. The blast radius is small, which is why speed is affordable. Center tools (ERP, CRM, financial systems, identity management) carry real consequences at scale. That’s where enterprise procurement earns its cost.

The mistake is applying the center’s process to the edge’s tools. The edge is where organizations discover what they actually need, through use, not requirements documents. When an edge tool proves valuable enough to scale, it graduates: either IT rebuilds it on enterprise infrastructure, or it becomes the requirements spec for the next procurement. Either way, the organization is buying or building from validated need, not a vendor demo. The edge doesn’t replace the center. It makes the center smarter.

AI has collapsed the cost of building tools to near zero while raising the value of data architecture dramatically. That shifts where organizations should focus their resources and effort: less vendor management and procurement, more ongoing curation of the data layer. AI makes that curation feasible at scale in ways it never was before. Sumit Johar, CIO at BlackLine, described the transition: his company invested in a centralized AI team, and it worked, until demand outstripped anything a central team could serve. The answer wasn’t to hire more AI specialists. It was to embed AI goals across every business unit and let them build, while the central team focused on governance and infrastructure.[21]

That governance includes something the templates alone don’t solve: security, stability, and long-term support. The templates can embed authentication, access controls, and encryption standards so that every tool built on them inherits those properties automatically. That’s the security layer, and it’s architecturally clean. But stability and support are harder. Who fixes the bug when the builder is on vacation? Who ensures a tool that works for four users at one property can handle five hundred across a portfolio? These questions don’t have fully solved answers yet. What they have is a graduation model.
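One way that inheritance can work, sketched under assumptions: a hypothetical template base class (the class, role model, and redaction rules below are illustrative, not a real product) that bakes access control and redaction into every tool built on it, so builders write only domain logic.

```python
# Hypothetical sketch: security policy lives in the template layer, and
# every tool built on the template inherits it automatically.

ROLE_PERMISSIONS = {"viewer": {"read"}, "builder": {"read", "write"}}
SENSITIVE_FIELDS = {"tenant_name", "unit_number"}  # redacted on read

class GovernedTemplate:
    """Base class every builder tool extends. Controls are not optional."""
    def __init__(self):
        self._records = []

    def write(self, user_role: str, record: dict) -> None:
        if "write" not in ROLE_PERMISSIONS.get(user_role, set()):
            raise PermissionError(f"role {user_role!r} cannot write")
        self._records.append(dict(record))

    def read(self, user_role: str) -> list:
        if "read" not in ROLE_PERMISSIONS.get(user_role, set()):
            raise PermissionError(f"role {user_role!r} cannot read")
        # Redaction policy applied centrally, not re-implemented per tool.
        return [{k: ("<redacted>" if k in SENSITIVE_FIELDS else v)
                 for k, v in r.items()} for r in self._records]

# A builder's inspection tool: all domain logic, zero security code.
class InspectionTool(GovernedTemplate):
    def log_finding(self, user_role, unit_number, finding):
        self.write(user_role, {"unit_number": unit_number, "finding": finding})

tool = InspectionTool()
tool.log_finding("builder", "12B", "corroded valve")
findings = tool.read("viewer")
```

The design choice matters more than the details: the builder never touches the permission table or the redaction list, so a template update tightens every green-zone tool at once.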

Green zone tools are designed to be small, local, and disposable. They’re not supposed to scale to five thousand users. When a tool proves valuable enough to need scale, stability, and professional support, it graduates. IT takes the proven pattern, the validated schema, and the working data contract and rebuilds on infrastructure designed for durability. The builder’s job was never to build enterprise software. It was to identify the right tool and the right data architecture through actual use. IT’s job is to make the winners durable and scalable.

Exhibit 9: The Three-Zone Governance Model (Northern Trust)
Green Zone: citizen developers build freely with AI and shared templates. Small user base, low stakes. If a tool breaks, rebuild it in an afternoon. Security is inherited from the template layer. Northern Trust’s green zone produced an automated email routing system processing 17 million emails/month, with one team reducing email-to-case volume by 67%.[25]
Orange Zone: a productivity team stabilizes and bridges. Tools that prove valuable get documentation, broader testing, and preparation for wider adoption. It stands in where organic builders haven’t emerged and helps risk-averse departments configure templates rather than build from scratch.
Red Zone: IT takes full ownership. Graduated tools get enterprise-grade infrastructure, professional support, and scale testing. Core data architecture, security, compliance, and identity remain here. But IT isn’t starting from a blank requirements document; it inherits a tool that’s been validated by actual users doing actual work.

The work changes shape. More data architecture, more API design, more systems thinking. Less vendor management and procurement. Not every question about stability, support, and scale has a clean answer yet. But the shift isn’t additional overhead on top of the old model. It’s a reallocation: from managing tool access to curating the data layer, from evaluating vendor demos to validating patterns that builders have already proven in the field. This isn’t a technical migration. It reorganizes how IT and the business work together. That kind of structural change is genuinely difficult, even when the economics clearly favor it.

But compared to what? The alternative isn’t the disciplined enterprise process that exists on paper. The alternative is the reality already unfolding: 78% of AI users bringing their own tools to work, 83% of enterprise AI data flowing to unvetted platforms, builders creating ungoverned applications on personal accounts with zero security review, zero data contracts, and zero visibility. And suppressing the shadow activity doesn’t recover the status quo. Builders who can’t build inside the organization will build outside it, redirecting that energy into side projects, startups, or their next employer. Only 12% of organizations describe their AI governance as mature.[22] The unsolved problems in this architecture are real. They are also smaller, more manageable, and more visible than the unsolved problems in the shadow system that already replaced the old model. The question is whether organizations figure this out now, while the change is still manageable, or later, after the technology has gotten away from them.

The Agent Layer

Once data is legible, agents orchestrate across the entire tooling surface. They don’t care whether a tool was built in React or a spreadsheet or a no-code platform. They care that it emits structured data they can reason over. This is where individual productivity gains finally compound: patterns emerge across teams and organizational intelligence accumulates.

This is the same mechanism that played out in healthcare EHR systems and retail loyalty programs: a structured data surface expands, cross-unit patterns become detectable, those patterns inform better capture. AI compresses the cycle by collapsing the cost of building the tools that generate the surface. But the agent layer is the easiest part of this architecture to imagine and the hardest to evidence. It depends entirely on the two layers below it existing first, and most organizations are trying to build it without the foundation.

These agents aren’t collaborating with each other in some autonomous swarm. They’re reading across a structured data surface, the way a skilled analyst reads across a well-organized database. The difference is that the database now contains information that didn’t exist in structured form six months ago, because someone built a simple tool that captured it.
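As an illustration of what “reading across the surface” means mechanically, here is a minimal sketch with hypothetical data and a hypothetical threshold: three properties’ tools emit the same work-order shape, so a simple pooled query surfaces a vendor cost anomaly that no single property’s data would reveal.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records from three independently built tools. Because they
# share one schema, they pool without any per-tool translation.
work_orders = [
    {"property": "north", "vendor": "AcmeHVAC", "category": "hvac", "cost": 480},
    {"property": "north", "vendor": "BoltElec", "category": "electrical", "cost": 300},
    {"property": "south", "vendor": "AcmeHVAC", "category": "hvac", "cost": 510},
    {"property": "south", "vendor": "CityPlumb", "category": "plumbing", "cost": 250},
    {"property": "east",  "vendor": "AcmeHVAC", "category": "hvac", "cost": 1450},
    {"property": "east",  "vendor": "BoltElec", "category": "electrical", "cost": 310},
]

def vendor_anomalies(orders, ratio=2.0):
    """Flag (vendor, property) pairs whose average cost exceeds `ratio`
    times that vendor's average everywhere else. Threshold is illustrative."""
    by_vendor = defaultdict(list)
    for o in orders:
        by_vendor[o["vendor"]].append(o)
    flags = []
    for vendor, rows in by_vendor.items():
        for prop in {r["property"] for r in rows}:
            here = [r["cost"] for r in rows if r["property"] == prop]
            elsewhere = [r["cost"] for r in rows if r["property"] != prop]
            if elsewhere and mean(here) > ratio * mean(elsewhere):
                flags.append((vendor, prop))
    return flags

flags = vendor_anomalies(work_orders)  # → [("AcmeHVAC", "east")]
```

Each property sees only its own invoices and notices nothing; the anomaly exists only in aggregate. An agent layer is this operation generalized: many such queries, run continuously, over every schema the legibility layer governs.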


What This Is Actually About

Step back far enough and this isn’t about AI.

Every organization in history has faced the same tension: the center needs to see, and the edges need to work. Bureaucracies impose legibility through standardized forms, uniform processes, mandated systems, because without it, leadership operates blind. But imposing legibility destroys the local knowledge that makes operations work. The person filling out the form knows the form doesn’t capture what matters. The team using the enterprise tool knows it doesn’t match how the work flows. Everyone complies on paper and builds workarounds in practice.

This has been treated as permanent. A cost of organizing. The price of scale. It isn’t permanent anymore. AI doesn’t eliminate this tradeoff; that would overstate the claim. But it transforms it fundamentally. For the first time, there is technology that generates legibility without requiring uniformity. That translates between the idiosyncratic and the structured without flattening either one. That lets the center see and the edges work, simultaneously, without forcing either to give up what makes them valuable.

This article is not a finished blueprint. It is a starting point: a framework for thinking about a problem that most organizations haven’t named yet. The builders are already building. The data is already scattering. The only question is whether the architecture catches up to the reality, or the reality keeps outrunning the architecture.

The companies that figure this out won’t just be more productive. They’ll be more intelligent: not because they bought better AI, but because they stopped forcing the people who know the most to work in systems designed for people who know the least.

[1] BCG, “Where’s the Value in AI?” Oct 2024. 1,000 CxOs, 59 countries.

[2] McKinsey, “The State of AI,” Nov 2025. N=1,993. Only 39% report any EBIT impact; most of those less than 5%.

[3] PwC, 29th Annual CEO Survey, Jan 2026.

[4] Dell’Acqua et al., HBS/BCG, Organization Science, 2023. N=758.

[5] Brynjolfsson, Li & Raymond, QJE, 2025 (working paper 2023). N=5,172 agents.

[6] Microsoft & LinkedIn Work Trend Index, May 2024. N=31,000.

[7] Cyberhaven, “2025 AI Adoption & Risk Report,” April 2025. 7M workers. 83.8% of enterprise AI data to high-risk platforms. Corroborated by Netskope (2025) and Cyberhaven’s 2026 follow-up (222 companies).

[8] Samsung ChatGPT incident, March 2023. Internal platform reduced unauthorized usage 80%.

[9] PMI finding on lessons-learned documentation. Cornell/ADB Knowledge Solutions.

[10] Seagate/IDC, “Rethink Data,” July 2020. 68% of captured data goes unused.

[11] DeLong, Lost Knowledge, Oxford University Press, 2004. Panopto Workplace Knowledge Report, 2018.

[12] Brynjolfsson, Li & Raymond, ibid. AI as conduit for tacit expert knowledge.

[13] Brynjolfsson & Hitzig, “AI’s Use of Knowledge in Society,” U Chicago/NBER, Sept 2025.

[14] Walker, “Context Engineering: Why Hayek’s Knowledge Problem Survives AI,” March 2026.

[15] Moveworks, “The New Face of AI Leadership,” Nov 2025. N=200+ IT execs.

[16] McKinsey Developer Velocity Index, 2020. N=440 enterprises.

[17] Davenport & Barkin, MIT Sloan / All Hands on Tech (Wiley), 2024. 100+ interviews.

[18] Northern Trust SVP Shaelyn Otikor-Miller, three-zone governance model, May 2024.

[19] AI Ready America (2025); GitHub, “Internal Playbook for Building an AI-Powered Workforce.”

[20] Dell’Acqua et al., ibid. 19 percentage points worse outside AI’s capability frontier.

[21] Sumit Johar, CIO, BlackLine. TechTarget/CIO.com, Dec 2025.

[22] Cisco, 2026 Data and Privacy Benchmark Study. N=5,200. 12% describe AI governance as mature. UBS Corporate Survey (2025): 60% self-develop.

[23] Rest of World, Dec 2025. Reporting on AI-enabled tool building in emerging-market workplaces.

[24] Thomas J. Sweet, CIO, IR Pros. “How a PE Portfolio Company CIO Built a Production AI Application Without a Development Team.” HellerSearch, March 2026.

[25] Microsoft Customer Stories, “Northern Trust invests in client services with Dynamics 365.” UNLEASH interview with Otikor-Miller, April 2024. Get Reworked podcast, July 2023.

[26] MIT NANDA, “The GenAI Divide: State of AI in Business,” 2025. 80%+ explored/piloted; 5% reached production.