Build What Differentiates, Buy What Needs to Work

The gap between AI investment and enterprise impact is structural 

Across every sector we work in, the initial response to generative AI followed a consistent and largely rational trajectory: rapid procurement, broad distribution of tools, and a mandate to experiment. For organizations navigating an inflection point in an unproven technology, this was a defensible strategy. It was also, by design, temporary. 

Eighteen to twenty-four months later, the provisional has become permanent. The experiments shipped, the pilots matured into production systems, and the strategic reassessment that was supposed to follow never arrived. McKinsey’s latest State of AI data quantifies the consequence: while eighty-eight percent of enterprises have adopted AI in some capacity, fewer than ten percent have achieved scale in any single function, and only thirty-nine percent report measurable EBIT impact. Enterprise AI is generating activity without generating commensurate organizational value, and the disparity is widening rather than narrowing. 

Talent constraints and data infrastructure deficits explain a portion of this gap. But an underexamined structural factor is compounding underneath: the build-versus-buy decisions that most organizations made once, under fundamentally different market conditions, and have not revisited since. 

The economics of ownership have shifted faster than the strategy 

The original rationale for internal development was well-founded. The vendor ecosystem was immature, the underlying technology carried meaningful uncertainty, and organizations with sophisticated engineering capabilities rightly prioritized architectural control, data sovereignty, and governance independence. In a landscape where no external solution had proven itself at enterprise scale, building was not merely reasonable. It was responsible. 

What has shifted, and shifted rapidly, is the total cost of sustained ownership across a broadening AI portfolio. Foundation model updates now require quarterly retraining cycles that consume significant engineering bandwidth. Platform migrations in adjacent systems cascade into integration failures that demand immediate remediation. And because the barrier to initial development has dropped so precipitously, with tools like Cursor and Copilot Workspace compressing what once required a team into what a single engineer can prototype in days, the proliferation of departmental AI initiatives has far outpaced any organization’s capacity to maintain them coherently. The result is a compounding technical debt that accrues silently across a fragmented portfolio, absorbing the very engineering capacity that was earmarked for strategic advancement.

The question that merits sustained executive attention is deceptively straightforward: where is your most constrained technical talent deploying its time? The honest answer, in most organizations we advise, is maintenance of what has already shipped, not development of what comes next. That asymmetry represents a significant and largely invisible drag on enterprise AI maturity. 

Governance is constant. Capacity allocation is the variable. 

There is a foundational insight that most build-versus-buy deliberations neglect entirely: the governance obligation is identical regardless of which path an organization selects. Data policies, access controls, regulatory compliance, model oversight: these responsibilities belong to the enterprise whether the underlying system was developed internally or procured externally. What differs, materially, is whether an organization’s most capable technical leaders invest their finite capacity in defining and evolving governance frameworks, or in simultaneously architecting, implementing, and troubleshooting the systems those frameworks are meant to govern. The distinction has profound implications for how effectively an enterprise can scale AI beyond isolated use cases into cross-functional operational capability.

A strategic framework: differentiation versus infrastructure 

McKinsey’s 2026 Global Tech Agenda identifies a defining characteristic of the highest-performing technology organizations: their CIOs have shifted from managing technology delivery to architecting enterprise strategy. The practical expression of that shift is a disciplined distinction between two categories of AI investment. 

Build what constitutes competitive differentiation. Proprietary data architectures, algorithms that encode unique market positioning, customer intelligence systems that reflect relationships no external vendor can replicate. These capabilities represent enduring strategic advantage and belong unambiguously within the enterprise. Internal investment is not merely preferable; it is the only viable path, because no third party possesses the institutional knowledge required to build them. 

Buy what constitutes operational infrastructure. Capability that must function reliably across organizational boundaries, scale to meet enterprise-wide demand, and remain continuously current as regulatory and technological environments evolve. When OneOncology doubled its workforce through a series of acquisitions while its HR function remained essentially flat, the organization required AI that could deliver accurate, entity-specific benefits guidance to thirteen thousand employees across dozens of subsidiaries, including at peak demand periods like the final hours before Open Enrollment closes. Purpose-built operational AI reduced manual HR inquiry volume by fifty percent before the acquired entities had completed onboarding. As Justice West, who leads Enterprise Solutions at OneOncology, observed: “You’re not gonna see results from AI if that’s the approach. Being intentional really matters.”

The pattern is consistent across operational domains. Organizations that deploy purpose-built AI for high-volume, high-stakes workflows are realizing time-to-value in weeks rather than quarters, bypassing the integration backlog and the compounding maintenance burden that characterizes internally developed alternatives, and liberating their engineering organizations to concentrate on genuinely differentiating work. That reallocation of technical capacity is, in aggregate, the most significant return on the buy decision. 

Operationalizing the framework  

The organizations achieving measurable enterprise-scale AI impact share several operational disciplines. They conduct rigorous portfolio inventories before authorizing new initiatives, establishing visibility into the full scope of active AI systems, their maintenance requirements, and their interdependencies. They enforce strategic coherence by requiring every proposed initiative to articulate its connection to a declared enterprise AI direction, ensuring that departmental innovation reinforces rather than fragments the broader architecture. And they apply honest lifecycle economics: if the original business case omitted quarterly maintenance, model retraining, and integration sustainment, the financial assumptions underlying the investment warrant immediate reassessment.

The imperative 

The accessibility of AI development tools has never been greater. The organizational complexity of sustaining a growing portfolio of AI systems has never been more demanding. The widening distance between those two realities defines the central challenge confronting enterprise technology leadership today. 

Over the next twelve months, the organizations that translate AI investment into demonstrable enterprise impact will be distinguished not by the volume of what they built, but by the strategic discipline with which they allocated their most valuable resource: the capacity to build at all. 

 

Cascade AI

Ready to Transform How Your
Organization Runs?

Deploy orchestrated AI agents across HR, IT, and Operations — in weeks, not months.
