Sixty-seven percent of AI CoE proposals are rejected in the first budget cycle. Not because the business case for AI is weak. The business case for enterprise AI is among the strongest in the history of enterprise technology investment. These proposals fail because they are written for the wrong audience, use the wrong financial framework, and make commitments that experienced executives immediately recognize as unrealistic.
Building a CoE is a significant organizational investment. You are asking for headcount, infrastructure, tooling, and executive attention over a multi-year horizon. The executives who approve that investment have seen a hundred technology initiatives come and go. They know that most of them promised transformation and delivered complexity. Your business case needs to address that skepticism directly, not assume goodwill.
This guide covers why AI CoE proposals fail, how to build the financial model that survives CFO scrutiny, how to map and address stakeholder concerns before they surface in the approval meeting, and the proposal structure that high-approval-rate business cases use. The average 12-month ROI documented by organizations with mature AI CoEs is $87 million. The practices below are how those programs got funded in the first place.
Why 67% of Proposals Are Rejected
The rejection patterns for AI CoE proposals are consistent enough that they constitute a predictable set of failure modes. Understanding them is the first step toward avoiding them. The most common reasons proposals fail are structural, not substantive: the underlying business case is sound but the proposal violates the implicit standards that senior decision-makers apply.
- ROI claims that cannot be traced to specific use cases. Proposals that cite industry analyst figures for "AI ROI" without connecting them to the organization's actual cost structure and specific deployment plans are dismissed as speculative. CFOs need numbers they can interrogate, not benchmarks they cannot verify.
- Year-one returns that require perfect execution. Proposals that show positive ROI in year one based on aggressive deployment timelines signal that the author has not built real AI programs. Experienced executives know that infrastructure setup, governance development, and organizational change take time. Optimistic year-one projections destroy credibility for the entire proposal.
- Cost estimates that omit the true cost of organizational change. Technology costs are easy to estimate. Change management, training at scale, process redesign, and the productivity dip during transition are the costs that make CFOs revise projections upward by 40 to 60 percent. Proposals that omit these costs are returned for revision or rejected outright.
- No clear accountability structure. Proposals that describe what the CoE will do but do not identify who is accountable for delivering results, how performance will be measured, and what happens if the program underperforms leave executives with no confidence that anyone is actually responsible for the outcome.
- Scope that competes with existing programs. AI CoE proposals that overlap with active digital transformation programs, cloud initiatives, or data governance workstreams create political resistance from program owners. Proposals that do not explicitly address how the CoE will coexist with and complement existing programs generate cross-functional opposition that kills approvals in governance committees.
Building a CFO-Grade Financial Model
The financial model in your CoE business case needs to meet a different standard than a project budget. CFOs are not evaluating whether your estimates are precise. They are evaluating whether you understand the categories of investment required, whether your return projections have identifiable sources, and whether the risk-adjusted case is still compelling. The model below is the structure that survives CFO scrutiny.
The first principle of a CFO-grade model is that every return number traces to a specific use case, a specific business unit, and a specific mechanism. Not "productivity improvements across the enterprise" but "customer service ticket resolution time reduction of 35% in the North America operations center, based on current ticket volume of 180,000 per year at $12 average handling cost." Specificity is not just credibility: it is the mechanism by which you get buy-in from the business unit leaders who will ultimately deliver the results.
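As a minimal sketch, that traceability principle reduces to one line of arithmetic. The function name and figures below mirror the hypothetical customer service example in the text, not a real engagement:

```python
# Illustrative only: the hypothetical use case from the text
# (ticket resolution in a single operations center).

def annual_return(ticket_volume: int, avg_handling_cost: float,
                  reduction_pct: float) -> float:
    """Annual value of a handling-cost reduction, traceable to one use case."""
    return ticket_volume * avg_handling_cost * reduction_pct

# 35% reduction on 180,000 tickets/year at $12 average handling cost
value = annual_return(180_000, 12.0, 0.35)
print(f"${value:,.0f}")  # $756,000
```

Every return line in the model should decompose this way: a volume, a unit cost, and a mechanism a business unit leader has agreed to.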
3-Year AI CoE Financial Summary (Illustrative)
Several things distinguish this model from the ones that fail. Year one shows a net negative, which is honest and credible. Returns ramp significantly in year two as production deployments generate documented value. The investment categories include change management and external advisory, the line items CFOs add when they revise optimistic proposals. Conservative return assumptions mean that if performance exceeds projections, the story improves rather than forcing the committee to revisit a failed forecast.
The conservative scenario is not pessimism. It is the protection against the most common CFO response to AI CoE proposals: "We'll approve this, but we'll hold the program accountable to the numbers you gave us." Programs that commit to achievable numbers and exceed them build the internal credibility that funds expansion. Programs that commit to aggressive numbers and miss them get restructured or cut.
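As a hedged sketch of the model's shape (the summary table itself is not reproduced here), the following uses invented placeholder figures to show the pattern the text describes: a year-one net negative, returns ramping from year two, and explicit cost lines for change management and external advisory:

```python
# Hypothetical, illustrative 3-year CoE model. All figures are placeholders,
# not the article's template. Costs include the categories CFOs add when
# they revise optimistic proposals: change management and external advisory.

costs = {   # $M per year
    1: {"platform": 2.0, "headcount": 3.0, "change_mgmt": 1.0, "advisory": 0.8},
    2: {"platform": 1.5, "headcount": 3.5, "change_mgmt": 0.8, "advisory": 0.5},
    3: {"platform": 1.5, "headcount": 3.5, "change_mgmt": 0.5, "advisory": 0.3},
}
returns = {1: 2.5, 2: 9.0, 3: 16.0}  # $M, conservative, use-case-traceable

cumulative = 0.0
for year in (1, 2, 3):
    net = returns[year] - sum(costs[year].values())  # year-1 net is negative
    cumulative += net
    print(f"Year {year}: net {net:+.1f}M, cumulative {cumulative:+.1f}M")
```

With these placeholder numbers, year one nets roughly -$4.3M and the program turns cumulatively positive in year three; the structure, not the specific figures, is the point.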
Mapping the Approval Stakeholders
Budget approval for an AI CoE involves more stakeholders than most proposals acknowledge. Understanding what each stakeholder cares about and addressing those concerns before they become objections in the approval meeting is one of the highest-leverage actions you can take in the business case development process.
CEO / President
Cares about strategic differentiation, competitive threat response, and organizational transformation at scale. Needs to see the AI program connected to a specific competitive advantage narrative, not just cost reduction.
CFO
Cares about the financial model's completeness, the accountability structure, and the measurement mechanism. Needs specific use cases with traceable ROI, not aggregate industry benchmarks. Will test whether you included all cost categories.
CTO / CIO
Cares about technical coherence with existing systems, governance and risk management, and whether the CoE will compete with or complement existing technology programs. Needs clear boundaries and integration points.
Business Unit Leaders
Care about disruption to current operations, resource demands on their teams, and whether the AI program will actually solve problems they recognize as important. Skeptical of technology-led programs that impose solutions rather than addressing real needs.
General Counsel / Chief Risk Officer
Cares about regulatory compliance (EU AI Act, sector-specific requirements), data privacy, and whether the CoE has adequate governance to prevent liability-generating AI failures. Needs to see the governance architecture before approving.
CHRO
Cares about workforce impact, talent strategy, and the change management plan. Needs clarity on job displacement concerns, reskilling investment, and the talent acquisition strategy for roles the CoE requires.
The Proposal Structure That Wins
High-approval-rate AI CoE proposals follow a consistent structure. The sequence matters because executives reviewing dozens of proposals apply pattern recognition to document structure before they engage with content. Proposals that deviate from the expected structure create friction that pre-disposes readers toward skepticism.
1. Strategic Context (1 page). Connect the CoE directly to two or three specific strategic priorities the executive team has already articulated. This is not background. It is the proof that the proposal is connected to what leadership is already trying to accomplish, not a new initiative competing for attention.
2. Current State Problem Statement (1 page). Quantify what is being lost or missed today in the absence of a CoE: duplicated AI spend, failed pilots, capability gaps relative to competitors, regulatory exposure. This section answers "why now" without requiring the reader to accept that AI is generically important.
3. Proposed CoE Model and Scope (2 pages). Define exactly what the CoE will do, what it will not do, how it fits with existing programs, and what governance structure it will operate under. Ambiguity about scope is one of the most common reasons proposals generate cross-functional opposition that defeats approval.
4. Use Case Portfolio and Year-One Roadmap (2 pages). Name the specific use cases the CoE will tackle in year one, ranked by value and feasibility. This is where the financial model gets its traceable returns. Executives need to see that the CoE has an agenda, not just a mandate.
5. Financial Case: 3-Year Model (1 page). The CFO-grade model described above: conservative base case, specific return sources, all cost categories included. Show the year-one net negative explicitly rather than burying it behind optimistic assumptions.
6. Governance, Risk, and Accountability (1 page). Define who is accountable for CoE results, how performance will be measured and reported, what governance mechanisms will manage AI risk, and what the decision process is for AI investments above specified thresholds. This section addresses every legal, risk, and governance stakeholder concern in one place.
7. Phased Implementation Plan (1 page). Show the program in 90-day phases with specific milestones, decision gates, and what success looks like at each stage. This gives the approval committee the ability to fund in phases if it is not ready to commit the full 3-year investment.
The total proposal is eight to ten pages. Longer proposals signal that the author cannot make decisions about what is important. Executive attention is finite, and proposals that require more than 30 minutes to evaluate are typically reviewed by a deputy and summarized, which means your proposal narrative never reaches the decision-maker intact.
Handling the Hard Objections
Every AI CoE proposal faces a predictable set of objections in the approval process. Preparing responses to these objections before they surface is more effective than improvising answers in the room. The most consequential of them, the single-year funding horizon, is addressed below.
Securing Multi-Year Funding
Single-year AI CoE funding is a trap. Year one is the infrastructure year, and the most likely outcome of single-year funding is that the program gets reviewed at the end of year one, is not yet showing the returns that only begin to materialize in year two, and gets defunded just as it is reaching productive scale.
The case for multi-year funding is made in the proposal, not in the renewal meeting. Three elements make multi-year approval more likely:
- Phased milestones that give leadership confidence the program is on track without requiring annual reapproval of the full budget.
- An executive steering committee that provides ongoing oversight without creating annual decision-making overhead.
- A year-one use case portfolio that delivers early wins demonstrating the CoE can execute.
The 340% average ROI that well-structured AI programs deliver does not happen in year one. It accumulates from year two onward as production deployments compound and capability reuse multiplies the value of infrastructure investment. Programs that understand this dynamic, and explain it clearly in their business cases, are far more likely to get the multi-year commitment that makes those returns achievable.
For the full picture of how to structure your CoE to deliver these returns, the AI Center of Excellence service covers operating model design, staffing strategy, and the governance architecture that makes sustained ROI delivery achievable. The CoE setup guide provides the implementation framework that connects your approved business case to an executable program plan.
From Approval to Execution
Winning budget approval is the beginning of the accountability cycle, not the end of the business case process. The commitments you made in the proposal become the performance standards against which the program will be evaluated. This means the business case document needs to be treated as a living accountability framework, not a submission artifact.
In practice, this means:
- Sharing the financial model with the business unit leads whose use cases generated the return projections, and getting their explicit commitment to the baselines.
- Building the measurement infrastructure described in the AI CoE metrics guide before you need to report on it.
- Establishing the governance reporting cadence before the program is running at full capacity.
The CoE operating model covers how to translate approval into an organizational structure that can actually deliver on the business case commitments. The structure you build in the first 90 days determines whether your year-two results confirm the projections that won approval or force a difficult conversation about revised expectations.
Organizations that approach the post-approval period with the same discipline as the pre-approval business case are the ones that reach the $87 million 12-month ROI benchmark. Those that treat approval as the finish line typically find themselves defending the program rather than expanding it at their first annual review.