Don’t staff it like a lab; don’t isolate it from your business.
Most AI Centers of Excellence (AI CoEs) stall before they scale. At OpenNova, we’ve studied the giants (NVIDIA, AWS, Verizon) and found a missing piece: momentum. This is how the smartest teams avoid dead ends and accelerate tangible outcomes.
31% of employees are actively sabotaging their company’s AI strategy, not by hacking systems or staging walkouts, but by quietly refusing to use the tools handed to them. That stat should rattle any executive or data leader betting big on transformation. The truth is that most AI initiatives don’t fail for lack of models or compute; they fail at the human layer.
That’s why OpenNova sees the most innovative enterprises, from Verizon to AWS, rethinking how to scale AI: not with more pilots, but through an AI Center of Excellence (CoE) that weaves AI into the fabric of every decision, every team, and every win. In fact, at OpenNova, we see AI CoEs as the hiring secret no one is talking about enough.
When done correctly, an AI CoE becomes a bona fide talent pipeline, and we’re actively involved in helping our clients make it a success.
In our conversations with AI, cloud, and data team leaders, one theme recurs: without an AI CoE blueprint, you don’t just fall behind; you risk becoming irrelevant.
Too many of you still staff AI like a lab experiment, disconnected from the business. That’s a blueprint for “dead on arrival.” You can spend millions building more intelligent machines without figuring out how to make them practical.
In fact, even at the consultancy level, McKinsey confirms what we see when we advise companies on blueprinting and staffing their first AI CoE: multiple teams develop AI use cases in silos without sharing technology or best practices with one another. In one example, two teams at a major telecom built similar AI systems without knowing about each other.
NVIDIA, for its part, reports that silos like these, along with other wasted resources, thrive in the decentralized AI models found in 85% of businesses. Over the last year or so, we have seen clearly that when your AI experts and tools are scattered and not working together, it takes longer to turn a profit, while expenses, especially cloud rental costs, can get out of hand.
We share your goal, and your survival instinct, of not falling into the same trap.
However, if you’re the leader tasked with rolling out your AI CoE, you can often feel pulled in two or more directions. Do you lock it all in one big control room (centralized) or spread it out so everyone’s got a piece of the action (federated) to drive results faster? Or do you take a blended approach to unlock the full value of GenAI to solve some of the biggest problems in the world today?
At this point, it’s too early to pick a winner or loser; all three approaches produce results. While we at OpenNova lean towards a federated model, using elite managed or on-demand talent teams to fill gaps in hiring frameworks, we carefully monitor the progress of the alternatives as we position our customers for success in the GenAI age.
NVIDIA: AI CoEs as central control towers
When NVIDIA talks about building a successful AI CoE, it starts not with software or data but with infrastructure.
They warn: If you build AI on generic cloud systems, it’ll cost more, run slower, and probably fail. Instead, they steer you towards setting up a purpose-built AI platform to get things done over three times faster:
- On-prem hardware to handle heavy model training and tuning
- Cloud services for spikes in usage (so you’re not overpaying all the time)
- Edge computing for fast decisions close to where the data lives (like in factories or vehicles)
- One control panel to see what’s running where and shift resources easily
- Strong data protection, so sensitive info doesn’t get tossed around the internet
They raise one big red flag: if you don’t plan for this kind of setup—or “go with what you’ve got”—your AI projects will likely fall apart or deliver weak results.
And the numbers back it up. Organizations that invest in a purpose-built platform see results their peers don’t:
- They get cleaner, more reliable data almost 2.5 times more often
- They’re twice as good at rolling out AI across the business
- They reuse what they build 90% of the time instead of starting from scratch like most others
- And because of all that, they make more money from AI with less waste and better teamwork
When you treat AI like a mission-critical system, not a side project, you unlock real value.
Verizon: AI maturity favors a hybrid approach
With a global footprint, the telecom giant has been leveraging AI in various forms for decades. But instead of isolating AI in a lab, Verizon built something far more scalable — an AI engine. This reflects its enterprise maturity and bold ambition, combining hybrid governance with deeply embedded teams.
Most recently, Verizon hired 70+ AI specialists, embedding them directly into product, security, customer, and operations teams — ensuring that AI isn’t a side project but a core part of how the business runs.
The company is developing a Responsible AI charter at the governance level, with leaders from Legal, IT, Commercial, and Security overseeing five pillars: Strategy, Adoption, Tech, Talent, and Responsible AI.
Verizon’s Generative AI Center of Excellence brings together AI efforts from across the company, helping teams learn from each other, test new ideas safely, and work from a shared view of what’s possible.
The outcomes show it’s working well:
- 400+ AI models in production
- 90% drop in false cyber alerts
- 95% of customer questions are answered by virtual agents
- 7 minutes shaved off each of their 70 million annual store visits
- Their internal support assistant? 96% accuracy, 95% answerability
But the deeper story is how they did it. Like NVIDIA, they built AI as infrastructure: modular, reusable components that can solve just about any business problem. Instead of siloed pilots, they design models for sharing, adaptation, and scaling. And with heavy investment in people, including engineers, they ensure no one is left behind, offering enterprise-wide AI education from frontline staff to leaders.
Amazon: Federated approach for small, powerful wins
Amazon, on the other hand, does not preach big-bang AI. It relentlessly pursues small, embedded wins, closely approximating OpenNova’s approach to building an AI CoE.
Through AWS, Amazon has helped dozens of companies establish AI CoEs.
Still, their AI CoE model differs from the typical approach.
Instead of parachuting in an army of consultants or launching a massive new unit, they start with just two to four embedded experts inside a business unit. Engineers, data scientists, and MLOps leaders roll up their sleeves and work alongside plant managers, product leads, or supply chain teams, wherever the action is.
What we like about their playbook is their goal of building internal muscle so companies can scale AI independently.
Lean by design, it works:
- At Georgia-Pacific, the AI CoE helped reduce defects at scale, improving detection rates 24× and saving millions in lost production.
- At Baxter, AI-driven monitoring helped prevent over 500 hours of factory downtime.
Trust builds from every win, adding momentum to the AI CoE, with a crucial caveat. They don’t aim to own the AI. They aim to leave behind capability. They arm you with modular toolkits, best-practice templates, and coaching for frontline teams. Once momentum builds, the original AWS team steps back — and the customer’s AI experts take over.
It’s not centralized. It’s not even hybrid. It’s federated from the start, with a blueprint that assumes the future belongs to those who can embed AI close to the work.
For companies just starting out, Amazon’s AI CoE playbook says: don’t build the control tower first. Instead, create the runway and let people take off from where they already stand.
Conclusion
When it comes to scaling AI, most companies look to the giants. NVIDIA tells you to build the perfect platform. Verizon shows you how to structure and staff it. AWS embeds talent at the edge to generate traction. But OpenNova sees something they often overlook: momentum.
In many instances, AI doesn’t stall due to a lack of GPUs, org charts, or even embedded experts. It stalls in the grey zone between strategy and execution, when teams don’t have the right roles in place, the systems aren’t quite ready, and everything moves just a little too slowly to stick.
That’s the rarefied atmosphere where OpenNova operates, and one we are deeply committed to.
Where NVIDIA begins with architecture, OpenNova plugs into what’s already there — whether you’re on cloud, hybrid, or legacy infrastructure — and focuses on speed-to-impact, not just speed-to-deploy.
Where Verizon builds structure top-down, OpenNova takes a more agile route, inserting elite AI talent on demand to support the real needs of product lines, customer ops, or security teams.
While AWS builds trust by starting small, OpenNova often enters when time is short and the stakes are high, when companies can’t afford to experiment slowly. Our goal isn’t to replace any of them, but to stitch the gaps between them, creating motion between vision and results.
When done correctly, teams get faster traction, a sharper focus, and less waste.
Sometimes, what companies need most isn’t just an AI Center of Excellence. They need a catalyst.