How a French startup is quietly reshaping who gets to build artificial intelligence, especially inside the next wave of enterprise AI Centers of Excellence.
While tech giants lay off thousands of engineers and push enterprises toward expensive AI subscriptions, a quiet shake-up is unfolding in the open-source world. At its center sits Hugging Face, a company doing something radical: making AI accessible to everyone, not just the chosen few with computer science degrees and million-dollar budgets.
If you feel a long way from AI maturity, you’re not alone. Despite massive investments, just 1% of companies say they’ve reached it, according to recent research. But the problem isn’t technical—it’s organizational. Employees are eager to learn and try new AI tools, but often lack support from leadership. That’s where platforms like Hugging Face help: they soften the technical barriers that make AI feel out of reach, even for teams without a full stack of engineers.
The startup, valued at $4.5 billion, has become the GitHub of AI: a place where millions of developers share, modify, and deploy machine learning models.
But unlike the winner-take-all battles between OpenAI, Google, and Microsoft, Hugging Face is playing a different game entirely.
They’re democratizing AI by making it as easy to use as a spreadsheet, at the very moment software engineers are cautiously retooling themselves as context engineers for the age of cloud and local LLMs.
“We’re enabling companies to take control of their AI destiny,” says Clem Delangue, Hugging Face’s co-founder and CEO.
It’s a bold claim, but the numbers suggest he might be right. The platform now hosts over one million AI models, with a new one uploaded every ten seconds.
More importantly, these aren’t just toys for hobbyists.
Major corporations, including Bloomberg, Pfizer and Intel, are building production systems on Hugging Face’s foundation. Even IBM has come to the table, integrating Hugging Face into its watsonx platform to help enterprises create, deploy and fine-tune foundation models across multiple domains.
Pushing talent to their limits…then letting them go
The timing couldn’t be more crucial. Microsoft recently laid off 9,000 employees while simultaneously partnering with Hugging Face, and that contrast leaves enterprises with a choice: become dependent on Big Tech’s APIs or build internal AI capabilities with focused teams.
The traditional approach, hiring armies of machine learning engineers, is arguably becoming prohibitively expensive and strategically risky. We are now seeing industry insiders take a new direction under the label “analyst-to-developer transformation.”
But it may be more accurate to think of it as the rise of the AI builder, which reflects a broader shift in how companies approach problem-solving. This is less about getting rid of code and more about giving people who understand the business smarter tools, so they don’t have to wait on engineers before moving ahead with more advanced AI projects.
Companies like Snowflake are now offering tools that enable business analysts to build AI applications using familiar languages, such as SQL, powered by models from Hugging Face.
A marketing manager can now deploy a sentiment analysis model to understand customer feedback without knowing a line of Python. Audio files from customer service calls get transcribed using Hugging Face’s Whisper model, analyzed for emotional tone using wav2vec2, then scored for sentiment using Snowflake’s Cortex AI, all within a single SQL workflow.
“This unified approach transforms what would traditionally require data science expertise and weeks of development into straightforward queries that business analysts can build and modify in minutes,” Snowflake explained in a recent announcement.
They’re calling it “turning analysts into AI superheroes.”
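For the curious, here’s roughly what that workflow does under the hood, sketched in plain Python with Hugging Face’s transformers pipelines. The model IDs are illustrative picks from the Hub, not necessarily the ones Snowflake wires in:

```python
# A minimal sketch of the call-analysis workflow, assuming the
# transformers library and illustrative model choices from the Hub.
from transformers import pipeline

# Step 1: transcribe the recording with Whisper.
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Step 2: classify emotional tone from the raw audio with wav2vec2.
emotion = pipeline("audio-classification", model="superb/wav2vec2-base-superb-er")

# Step 3: score the transcript with a text sentiment classifier.
sentiment = pipeline("sentiment-analysis")

def analyze_call(path: str) -> dict:
    """Run one customer-service recording through all three models."""
    text = transcriber(path)["text"]
    return {
        "transcript": text,
        "emotion": emotion(path)[0],      # top emotion label plus score
        "sentiment": sentiment(text)[0],  # POSITIVE/NEGATIVE plus score
    }

print(analyze_call("support_call_0042.wav"))  # hypothetical file
```

The appeal of the SQL version is that analysts never see any of this; Snowflake brokers the model calls behind familiar query syntax.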
The data from McKinsey’s first comprehensive survey of enterprise AI adoption tells a story that most executives aren’t prepared for. Among 703 technology leaders across 41 countries, 63% regularly use open-source AI models.
More striking: 51% of companies using open-source tools report positive ROI, compared to just 41% using proprietary solutions alone.
But buried in that research is a more unsettling finding.
When asked about the most significant barriers to AI adoption, 56% cited “security and compliance concerns” about open-source tools.
Yet the same companies are adopting them anyway. Why would enterprises knowingly choose what they perceive as riskier technology?
The platform that took everyone by surprise
What makes Hugging Face’s strategy particularly clever is how it solves problems for both sides of the AI equation. For model builders, from academic researchers to startup teams, the platform eliminates the need to build infrastructure from scratch.
Why spend months creating APIs, billing systems, and deployment pipelines when Hugging Face handles all of that for you?
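To make that concrete, here’s a minimal sketch of what skipping the infrastructure looks like in practice: a single call to a Hub-hosted model through the huggingface_hub client. The model ID is an illustrative choice, and a real deployment may also need an access token:

```python
# A minimal sketch: call a Hub-hosted model through Hugging Face's
# InferenceClient instead of building your own serving stack.
# The model ID is illustrative; pass token="hf_..." if required.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.2")

reply = client.text_generation(
    "Summarize this support ticket in one sentence: ...",
    max_new_tokens=100,
)
print(reply)
```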
As Mark Surman, president of the Mozilla Foundation, puts it: “The next big bet is building open tools and a stack that make AI truly accessible—like an AI Lego box that anyone can use. If we get this right, open-source AI won’t just be an alternative to closed systems. It will be the foundation for a more competitive, creative, and innovative future.”
If Royal Caribbean (featured in our last AI CoE article) were starting in 2025, they wouldn’t just be untangling COBOL. They might use task-specific models from Hugging Face to automate that process, or tools like AutoTrain to fine-tune models that extract business logic into reusable components. Deployment could happen through Snowflake, Ollama, or even edge systems onboard ships, so they would own their AI stack instead of renting it.
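Purely as a thought experiment, that COBOL step might look something like this: an instruction-tuned open model from the Hub asked to restate a legacy rule in plain English. The model choice, prompt, and snippet are all hypothetical:

```python
# Hypothetical sketch of COBOL business-logic extraction with an
# open instruction-tuned model; not a production modernization tool.
from transformers import pipeline

extractor = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

cobol_snippet = """
IF CUST-BALANCE > CREDIT-LIMIT
    MOVE 'Y' TO HOLD-FLAG
END-IF.
"""

prompt = (
    "Restate the business rule in this COBOL code as one plain-English sentence:\n"
    + cobol_snippet
)
print(extractor(prompt, max_new_tokens=60)[0]["generated_text"])
```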
For enterprises, it offers something even more valuable: choice without complexity. Instead of being locked into OpenAI’s ecosystem or Google’s tools, companies can experiment with thousands of models, swap them as needed, and maintain control over their data.
The most fascinating dynamic is Microsoft’s relationship with Hugging Face. On the surface, it seems contradictory: why would Microsoft, which has invested billions in OpenAI, also promote open-source alternatives through Hugging Face?
The answer reveals Microsoft’s deeper strategic thinking. By integrating Hugging Face into Azure, Microsoft wins regardless of which path enterprises choose. If companies go with proprietary models like GPT-4, Microsoft wins.
If they decide to use open-source models from Hugging Face, Microsoft still wins because they’re running on Azure infrastructure.
“We’re enabling customers to innovate faster and more securely with the best models the community has to offer,” said Asha Sharma, Corporate Vice President at Microsoft, when announcing expanded collaboration with Hugging Face.
It’s platform capitalism disguised as openness—and it’s working brilliantly.
What this means for the rest of us
The question of who controls access is of enormous importance. Hugging Face’s approach suggests a future where AI capabilities are distributed rather than concentrated, allowing teams to compete with tech giants and innovation to emerge from unexpected places.
But embracing open-source tools is only half the battle. As OpenNova explored in The Fastest Way to Kill an AI Center of Excellence, the real breakdown often happens at the human level. Most AI initiatives don’t fail because of bad models or weak infrastructure. They fail because teams are siloed, tools go unused, and talent is disconnected from outcomes.
Hugging Face is changing that equation by making AI easier to share, deploy, and adapt. It helps companies move faster, not just in building models but in building momentum.
The company’s recent policy recommendations to the U.S. government advocate for open-source AI as a matter of national competitiveness, arguing that closed systems stifle innovation and concentrate power in too few hands.
From an OpenNova perspective, your choice is becoming clearer. You can either rent AI capabilities on a month-by-month basis from Big Tech or build sovereign capabilities using open tools.
That includes connectors built on standards such as the Model Context Protocol (MCP), which act as secure bridges that let AI models safely retrieve information from your company’s existing systems, including SharePoint documents, Oracle databases, and even HIPAA-protected medical records.
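A minimal MCP connector is small enough to sketch here, using the official Python SDK. The search tool below is a hypothetical stand-in for a real SharePoint or Oracle lookup:

```python
# A minimal sketch of an MCP server using the official Python SDK
# (pip install "mcp[cli]"). The search tool is hypothetical; a real
# connector would query SharePoint, Oracle, etc. behind your firewall.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

@mcp.tool()
def search_documents(query: str) -> str:
    """Search the company document store and return matching snippets."""
    # Hypothetical stand-in for a real, access-controlled lookup.
    return f"Top results for {query!r}: ..."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so a local model can call it
```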
It means standing up local inference systems on air-gapped machines or GPU clusters, not just using models in the cloud.
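In its simplest form, that means loading pre-staged weights from local disk with network access disabled, as in this sketch (the model path is an assumption):

```python
# A minimal sketch of air-gapped inference: weights are copied to the
# machine beforehand, and nothing is fetched over the network.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # hard-fail on any attempted Hub access

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/llama-3-8b-instruct"  # hypothetical pre-staged path

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

inputs = tokenizer("Classify this invoice as approved or flagged:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```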
And it means talent that can do all of this: context engineers, infra-savvy developers, and people who don’t just prompt a model but build the wrappers, guardrails, and pipelines that make it trustworthy and compliant in enterprise settings.
Instead of calling an API from OpenAI or Google, organizations can fine-tune their own small, efficient models in-house, trained on their own data, tailored to their workflows, and deployed wherever needed (cloud, edge, or even on-device).
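One common recipe for that kind of in-house fine-tuning is LoRA adapters via the peft library, which keeps the trainable footprint small enough for a single GPU. A sketch, with the base model and hyperparameters as illustrative choices:

```python
# A minimal LoRA fine-tuning setup with peft; model ID and
# hyperparameters are illustrative, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "microsoft/phi-2"  # a small open model; swap in your own choice
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Train only low-rank adapter matrices instead of every weight.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of parameters

# From here: wrap proprietary data in a Dataset and hand both to
# transformers' Trainer (or trl's SFTTrainer) to run the training loop.
```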
In this world, AI becomes infrastructure, not service, and companies regain control over cost, privacy and performance.
If you’re interested in fusing talent with open source to build real, working AI, not just prototypes, join Ryan, OpenNova’s CEO, at this year’s AI Conference in San Francisco.