OpenAI and Anthropic are expanding their reach into professional services through joint ventures and acquisition talks, moving model providers closer to implementation roles traditionally held by systems integrators.
Joint ventures tied to the two AI companies have held talks to acquire services companies that help businesses deploy AI, with OpenAI’s venture in advanced stages on three deals, Reuters reported.
The companies are reportedly looking to add engineers and consultants as enterprise customers try to move generative AI from experiments into production.
Separately, Anthropic announced plans for a new enterprise AI services company backed by Blackstone, Hellman & Friedman, and Goldman Sachs, aimed at helping mid-sized businesses bring Claude into core operations.
Anthropic said its applied AI engineers will work with the new company’s engineering team to identify use cases, build custom systems, and support customers over time.
Market drivers for services expansion
For CIOs, the issue is whether AI vendors are starting to take over more of the work traditionally handled by consulting firms, systems integrators, and managed service providers. The developments suggest model providers want a stronger hand in enterprise AI implementation, even as systems integrators remain central to large-scale rollouts.
The push also reflects a problem many CIOs are already facing: AI pilots can be launched quickly, but turning them into secure production systems usually requires months of integration and process work.
“IT deployments in the enterprise domain have been consultations or advisory-driven,” said Faisal Kawoosa, founder and chief analyst at Techarc. “So if they have to expedite adoption, because that is where the real money is, they will have to align with the existing framework and go-to-market model of enterprises.”
Kawoosa said AI companies are currently at the top of the value chain and want to remain “in the driver’s seat” rather than become just another IT vendor.
Deepika Giri, head of research for AI, analytics, and data for Asia Pacific at IDC, said the shift could point to a broader restructuring of enterprise AI.
“AI model providers are moving beyond being platform vendors to actively shaping the entire AI value chain,” Giri said. “By expanding into implementation, consulting, and managed services, they are positioning themselves closer to enterprise outcomes rather than just supplying underlying technology.”
Kawoosa added that some IT services companies may be cautious about AI because the technology is still unreliable, and because wider adoption could weaken their role in enterprise IT projects. “With this change in go-to-market strategy, AI players are taking charge,” he said.
Lower deployment risk, but deeper lock-in
Buying AI services directly from model providers could make early deployments easier.
That approach could reduce deployment risk in the short term, because enterprises get tighter integration and access to specialized expertise, said Tulika Sheel, senior vice president at Kadence International.
But that convenience could come with a longer-term trade-off.
“It also creates deeper dependency across the stack, from models to data pipelines and workflows,” Sheel said. “Over time, this could increase lock-in, making it harder to switch vendors without significant disruption.”
AI model providers are trying to become a “one-stop shop” for enterprises by tying AI applications and services more closely to their usage-driven business models, according to Neil Shah, VP for research and partner at Counterpoint.
“Controlling the application and services layer allows them to lock in enterprises and also benefit from optimizing the model better by understanding the enterprise needs, pain points, and way of working firsthand,” Shah said.
Lock-in is not inevitable, according to Giri, but avoiding it requires CIOs to make deliberate architecture choices early.
“While the model layer can increasingly be abstracted through modular architectures, avoiding lock-in requires deliberate design choices,” Giri said. “Without that, organizations risk becoming dependent not just on a model, but on the entire stack: data pipelines, workflows, and governance frameworks tied to a specific provider.”
The move also shows why enterprise AI requires so much hands-on implementation work, according to Sheel. Generative AI platforms may be powerful, but they still require significant enterprise adaptation before they can support real business processes.
“Enterprise AI isn’t plug-and-play because it needs deep integration with internal data, workflows, and governance systems,” Sheel said. “This highlights a gap between model capability and real-world deployment.”
That might prompt CIOs to consider not only which AI model performs best, but also who will control the implementation path once those models are embedded in enterprise systems.