I’ve been in the tech industry for over three decades, and if there’s one thing I’ve learned, it’s that the tech world loves a good mystery. And right now, we’ve got a fascinating one on our hands.
This year, two of the most respected surveys in our field asked developers a simple question: Do you trust the output from AI tools? The results couldn’t be more different!
- The 2025 DORA report, a study of nearly 5,000 tech professionals whose sample historically skews enterprise, found that a full 70% of respondents express some degree of confidence in the quality of AI-generated output.
- Meanwhile, the 2025 Stack Overflow Developer Survey, with its own massive developer audience, found that only 33% of developers are “Somewhat” or “Highly” trusting of AI tools.
That’s a 37-point gap.
Think about that for a second. We’re talking about two surveys conducted in the same year, covering the same profession and examining largely the same underlying AI models from providers like OpenAI, Anthropic and Google. How can two developer surveys report such fundamentally different realities?
DORA: AI is an amplifier
The mystery of the 37-point trust gap isn’t about the AI. It’s about the operational environment surrounding the AI (more on that in the next section). As the DORA report notes in its executive summary, the main takeaway is that AI is an amplifier. Put bluntly, “the central question for technology leaders is no longer if they should adopt AI, but how to realize its value.”
DORA didn’t just measure AI adoption. They measured the organizational capabilities that determine whether AI helps or destroys your team’s velocity. And they found seven specific capabilities that go a long way toward explaining the distance between DORA’s 70% confidence figure and Stack Overflow’s 33%.
Let me walk you through them, because this is where we’ll get practical.
The 7 pillars of a high-trust AI environment
So, what does a good foundation look like? The DORA research team didn’t just identify the problem; they gave us a blueprint. They identified seven foundational “capabilities” that turn AI from a novelty into a force multiplier. When I read this list, I just nodded my head. It’s the stuff great engineering organizations have been working on for years.
Here are the keys to the kingdom, straight from the DORA AI Capabilities Model:
- A clear and communicated AI stance: Do your developers know the rules of the road? Or are they driving blind, worried they’ll get in trouble for using a tool or, worse, feeding it confidential data? When the rules are clear, friction goes down and effectiveness skyrockets.
- Healthy data ecosystems: AI is only as good as the data it learns from. Organizations that treat their data as a strategic asset—investing in its quality, accessibility and unification—see a massive amplification of AI’s benefits on organizational performance.
- AI-accessible internal data: Generic AI is useful. AI that understands your codebase, your documentation and your internal APIs is a game-changer. Connecting AI to your internal context is the difference between a helpful co-pilot and a true navigator (the first sketch after this list shows the idea).
- Strong version control practices: In an age of AI-accelerated code generation, your version control system is your most critical safety net. Teams that are masters of commits and rollbacks can experiment with confidence, knowing they can easily recover if something goes wrong. This is what enables speed without sacrificing sanity (see the second sketch below).
- Working in small batches: AI can generate a lot of code, fast. But bigger changes are harder to review and riskier to deploy. Disciplined teams that work in small, manageable chunks see better product performance and less friction, even if it feels like they’re pumping the brakes on individual code output.
- A user-centric focus: This one is a showstopper. The DORA report found that without a clear focus on the user, AI adoption can actually harm team performance. Why? Because you’re just getting faster at building the wrong thing. When teams are aligned on creating user value, AI becomes a powerful tool for achieving that shared goal.
- Quality internal platforms: A great platform is the paved road that lets developers drive the AI racecar. A bad one is a dirt track full of potholes. The data is unequivocal: a high-quality platform is the essential foundation for unlocking AI’s value at an organizational level.
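To make the internal-data capability concrete, here’s a minimal sketch of what “AI-accessible internal data” can look like in practice. Everything in it is illustrative: `search_internal_docs` is a hypothetical stand-in for whatever actually powers your internal search (a vector database, a wiki API, a code-search index), and the canned documents are invented. The point is simply that the model answers from your context rather than from the open internet.

```python
# Minimal retrieval-augmented prompting sketch over internal data.
# `search_internal_docs` is a hypothetical stand-in for your real
# internal search backend (vector DB, wiki API, code-search index).

def search_internal_docs(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical internal search; replace with your real backend."""
    corpus = [
        "PaymentService exposes POST /v2/charges; amounts are in minor units.",
        "All new endpoints must emit audit events via events.publish().",
        "Deprecated: /v1/charges will be removed next quarter.",
    ]
    # Naive keyword overlap, for illustration only.
    words = query.lower().split()
    hits = [doc for doc in corpus if any(w in doc.lower() for w in words)]
    return hits[:top_k] or corpus[:top_k]


def build_messages(question: str) -> list[dict]:
    """Assemble a chat-style payload that grounds the model in internal docs."""
    context = "\n".join(f"- {doc}" for doc in search_internal_docs(question))
    return [
        {"role": "system",
         "content": "Answer using ONLY the internal context below.\n"
                    f"Internal context:\n{context}"},
        {"role": "user", "content": question},
    ]


if __name__ == "__main__":
    # Send this payload to whichever model API your organization has
    # approved; the shape mirrors the common chat-completions format.
    for msg in build_messages("How do I create a charge in PaymentService?"):
        print(f"[{msg['role']}]\n{msg['content']}\n")
```

Swap the stub for a real index and route the payload through your approved provider, and the co-pilot starts answering in terms of your APIs instead of generic ones.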
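And here’s an equally hedged sketch of the version-control safety net: park AI-generated work on its own branch, commit in small reviewable checkpoints, and keep a one-step path back to a clean mainline. The branch names and commands are just one common git workflow, not a prescription.

```python
# Sketch of "version control as a safety net" for AI-assisted changes.
# Assumes you're inside a git repository; all names are illustrative.

import subprocess


def git(*args: str) -> None:
    """Run a git command, failing loudly if it errors."""
    subprocess.run(["git", *args], check=True)


def start_ai_experiment(name: str) -> None:
    # Park AI-generated work on its own branch so mainline stays clean.
    git("switch", "-c", f"ai-experiment/{name}")


def checkpoint(message: str) -> None:
    # Small batches: each AI-assisted change gets its own reviewable,
    # individually revertable commit.
    git("add", "-A")
    git("commit", "-m", message)


def bail_out(main_branch: str = "main") -> None:
    # The one-step recovery that makes fast experimentation safe.
    git("switch", main_branch)


# Example flow (run inside a repo with pending AI-generated edits):
#   start_ai_experiment("refactor-billing")
#   checkpoint("AI-assisted: extract charge validation helper")
#   ...review the diff; merge if it's good, or bail_out() and walk away.
```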
What this means for you
This isn’t just an academic exercise. The 37-point DORA-Stack Overflow gap has real implications for how we work.
- For developers: If you’re frustrated with AI, don’t just blame the tool. Look at the system around you. Are you being set up for success? This isn’t about your prompt engineering skills; it’s about whether you have the organizational support to use these tools effectively.
- For engineering leaders: Your job isn’t just to buy AI licenses. It’s to build the ecosystem where those licenses create value. That DORA list of seven capabilities? That’s your new checklist. Your biggest ROI isn’t in the next AI model; it’s in fixing your internal platform, clarifying your data strategy and socializing your AI policy.
- For CIOs: The DORA report states it plainly: successful AI adoption is a systems problem, not a tools problem. Pouring money into AI without investing in the foundational capabilities that amplify its benefits is a recipe for disappointment.
So, the next time you hear a debate about whether AI is “good” or “bad” for developers, remember the gap between these two surveys. The answer is both, and the difference has very little to do with the AI itself.
AI without a modern engineering culture and solid infrastructure is just expensive frustration. But AI with that foundation? That’s the future.
This article is published as part of the Foundry Expert Contributor Network.