For the third consecutive year, the Thinkers360 AI Trust Index has taken the pulse of sentiment toward AI, and the results, once again, are a stark reminder to CIOs and CXOs that the technological innovation curve continues to outpace the ethical and governance structure required to support it.
The 2025 Index provides a crucial look into the AI paradox. The overall AI Trust Index score, which measures concern on a scale of 100 (not concerned) to 400 (extremely concerned), is 307. This is virtually unchanged from the 2024 score of 308, indicating a stagnation in sentiment following the massive leap in concern from 224 in 2023. We’re in a trust rut.
Further analysis reveals a critical chasm where AI end users register a higher level of concern (312) than AI providers and practitioners (301). The builders are more optimistic than the beneficiaries. This perception gap is the first red flag for any CIO. While 83% of providers agree the benefits of AI outweigh the risks, a far lower 65% of end users share that view. The disparity is a crisis of confidence that organizations must address directly.
Also, 61% of respondents somewhat or strongly believe in the possibility of an AI singularity, where machines surpass human intelligence and pose a threat. While this is fodder for science fiction, the more immediate and tangible threats to business — privacy, accountability and fairness — are what demand attention today.
Based on the 2025 AI Trust Index, here are four mandates for every CIO and CXO to move their organization from passive observer to active leader in the architecture of AI trust.
1. Prioritize NIST trust attributes where concern is highest
The data is clear on what keeps people up at night. When measuring concern against the NIST AI Risk Management Framework attributes, a trifecta of issues stands out: privacy enhancement (63%), accountability and transparency (61%), and fairness with harmful bias managed (59%). Each scores high on the very or extremely concerned scale.
In contrast, attributes such as explainability and interpretability (49%) and valid and reliable (53%) draw less concern. This suggests people generally believe the technology works as intended; their concern is with how it behaves.
For the CIO, this means shifting the focus from purely functional metrics to ethical outcomes. A few percentage points of accuracy improvement won’t move the needle on trust. On the privacy attribute, concern is profound, especially among end users (69%) compared to providers (53%). Closing this gap requires that you articulate to end users how their privacy is protected, not just in general terms but specifically when AI technologies are involved.
2. Target the trust deficit in public-facing scenarios
Trust is not uniformly distributed. The Index reveals that concerns are highest for AI use in media scenarios (339) and personal scenarios (309). Conversely, concern is lowest, and thus trust is highest, in government scenarios (291) and workplace scenarios (289).
This presents an irony: employees are generally comfortable with AI supporting internal corporate operations, yet they’re deeply concerned about AI governing their public lives, information access, and civil services.
As a CIO, you must recognize that low trust in public AI eventually seeps into the enterprise. If your customers or employees see AI being used unethically in media scenarios through misinformation and bias, or in personal scenarios such as cybercrime, that skepticism will bleed into how they view your enterprise-grade CRM or HR systems.
The recommendation is to build on the existing trust in the workplace. Use the enterprise as a model for responsible deployment. Document and communicate your internal AI usage policies with exceptional clarity, and let this transparency be your market differentiator. Show your customers and partners the standards you hold your internal AI to, and then extend those standards to your external products.
3. Implement industry-specific governance and transparency
Trust varies considerably by industry, a factor CIOs must bake into their risk models.
For CIOs in highly regulated industries such as finance and healthcare, the mandate is to not just maintain but elevate the current level of rigor. The existing regulatory compliance is the baseline, not the ceiling, and the market will punish the first major breach or bias incident, undoing years of consumer confidence.
4. Close the perception gap through experiential trust
The most salient finding in the 2025 Index is the persistent 11-point divide in overall concern between providers and end users, and the 18-point gap in optimism regarding benefits outweighing risks. This is a human-centric communication problem, not a technical one.
We must stop telling end users AI is trustworthy and start showing them through tangible experience. Trust is a feature that must be designed from the start, not something patched in later.
The first step is to involve the customer. Implement co-design programs where end users and customers, not just product managers, are involved in the design and testing phases of new AI applications. If your customer base is concerned about bias, invite them to help you source and annotate training data to ensure fairness.
While I generally don’t recommend new CXO titles, you may also want to consider establishing a chief AI ethics officer (CAIEO) role, or finding a suitable internal candidate to take it on. The CIO needs an equal partner focused purely on the social and ethical consequences of AI. This role should report directly to the CXO suite, ensuring ethical decision-making carries the same weight as security or infrastructure mandates.
The mandate for responsible innovation
This year’s AI Trust Index confirms the AI revolution has piqued the concern of its beneficiaries, and that concern is focused squarely on the human dimensions of technology, such as governance, ethics, and fairness.
For the CIO, the mission is unambiguous. You’re no longer just the custodian of the organization’s technology stack but the chief architect of its digital trust. By addressing high concerns around privacy and bias, using the workplace as a model for transparency, adjusting governance to your industry’s trust profile, and actively closing the user-provider perception gap, you can ensure your organization innovates responsibly.