Skip to content
Tiatra, LLCTiatra, LLC
Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact

America’s AI Action Plan and the risk of sprinting ahead without trust

America’s AI Action Plan states in no uncertain terms that to maintain America’s dominance in AI, we must “remove red tape and onerous regulation.” Seeking to differentiate itself from its predecessor, the Trump administration has argued that restricting AI development with onerous regulation “would not only unfairly benefit incumbents… it would mean paralyzing one of the most promising technologies we have seen in generations,” which is why President Trump rescinded what it calls the Biden administration’s dangerous actions on his very first day in office.

But what happens to trust and security when the focus is on accelerating innovation without appropriate guardrails?  It is the age-old struggle between regulation and innovation, a constant balancing act where leaders must decide how to remove barriers without stifling the very safeguards that keep technology sustainable as it scales.

Having spent nearly 15 years advising clients ranging from startups to Fortune 50 companies on how to adapt new technologies to existing and new legal frameworks, I understand the frustration with bureaucratic obstacles that slow innovation. This is particularly true when the government passes laws that do not reflect technological or operational realities. Case in point, during Mark Zuckerberg’s 2018 Senate hearing, the senators’ questions laid bare a striking lack of familiarity with even the basic workings of the Internet, underscoring the disconnect between policymakers and the technologies they seek to regulate.

But the reality is that speed without safeguards rarely ends well. To truly sustain American dominance in AI, citizens must trust the technology. They must also feel secure allowing their data to be used for training, since data is the lifeblood of AI and the development of large language models cannot advance without it. In the absence of regulation that protects this trust, the foundations of US leadership in AI will begin to weaken.

The choice before America is not a stark one between so-called “innovation-killing regulation” and unchecked “freedom-first governance.” That is a false dichotomy. The real path forward is crafting thoughtful, well-designed regulations that provide the durable foundation on which innovation can scale.

Innovation versus regulation

Having worked with clients in heavily regulated industries like advertising, healthcare and defense, I can tell you that oversight isn’t inherently anti-innovation. Regulation done thoughtfully can accelerate adoption, because it builds confidence among users, employees and investors.

Without strong safeguards against threats like adversarial attacks, data misuse or intellectual property theft, large-scale adoption becomes difficult. No one wants to deploy an AI tool only to discover later that it leaked sensitive data, exposed proprietary IP or became a new attack surface for adversaries. Beyond the immediate operational and security fallout, there’s also the risk of lawsuits over data misuse, regulatory penalties or contractual breaches. For many organizations, the uncertainty of those risks is enough to slow or even halt adoption until stronger safeguards are in place. In fact, a recent Forrester report shows that data privacy and security concerns remain the biggest barrier to generative AI adoption. Building trustworthy AI requires attention to privacy, cybersecurity and AI governance.

AI isn’t just a race for speed; it’s a race for trust

AI isn’t just about faster chips, bigger models or who gets to market first. It’s about whether enterprises, governments and individuals feel confident enough to use it in the first place.  The hesitation around DeepSeek, a Chinese artificial intelligence system, illustrates this point, as many potential users and governments remain wary due to unresolved privacy and cybersecurity concerns that undermine trust in the system and threaten national security.

We don’t have to speculate about what happens when trust is ignored. The crypto industry offers a cautionary tale for revolutionary technologies. Without regulation tailored to the unique nature of blockchain, the space was plagued by cyberattacks, privacy failures, security breaches and widespread illicit use. Now, as regulators begin clarifying the legal landscape, such as by requiring regular public disclosures and compliance with anti-money laundering and export control laws, many in the industry argue that digital assets can finally gain legitimacy and move into the financial mainstream.

When trust collapses, adoption stalls, and regulation becomes reactive rather than strategic. By the time governments step in to restore confidence, the damage to innovation momentum can be severe and long-lasting.

Beyond finance, blockchain adoption in other sectors reveals the same pattern. A study about blockchain use cases in healthcare, for example, revealed that the promise of secure, patient-centric data management has run headfirst into barriers around privacy, security, scalability and cost. Blockchain adoption in healthcare has stalled as privacy and security gaps, high data volumes, lack of standardization and limited interoperability can make it costly, inefficient and often noncompliant with regulations such as the EU’s General Data Protection Regulation.

More than a decade on, blockchain’s potential remains real, but its trajectory shows how the absence of early safeguards and strategic regulation can delay legitimacy and adoption.

Privacy and security as strategic assets, not red tape

Taking a page from the experience with blockchain and crypto, where the lack of regulatory clarity delayed adoption, the United States now has an opportunity to shape an approach to AI in which privacy and cybersecurity are treated as strategic assets.

I recommend the following strategic steps:

  • Embedding cybersecurity and privacy from the start. Just as “privacy by design” became a foundational best practice for data protection, “AI governance by design”, as reflected in NIST AI Risk Management Framework, calls for embedding cybersecurity and privacy into the earliest stages of AI development rather than adding them later.
  • Treating red-teaming and adversarial testing as competitive advantages. In AI testing, “red teaming” refers to simulating the tactics of an attacker in order to probe for weaknesses, a practice borrowed from cybersecurity. It is critical because it helps ensure that an AI system functions as intended and does not expose vulnerabilities that could undermine its reliability, security or trustworthiness.
  • Incentivizing public–private collaboration. Public-private collaboration is essential to advancing sensible AI regulation because it brings together the complementary strengths of government and industry. Governments provide oversight, funding and access to public data, while companies contribute technical expertise, innovation and market solutions. By working together, these partnerships help close resource and knowledge gaps, establish shared ethical standards and ensure that AI is developed in a way that is both globally inclusive and locally accountable.
  • Building regulatory alignment across borders Harmonizing AI laws across borders is critical because fragmented regulations slow innovation, weaken safety and limit equitable access. A healthcare algorithm that meets EU data governance standards might still violate certain US state laws or face export restrictions in China, making global deployment difficult. Startups and smaller firms with limited resources to address complex regulatory regimes are hit hardest, while larger enterprises gain an advantage navigating the patchwork of rules.
  • Building a federal privacy framework. The absence of a unified federal privacy statute has left the United States with a patchwork of state and local rules governing AI and data protection. As new regulations emerge at the state level, and in some cases even at the municipal level, businesses face compliance challenges. For AI companies that rely heavily on data, this fragmented landscape creates inefficiencies, higher legal costs and operational uncertainty, underscoring the urgency of a single, nationwide standard.

Final thoughts: We need a balanced approach

The path forward requires recognizing that America’s AI Action Plan sets ambitious goals for technological sovereignty and market leadership, but achieving these goals demands more than deregulatory enthusiasm. It requires building infrastructure that enables widespread and sustainable AI adoption.

The organizations and countries that understand this dynamic will capture the largest share of AI’s economic benefits.

Reducing oversight doesn’t remove responsibility. In a world where AI models can make choices, produce content and influence public opinion, a balanced approach to governance is essential.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Read More from This Article: America’s AI Action Plan and the risk of sprinting ahead without trust
Source: News

Category: NewsAugust 26, 2025
Tags: art

Post navigation

PreviousPrevious post:The most important question in the C-Suite: Can we trust our data?NextNext post:Predictive vs. generative AI: Which one is right for your business?

Related posts

Snowflake offers help to users and builders of AI agents
April 21, 2026
Does IT have a value problem?
April 21, 2026
Increased AI expectations without guidance leads to employee burnout
April 21, 2026
Why the CIO is uniquely positioned to lead the digital workforce
April 21, 2026
Ciberseguridad en el sector farmacéutico: la experiencia de Faes Farma
April 21, 2026
The gap between SAP and its customers must not widen further
April 21, 2026
Recent Posts
  • Snowflake offers help to users and builders of AI agents
  • Does IT have a value problem?
  • Increased AI expectations without guidance leads to employee burnout
  • Why the CIO is uniquely positioned to lead the digital workforce
  • Ciberseguridad en el sector farmacéutico: la experiencia de Faes Farma
Recent Comments
    Archives
    • April 2026
    • March 2026
    • February 2026
    • January 2026
    • December 2025
    • November 2025
    • October 2025
    • September 2025
    • August 2025
    • July 2025
    • June 2025
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.