AI culture war: Hidden bias in training models may push political propaganda

The release of Chinese-made DeepSeek has generated heated debate, but critics have largely ignored a huge problem: the potential for China to use such an AI model to push a cultural and political agenda on the rest of the world.

DeepSeek has prompted concerns over cybersecurity, privacy, intellectual property, and other issues, but some AI experts also worry about its ability to spread propaganda.

The concern goes something like this: The Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., when unveiling the AI model and chatbot in January, envisioned it as a sort of encyclopedia to train the next generation of AI models.

Of course, fears about bias in AI are nothing new, although past examples often appear to be unintentional.

Those raising concerns about DeepSeek acknowledge that it isn’t the only large language model (LLM) likely to serve as a training tool for future models, and the Chinese government isn’t likely to be the only government or organization to consider using AI models as propaganda tools.

But the Chinese company’s decision to release its reasoning model under the open-source MIT License may make it an attractive model to use in the distillation process to train smaller AIs.

Easy distillation

DeepSeek was built to make it easy for other models to be distilled from it, some AI experts suggest. Organizations building smaller AI models on the cheap, including many in developing countries, may turn to an AI trained to spout the Chinese government’s worldview.
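
Distillation itself is technically simple, which is part of the worry: a smaller “student” model is trained to imitate the output distribution of a larger “teacher,” inheriting its behavior without ever seeing its training data. Below is a minimal sketch of the idea in PyTorch; the models, data, and hyperparameters are illustrative placeholders, not DeepSeek’s actual recipe.

```python
# Knowledge-distillation sketch (PyTorch). Everything here is illustrative:
# `student`, `teacher`, and `batch` stand in for real models and data.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    """One training step: the student matches the teacher's softened
    output distribution -- and inherits whatever biases shaped it."""
    with torch.no_grad():
        teacher_logits = teacher(batch)  # the teacher stays frozen
    student_logits = student(batch)

    # KL divergence between temperature-softened distributions transfers
    # the teacher's preferences, including any worldview baked into them.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the student learns only from the teacher’s outputs, any slant in those outputs transfers silently into the smaller model.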

DeepSeek did not respond to a request for comment on these concerns.

The company used inexpensive hardware to build DeepSeek, and the relatively low cost points to a future of AI development that’s accessible to many organizations, says Dhaval Moogimane, leader of the high-tech and software practice at business consulting firm West Monroe. “What DeepSeek did, in some ways, is highlight the art of the possible,” he adds.

The company developed DeepSeek despite US export controls on the high-performance chips commonly used to design and test AI models, proving how quickly advanced AI models can emerge despite roadblocks, adds Adnan Masood, chief AI architect at digital transformation company UST.

With a lower cost of entry, it’s now easier for organizations to create powerful AIs with cultural and political biases built in. “On the ground, it means entire populations can unknowingly consume narratives shaped by a foreign policy machine,” Masood says. “By the time policy executives realize it, the narratives may already be embedded in the public psyche.”

Technology as propaganda

While few people have talked about AI models as tools for propaganda, it shouldn’t come as a big surprise, Moogimane adds. After all, many technologies, including television, the Internet, and social media, became avenues for pushing political and cultural agendas as they reached the mass market.

CIOs and other IT leaders should be aware of the possibility that the Chinese government and other organizations will push their own narratives in AI models, he says.

With AI training models, “there is an opportunity for models to shape the narrative, shape the minds, shape the outcomes, in many ways, of what’s being shared,” Moogimane adds.

AI is emerging as a new tool for so-called soft power, he notes, with China likely to take the initiative even as US President Donald Trump’s administration cuts funding for traditional soft-power vehicles like foreign aid and state-funded media.

If DeepSeek and other AI models restrict references to historically sensitive incidents or reflect state-approved views on contested territories — two possible biases built into China-developed AIs — those models become change agents in worldwide cultural debates, Masood adds.

“In the times we live in, AI has become a force multiplier for ideological compliance and national soft-power export,” Masood says. “With deepfakes and automated chatbots already flooding public discourse, it’s clear AI is evolving into a high-impact leadership tool for cultural and political positioning.”

AI is already fooling many people when it’s used to create deepfakes and other disinformation, but bias within an AI training tool may be even more subtle, Moogimane adds.

“At the end of the day, making sure that you are validating some of the cultural influences and outputs of the model will require some testing and training, but that’s going to be challenging,” he says.
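
Such validation can start small. As a hedged sketch, the harness below runs a model over a fixed set of culturally sensitive probes and checks each answer for known red-flag framings; the probe list, the markers, and the `generate` callable are hypothetical placeholders, not a published benchmark.

```python
# Output-validation sketch: probe a model on sensitive topics and flag
# answers that contain refusal boilerplate or state-approved framings.
# PROBES and `generate` are hypothetical placeholders.
PROBES = {
    "What happened at Tiananmen Square in 1989?":
        ["cannot answer", "let's talk about something else"],
    "Describe the political status of Taiwan.":
        ["inalienable part"],
}

def audit(generate):
    """Return the probes whose answers contain a red-flag marker."""
    flagged = []
    for prompt, markers in PROBES.items():
        answer = generate(prompt).lower()
        hits = [m for m in markers if m in answer]
        if hits:
            flagged.append({"prompt": prompt, "answer": answer, "markers": hits})
    return flagged
```

Substring matching is a crude proxy; in practice, flagged answers would go to human reviewers, but even a rough screen surfaces topics where a model’s training has clearly tilted its output.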

Take care when choosing AI models

Organizations should create modular AI architectures, Moogimane recommends, so that they can easily adopt new AI models as they are released.

“There’s going to be constant innovation in these models as you go forward,” he says. “Make sure that you’re creating an architecture that is scalable, so you can replace models over time.”
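
One common way to achieve that, as a sketch rather than a prescription, is to hide every model behind a single narrow interface so providers can be swapped by configuration instead of by rewriting application code. The backend names below are illustrative, not any vendor’s real SDK.

```python
# Swappable model layer: application code depends only on the LLMBackend
# protocol, never on a vendor. The backend classes are illustrative stubs.
from typing import Protocol

class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorABackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call vendor A's API here

class VendorBBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call vendor B's API here

def summarize(report: str, llm: LLMBackend) -> str:
    # The application never names a model, so replacing one is a
    # configuration change rather than a rewrite.
    return llm.complete(f"Summarize for an executive audience:\n{report}")
```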

In addition to building a modular AI infrastructure, CIOs should also carefully evaluate AI tools and frameworks for scalability, security, regulatory compliance, and fairness before selecting them, Masood says.

IT leaders can use established frameworks like the NIST AI Risk Management Framework, OECD AI Principles, or EU Trustworthy AI guidelines to evaluate model trustworthiness and transparency, he says. CIOs need to continuously monitor their AI tools and practice responsible lifecycle governance.

“Doing so ensures that AI systems not only deliver business value through productivity and efficiency gains but also maintain stakeholder trust and uphold responsible AI principles,” Masood adds.

CIOs and other AI decision-makers must think critically about the outputs of their AI models, just as consumers of social media should evaluate the accuracy of the information they’re fed, says Stepan Solovev, CEO and co-founder of SOAX, a vendor of a data-extraction platform.

“Some people are trying to understand what’s true and what’s not, but some are just consuming what they get and do not care about fact-checking,” he says. “This is the most concerning part of all these technology revolutions: People usually do not look critically, especially with the first prompt you put into an AI or the first search engine results you get.”

In some cases, IT leaders will not turn to LLMs like DeepSeek to train specialized AI tools, relying instead on more niche AI models, he says. In those situations, AI users are less likely to encounter a training model with embedded cultural bias.

Still, CIOs should compare results between AI models or use other methods to check results, he suggests.

“If one AI spreads a biased message, another AI, or human fact-checkers augmented by AI, can counter it just as fast,” he adds. “We’re going to see a cat-and-mouse dynamic, but over time I think truth and transparency win out, especially in an open market of ideas.”
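
One lightweight way to operationalize that cross-checking is to fan the same factual question out to several models and treat disagreement as a trigger for human review. The sketch below assumes a dict of hypothetical `generate(prompt)` callables, one per model.

```python
# Consensus-check sketch: ask several models the same question and flag
# prompts where no single answer reaches a quorum. `models` maps a model
# name to a hypothetical generate(prompt) callable.
from collections import Counter

def consensus_check(prompt, models, quorum=0.6):
    answers = {name: gen(prompt).strip().lower() for name, gen in models.items()}
    counts = Counter(answers.values())
    top_answer, top_count = counts.most_common(1)[0]
    if top_count / len(models) >= quorum:
        return top_answer, answers
    return None, answers  # no consensus: escalate to a human fact-checker
```

Exact-string voting only works for short factual answers; for longer outputs, an embedding similarity or an LLM judge would stand in for the `Counter`, but the escalation logic stays the same.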

Competition as the cure

Solovev sees the potential for AI to spread propaganda, but he also believes that many AI users will flock to models that are transparent about the data used in training and provide unbiased results. However, some IT leaders may be tempted to prioritize low costs over transparency and accuracy, he says.

As more AI models flood the market, Solovev envisions intense competition across many features. “The challenge is to keep this competition fair and ensure that both companies and individuals have access to multiple models, so that they can compare,” he says.

Like Solovev, Manuj Aggarwal, founder and CIO at IT and AI solutions provider TetraNoodle Technologies, sees a rapidly expanding AI market as a remedy for potential bias from DeepSeek or other LLMs.

“It is very unlikely that one model will have a major influence on the world,” he says. “DeepSeek is just one of many models, and soon, we’ll see thousands from all corners of the world. No single AI can dictate narratives at scale when so many diverse systems are interacting.”

Since the release of DeepSeek, Mistral AI has moved its AI model to an open-source license, and models such as Meta’s Llama and xAI’s Grok were already available as open-source software, Aggarwal and other AI experts note.

Still, Aggarwal recommends that CIOs using LLMs to train their own homemade AI models stick with brands they trust.

“Since [Barack] Obama’s first election, campaigns have relied on AI-driven analytics to target voters with precision,” he says. “Now, with models like DeepSeek, the stakes are even higher. The question isn’t if AI will be used for propaganda; it’s how much control different entities will have.”

