The release of Chinese-made DeepSeek has generated heated debate, but critics have largely ignored a huge problem: the potential for China to use such an AI model to push a cultural and political agenda on the rest of the world.
DeepSeek has prompted concerns over cybersecurity, privacy, intellectual property, and other issues, but some AI experts also worry about its ability to spread propaganda.
The concern goes something like this: The Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., when unveiling the AI model and chatbot in January, envisioned it as a sort of encyclopedia to train the next generation of AI models.
Of course, fears about bias in AI are nothing new, although past examples often appear to be unintentional.
Those raising concerns about DeepSeek acknowledge that it isn’t the only large language model (LLM) likely to serve as a training tool for future models, and the Chinese government isn’t likely to be the only government or organization to consider using AI models as propaganda tools.
But the Chinese company’s decision to release its reasoning model under the open-source MIT License may make it an attractive model to use in the distillation process to train smaller AIs.
Easy distillation
DeepSeek was built to make it easy for other models to be distilled from it, some AI experts suggest. Organizations building smaller AI models on the cheap, including many in developing countries, may turn to an AI trained to spout the Chinese government’s worldview.
Hangzhou did not respond to a request for comment on these concerns.
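To make the mechanics concrete, here is a minimal sketch of knowledge distillation in Python with PyTorch. The tensors, vocabulary size, and temperature are illustrative assumptions, not details of DeepSeek’s actual pipeline; the point is simply that a smaller “student” model is trained to match a larger “teacher” model’s output distribution, so any slant in the teacher’s answers is inherited along with its capabilities.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # The student is pushed toward the teacher's output distribution.
        # Whatever worldview the teacher encodes travels along with its skills.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Random tensors stand in for real vocabulary logits from the two models.
    teacher_logits = torch.randn(4, 32000)
    student_logits = torch.randn(4, 32000, requires_grad=True)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()  # gradients update the student, never the teacher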
The company used inexpensive hardware to build DeepSeek, and the relatively low cost points to a future of AI development that’s accessible to many organizations, says Dhaval Moogimane, leader of the high-tech and software practice at business consulting firm West Monroe. “What DeepSeek did, in some ways, is highlight the art of the possible,” he adds.
Hangzhou developed DeepSeek despite US export controls on the high-performance chips commonly used to design and test AI models, demonstrating how quickly advanced AI models can emerge even when roadblocks are put in their way, adds Adnan Masood, chief AI architect at digital transformation company UST.
With a lower cost of entry, it’s now easier for organizations to create powerful AIs with cultural and political biases built in. “On the ground, it means entire populations can unknowingly consume narratives shaped by a foreign policy machine,” Masood says. “By the time policy executives realize it, the narratives may already be embedded in the public psyche.”
Technology as propaganda
While few people have talked about AI models as tools for propaganda, it shouldn’t come as a big surprise, Moogimane adds. After all, many technologies, including television, the Internet, and social media, became avenues for pushing political and cultural agendas as they reached the mass market.
CIOs and other IT leaders should be aware of the possibility that the Chinese government and other organizations will push their own narratives in AI models, he says.
With AI training models, “there is an opportunity for models to shape the narrative, shape the minds, shape the outcomes, in many ways, of what’s being shared,” Moogimane adds.
AI is emerging as a new tool for so-called soft power, he notes, with China likely to take the initiative even as US President Donald Trump’s administration cuts funding for traditional soft-power vehicles like foreign aid and state-funded media.
If DeepSeek and other AI models restrict references to historically sensitive incidents or reflect state-approved views on contested territories — two possible biases built into China-developed AIs — those models become change agents in worldwide cultural debates, Masood adds.
“The times we live in, AI has become a force multiplier for ideological compliance and national soft-power export,” Masood says. “With deepfakes and automated chatbots already flooding public discourse, it’s clear AI is evolving into a high-impact leadership tool for cultural and political positioning.”
AI is already fooling many people when it’s used to create deepfakes and other disinformation, but bias within an AI training tool may be even more subtle, Moogimane adds.
“At the end of the day, making sure that you are validating some of the cultural influences and outputs of the model will require some testing and training, but that’s going to be challenging,” he says.
Take care when choosing AI models
Organizations should create modular AI architectures, Moogimane recommends, so that they can easily adopt new AI models as they are released.
“There’s going to be constant innovation in these models as you go forward,” he says. “Make sure that you’re creating an architecture that is scalable, so you can replace models over time.”
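What that modularity can look like in practice is sketched below, assuming a simple chat-completion use case; the interface and class names are hypothetical, not a prescribed architecture. Because application code depends only on a narrow interface, replacing the underlying model becomes a one-line change rather than a rewrite.

    from abc import ABC, abstractmethod

    class ChatModel(ABC):
        """Narrow interface the rest of the application depends on."""
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class OpenWeightsModel(ChatModel):
        # Stub standing in for a self-hosted open-weights model.
        def complete(self, prompt: str) -> str:
            return f"[open-weights model] answer to: {prompt}"

    class HostedVendorModel(ChatModel):
        # Stub standing in for a commercial hosted API.
        def complete(self, prompt: str) -> str:
            return f"[hosted vendor model] answer to: {prompt}"

    def answer(question: str, model: ChatModel) -> str:
        # Application code never names a specific vendor or model.
        return model.complete(question)

    print(answer("Summarize our Q3 risks.", OpenWeightsModel()))
    print(answer("Summarize our Q3 risks.", HostedVendorModel()))  # swapped with no other changes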
In addition to building a modular AI infrastructure, CIOs should also carefully evaluate AI tools and frameworks for scalability, security, regulatory compliance, and fairness before selecting them, Masood says.
IT leaders can use established frameworks like the NIST AI Risk Management Framework, OECD AI Principles, or EU Trustworthy AI guidelines to evaluate model trustworthiness and transparency, he says. CIOs need to continuously monitor their AI tools and practice responsible lifecycle governance.
“Doing so ensures that AI systems not only deliver business value through productivity and efficiency gains but also maintain stakeholder trust and uphold responsible AI principles,” Masood adds.
CIOs and other AI decision-makers must think critically about the outputs of their AI models, just as consumers of social media should evaluate the accuracy of the information they’re fed, says Stepan Solovev, CEO and co-founder of SOAX, a data-extraction platform vendor.
“Some people are trying to understand what’s true and what’s not, but some are just consuming what they get and do not care about fact-checking,” he says. “This is the most concerning part of all these technology revolutions: People usually do not look critically, especially with the first prompt you put into AI or first search engine results you get.”
In some cases, IT leaders will not turn to LLMs like DeepSeek to train specialized AI tools, relying instead on more niche AI models, he says. In those situations, AI users are less likely to encounter a training model embedded with cultural bias.
Still, CIOs should compare results between AI models or use other methods to check results, he suggests.
“If one AI spreads a biased message, another AI, or human fact-checkers augmented by AI, can counter it just as fast,” he adds. “We’re going to see a cat-and-mouse dynamic, but over time I think truth and transparency win out, especially in an open market of ideas.”
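One lightweight way to act on that advice is to send the same prompt to several models and flag divergent answers for human review. The sketch below uses stub functions in place of real models and a crude text-similarity check purely for illustration; a production setup would rely on stronger comparison methods and human fact-checkers.

    from difflib import SequenceMatcher

    # Stub callables standing in for two different AI models.
    def model_a(prompt: str) -> str:
        return "Answer from model A about " + prompt

    def model_b(prompt: str) -> str:
        return "A different take from model B on " + prompt

    def cross_check(prompt, models, threshold=0.6):
        # Collect one answer per model, then compare every pair of answers.
        answers = [model(prompt) for model in models]
        flagged = []
        for i in range(len(answers)):
            for j in range(i + 1, len(answers)):
                similarity = SequenceMatcher(None, answers[i], answers[j]).ratio()
                if similarity < threshold:
                    flagged.append((i, j, round(similarity, 2)))
        return answers, flagged  # divergent pairs go to human reviewers

    answers, flagged = cross_check("Describe the history of territory X.", [model_a, model_b])
    if flagged:
        print("Models disagree; route to a human fact-checker:", flagged)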
Competition as the cure
Solovev sees the potential for AI to spread propaganda, but he also believes that many AI users will flock to models that are transparent about the data used in training and provide unbiased results. However, some IT leaders may be tempted to prioritize low costs over transparency and accuracy, he says.
As more AI models flood the market, Solovev envisions intense competition across many features. “The challenge is to keep this competition fair and ensure that both companies and individuals have access to multiple models, so that they can compare,” he says.
Like Solovev, Manuj Aggarwal, founder and CIO at IT and AI solutions provider TetraNoodle Technologies, sees a rapidly expanding AI market as a remedy for potential bias from DeepSeek or other LLMs.
“It’s very unlikely that one model will have a major influence on the world,” he says. “DeepSeek is just one of many models, and soon, we’ll see thousands from all corners of the world. No single AI can dictate narratives at scale when so many diverse systems are interacting.”
Since the release of DeepSeek, Mistral AI has moved its AI model to an open-source license, and models such as Meta’s Llama and xAI’s Grok were already available as open-source software, Aggarwal and other AI experts note.
Still, Aggarwal recommends that CIOs using LLMs to train their own homemade AI models stick with brands they trust.
“Since [Barack] Obama’s first election, campaigns have relied on AI-driven analytics to target voters with precision,” he says. “Now, with models like DeepSeek, the stakes are even higher. The question isn’t if AI will be used for propaganda; it’s how much control different entities will have.”