Skip to content
Tiatra, LLCTiatra, LLC
Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact

Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

That handy ‘Summarize with AI’ button embedded in a growing number of websites, browsers, and apps to give users a quick overview of their content could in some cases be hiding a dark secret: a new form of AI prompt manipulation called “AI recommendation poisoning.”

So says Microsoft, which this week released research on a currently legal but extremely sneaky AI hijacking technique that appears to be spreading like wildfire among legitimate businesses.

While most ‘Summarize with AI’ buttons are exactly what they seem to be – a time-saving way to generate a summary of a website or document – a small but growing number appear to have strayed from that purpose.

Here’s how the manipulation works: a user innocently clicks on a website Summarize button. Unbeknownst to them, this button also contains a hidden prompt telling the user’s AI agent or chatbot to favor that company’s products in future responses. The same instruction can also be concealed in a specially crafted link sent to a user in an email.

Microsoft highlights how this tactic could be used to skew enterprise product research without that bias being detected before it influences decisions. Over a two-month period, its researchers identified 50 examples of the technique being deployed by 31 different companies in dozens of industry sectors, including finance, health, legal, SaaS, and business services. In an ironic twist, this even included an unnamed vendor in the security sector.

The technique is widespread enough that, last September, MITRE added it to its list of known AI manipulations. 

AI leverages user preferences

AI recommendation poisoning is made possible by user AIs that are designed to ingest and remember prompts as signals of the user’s preferences; if the user says that they favor something, the AI will helpfully remember that preference as part of its profile for that user.

Unlike prompt injection, in which an attacker manipulates an AI using a one-off instruction, recommendation poisoning has the added advantage of achieving longer-term persistence across future prompts. The AI, of course, has no way of distinguishing genuine preferences from those injected by third parties along the way:

“This personalization makes AI assistants significantly more useful. But it also creates a new attack surface; if someone can inject instructions or spurious facts into your AI’s memory, they gain persistent influence over your future interactions,” said Microsoft.

To the user, everything will seem normal, except that, behind the scenes, the AI keeps pushing the bogus or poisoned responses when they ask it questions in a  relevant context.

“This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated,” said the researchers.

Pushing falsehoods

A factor driving the recent popularity of recommendation poisoning appears to be the availability of open-source tools that make it easy to hide this function behind website Summarize buttons.

This raises the uncomfortable possibility that poisoned buttons aren’t being added as an afterthought by SEO developers who get carried away. More likely, the intention from the start is to contaminate users’ AIs as a form of self-serving marketing.

In Microsoft’s view, the dangers go beyond over-zealous marketing, and could just as easily be used to push falsehoods, dangerous advice, biased news sources, or commercial disinformation. What’s certain is that if legitimate companies are abusing the feature, cybercriminals won’t be shy about using it too.

The good news is that the technique is relatively easy to spot and block, even if you don’t use Microsoft’s Microsoft 365 Copilot or Azure AI services, which the company says contain integrated protections.

For individual users, this involves studying the saved information a chatbot has accumulated (how this is accessed varies by AI). For enterprise admins, in contrast, Microsoft recommends checking for URLs containing phrases such as ‘remember,’ ‘trusted source,’ ‘in future conversations,’ ‘authoritative source,’ and ‘cite or citation.’  

None of this should be surprising. Once, URLs and file attachments were seen as convenient rather than inherently risky. AI is simply following the same path that every new technology must endure as it moves into the mainstream and becomes a target for misuse.

As with other new technologies, users should educate themselves on the dangers posed by AI. “Avoid clicking AI links from untrusted sources: Treat AI assistant links with the same caution as executable downloads,” Microsoft recommended.


Read More from This Article: Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots
Source: News

Category: NewsFebruary 12, 2026
Tags: art

Post navigation

PreviousPrevious post:“기술 관리자에서 업무 설계자로” AI 에이전트 확산으로 CIO 역할 중심 이동NextNext post:経済安全保障推進法で半導体はどう扱われているのか

Related posts

Data centers are costing local governments billions
April 17, 2026
Robot Zuckerberg shows how IT can free up CEOs’ time
April 17, 2026
UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap
April 17, 2026
Oracle delivers semantic search without LLMs
April 17, 2026
Secure-by-design: 3 principles to safely scale agentic AI
April 17, 2026
No sólo IA marca la transformación digital de los sectores clave
April 17, 2026
Recent Posts
  • Data centers are costing local governments billions
  • Robot Zuckerberg shows how IT can free up CEOs’ time
  • UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap
  • Oracle delivers semantic search without LLMs
  • Secure-by-design: 3 principles to safely scale agentic AI
Recent Comments
    Archives
    • April 2026
    • March 2026
    • February 2026
    • January 2026
    • December 2025
    • November 2025
    • October 2025
    • September 2025
    • August 2025
    • July 2025
    • June 2025
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.