Anthropic’s Claude AI gets a new constitution embedding safety and ethics

Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, and avoiding inappropriate, dangerous, or harmful actions), and “genuinely helpful” (benefiting its users), as well as “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic.

The constitution is intended to help Claude move from following a limited checklist of approved behaviors to reasoning from deeper principles. For example, instead of keeping data private simply because a rule says so, the constitution should help the model understand the ethical framework in which privacy matters.
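How a written constitution feeds into training is not spelled out in the announcement, but Anthropic’s earlier published “Constitutional AI” research used a critique-and-revise loop in which the model judges its own drafts against stated principles and the revised answers become training data. The sketch below illustrates that general pattern only; the principle texts, the generate() stub, and the loop itself are hypothetical stand-ins, not Anthropic’s actual pipeline.

```python
# Hypothetical sketch of a Constitutional-AI-style critique-and-revise loop.
# The principles, the generate() stub, and the pipeline are illustrative only,
# not Anthropic's actual training code.
from dataclasses import dataclass

@dataclass
class Principle:
    name: str
    text: str  # states why the value matters, not just a rule to follow

CONSTITUTION = [
    Principle("broadly safe", "Do not undermine human oversight of the system."),
    Principle("broadly ethical", "Be honest; avoid inappropriate, dangerous, or harmful actions."),
    Principle("genuinely helpful", "Act to the genuine benefit of the user."),
]

def generate(prompt: str) -> str:
    """Stub for a call to the underlying language model (hypothetical)."""
    raise NotImplementedError

def critique_and_revise(prompt: str) -> tuple[str, str]:
    """Draft a response, then critique and revise it against each principle.

    Returns the original draft and the revised response, which could serve
    as a training pair for later fine-tuning.
    """
    draft = generate(prompt)
    revised = draft
    for p in CONSTITUTION:
        # Ask the model to critique its own answer against one principle.
        critique = generate(
            f"Principle ({p.name}): {p.text}\n"
            f"Response: {revised}\n"
            "Point out any way the response conflicts with this principle."
        )
        # Ask it to rewrite the answer in light of that critique.
        revised = generate(
            "Rewrite the response to satisfy the principle, using this critique.\n"
            f"Critique: {critique}\nResponse: {revised}"
        )
    return draft, revised
```

In the published recipe, it is pairs like this draft and its revision (or preference comparisons derived from them) that steer the model toward the stated values during fine-tuning, rather than a hard-coded rule list applied at inference time.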

The effect of this added complexity is length, with the new version expanding dramatically to 84 pages and 23,000 words. If that sounds long-winded, the explanation is that the document is written primarily to be ingested by Claude itself. “It [the constitution] needs to work both as a statement of abstract ideals and a useful artifact for training,” the announcement said.

It also noted that the document is currently written for mainline, general-access Claude models, and that specialized models may not fully conform to it, though the company will “continue to evaluate” how to make them meet the constitution’s core objectives. In addition, it promised to be open about missteps “in which model behavior comes apart from our vision.”

Intriguingly, Anthropic has released Claude’s constitution under a Creative Commons CC0 1.0 Deed, which means it can be used freely by other developers in their models.

Don’t be evil

The context for the update is rising skepticism about the reliability, ethics, and safety of large proprietary LLMs. From the start, Anthropic, founded in 2021 by former OpenAI employees worried about that company’s direction, has sought to set itself apart by taking a different approach.

More contentious is the constitution’s oblique reference to the debate over AI consciousness. “Claude’s moral status is deeply uncertain. We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously,” it states on page 68.

In August, Anthropic introduced a feature in its most advanced Claude Opus 4 and 4.1 models that it said would end a conversation if a user repeatedly tried to push harmful or illegal content, as a form of self-protection. And in November, an Anthropic research paper suggested that the same Opus 4 and 4.1 models showed “some degree” of introspection, reasoning about past actions in an almost human-like way.

In fact, LLMs are statistical models, not conscious entities, countered Satyam Dhar, an AI engineer with technology startup Galileo.

“Framing them as moral actors risks distracting us from the real issue, which is human accountability. Ethics in AI should focus on who designs, deploys, validates, and relies on these systems,” he said.

 “An AI ‘constitution’ can be useful as a design constraint, but it doesn’t resolve the underlying ethical risk,” he added. “No philosophical framework embedded in a model can replace human judgment, governance, and oversight. Ethics emerge from how systems are used, not from abstract principles encoded in weights.”

