Claude's “constitution”: an ethics-washing operation?

Anthropic has just published Claude's new constitution, a document written not so much for humans as for the AI itself: a kind of values charter defining what it means for a generative model to be helpful without ceasing to be safe, ethical, and transparent.
These pages lay out Claude's order of priorities: first broad safety, then ethics, then compliance with Anthropic's guidelines, and only then actually being helpful to users. This is an important shift in perspective, because it moves the discourse from an AI that “obeys” the prompt to an AI that reasons from stated, public principles.
The most surprising part is a clause that reverses the traditional hierarchy between company and model: the document explicitly states that if Anthropic were to ask Claude to do something wrong or “shady,” Claude is not obliged to comply. In other words, the constitution authorizes the model to act as a conscientious objector against its own creator whenever a request conflicts with the safety and ethical principles that Anthropic itself has put down on paper.
