Anthropic’s latest move has put the company at the centre of a fresh debate over artificial intelligence, security, and publicity. This week, the AI firm said its newest model was too powerful to release to the public, citing cybersecurity concerns and what it described as a strong sense of responsibility.
The model, called Mythos, quickly became a topic of discussion well beyond the AI industry. In the US, treasury secretary Scott Bessent summoned the heads of major banks for a conversation about the system. In the UK, Reform MP Danny Kruger sent a letter to the government urging officials to “engage with AI firm Anthropic whose new frontier model Claude Mythos could present catastrophic cybersecurity risks to the UK”.
On X, the reaction was immediate and intense. The announcement spread rapidly as users debated whether Anthropic was making a serious warning about a dangerous new model or simply generating headlines around its latest product.
A claim of caution, and a cloud of scepticism
Anthropic’s explanation was straightforward: the company said Mythos was too powerful to be made publicly available because of the risks it could pose, particularly around cybersecurity. But that message has drawn criticism. Sceptics have suggested the company’s framing may be as much about marketing as moderation, arguing that the dramatic decision to withhold the model could also help attract attention and investment.
The tension between safety messaging and publicity is not new in the AI sector, where companies often face pressure to show technical leadership while also reassuring regulators, customers, and the public that they are acting responsibly. Anthropic’s handling of Mythos appears to have intensified that tension, placing the company in a familiar but uncomfortable position: promoting its frontier capabilities while insisting those same capabilities are too risky to share widely.
For observers, the question is not only whether Mythos is genuinely dangerous, but also how much of the surrounding drama was shaped by Anthropic itself. The company’s decision to withhold the model has created an aura of seriousness around the announcement, but it has also invited scrutiny from those who see the episode as part of a wider competition for influence in the AI market.
Security concerns meet public messaging
The public response to Anthropic’s announcement highlights how quickly AI safety claims can move from technical circles into political and financial ones. A single model disclosure was enough to trigger discussion in government, banking, and media, showing the extent to which AI companies now operate in an environment where product announcements can become policy events almost overnight.
It also shows how powerful the language of risk has become. By describing Mythos as potentially dangerous enough to keep from the public, Anthropic placed cybersecurity at the heart of the story. Critics, however, are asking whether the company has also used that same language to elevate its profile. In a crowded AI field, restraint can be a form of branding too.
What is clear is that Anthropic has succeeded in making Mythos a major talking point. Whether the episode will ultimately be remembered as a genuine act of caution or a well-timed piece of strategic positioning is still an open question. For now, the company has managed to capture attention from policymakers, bankers, and social media alike, all while keeping the model itself out of public hands.
