The legal plugin story from early February (Edition 35) was just the opening act. Over the following two weeks, Anthropic announced product after product - and sector after sector watched share prices crater in response.
What
On 20 February, Anthropic launched Claude Code Security - an AI tool that reads and reasons about code like a human security researcher, tracing data flows and catching vulnerabilities that had gone undetected in production open-source codebases for decades. Over 500 vulnerabilities found. Cybersecurity stocks dropped on the announcement.

Three days later, Anthropic revealed that Claude Code can automate the exploration and analysis phases of COBOL modernisation - mapping dependencies across thousands of lines of legacy code, documenting workflows, and identifying risks that "would take human analysts months to surface." IBM's stock crashed 13.2%, its worst single-day drop since October 2000, and fell 27% across February - on track for its biggest monthly slide since 1968.

The same day, Anthropic accused DeepSeek, Moonshot AI, and MiniMax of industrial-scale data theft - 24,000 fraudulent accounts and 16 million exchanges used to "distil" Claude's capabilities.

On 24 February, Anthropic expanded Cowork plugins beyond legal into finance, engineering, design, and HR - connecting Claude to Google Workspace, DocuSign, and WordPress for multi-step tasks across enterprise tools. TechCrunch called it "a significant threat to SaaS products currently performing those functions."

Meanwhile, The Verge reported on Anthropic's standoff with the Pentagon over "any lawful use" contract terms - language that would permit AI for mass surveillance and lethal autonomous weapons without a human in the loop. A $200M contract, but far broader implications for a company built on safety-first principles.

And in a study published the same week, Professor Kenneth Payne at King's College London pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other in 21 simulated armed conflict scenarios. Nuclear weapons were deployed in 95% of simulations. No model ever chose de-escalation. Claude recommended nuclear strikes in 64% of games - the highest rate of any model tested.
So what
The pattern is now unmistakable. Anthropic is no longer just a model provider - it is systematically entering established vertical markets, and the incumbents are being repriced on announcement day. Legal software, cybersecurity, mainframe services, and now enterprise SaaS have all felt the impact within a single month.

For any firm or client relying on specialist technology vendors, the strategic question has changed: what happens when the foundation model company decides to offer the service directly? The answer, so far, is that markets don't wait to find out whether the product is actually better - they reprice on the assumption that it will be.

The Pentagon standoff adds a different dimension entirely. The safety-first lab is being pressured to accept terms that would contradict its founding principles, while its own model has been shown to be the most aggressive in simulated warfare. For governance committees, for anyone advising on AI risk, and for anyone thinking about where this technology is heading, that combination of stories deserves serious attention.