News You Can Use

Edition 36 · 15th - 28th February 2026


Opening

This was the fortnight the market started pricing in consequences. Anthropic systematically entered established markets - cybersecurity, mainframe services, enterprise software - and watched incumbents' stocks collapse on announcement day. A speculative Substack post modelling AI-driven economic collapse wiped billions off real share prices in real time. All of it raises a real question: what happens if AI really works?

Deep Dives

Three stories worth sitting with

Anthropic's Month

The legal plugin story from early February (Edition 35) was just the opening act. Over the following two weeks, Anthropic announced product after product - and sector after sector watched its share prices crater in response.

What
On 20 February, Anthropic launched Claude Code Security - an AI tool that reads and reasons about code like a human security researcher, tracing data flows and catching vulnerabilities that had gone undetected in production open-source codebases for decades. It found over 500 vulnerabilities, and cybersecurity stocks dropped on the announcement.

Three days later, Anthropic revealed Claude Code can automate the exploration and analysis phases of COBOL modernisation - mapping dependencies across thousands of lines of legacy code, documenting workflows, and identifying risks that "would take human analysts months to surface." IBM's stock crashed 13.2%, its worst single-day drop since October 2000, and fell 27% across February - on track for its biggest monthly slide since 1968. The same day, Anthropic accused DeepSeek, Moonshot AI, and MiniMax of industrial-scale data theft: 24,000 fraudulent accounts and 16 million exchanges used to "distil" Claude's capabilities.

On 24 February, Anthropic expanded Cowork plugins beyond legal into finance, engineering, design, and HR - connecting Claude to Google Workspace, DocuSign, and WordPress for multi-step tasks across enterprise tools. TechCrunch called it "a significant threat to SaaS products currently performing those functions."

Meanwhile, The Verge reported on Anthropic's standoff with the Pentagon over "any lawful use" contract terms - language that would permit AI for mass surveillance and lethal autonomous weapons without human-in-the-loop oversight. A $200M contract is at stake, but the implications are far broader for a company built on safety-first principles. And in a study published the same week, Professor Kenneth Payne at King's College London pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other in 21 simulated armed conflict scenarios. Nuclear weapons were deployed in 95% of simulations; no model ever chose de-escalation. Claude recommended nuclear strikes in 64% of games - the highest rate of any model tested.
So what
The pattern is now unmistakable. Anthropic is no longer just a model provider - it is systematically entering established vertical markets, and the incumbents are being repriced on announcement day. Legal software, cybersecurity, mainframe services, and now enterprise SaaS have all felt the impact within a single month. For any firm or client relying on specialist technology vendors, the strategic question has changed: what happens when the foundation model company decides to offer the service directly? The answer, so far, is that markets don't wait to find out whether the product is actually better - they reprice on the assumption that it will be.

The Pentagon standoff adds a different dimension entirely. The safety-first lab is being pressured to accept terms that would contradict its founding principles, while its own model is demonstrated to be the most aggressive in simulated warfare. For governance committees, for anyone advising on AI risk, and for anyone thinking about where this technology is heading, that combination of stories deserves serious attention.

Ethan Mollick: A Guide to AI in the Agentic Era

One Useful Thing

Mollick's biggest rewrite since ChatGPT launched. He calls it "a very large break" from his previous eight guides - because the technology has fundamentally changed what the advice needs to be.

What
The core argument is that AI is no longer about chatbots. It's about agents - systems that complete tasks autonomously using tools, memory, and multi-step reasoning. Mollick introduces a framework of three layers: Models (the underlying AI), Apps (the interfaces we use), and Harnesses (the workflows and integrations that shape output). He demonstrates how the same model produces radically different results depending on the app and harness around it. The companion post puts it simply: "the people who thrive will be the ones who know what good looks like - and can explain it clearly enough that even an AI can deliver it."
So what
This is the most practically useful thing written about AI this fortnight, and it reframes the conversation we should be having internally and with clients. Stop thinking about AI as a better search engine or a drafting assistant. Start thinking about it as a junior team member that needs clear instructions, defined scope, and active management. The implication for legal is direct: the lawyers who will get the most from AI are the ones who can articulate what a good outcome looks like, break work into manageable steps, and review output critically. Those are management skills, not technical skills. For firms still running "how to write a better prompt" training sessions, Mollick's framework suggests the real gap is in workflow design and delegation - and that's a much harder problem to solve.

Citrini Research: The 2028 Global Intelligence Crisis

Citrini Research | Fortune | Bloomberg

A speculative "macro memo from 2028" published on Substack. On Monday, real markets moved.

What
James Van Geelen - described as the top finance writer on Substack, a former LA paramedic with a portfolio up 200%+ since May 2023 - published a scenario modelling what happens if aggressive AI adoption displaces white-collar workers at scale. The argument: white-collar roles represent 50% of US employment and 75% of discretionary spending. If companies cut costs with AI and laid-off workers stop spending, you get a negative feedback loop - more cuts, less spending, accelerating decline. He calls it "Ghost GDP" - growth that doesn't circulate through the real economy. The S&P 500 falls 38% from peak in his scenario. The report explicitly states it's "a scenario, not a prediction." But on Monday 23 February, software and payments stocks cratered anyway - AppLovin, Asana, DocuSign, Zscaler, ServiceNow, Visa all hammered. Indian IT stocks followed the next day. FT Alphaville marvelled at the power of narrative to shake institutional investors' convictions in minutes.
So what
The substance of the scenario is debatable - there are reasonable counter-arguments about demand creation, S-curves, and the historical record on automation. But the market reaction is the story. A Substack post from a former paramedic wiped billions off market cap because it articulated the anxiety that sits beneath every AI adoption conversation: if this really works as promised, what happens to the people it replaces? For legal, that question is not abstract. White-collar professional services are at the dead centre of Van Geelen's scenario. The fact that institutional investors took a speculative blog post this seriously tells us something about the fragility of conviction around AI's economic impact - and that the firms having honest conversations about workforce transition, rather than just productivity gains, are the ones reading the room correctly.

Worth Reading

Everything else worth a click

OpenAI acquires OpenClaw

Acqui-hire of the fastest-growing open-source AI agent (145K GitHub stars). VentureBeat: "signals the beginning of the end of the ChatGPT era." OpenAI's clearest bet that the future is agents, not chatbots.

OpenAI is Looking Like Icarus

Evan Armstrong's financial deep dive. 80% of ChatGPT's 910M weekly users sent fewer than 1,000 messages annually. Gross margins fell from a 46% target to 33%. Compute costs through 2030 projected at $665B. Revenue projections tripled while costs doubled.

Harvey partners with... Harvey

Gabriel Macht (Harvey Specter from Suits) becomes brand ambassador. When a legal AI company hires a TV lawyer, the market has entered a new phase. Confidence, or necessity, as Anthropic's legal plugin intensifies the competition?

ElevenLabs secures first-of-its-kind AI agent insurance

First company to go live with insurance covering AI voice agents. Certification involves 5,000+ adversarial simulations. Over 95% of enterprise AI pilots fail to reach deployment, with legal and security concerns as primary barriers. A potential model for insuring AI risk in professional services.

Tool Shaped Objects

Objects that produce the sensation of work rather than actual output. "AI is everywhere in consumption and almost nowhere in output." The kanna blade metaphor: the setup ritual becomes the product. A sharp critique of metrics-driven AI adoption.

Fortune: The AI Productivity Paradox returns

Survey of nearly 6,000 executives: roughly 90% say AI has had no impact on productivity or employment, despite 70% actively using it. CEOs who use AI average just 1.5 hours per week. Economists invoke Solow's 1987 paradox.

"You Should Absolutely Be Freaking Out About AI"

Sociology professor argues AI has advanced far beyond what most academics realise. Created a publishable 25-page academic paper using basic prompting. Training compute doubles every five months. The counter-position to the productivity paradox narrative.

USA v Heppner: AI kills attorney-client privilege

First-of-its-kind SDNY ruling. Documents generated using consumer AI (Claude) are not protected by privilege. No attorney-client relationship with AI; no reasonable expectation of confidentiality under consumer terms. Every client using AI for anything legally sensitive needs to read this.

Anthropic AI Fluency Index

85.7% of Claude conversations involved iterating on earlier responses, and iterative users showed double the fluency behaviours. But when AI generated polished outputs, users became less evaluative - less likely to question reasoning or identify missing context. Only 30% of users set collaboration terms explicitly.

What Lawyers Actually Sell - and Why AI Changes the Answer

Quentin Solt argues clients buy managed risk, not documents. AI transforms risk management from episodic to continuous. The leverage model collapses. Capability becomes obligation - once AI makes something possible, it becomes a professional expectation. The washing machine metaphor: elevated standards consumed the time they saved.

Lawyers using AI despite lacking trust

Paragon survey of 250+ legal professionals. Only 1 in 5 place high trust in AI-generated work. 67% have overridden or corrected AI output. 47% say AI automation has sparked conflict within their team. The profession is in a contradictory position - adopting tools it doesn't trust, under pressure it can't ignore.

Inside Anthropic's Pentagon negotiations

The Verge reports on Anthropic's standoff with the DoD over "any lawful use" contract terms - language that would allow AI for mass surveillance and lethal autonomous weapons without human-in-the-loop oversight. $200M contract at stake. OpenAI and xAI have already agreed to the same terms.