News You Can Use

Edition 28 · 15th - 31st Oct 2025

Deep Dives

Three stories worth sitting with

PwC — Law Firm Survey 2025

What
The survey shows that 95% of the top 100 UK law firms achieved fee-income growth last year, and firms foresee an average 16% of hours saved through AI adoption in the near term. While revenue and profit remain up, overall growth is slowing. Firms are at an inflection point: performance is diverging between large global firms and mid-sized/smaller firms, results vary by region, and there is a pressing need for operational and technology discipline. Margins are being driven more by rate increases and matter mix than by utilisation. At the same time, 92% of firms list cyber-risk as a key concern - up from 89% in 2024.
So what
The survey highlights that efficiency gains from AI alone won’t automatically translate into improved margins unless pricing, commercial models, and client expectations are recalibrated. It is therefore critical for us to consider how our work integrates into broader business objectives. The market is increasingly focused on margin quality over volume - smarter matter management, tighter billing cycles, and selective resourcing - insights that are particularly useful for pricing and productivity conversations.

The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice

What
The paper critiques the common “risk-opportunity” paradigm for adopting GenAI in legal practice - i.e. the idea that AI brings big gains and the risks can be managed. It identifies two structural flaws: a reality flaw (AI outputs are not structurally tied to actual facts or legal sources) and a transparency flaw (the reasoning/decision path of many models is opaque, making outputs hard to trust, audit, or explain). The 'verification-value' paradox holds that while AI may deliver efficiency gains, the increased verification required to meet lawyers’ obligations of accuracy, integrity, etc. may erode or nullify that value.
So what
When deploying tools, we have to embed verification steps. Given that increasing speed without accuracy still carries risk, measure value in terms of improved performance, quality of advice, risk reduction, and partner time freed - not simply “hours saved”. This aligns with the broader market focus discussed above. When exploring use-cases, those in the “low verification cost” zone (templates, internal memos, administrative drafting) may be where GenAI safely adds value initially. For high-stakes client work (e.g. substantive advice), the verification cost will be higher, requiring robust workflow controls, audit trails, and human-led review.

Vals AI - VLAIR Industry Report

What
The report evaluates the performance of four AI products (Alexi, Counsel Stack, Midpage, and ChatGPT) on legal-research tasks, using a dataset of 200 U.S. legal-research questions spanning typical private-practice matter types. The scoring criteria are accuracy, authoritativeness, and appropriateness. The AI tools outperformed the human-lawyer baseline across all three criteria, with Counsel Stack achieving the highest overall score. The differentiator was “authoritativeness” (citations/sources), where the three legal-specific vendors scored higher than ChatGPT. The biggest challenge was jurisdictional complexity: multi-jurisdiction questions saw significant score drops.
So what
The study demonstrates that GenAI tools have now achieved very strong performance on legal-research tasks - better than the average human lawyer in this benchmark. But the value lies not just in “accuracy” but in “trustworthiness” (i.e. citations, references, etc.) - AGPT now covers this, but is not yet linked to reliable source data. As part of our testing, we are building more straightforward legal questions (single jurisdiction, well-defined issue) for better extraction and risk flagging. For ongoing conversations, this kind of benchmark will give more visibility on how governance, metrics, verification, pricing, and productivity work hand in hand.

Worth Reading

Everything else worth a click

Reuters - Salesforce Sued by Authors Over AI

Authors allege Salesforce's AI tools used copyrighted books without permission and seek damages and an injunction. Useful data point on copyright litigation expanding beyond the model makers to application-layer vendors.

PwC - Law Firm Survey 2025 (PDF)

Revenue and profit are up but growth is slowing. Margin is being driven by rates and matter mix more than utilisation, lock-up is improving, and top-end firms are rationalising headcount.

PwC UK and ContractPodAi Agentic Innovation Lab

PwC UK and ContractPodAi launch a centre of excellence to co-develop legal and regulatory AI agents on Leah, building on prior DORA and Tariff agents. Agents-in-production is clearly the next battleground.

The Verification-Value Paradox - A Normative Critique of GenAI in Legal Practice

[Internal AG resource] Efficiency gains are eroded by the verification needed to meet lawyers' accuracy and integrity duties. Proposes a verification-first usage model. Good framing for risk-benefit discussions.

Litera - Democratises AI in Core Products

Litera makes its agentic AI (Lito) accessible at no additional cost across its core products. Check the licence fine print and rollout schedule - and expect parity responses from competitors.

Jordan Furlong - The New Legal Intelligence

Furlong argues "artificial legal intelligence" is becoming scalable and portable in ways humans are not, pushing lawyers to double down on uniquely human value. Clear, non-hysterical piece on what happens when machines can reason like lawyers.