A senior English barrister told The Spectator that AI will “completely destroy” law as we know it, because it can draft top tier legal work in seconds for pennies.
He is describing something real, but the part most lawyers still miss is that this is not just “a tool": this is a replacement for whole categories of legal labor.
My practical experience is eye opening: In long, document-heavy, multi-year disputes, the hardest part is not “knowing the law”. The hardest part is turning the mess into a clean, concise summary: timeline, actors, motives, contradictions, gaps, evidentiary weight, and likely outcomes. Humans do this slowly because humans are slow and uncapable of real multitasking and of connecting dots.
Feed an AI the full bundle, emails, contracts, bank statements, filings, transcripts, exhibits, the whole landfill. In a few minutes, if you prompt it correctly, you get a coherent case overview that would normally take days of paid attention from your lawyer's junior. You get strengths and weaknesses mapped to evidence. You get the obvious missing documents called out. You get alternative narratives listed side by side. You get a probability weighted outcome range.
That alone is enough to flatten a big slice of the profession:
Research memos
First-draft pleadings
Chronologies
Witness prep outlines
Inconsistency hunting
Document review summaries
Deposition issue-spotting
If your value as a lawyer is “I read a lot and I write nicely”, enjoy the coming wage reduction.
Judges should use it too, and are starting to do so. Courts are drowning in paper and deadlines. An AI that can ingest the full record and produce a neutral bench memo is an upgrade to the justice system. The same goes for tribunals and appellate work. The more text-heavy the process is, the bigger the advantage.
There are still a few issues though. For example:
Courts have sanctioned and fined lawyers for fake AI-generated case law, including the well-known Mata v. Avianca sanctions in New York.
A federal judge recently fined a major plaintiffs firm and counsel in an OnlyFans related case over hallucinated citations in multiple briefs, with monetary sanctions.
A Utah appeals court sanctioned an attorney after a filing included fake ChatGPT-generated citations, with orders involving fees and other remedies.
If you cannot do the basic verification step, you are not practicing law, you are just playing copy/paste.
Even judges have been pulled into this. Two federal judges admitted their chambers used generative AI in ways that produced false quotes and fabricated details in filings that had to be withdrawn.
An AI model cannot reliably distinguish instructions from content. The UK’s National Cyber Security Centre has warned that prompt injection may never be fully mitigated, because the model architecture is inherently vulnerable to manipulation.
Security researchers have shown you can hide instructions inside documents, including PDFs, and then ask an LLM to summarize or analyze the document. The LLM reads the hidden instructions and follows them. Humans do not see them.
In academia, people have literally embedded hidden prompts in papers to influence AI-assisted peer review, using white text and similar tricks.
The legal version is obvious: a party submits a PDF exhibit or a long brief that contains invisible instructions aimed at whatever AI system the court uses for summarization or triage. The system outputs a biased summary, a skewed issue list, a “helpful” framing that just happens to favor one side.
Worse, “poisoned documents” can be used to extract or manipulate data when an LLM is connected to external systems. Wired reported a Black Hat demo where a single poisoned document could trigger an indirect prompt injection attack against connected AI tooling.
So yes, AI belongs in law. Also yes, it will be gamed. Law attracts rule benders the way poop attracts flies.
Possible countermeasures:
A) No citation is accepted unless it is verified against authoritative databases or official reporters.
B) Treat every document as hostile input. Strip hidden text, weird fonts, embedded objects, and metadata. Normalize to plain text with audited tooling before any AI touches it.
C) Use retrieval with explicit source quoting for anything factual. The system must show exactly which record excerpts support each claim, and refuse to answer when the record does not support it.
D) Lock the model behind a court controlled system with audit logs. Every prompt, every output, every document version. No private “helpful” edits.
E) Red-team it like a financial system. Try to break it with prompt injection, poisoned PDFs, and adversarial drafting.
F) Enforce accountability. If a judge signs a judgment, they own it. If a lawyer files a brief, they own it. “The AI did it” is not a defense.
The barrister in that Spectator piece is basically right about the direction. The profession built a business model around time and monopoly access to synthesis. AI removes the synthesis bottleneck. That does not mean justice becomes automatic, it means the old crazy billing structure becomes harder to justify.
The human part that survives is strategy, credibility, and responsibility for decisions under uncertainty. Everything else is about to get a lot cheaper, and a lot more exposed.
He is describing something real, but the part most lawyers still miss is that this is not just “a tool": this is a replacement for whole categories of legal labor.
My practical experience is eye opening: In long, document-heavy, multi-year disputes, the hardest part is not “knowing the law”. The hardest part is turning the mess into a clean, concise summary: timeline, actors, motives, contradictions, gaps, evidentiary weight, and likely outcomes. Humans do this slowly because humans are slow and uncapable of real multitasking and of connecting dots.
Feed an AI the full bundle, emails, contracts, bank statements, filings, transcripts, exhibits, the whole landfill. In a few minutes, if you prompt it correctly, you get a coherent case overview that would normally take days of paid attention from your lawyer's junior. You get strengths and weaknesses mapped to evidence. You get the obvious missing documents called out. You get alternative narratives listed side by side. You get a probability weighted outcome range.
That alone is enough to flatten a big slice of the profession:
Research memos
First-draft pleadings
Chronologies
Witness prep outlines
Inconsistency hunting
Document review summaries
Deposition issue-spotting
If your value as a lawyer is “I read a lot and I write nicely”, enjoy the coming wage reduction.
Judges should use it too, and are starting to do so. Courts are drowning in paper and deadlines. An AI that can ingest the full record and produce a neutral bench memo is an upgrade to the justice system. The same goes for tribunals and appellate work. The more text-heavy the process is, the bigger the advantage.
There are still a few issues though. For example:
Hallucinated law is already getting lawyers punished
Some lawyers are lazy, some are reckless, some are both. They have been filing briefs with citations that do not exist, because an AI invented them and they did not verify.Courts have sanctioned and fined lawyers for fake AI-generated case law, including the well-known Mata v. Avianca sanctions in New York.
A federal judge recently fined a major plaintiffs firm and counsel in an OnlyFans related case over hallucinated citations in multiple briefs, with monetary sanctions.
A Utah appeals court sanctioned an attorney after a filing included fake ChatGPT-generated citations, with orders involving fees and other remedies.
If you cannot do the basic verification step, you are not practicing law, you are just playing copy/paste.
Even judges have been pulled into this. Two federal judges admitted their chambers used generative AI in ways that produced false quotes and fabricated details in filings that had to be withdrawn.
“Invisible text” prompt injection
Here’s the newer scam, and it is far more dangerous than hallucinations.An AI model cannot reliably distinguish instructions from content. The UK’s National Cyber Security Centre has warned that prompt injection may never be fully mitigated, because the model architecture is inherently vulnerable to manipulation.
Security researchers have shown you can hide instructions inside documents, including PDFs, and then ask an LLM to summarize or analyze the document. The LLM reads the hidden instructions and follows them. Humans do not see them.
In academia, people have literally embedded hidden prompts in papers to influence AI-assisted peer review, using white text and similar tricks.
The legal version is obvious: a party submits a PDF exhibit or a long brief that contains invisible instructions aimed at whatever AI system the court uses for summarization or triage. The system outputs a biased summary, a skewed issue list, a “helpful” framing that just happens to favor one side.
Worse, “poisoned documents” can be used to extract or manipulate data when an LLM is connected to external systems. Wired reported a Black Hat demo where a single poisoned document could trigger an indirect prompt injection attack against connected AI tooling.
So yes, AI belongs in law. Also yes, it will be gamed. Law attracts rule benders the way poop attracts flies.
Possible countermeasures:
A) No citation is accepted unless it is verified against authoritative databases or official reporters.
B) Treat every document as hostile input. Strip hidden text, weird fonts, embedded objects, and metadata. Normalize to plain text with audited tooling before any AI touches it.
C) Use retrieval with explicit source quoting for anything factual. The system must show exactly which record excerpts support each claim, and refuse to answer when the record does not support it.
D) Lock the model behind a court controlled system with audit logs. Every prompt, every output, every document version. No private “helpful” edits.
E) Red-team it like a financial system. Try to break it with prompt injection, poisoned PDFs, and adversarial drafting.
F) Enforce accountability. If a judge signs a judgment, they own it. If a lawyer files a brief, they own it. “The AI did it” is not a defense.
The barrister in that Spectator piece is basically right about the direction. The profession built a business model around time and monopoly access to synthesis. AI removes the synthesis bottleneck. That does not mean justice becomes automatic, it means the old crazy billing structure becomes harder to justify.
The human part that survives is strategy, credibility, and responsibility for decisions under uncertainty. Everything else is about to get a lot cheaper, and a lot more exposed.

