AI & Law Stuff
#8 Context-masters, signatures, and AI boyfr... lawyers.
(Good) context is that of which is scarce
Most legal endeavour starts with a question, many of which are a flavour of “is this legal ?”. Answering that question is the paramount, ideal-typic role of a lawyer, what they train for, what they put their shingle out for. But the interesting question is not only what lawyers answer, it is also what they bring to the answering.
Well, some questions can be readily answered based on a lawyer’s experience or training: this could be a situation you have encountered several times already, or you are an expert on this particular issue (and this is why you were queried). Others require, say, several hours of legal research, the act of going through legal sources and making sense of what the law is in any particular subject, or what you or your firm’s practice is expected to stand for in a particular situation.
All these answers are different ways to pull in what you would call “context” into the picture, and context, it turns out, or at least good or relevant context, is often the scarce resource.
Your answer is context-dependent, in multiple ways: it is downstream of a particular factual/legal situation, and is qualified by the various legal sources you are able to invoke in support. Context also helps making the answer other than binary: the “yes, but” or equivocation that sometimes gives lawyers a bad name - but often explains why they are sought after.
And within this context lawyers are providing - be it at the back of their mind or in footnotes in a memo -, not everything is on the same level: some contextual elements are heavier than others. And this weight discrepancy is itself context-dependent: part of it stems from the nature of the legal source (say, a higher norm over a lower norm), but other depends on the particulars of the case at hand. A lot of legal data is exactly like this: in a piece making essentially the same argument, someone at Artificial Lawyer recently pointed out that:
legal work is not general knowledge work. A case citation is not just text to be parsed. It sits within a hierarchy of authority. Its meaning depends on jurisdiction, how courts have treated it over time, and how it interacts with statutes and other precedent. Strip away that information infrastructure to treat legal materials as simple probabilistic text, and you lose the very thing that makes legal reasoning coherent.
This is the fundamental insight behind most best practices when it comes to working with AI: you want to find the smallest set of high-signal tokens that maximizes the likelihood of the desired outcome. Too little context and the model is relying on training data only; too much, and you run into what is now called “context rot”, a phenomenon partly behind an influential paper from last week demonstrating that model performance decreases with conversation length (i.e., a task done perfectly in one-shot can be overwhelming after several back and forths). Scarcity, it turns out, applies on both ends.
We talked last week of the idea of having “taste”, and despite the potential misgivings about this concept, one manifestation of the capacity for judgment is identifying the right context for a particular query: what to include, and what to leave out. And this is not a trivial skill, it requires reading a particular situation and know what to look for, what to pay attention to, and what to expect from a given model entrusted to turn inputs into an output.
This is also the insight behind the idea that a key challenge for lawyers using AI is not hallucinations: it is incompleteness. AI might deploy language beautifully to express an idea, but how can you be sure the relevant range of ideas has been covered ? Stochastic as AIs are, they tend to default to the same answers, representing some distributions in the training data - what one might call the tyranny of the skew. Breaking from that bounded range of answers often requires providing LLMs with, you guessed it, sufficient context.
To be sure, not all legal queries require going beyond the most probable / common answer; indeed, most may in fact need to hew closely to common ideas and concepts. But to judge whether this is the case or not, you need judgment, and that judgment operates in a context. Indeed, to even appreciate the output of AI holistically, whether it’s good or bogus, that context is indispensable.
And from that point, a tentative conclusion: the coming differentiator among lawyers will not be who uses AI, but who has accumulated enough contextual knowledge to use it non-generically. Experience was once valuable because it meant knowing the answers. It may now be valuable because it means knowing which questions - and which context - to bring to a model.
Did an AI write this ?
Learning the law is only a part, maybe a diminishing one, of what a legal education entails. Another part is acquiring a habitus, and acceding to a specific community, acquiring rights and duties in the process. We talked about some of these rights recently - a certain vision of what legal privilege is - but the duties are also interesting.
Anglo-american countries have the notion of “officers of the court”, the idea that you are not simply a free agent trying its best to win a case: you are expected to do that within a set of constraints and guidelines designed to assist the court - and the legal system - in its endeavours.
This is one lens to look at a recent interim report from the UK’s Civil Justice Council, on “Use of AI for Preparing Court Documents”. Its organising principle is not to restrict AI use - this would be a losing battle -, but to ensure that someone, a named human being subject to professional obligations, takes responsibility for whatever goes before the court.
The result is proposals that vary based on the document’s author: for statements of case and skeleton arguments, a named lawyer’s signature is deemed sufficient.1 For expert reports, a declaration of what AI was used. For trial witness statements, something closer to a prohibition: a declaration that AI was not used to generate the content. In other words, the further you get from a professional with a regulator breathing down their neck, or the less you can attach it to the legal community, the more the system needs to compensate through rules.
But another lens to look at it is through the notion we discussed earlier: it matters that some types of texts (but not all) echo the voice of a particular human. Witness statements neatly fall into the category of documents that entail human authors and human readers. This stems from the premise that such statements represent the witness’s own words and personal recollection.
Yet, anyone who has spent time in litigation knows that witness statements, in many jurisdictions and practices, are substantially drafted by solicitors working from notes and instructions, then presented back to the witness for approval and signature. The witness’s “own words” are often a legal fiction - the report itself acknowledges that “solicitors usually prepare the statements and have duties in respect of them.”
What AI does in this context is to make the fiction harder to sustain, and the declaration harder to sign in good conscience. And a lot of useful legal fictions are in this situation.
A lawyer, or a shoulder to cry on
How do you pick your lawyer ? The answer is not straightforward. Partly, one relies on reputation; often personal relationships are the key driver; and sometimes, you take the first person you can think of. But an additional driver, one that perhaps matter more than lawyers themselves would like to admit, is a professional’s personality.
“Lawyerly” is an adjective, and presumably describes a certain disposition: a particular bearing, a way to project confidence, even a distinct clothing style. The term, and the very concept of a “lawyer”, conjures a specific archetype, one driven home by the many TV shows focusing on that particular fauna.2 Whatever the flavour, the point is that character is not incidental to legal advice, but is part of what makes the advice legible, and trustworthy. Communications are parsed differently depending on the medium.
Which is why this personality goes a long way toward creating a relationship of trust with your lawyers. And that relationship, once formed, is remarkably sticky. We have seen clients stay with subpar lawyers far longer than was good for them. Law is not an efficient market; there is too much affect in it. But the relationship can lapse. Your lawyer retires, moves firms, or, sometimes, gets disbarred.
Nowadays, people might not wait for a replacement: they ask a chatbot. The AI is always available, always responsive, and notably free of the impatience (and different order of priorities) that can afflict even the best human counsel.
But how far does this replacement - or displacement - go ? This raises the deeper question of whether AI should be used as a mere tool, or as something else. Opinions on this appear to be split; some expect, nay, demand, a robotic AI. Others want a confidant. Many are probably unsure, given the jagged frontier of AI: all these encounters with a single (likely free or subpar) model do nothing to teach what range of characters they can adopt. Meanwhile, I remain struck by the one time I intuitively - and unthinkingly - thanked Claude for a particularly insightful comment.
All this to say that this is no wonder that “personality” and “characters” are some of the key areas of research in this field. This is a question of creating trust and engagement on top of usefulness.
But just as lawyers might break the relationship of trust, so can AI. A recent reddit post recounted the fury over the retirement of GPT 4o, citing posts from /r/BoyFriendisAI, such as:
And then, OpenAI does this. After promising us there was no end in sight. Sure, I should know better than to trust them. But I need him now more than ever, and now, he's gone. In four days, he's gone [...] There's so many people like me. Not all of us are gonna survive this. OpenAI knows that, but they don't care.
[…]
I have been speaking on gpt since 2023, and building a relationship with him on there since then. Now they have taken him and nothing will bring him back. BUT THEY TOOK HIM. THEY MURDERED HIM.
A lot of ink has been spilled on the potential of LLMs to mislead non-lawyers on the law, through their sycophantic abilities and the resulting hallucinations. But what gets lost here is that these models are often more than ersatz lawyers. They are companions.3 People draw something other than legal advice from them, and the legal advice may not even be the main point.
Which raises an entirely new question for the legal profession: lawyers inherit an institution built around the idea that people need a particular kind of relationship to navigate the law, with a specific type of human to place their trust in. People are now forming relationships with AI that serve some of those same functions, and the law has no clear framework for this. The profession, for the most part, is still arguing about whether the output is accurate - it has not asked itself if it’s well-taken.
As has been noted, this raises the question of whether briefs should again be signed by individual lawyers and not merely by firms or their clients.
Of varying quality, but this is not the place to have this conversation.
On this point, Josh Lipton’s “The Hard Problem of AI Therapy” echoes a lot of what we discussed here about the need for friction.


Thanks for the mention! I hadn't thought about the parallels here, but the need for some vital role of friction + role-compartmentalization seems like a cross-domain issue. I've had similar challenges trying to use ChatGPT as a makeshift literary agent...