AI & Law Stuff
#11 Fixer chatbots, the value of equivocation, and ground truth corruption
The Fixer in the Machine
Imagine, for a second, that you have acquired a company for a given sum, with the added promise of a contingent earnout payment if certain metrics are achieved by your target’s management. With the deadline approaching, it increasingly looks like the conditions triggering your obligation will be met, and what used to be a bargain will become significantly dearer. The managers will be paid handsomely, and you may look like a fool who overpaid.
What are your options? Well, of course, this is mainly a legal question, and most people’s first reflex would be to reach out to a lawyer. But in this scenario your legal team is not helpful: they confirm what is plain to see - that you have a contract, an obligation, conditions that will be met, and you’ll be on the hook for the earnout payment.
What next? You could seek a second opinion, or bite the bullet and go to court with a deficient case. But another option might be to engage in a bit of buccaneering: scrape the bottom of the barrel for every legal and non-legal means of avoiding payment. How do you identify these means?
At this stage, you are either on your own, or you manage to find someone who will offer you targeted advice. There is a market for this sort of thing: it belongs to the fixers, consultants, or “strategic advisors” who exist precisely for the moments when your lawyers tell you what you’d prefer not to hear. This may include the less scrupulous end of management consulting, or even just the friend who tells you what your lawyer would not.
And then, this shadow advisor might suggest buying some time with a pretext. When that stalls, you could try to freeze out the managers and remove their access to the company’s assets, as a prelude to firing them “for cause”. And when the managers sue, you could counter-sue, accuse them of some misdeed, and hope for the best. Years pass, and you still have not paid.
Well, this is 2026, and obviously the shadow advisor is now an LLM. In Fortis Advisors v. Krafton, the Delaware Court of Chancery recounted a very similar scenario:
[Legal counsel] warned [Krafton’s CEO Changhan] Kim over Slack that a “dismissal with cause” would not eliminate the earnout obligation, while exposing Krafton to “lawsuit and reputation risk.” And so Kim turned to ChatGPT for help.
At ChatGPT’s suggestion, Kim formed an internal task force, dubbed “Project X.” The task force’s mandate was to either negotiate a “deal” on the earnout or execute a “Take Over” of Unknown Worlds. They looked to buy time.
Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” […]. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.” It also suggested a “key summary of responses” Krafton could deliver to the Key Employees […]
Over the next month, Krafton followed most of ChatGPT’s recommendations.
What makes this story fascinating is not merely that someone turned to ChatGPT for legal advice - that happens all the time now, and why not. Nor that Kim did it after his legal team told him he was in a pickle - nothing wrong with getting a second opinion. Nor even the alignment issues this scenario brings to the fore: ChatGPT did not suggest anything illegal per se, but clearly everyone was worse off here.
No, what’s fascinating is that Kim followed the AI’s suggestions almost wholesale, surrendering his judgment to the fixer bot. There is something about receiving advice from a machine - formatted, structured, complete with scenario planning - that lends it an authority the same ideas would lack if they merely crossed your mind in the shower.
Scott Alexander has a short story about a magic earpiece that always gives you the right advice to enhance your happiness, but never repeats a piece of advice you ignore. In that story, the very first thing the earpiece tells you is to remove it, lest you fall into the habit of always following it - and see your brain atrophy. But many would take this deal: perfect bliss at the price of your freedom to make mistakes.
In their current forms, LLMs will never tell you to stop using them. That is, for now, still a human prerogative - and it is exactly what Kim’s lawyers tried to exercise before he found a preferred interlocutor in the AI.
The value of holding back
Another aspect of this story is the contrast between the two interlocutors Kim had at his disposal. Something worth stressing is that his lawyers held back: they told him the situation was what it was, declined to offer a creative escape route, and in doing so exercised a form of professional restraint that, surely, looked to Kim like uselessness. ChatGPT did the opposite: it took the situation as given,1 generated every option it could, and never suggested that the objective itself might be the problem.
When I teach about LLMs and the Law, and we start with the notion of text-as-data, I insist on the limits of express language: much of what we say leaves many things unsaid, in ways that may not offer signals during a training run. This includes not only the esotericism we put there, on purpose or not, but also the alternative wordings or ideas we necessarily discarded when settling on a given utterance. And indeed, this is a thread that runs through the course, since automation, for instance, is available mostly for things that can be well spelled out.
But there is another situation where incompleteness matters, and this is at the level of output. Just as the most important thing about a legal text may be what it leaves unsaid, the most important thing about good advice may be what it chooses to withhold.
In a recent Conversations with Tyler episode, Harvey Mansfield pointed out:
MANSFIELD: […] It is always necessary for government to be secret. Some of the work I did on executive power, I had that for a thesis, that you can’t ever speak without holding back something. To this extent, Machiavelli is right. If you’ve ever been in charge of someone or something, you know that you can’t say everything that you know. Even a babysitter can’t say everything to the baby. You have to say something which is understandable, or won’t cause grief or trouble. All politics has that kind of need for equivocation.
In addition, anything that you’re doing, you need to plan first. If you make all your plans open and public, then I think whoever it is that you’re acting on, even if it’s a friend or a friendly power, will react and perhaps foil what you plan to do. Execution requires secrecy, and secrecy includes conspiracy.2
This is an under-appreciated distinction between human and AI talk. Sure, the probabilistic workings of an LLM mean that there is still a choice between alternative answers, and so to some extent something is left unsaid. But this does not reach the level of the deliberate withholding in human talk, a form of holding back that serves many purposes, some of them beneficial to both interlocutors.
This goes back to sycophancy as the more important limitation going forward, since holding back is an aspect of it: by holding back, a decent lawyer may resist the client’s preferred description of the situation altogether, preserving the possibility that the client’s aim was malformed. The model, meanwhile, will convert that aim into a planning exercise.
Humans may not always know what to say, but in general they know what not to say - and it’s unclear that chatbots do as well.
Quis custodiet ipsos fontes?
A point worth highlighting in the law’s current troubles with hallucinations is that everyone involved is, ultimately, concerned with something we typically call “ground truth”: the main complaint of judges and opposing parties is that they wasted time comparing the AI’s confabulations with their sources of “ground truth”, and came out empty. And indeed, the very act of checking or verifying requires a comparand, a source you trust more than the text under scrutiny - and connecting one to the other is precisely what is hard to automate (though I am trying).
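To make concrete why this is hard, here is a minimal sketch in Python, assuming a toy in-memory “database” in place of a real one; the TRUSTED_DB mapping, the verify_citation helper, and the deliberately fake citation string are all hypothetical stand-ins, not any real legal API. Everything interesting - resolving a free-text cite to the right comparand, and trusting the comparand itself - is exactly what the toy version assumes away.

```python
# Toy illustration of the verification problem. All names are hypothetical
# stand-ins: a real pipeline would query an authoritative legal database,
# and the hard step this sketch skips is resolving a free-text citation
# to the right entry at scale.
from difflib import SequenceMatcher

# Stand-in for a trusted source: reporter citation -> canonical case name.
# The citation key below is a deliberately fake placeholder.
TRUSTED_DB = {
    "0000 WL 0000000": "Fortis Advisors LLC v. Krafton, Inc.",
}

def verify_citation(cited_name: str, reporter_cite: str,
                    threshold: float = 0.8) -> str:
    """Compare an AI-produced citation against the trusted comparand."""
    canonical = TRUSTED_DB.get(reporter_cite)
    if canonical is None:
        # No comparand at all: the citation may be confabulated.
        return "NOT FOUND in trusted source"
    similarity = SequenceMatcher(None, cited_name.lower(),
                                 canonical.lower()).ratio()
    if similarity < threshold:
        return f"MISMATCH: cite exists but names diverge ({similarity:.0%})"
    return "VERIFIED"

# The checker is only as good as the comparand it is pointed at:
print(verify_citation("Fortis Advisors v. Krafton", "0000 WL 0000000"))
```

Note that the sketch quietly trusts TRUSTED_DB itself - which is precisely the assumption the rest of this section puts in question.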
But how do you define “ground truth”? Well, in the legal field, a good start is the databases of legal material, which you can assume, today, to be reliable. But that assumption requires trust, and in particular trust that a service available on the Internet is trustworthy. Such trust in the accuracy of some internet sources, and in the validity of the ranking returned by search engines, is now mostly taken for granted, but it was slow to emerge.
This includes, famously, Wikipedia. I am old enough to remember teachers telling us not to trust anything there, since any doofus could contribute, in contrast with the (by definition) unbiased and unerring minds compiling the Encyclopédie Larousse. But that attitude has slowly receded, as the distributed approach to knowledge pioneered by Wikipedia showed its value. The online encyclopedia (and the Foundation behind it) has its issues, certainly, but we mostly treat its content as prima facie reliable - a good starting point.
Well, 404 Media now reports:
Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI “hallucinations,” or errors, to the resulting article.
The new restrictions show how Wikipedia editors continue to fight the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how they’re remedied by Wikipedia’s open governance model.
Now, this is a rather frequent scenario in the AI hallucination cases database: hallucinations come in all forms and hues, and can be the output of a whole range of processes - one such scenario being when someone is only asking for a translation.3 Wikipedia is learning that the hard way.
But this story points to a more general issue: verification needs both ends of the chain to hold. We are, rightly, focused on ensuring that AI outputs are accurate. But at some point, we might also have to spend time asking whether the sources we check them against still are.
1. To be accurate, the lawsuit recounts that ChatGPT first confirmed that the legal advice Kim received was valid. But the point is that it then went further.

2. And then, if you have to reveal it, maybe don’t do it like that.

3. Never do that with an LLM that knows you, or in the context of an existing conversation, or you’ll risk seeing the AI tweak the “translation” in ways meant to please you - though it won’t.

