AI & Law Stuff
#18 AI Note-takers, AI insurance, and the role of billable hours
Too many notes
Fundamentally, a meeting between individuals is a device to convey information. You encounter someone, on- or offline, and both you and the other person(s) try to learn and teach each other things, implicitly or explicitly.
There are other ways to convey information! Writing is a popular one, one that trades the benefit of asynchronicity for the drawback of unilateralism - that is, until you respond and engage in a conversation. Video, podcasts, and the occasional interpretive dance also work. And one assumes we will one day all be tethered to a brain-computer interface or something. But meetings are popular because, on top of other benefits, a conversation - a back-and-forth optimal for conveying information - is generally inherent to them.
On the other hand, meetings have the drawback that they can be taxing on your patience and energy. Professional meetings, in particular, have a reputation, not unjustly earned, for being long, boring, and frequently useless. Still, they serve their role of conveying information, even if that information is that your boss just likes chairing a meeting and having a captive audience all to himself.
But meetings are long, I was saying, and one often wants to capture the information somehow, for instance by taking notes - another potentially boring task, but one that can (by contrast with your participation in the meeting, for now) be delegated to AI.
Some people, however, see an issue here. The New York Times reported last week that:
[…] to lawyers like Gifford, inviting an A.I. bot to meetings introduces a ticking time bomb of legal risk.
A.I.-generated transcripts, which some video call apps allow users to turn on by default, preserve all sorts of things — offhand comments, quickly corrected statements, jokes — that humans would rarely write in the meeting minutes. And they show up in meetings that would otherwise not be recorded.
In a lawsuit or an investigation, that can make every word uttered discoverable.
Even worse, say corporate lawyers: Sharing the meeting with an A.I. bot may void attorney-client privilege, making conversations that would not otherwise be subject to discovery fair game in a lawsuit.
There is a lot to unpack here.
Let’s not waste time on the privilege thing, which is downstream from the bizarre and technically illiterate focus on LLMs as a specific kind of privacy risk we discussed recently. Simply consider this: the data captured by AI note-takers is the same data as, or poorer-quality data than, the video or audio stream processed by your online video platform. Sure, one difference is that the notes you are taking are meant to be stored, but people have been recording online meetings for ages without lawyers freaking out.
The accuracy issue has better grounds: AI note-takers, like most LLM-powered apps, suffer from basic LLM limitations, such as hallucinations (just ask the doctors how it’s going). But as usual when discussing AI, the right question is “compared to what”. A human note-taker? If so, I have bad news for you.
And I am sympathetic to the concerns that you don’t necessarily want everything talked about in a meeting on paper, especially if oral elements (such as emphasis, tone, etc.) are missing. But here your concern is not with AI-taken notes per se - it’s with the idea of having a transcript or minutes to begin with, the boring fact that scripta manent. Famously, one should not take notes on a criminal conspiracy, but a range of licit activities are also often better left unrecorded.
But more fundamentally, I think what’s lost in this discussion is the underlying purpose of the notes - which is, again, to capture information. And that comes down to the notion of who writes for whom.
On the one hand, if the notes enter the category of texts that will never be read by anyone, then sure, use AI, or maybe consider not adding to the mountains of text no one will ever read.
But if the notes are genuinely for a human reader, then it’s worth pointing out that, often, the “taking” is as important as, if not more important than, the final notes themselves. Taking notes is an excellent way to ingest that information by focusing on what matters. Good note-takers exercise judgment to capture, in real time, the relevant details of an account or factual recounting, all the better to remember them later on. A judgment that LLMs frequently lack, being better at padding than at pruning.1
Which means that the more important issue in delegating note-taking is that, in most cases, you are in fact not saving time: you are adding a layer of abstraction between you and getting the information. So indeed, maybe turn off the automated note-taker.
Flood insurance
Not all lawyers are stellar. Some are even bad. But this is the kind of profession where everyone believes they are above average, and it’s hard to make sure this belief is backed by sufficient proficiency in law, business, or simply common sense. Certainly, this is a lot of what bar membership now entails: continuing legal education, professional duties of competence, etc.
One thing these duties are meant to do is to help lawyers defend against complaints by their clients, complaints that arise naturally from the job: many legal operations are risky, in that one might regret engaging with the law. Litigation, certainly, typically ends with at least one disappointed party (if not two), but even some transactional legal advice can lead to regrets on the client’s part. And it serves as a good defence to say: “look, I worked in line with existing obligations and principles”.
All this clashes with AI in various ways. But one particular impact of generative AI we have catalogued is that it lowers the bar to complain and assert one’s rights. Including, as it happens, bar complaints. In a recent paper, Ashley Krenelka Chase coins the term “synthetic grievances” to describe such complaints, and draws out the consequences:
The cost asymmetry is stark. Generating the complaint takes minutes and costs nothing. Investigating it—verifying citations, interviewing witnesses, reviewing case files—takes hours or days and consumes scarce agency resources. Even if the complaint is ultimately dismissed as meritless, the accused attorney has been subjected to the stress and reputational harm of being under investigation.
A further consequence, Chase predicts, will be in terms of advocacy: complaint-averse lawyers may prefer more conservative strategies and legal approaches, and seek to shield their practice from criticism at the expense of focusing on their clients’ needs.
But there is another way: traditionally, when a risk increases and you don’t want to change your behaviour, you start insuring yourself. Back in January, we predicted that malpractice insurance would adapt to a world with AI.
And sure enough, since then, the market provided. Glitchwire reported:
Corgi, the Y Combinator-backed insurance carrier, has launched AI Insurance Coverage, a purpose-built product designed to protect businesses when their AI systems malfunction, hallucinate, or make decisions that cause financial harm. The coverage addresses the full spectrum of AI failures: biased algorithms, inaccurate generated content, training data disputes, adversarial attacks, and autonomous system breakdowns.
The timing is pointed. Traditional insurers have started excluding AI-related risks from their policies altogether, leaving companies exposed precisely when they need protection most.
So the cycle completes itself: AI creates not only a new way for lawyers to fail their clients, but also a new way for clients to complain about the lawyers - including unfairly. And insurance is what appears when the cost of distinguishing real from bogus complaints does not scale.
OpEx, CapEx, Law Firms
Many discussions about AI and legal work revolve around efficiency, and a common retort in this context is that, you know, law firms are not geared towards efficiency. This comes down to several reasons, one of which is the important role played, in most practices, by the billable hour: why would you tool up to become more efficient if this means less money for you?2 Hence, the argument goes, law firms are not incentivised to adopt AI to become more efficient.3
This means that billable hours will have to go, AI optimists reply, to which the easy retort is that the billable hour’s demise has been predicted many times, only to entrench itself further.
So far, so classic. But in a recent piece on LegalTech Hub, Nicola Shaver gave this a twist I had never thought of before:
Most conversations around this are incomplete. The deeper issue is not simply that AI reduces time; it’s that the law firm economic model is built on time both as a proxy for value and the mechanism through which value is distributed across the organization. Once that foundation begins to erode, the consequences cascade.
[…]
Associate performance, progression, and compensation are still overwhelmingly tied to billable hours. The system assumes that time spent is the clearest and fairest way to measure contribution - rather than, say, merit, or value of work undertaken. As long as that remains true, any shift away from time-based work will create misalignment. Efficiency in this scenario is no longer neutral; it becomes economically and professionally ambiguous.
The billable hour, and the broader apparatus of quantifying lawyer output, exists in large part because big firms need legible metrics to manage hundreds of partners and associates. Not entirely - discretionary bonuses and partnership politics have always done some of the work - but mostly. And, in this respect, if AI lowers the transaction costs of running a legal practice, smaller firms become more viable, and smaller firms don’t need that apparatus to the same degree, which is one way the billable hour might disappear - but that’s a lot of “ifs”.
Likewise, Shaver points to the importance of the “matter” as a way to bring legibility to the revenue-generation pipeline: you know who is in charge of it, and who is working on it, and how much they made. By contrast, it’s hard to assign value to the development of AI tools and workflows, even though that could assist across the board.
Or in other words (mine, here), law firms are light-capital structures centred around OpEx, and they don’t know how to deal with CapEx and to redistribute the value generated by the latter.
Shaver goes on to point out that this is part of the structural reasons that put brakes on AI adoption in law firms, and predicts that the latter will eventually have to bet on specialisation: “Firms need to identify, now, where their future differentiation will come from. Not in abstract terms, but in prioritizing the right practices and clients, and in focusing on concrete workflows, capabilities, and client-facing offerings.”4
But from my perspective, all this also puts a new twist on the notion of an AI-first law firm, which we recently covered. If it’s a typical law firm that just professes to use AI better than its peers, then I have my doubts that the notion is salvageable. But if instead it’s something that reinvents what a law firm stands for, how it works, and how it allocates value between its members, then this thing might have legs.
Even further: why make juniors more efficient, when their time value is mostly captured by you, the partner deciding on tooling them up?
Though they can be incentivised to pretend they do, of course.
Interestingly, this echoes this interview of Holly Robbins in the New Yorker: “Why the Future of College Could Look Like OnlyFans”.

