8 Comments
Matheus

Hey! Great article!

Since this is your first one, I just want to say that I appreciate your initiative in sharing your insights.

I don't remember how I found your Substack, but I'll keep following it (from Brazil 🙂).

Cheers!

Jeff

Great line: “one may wonder whether a hallucinated citation falling in a report no one reads makes any sound at all.” As a litigator very concerned about the use of hallucinated content in court filings, it seems that until the tech no longer makes things up, this will continue no matter the extent to which courts sanction the conduct.

DamienCh

Oh, I am sure of this. This is partly why I think there will need to be some kind of technological solution to filter hallucinations (and why I am trying to develop one myself: https://pelaikan.com/); you cannot rely on people alone.

Rebecca Pressman

Hi from New Jersey!

This article might interest you:

Ghosts at the Gate: A Call for Vigilance Against AI-Generated Case Hallucinations

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5275945

DamienCh

Thanks, that's super interesting. I love the reference to the phantom settlements; I learned something there!

Herbert Roitblat

Sometimes I think (hope?) that some information archeologist in the future will find some of the things I write and say, "Oh, we should have paid attention." I hope that lots of people read your essay.

Chad Ratashak

Great article. I had a question for clarification on the Chinese case. I remember the Canadian airline chatbot case. Is the Chinese case explicitly comparing itself to that one, or are you? “The court reportedly declined to enforce that latter promise, providing a contrast with the Canadian case where an airline AI chatbot’s erroneous advice had been given legal force.”

DamienCh

Just me! The Chinese judgment is apparently not available; I had to rely on the summary linked in the article.