3 Comments
Chad Ratashak:

I think the problem with a lot of advice out there for attorneys (and everyone else) is that it fundamentally skips over things like hallucinations, prompt injection, and sycophancy. The focus is all on what AI can do when it behaves well, with no infosec or end-user education. Makes a real mess of things. One of my CLEs has an exercise where we look at prompts from a specific case involving hallucinated citations (the prompts presupposed that cases supported a certain argument) and imagine how you might rewrite them to be less likely to get a sycophantic answer.

DamienCh:

Fully agreed that this kind of exercise makes no sense if the background of the issues isn't covered or mastered first.

Chad Ratashak:

I think that's one of the reasons why your database has been so helpful. For those who are willing to read the details, it's a really valuable resource for seeing the various ways people have fallen into these traps. People tend to think, "oh, that wouldn't happen to me because I don't use AI for research," or whatever the explanation may be, which is why I made that writing game, for instance.