5 Comments
Chad Ratashak:

I think this pollution of the epistemic commons gets worse before it gets better with widespread use of agentic research tools.

When AI naively cribs notes from other AIs' hallucinations as grounding, then even if a citation accurately summarizes its source (not always guaranteed), that source may itself be a prior hallucination.

DamienCh:

Cue all those "I asked ChatGPT if this fake case exists and it described it to me in detail" (sadly common in the database).

James Andrews:

Damien, in this case https://www.cbsnews.com/colorado/news/colorado-leticia-stauch-conviction-murder-stepson-overturned-juror-biased/ the system failed not because judgment was replaced, but because one of many discrete screening decisions was left entirely manual. Out of hundreds of decisions involved in jury selection, some can be automated to reduce cognitive load without removing human authority. "...we can build or adopt the conditions under which people know when not to use it." The prerequisite problem is decision-structure mapping (which machine learning is great at). Before you can offload anything, you have to answer: what are the discrete decisions embedded in this institutional process, which are deterministic, which are probabilistic, which are irreducibly human, and what is the authority hierarchy for each?
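One way to picture that mapping is as a simple taxonomy plus a routing rule: each screening step is labeled by decision kind, and only deterministic steps are fully automated, while authority always stays with a named human. This is a minimal sketch, not anything from the case; the step names, the `Kind` categories, and the routing strings are all invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DETERMINISTIC = "deterministic"   # rule-based; safe to automate
    PROBABILISTIC = "probabilistic"   # model-assisted; a human confirms
    HUMAN = "human"                   # irreducibly human judgment

@dataclass
class Decision:
    name: str        # what is being decided
    kind: Kind       # where it sits in the taxonomy
    authority: str   # who holds final authority (never delegated)

def route(decision: Decision) -> str:
    """Return who executes a screening step; authority stays human throughout."""
    if decision.kind is Kind.DETERMINISTIC:
        return "automate"
    if decision.kind is Kind.PROBABILISTIC:
        return "machine-flag, human-confirm"
    return "human-only"

# Hypothetical jury-selection screening steps, invented for illustration.
steps = [
    Decision("check juror name against case parties", Kind.DETERMINISTIC, "court clerk"),
    Decision("flag possible prior contact with defendant", Kind.PROBABILISTIC, "judge"),
    Decision("assess credibility of a juror's answers", Kind.HUMAN, "judge"),
]

for step in steps:
    print(f"{step.name}: {route(step)} (authority: {step.authority})")
```

The point of the sketch is the separation: automation applies to execution of individual steps, while the `authority` field records that a human remains accountable for every one, including the automated ones.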

DamienCh:

Fascinating example! But I'm not sure we can ever identify all those hundreds of decisions and subdecisions; managing complexity is, well, complex.

James Andrews:

Certainly we cannot with the technology we have. But if we fail to try, chatbots will swamp our frail little institutional-knowledge canoe. AI won't respect our boundaries.