8 Comments
Doc Herbst:

I do agree that we shouldn't put so much blame on AI-made summaries or paragraphs when they are published by authors. Alright.

But ... I've seen lots of summaries/syntheses that miss a crucial piece of information, or that rely on one opinion on a subject without taking the other into account. I always disclose such use whenever I send an AI-made summary that I didn't have time to check and edit myself.

DamienCh:

Transparency here is good practice indeed? And there is definitely a question of competence, of the AI, but also sometimes of the human (my favorite phrase in all these debates is always "compared to what?").

Doc Herbst:

You wrote: "Transparency here is good practice indeed?"

Is that a question or an affirmation?

DamienCh:

Oops, that question mark should not have been there.

Doc Herbst:

Compared to what here? Well, compared to my own skills. And I really don't practice summarising often.

What is true, on the other hand, is that I can produce a summary much faster by checking and editing the one made by the AI than by writing it all myself.

BTW, the AI here used the GPT-4o API, if I remember correctly.

Rebecca Pressman:

For another example of the US practice of creating patchwork legal regimes, see US privacy protection.

DamienCh:

Definitely! But the question then becomes whether the strictest regime sort of sets the bar extra-territorially (as is partly the case for privacy, if I'm not mistaken).

Chad Ratashak:

“And as in many respect, legislating away AI use (as this obligation will certainly do) will simply result in concealed/shadow use of AI instead, making things worse.” In contrast, you might like the Illinois Supreme Court’s policy: https://ilcourtsaudio.blob.core.windows.net/antilles-resources/resources/e43964ab-8874-4b7a-be4e-63af019cb6f7/Illinois%20Supreme%20Court%20AI%20Policy.pdf