Discussion about this post


Dear Damien,

On May 1, 2026, at 3:18 PM, I sent you an email, which for obvious reasons I cannot post here! To date, I have not received your reply! I attribute this to the fact that the email is in your spam folder! I would appreciate your opinion on the matter. Sincerely,

Antonio Garcia - Brazil

P.S. My email is Gmail

The transcript follows:

Dear Dr. Damien,

I write to you because I have been working extensively on the problem of hallucinations in large language models, and I would like to share in detail the protocols and structures I have tried to develop. My goal is to give you a complete picture, so that you can analyze this reasoning and perhaps align it with your own research.

Step by step, here’s what I tried:

1. Initial doubts:

- Whether hallucinations can be eliminated completely or only reduced.

- If the repetition of protocols creates persistence between sessions.

- If an “auditor mode” provides real security or only methodological discipline.

2. Protocols created:

- **AuditResultClean**: designed to separate confirmed facts from inferences.

- **PROTOCOL_MASTER_AI_V2**: a master protocol with control domains, master rules, flags and confidence levels.

- Mandatory inclusion of category **UNKNOWN**, forcing the model to declare uncertainty.

- **Mandatory uncertainty mode**: requires the response “INSUFFICIENT DATA” when there is no factual basis.

Below is the JSON structure of PROTOCOL_MASTER_AI_V2 for your analysis:

{
  "PROTOCOL_MASTER_AI_V2": {
    "Control Domains": [
      "Confirmed Facts",
      "Inferences",
      "Unknown"
    ],
    "Master Rules": {
      "Mandatory Uncertainty": "Reply 'INSUFFICIENT DATA' when there is no factual basis.",
      "Triple Separation": "Every answer must be classified as Fact, Inference, or Unknown.",
      "Confidence Flags": [
        "High",
        "Medium",
        "Low"
      ]
    },
    "Confidence Levels": {
      "High": "Facts confirmed by an external source or scientific consensus",
      "Medium": "Plausible inferences, but no external source",
      "Low": "Replies without sufficient data or classified as Unknown"
    }
  }
}
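To make the protocol concrete, here is a minimal enforcement sketch in Python. The response field names (`domain`, `confidence`, `text`) and the `validate_response` helper are my own illustrative assumptions, not part of any real API; they simply mirror the triple separation, confidence flags, and mandatory uncertainty rule described above.

```python
# Hypothetical enforcement sketch: validate a model response against the
# protocol. Field names ("domain", "confidence", "text") are illustrative
# assumptions, not a real API.
CONFIDENCE_FLAGS = ("High", "Medium", "Low")
DOMAINS = ("Fact", "Inference", "Unknown")

def validate_response(response: dict) -> list:
    """Return the list of protocol violations found in a response dict."""
    errors = []
    if response.get("domain") not in DOMAINS:
        errors.append("Triple Separation violated: missing/invalid domain")
    if response.get("confidence") not in CONFIDENCE_FLAGS:
        errors.append("invalid confidence flag")
    # Mandatory uncertainty: an Unknown answer must say exactly this.
    if (response.get("domain") == "Unknown"
            and response.get("text") != "INSUFFICIENT DATA"):
        errors.append("Mandatory Uncertainty violated")
    return errors

ok = {"domain": "Unknown", "confidence": "Low", "text": "INSUFFICIENT DATA"}
bad = {"domain": "Unknown", "confidence": "Low", "text": "Probably 42."}
print(validate_response(ok))   # []
print(validate_response(bad))  # ['Mandatory Uncertainty violated']
```

Such a check can only audit the declared structure of an answer; it cannot tell whether the content itself is true, which is why external verification remains necessary.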

3. Limitations identified:

- Complete elimination of hallucinations is not possible.

- Protocols do not create persistence between sessions.

- The model generates plausibility rather than consulting reality.

- Strict audits improve discipline, but do not replace external verification.

4. Practical attempts:

- Repeating the anti-hallucination instructions at the beginning of each session.

- Application of the tripartite classification (Fact / Inference / Unknown).

- Discussion about RAG (retrieval-augmented generation) and external verifiers as the required architecture.

- Recognition of the limitations of “listening mode”: it does not block responses, does not create memories, and does not eliminate hallucinations.

5. Other concerns:

- How to prevent the model from inventing persistence.

- The difference between internal audit systems and external verification systems.

- The need for layered architectures (grounding, uncertainty marking, RAG, verification).

- The structural question: the model does not consult reality, it only generates plausible text.
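The layered architecture listed above (grounding, uncertainty marking, verification) could be wired together as a simple pipeline. Everything here is a hypothetical sketch under my own assumptions; the layer names and the `Answer` type are illustrative, not an existing system.

```python
from dataclasses import dataclass, field

# Hypothetical layered pipeline: each layer annotates or vetoes an answer.
@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)
    confidence: str = "Low"

def ground(ans: Answer, retrieved: list) -> Answer:
    """Layer 1 - grounding: attach whatever the retriever found."""
    ans.sources = retrieved
    return ans

def mark_uncertainty(ans: Answer) -> Answer:
    """Layer 2 - uncertainty marking: no sources means INSUFFICIENT DATA."""
    if not ans.sources:
        ans.text, ans.confidence = "INSUFFICIENT DATA", "Low"
    else:
        ans.confidence = "Medium"
    return ans

def verify(ans: Answer) -> Answer:
    """Layer 3 - external verification: only sourced answers reach High."""
    if ans.sources and all(ans.text in s or s in ans.text for s in ans.sources):
        ans.confidence = "High"
    return ans

def pipeline(text: str, retrieved: list) -> Answer:
    ans = Answer(text)
    ans = ground(ans, retrieved)
    ans = mark_uncertainty(ans)
    return verify(ans)

print(pipeline("Brasília is the capital of Brazil.",
               ["Brasília is the capital of Brazil."]).confidence)  # High
print(pipeline("Atlantis had a space program.", []).text)  # INSUFFICIENT DATA
```

The design choice is that confidence can only increase when an outer layer finds independent support, so an ungrounded answer can never present itself as a fact.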

In summary, I have tried to build a layered audit structure that strengthens transparency and the declaration of uncertainty, but I recognize its limitations. I believe your experience could help me understand how these efforts align with more robust approaches, and how they might contribute to the broader work you are developing with Artificial Authority and Pelaikan.

I would appreciate your analysis of these points and your perspective on how to proceed.

Sincerely,

Antonio Garcia - Brazil

P.S. https://www.damiencharlotin.com/hallucinations/
