Discussion about this post

Doug Ross

I believe trusting any model to audit itself (say, in evals) is misguided. The Space Shuttle flew five computers that "voted" on outputs. Why not have three or five adversarial models fact-check responses?
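A minimal sketch of the voting idea Doug describes, under stated assumptions: `FactChecker`, `majority_verdict`, and the stub checkers below are illustrative placeholders, not any real model API. Each model independently labels a claim, and a strict majority decides.

```python
from collections import Counter
from typing import Callable, List

# A fact-checker takes a claim and returns a verdict label such as
# "supported" or "contradicted". Real implementations would wrap calls
# to independent LLMs; the stubs below are placeholders.
FactChecker = Callable[[str], str]

def majority_verdict(claim: str, checkers: List[FactChecker]) -> str:
    """Collect a verdict from each model and return the label that a
    strict majority agrees on, or "no-consensus" otherwise."""
    verdicts = [check(claim) for check in checkers]
    label, count = Counter(verdicts).most_common(1)[0]
    if count > len(checkers) // 2:
        return label
    return "no-consensus"

# Stub checkers standing in for three adversarial models.
checkers = [
    lambda claim: "supported",     # stand-in for model A
    lambda claim: "supported",     # stand-in for model B
    lambda claim: "contradicted",  # stand-in for model C
]
print(majority_verdict("Example claim", checkers))  # -> "supported" (2 of 3 agree)
```

With an odd number of models, a strict majority always exists when all models answer, which is one reason redundant flight systems used odd-sized voting pools.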

Kurt

LLMs merely summarize the text they were trained on. Since liberal sources were heavily weighted in training, that bias persists. They never actually reason their way to a decision; confirmation bias is the name of the game.

7 more comments...
