

So now we can finally go back to good old code optimization, right? Right? (Padme.jpg)
Lol. Be my guest and knock yourself out, dreaming that you know things.
I’ll bite. Let’s think:
- there are three humans who are 98% right in what they say, and wherever they know they might be wrong, they indicate it
- now there is an llm (fuck capitalization, that’s how much I hate the way they’re shoved everywhere) trained on their output
- now the llm is asked about the topic and computes an answer string
- by construction, that answer string can contain any of the probably-wrong claims without the proper hedges (“might”, “under such-and-such circumstances”, etc.)
If you want to claim that a 40%-wrong llm implies 40%-wrong sources, prove me wrong (toy sketch of the hedging-loss effect below).
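A minimal sketch of the point, assuming a made-up corpus where the humans hedge exactly the claims they’re unsure of, and a “model” that reproduces the same mix of right and wrong content but hedges at random. None of this models any real LLM; it just isolates what happens when hedging stops tracking correctness:

```python
import random

random.seed(0)
N = 100_000

# Toy human corpus: each claim is right with p = 0.98,
# and humans hedge exactly the claims that are wrong.
human = [{"correct": random.random() < 0.98} for _ in range(N)]
for c in human:
    c["hedged"] = not c["correct"]

# Toy "llm" output: identical right/wrong content, but hedging is
# decoupled from correctness -- it hedges a random 2% of claims.
llm = [{"correct": c["correct"], "hedged": random.random() < 0.02}
       for c in human]

def confident_error_rate(claims):
    # error rate among the claims presented without a hedge
    confident = [c for c in claims if not c["hedged"]]
    return sum(not c["correct"] for c in confident) / len(confident)

print(f"humans: {confident_error_rate(human):.4f}")  # 0.0000
print(f"llm:    {confident_error_rate(llm):.4f}")    # ~0.02
```

Same overall content error rate in both, but once the hedges are decoupled from correctness, every wrong claim reads as a confident one.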
Damn. I hate how much it hurts to know that’s what will happen.