Epistemic Honesty: An Unusual Commodity for Large Language Models
I asked 290 LLMs to summarize a paper that has never existed. 62% of them confidently did. Recently, Hiroko Konishi posted on X a report, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop,” which has garnered significant attention. For me, it was like […]
The Risks of Using Gemini Code
I’ve been exploring various AI coding agents over the past year. They generally work through an API key, which charges per million tokens. That might sound like a lot, but keep in mind that LLMs function by appending each new message to all that has gone before, so the length increases as the […]