This has always been wrong, but it is now categorically wrong in the era of RL on verifiable domains: models are truth-seeking and even interact with a hard outside world via tool use.
David Deutsch · Aug 4 at 4:23 PM
LLMs are trained to imitate patterns of language, not to discover or verify truth. So, when asked to speak as an expert in an area where perceived experts have a widespread misconception, the LLM will parrot that misconception, adopting the register and vocabulary of experts.
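For concreteness, here is a minimal sketch of what "RL on verifiable domains" means mechanically: the training signal is a hard check against ground truth, not imitation of expert text. The function names (grade_answer, score_rollouts) and the exact-match check are hypothetical illustrations, not any specific lab's pipeline; real setups verify richer artifacts (unit tests, proof checkers, tool outputs).

```python
# Sketch of a verifiable-reward signal (RLVR-style), under the assumptions
# stated above. The reward comes from checking the model's final answer
# against a ground-truth target, so parroting a widespread misconception
# scores 0.0 regardless of how expert it sounds.

def grade_answer(model_answer: str, ground_truth: str) -> float:
    """Return 1.0 iff the model's final answer matches the verifiable target."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def score_rollouts(rollouts: list[tuple[str, str]]) -> list[float]:
    """Score (answer, truth) pairs; these rewards would drive the policy update."""
    return [grade_answer(answer, truth) for answer, truth in rollouts]

if __name__ == "__main__":
    rollouts = [("42", "42"), ("41", "42")]
    print(score_rollouts(rollouts))  # [1.0, 0.0]
```

The design point of the dispute: under pure imitation the loss rewards sounding like experts, while under a verifiable reward the only thing that pays is being correct by an external check.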