When OpenAI launched GPT-5 recently, it took away access to its earlier models (which people preferred), because an LLM can't learn new things: the only way to update one is to replace it outright.
https://www.wheresyoured.at/the-enshittification-of-generative-ai/
If you ask GPT-3 about anything that happened in 2025, it simply doesn't know, which spoils the illusion.
An LLM is basically lossy compression for the Google cache, with hallucinations being a bit like JPEG artifacts. Each LLM is a snapshot in time from the moment when "training" slowly and expensively compressed a historical dataset.
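The analogy is easy to demonstrate directly. Here is a minimal sketch, assuming Python with Pillow installed; the file names are hypothetical:

```python
# A toy illustration of the lossy-compression analogy, using Pillow
# (pip install Pillow); "snapshot.png" is a hypothetical input file.
from io import BytesIO
from PIL import Image

original = Image.open("snapshot.png").convert("RGB")

# "Training": slowly and expensively squeeze the dataset down.
buffer = BytesIO()
original.save(buffer, format="JPEG", quality=5)  # aggressively lossy
print(f"compressed to {buffer.tell()} bytes")

# "Inference": reconstruct from the compressed snapshot. It looks
# plausible at a glance, but up close it contains invented detail:
# blocky JPEG artifacts, the visual analogue of hallucinations.
buffer.seek(0)
reconstruction = Image.open(buffer)
reconstruction.save("reconstruction.jpg")
```

And nothing that happened after the snapshot was taken can ever appear in the reconstruction, no matter how cleverly you decode it.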