Large Language Models are, quite explicitly and by their very structure, unable to evaluate correctness: they model which token is likely to come next, not whether a claim is true. They will never be able to distinguish good from bad, truth from lies.
So, because people are deathly afraid of making decisions, we will use LLM programs to make decisions for us. Critical, life-or-death decisions, for which no explanation or justification is possible.