A falsehood is not just an epistemic error; asserting one can be a moral act.
A human who asserts a falsehood may be lying, and lying requires intent, a distinctly human faculty.
Intent is a component of the will, and only beings with will can commit moral wrongs.
AI has no will, no intent, and no orientation toward truth or falsity. It does not “lie”; it merely generates outputs according to statistical patterns or user instructions.
Because of this, the same false statement has a fundamentally different status depending on its source.
A human falsehood can be an act of deception. An AI falsehood is a mechanical failure, or a misuse by its operator.
The difference matters because accountability attaches only to humans. Humans can be blamed, punished, or held responsible; AI cannot.
So even if the truth value of a statement is independent of its origin, the moral weight of the act that produced it is not.
Moral attributes like honesty, integrity, and responsibility still matter, and they are distinctly human.
We can judge the truth of human and AI outputs by the same epistemic standard, but we cannot judge the speakers the same way: moral evaluation, including qualities of integrity, honesty, and trustworthiness, applies only to humans.
That is why the source, whether human or AI, still matters.
