Sorry if I'm beating a dead horse, but I think quite a bit about this topic.
I would argue that one important mode of "thought" is an internally generated, goal-directed, agent-oriented process. There is some "I" that wants some thing, and a (perhaps nonlinear) "train" of thought that the organism continually generates and steers toward achieving some outcome.
This is superficially similar to what an LLM does, but fundamentally different. The main difference is that the goals, agency, and process governing our interactions with LLMs are almost exclusively *external* to the LLM itself. They are scaffolding that shapes a generic token generation process.
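To make that concrete, here's a toy sketch of what I mean by scaffolding (all names here are hypothetical stand-ins, not any real agent framework or LLM API): the goal, the steering, and the stopping condition all live in ordinary code outside the model, which just continues text.

```python
def generate(prompt: str) -> str:
    """Stand-in for a generic, goal-free token generator (hypothetical)."""
    return f"continuation of: {prompt[-40:]}"

def goal_satisfied(text: str) -> bool:
    """The 'wanting' lives out here, in the scaffolding, not in the model."""
    return "DONE" in text

def steer(history: list[str]) -> str:
    """External steering: rebuild the prompt to push toward the goal."""
    return "Work toward the goal. So far:\n" + "\n".join(history)

def agent_loop(task: str, max_steps: int = 5) -> list[str]:
    history = [task]
    for _ in range(max_steps):           # persistence is also external
        step = generate(steer(history))  # the model only continues text
        history.append(step)
        if goal_satisfied(step):         # the model never checks this itself
            break
    return history
```

Strip away `steer`, `goal_satisfied`, and the loop, and what remains is just `generate`: no "I", no wanting, no steering.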
To the extent that such scaffolding approximates what a thinking organism does to shape its own thought process, LLMs will appear as if they are "thinking." But I think the differences are many and significant, even if we don't yet have a good enumeration or vocabulary for them.