2026-05-06 11:53:14 UTC
Johannes Koski on Nostr: There's also some irony in how the instructor is very clear that LLMs are bad at the ...

There's also some irony in how the instructor is very clear that LLMs are bad at the thing they themselves are an expert on, because the models make mistakes and are not infallible.

Yet their prime use case for using LLMs is to "learn new things". Where the models naturally relay only factual and relevant information.

Like, obviously there are shades of grey here! You CAN learn a lot with LLMs (if you disregard ethics). But dang, it's scary to see critical thinking crumble so fast.