I think their explanation of symbolic logic was pretty flawed. Also, Demis didn't invent artificial neural networks, multilayer perceptrons, or reinforcement learning, which are the foundations of LLMs, although he did develop improved reinforcement learning algorithms.
Also, this "Eyes on the Earth" podcast is making the same fundamental error about LLMs that is all too common nowadays: the idea that an artificial neural network is somehow similar to the real neural networks in a human brain. This is wrong, but people believe it to be true, and it leads them to believe that modern AI will eventually become more intelligent than humans. The actual fact is that these machines can only use reinforcement learning to learn from information already created by human brains; they can't come up with anything new on their own. They have no way to tell the difference between a creative new idea and random noise. I have written about this at length [on my blog](https://tilde.town/~ramin_hal9001/articles/ai-cannot-live-up-to-the-hype.html).
I blame this misunderstanding on misinformation coming from the big tech companies trying to make people believe they have invented something more powerful than they actually have.
But I agree with everything else they said on this podcast, about how technology is always ultimately going to be used to do evil before people figure out how to use it to do good.
