2026-05-09 10:57:59 UTC

John Carlos Baez on Nostr: I again feel the need to point out that despite all the immense dangers of LLMs, ...

I again feel the need to point out that despite all the immense dangers of LLMs, mathematicians are seeing rapid improvements in their capabilities... which may be yet another danger. Fields medalist Timothy Gowers recently blogged about using ChatGPT 5.5 Pro to improve a recent result in combinatorics. I'll skip the math and quote his summary:

"I would judge the level of the result that ChatGPT found in under two hours to be that of a perfectly reasonable chapter in a combinatorics PhD. It wouldn’t be considered an amazing result, since it leant very heavily on Isaac’s ideas, but it was definitely a non-trivial extension of those ideas, and for a PhD student to find that extension it would be necessary to invest quite a bit of time digesting Isaac’s paper, looking for places where it might not be optimal, familiarizing oneself with various algebraic techniques that he used, and so on.

It seems to me that training beginning PhD students to do research, which has always been hard (unless one is lucky enough, as I have often been, to have a student who just seems to get it and therefore doesn’t need in any sense to be trained), has just got harder, since one obvious way to help somebody get started is to give them a problem that looks as though it might be a relatively gentle one. If LLMs are at the point where they can solve “gentle problems”, then that is no longer an option. The lower bound for contributing to mathematics will now be to prove something that LLMs can’t prove, rather than simply to prove something that nobody has proved up to now and that at least somebody finds interesting."

Arguable, but worth paying attention to.

https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/