I don't think the statement is that controversial. Yes, we understand how to train a model, but we don't understand its inner workings. This is just one example:
https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models/