I've been thinking about [this article](https://www.nature.com/articles/s41583-020-0277-3) Ulrike Hahn (npub1u32…xcf5) shared with me recently. Apparently, I have strong opinions about why we shouldn't say that the brain is doing something "backprop-like" when we learn!
I think both brains and artificial neural networks (ANNs) need to solve the "credit assignment" problem. For ANNs, an algorithm called "backpropagation," or just "backprop," is the industry-standard solution, and it works very well. I think what brains do is different.
Before we start, the key thing to know is that computation in a neural network is distributed across many nodes connected by links. To tune the behavior of the network as a whole, you need to tune each of the nodes and links, but how do you know how much any one node or link contributed to the final answer? It's complicated, because each one depends on many others. We call that "credit assignment."
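For the ANN side, backprop's answer to credit assignment can be sketched in a few lines. This is a toy example of mine (not from the article): a two-weight "network" with one hidden unit, where the chain rule tells each weight how much it contributed to the error.

```python
# Toy sketch of credit assignment via backprop: a 2-layer linear
# network with one hidden unit and squared-error loss. All names
# and values are illustrative assumptions, not from the article.

def forward_backward(x, target, w1, w2):
    # Forward pass: input -> hidden -> output.
    h = w1 * x                     # hidden activation
    y = w2 * h                     # network output
    loss = 0.5 * (y - target) ** 2

    # Backward pass: the chain rule assigns each weight its
    # share of "credit" (blame) for the error.
    dy = y - target                # dLoss/dy
    dw2 = dy * h                   # w2's contribution to the error
    dh = dy * w2                   # error signal sent back through w2
    dw1 = dh * x                   # w1's contribution, via the path through w2
    return loss, dw1, dw2

loss, dw1, dw2 = forward_backward(x=1.0, target=1.0, w1=0.5, w2=0.5)
print(loss, dw1, dw2)  # each weight gets its own gradient to follow
```

The point of the sketch: `dw1` depends on `w2`, i.e., assigning credit to an upstream link requires knowing the downstream weights exactly. That's the part that seems biologically awkward.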
(1/3)
#ai #ml #neuroscience