There are two big reasons I dislike saying brains do something "backprop-like." The first is how the work is split between nodes and links.
In ANNs, the nodes themselves are *trivial*, and they're completely homogeneous across a full layer of the network, if not all the layers. Any deeper computation is about how the nodes are wired together. That is, the program is in the *links* (synapse weights), not the nodes.
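A toy forward pass makes this concrete (a minimal sketch in plain Python, not any real framework): every node runs the exact same trivial ReLU, and changing the behavior of the layer means rewiring the weights, never touching the nodes.

```python
def relu(x):
    # every "node" in the layer is this same trivial function
    return max(0.0, x)

def layer(weights, inputs):
    # the task-specific "program" lives entirely in the weight
    # matrix (the links); the nodes are identical and stateless
    return [relu(sum(w * v for w, v in zip(row, inputs)))
            for row in weights]

x = [1.0, -2.0]
identity = [[1.0, 0.0], [0.0, 1.0]]   # one wiring
swapped  = [[0.0, 1.0], [1.0, 0.0]]   # rewire the links: new program, same nodes
print(layer(identity, x))  # [1.0, 0.0]
print(layer(swapped, x))   # [0.0, 1.0]
```

Two different computations, zero changes to the nodes: that's what "the program is in the links" means here.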
By comparison, brain cells are both complex and diverse. We don't know how much of the computation happens within cells vs. between them. We're just starting to figure out what all the different kinds of cells *are*, but have little idea of what they're doing. It's clear that individual neurons do a lot, and that ensembles of cells manage each other in complex ways.
I worry saying the brain "does backprop" implies a network of trivial nodes, where tuning weight vectors is *the place* where learning happens. That's likely wrong, and it obscures other possibilities.
(2/3)
#ai #ml #neuroscience