2026-02-28 15:48:25 UTC

Nate Gaylinn on Nostr:

Backprop is an algorithm I run to optimize an ANN. It needs a top-down view of the network topology and the weights of all synapses. It solves the credit-assignment problem in a clever way, usually based on the error relative to a known target. Then it simultaneously updates *all* the link weights in the network based on how the ANN responded as a whole. First you train your network, *then* you can use it, but not both at once.
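To make that concrete, here's the whole two-phase, top-down loop as a toy numpy sketch (everything here is illustrative: the XOR task, the layer sizes, the learning rate; it's not any particular library's API):

```python
# Toy sketch: backprop as a top-down, two-phase procedure (numpy only;
# names and numbers are illustrative, not from any real framework).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden synapse weights
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output synapse weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the known target

lr = 1.0
losses = []
for _ in range(5000):                 # training phase only...
    h = sigmoid(X @ W1)               # forward pass needs the full topology
    out = sigmoid(h @ W2)
    err = out - y                     # error vs. the known target
    losses.append(float(np.mean(err ** 2)))
    # credit assignment: the chain rule pushes error back through every layer
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out            # *all* weights updated at once,
    W1 -= lr * X.T @ d_h              # from one global error signal

pred = sigmoid(sigmoid(X @ W1) @ W2)  # ...*then* a separate inference phase
```

Note how the optimizer stands outside the network: it sees everything, touches everything, and training and use are distinct steps.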

Rather than being tuned by some external actor, brain cells manage their own relationships with their neighbors. They grow, prune, and modulate their synapses, and they decide when and how to do that based on imperfect feedback, limited information, and evolved heuristics. Brains track and minimize errors, but the targets are internally generated. This is happening continuously, with fluid transitions between acting in the real world, imagining, thinking, and learning.
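For contrast, a local rule like Oja's (a stabilized Hebbian update) tunes each synapse using only the activity of the two cells it connects: no global error signal, no network-wide view, no train/use split. A toy sketch, again purely illustrative and not a model of real neurons:

```python
# Toy sketch: Oja's rule, a local (Hebbian-style) update. Each weight
# changes using only pre- and postsynaptic activity, with no global
# error signal. Illustrative only, not a brain model.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, 3)             # one cell's incoming synapse weights
direction = np.array([0.6, 0.8, 0.0])   # hidden structure in the input
lr = 0.01

for _ in range(5000):
    # noisy, imperfect input: a signal along `direction` plus jitter
    x = rng.normal() * direction + rng.normal(0.0, 0.1, 3)
    post = w @ x                        # postsynaptic activity
    w += lr * post * (x - post * w)     # update is purely local to the synapse

# w drifts toward the input's dominant direction, learning "online"
# while the cell keeps responding: there is no separate training phase.
```

The point of the contrast: this cell discovers structure in its inputs continuously, using only information available at its own synapses.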

I'd argue what the brain does is much harder, and much more interesting.

(3/3)

#ai #ml #neuroscience