TheGuySwann (npub1h8…6rpev) wrote (via njump, https://yabu.me/npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev):

This is what we've failed to recognize with AI, in my opinion.

Researchers think there's going to be an exponential, never-ending explosion the moment AI can train its own models better using the scaling principle (a big model can make a smaller model that's better, then use compute to scale it up to its own size, thus making a better model than itself).

But what they don't see is that it's all still dependent on human judgment as to what is "better," and it's all derived from human content. Sure, the first few iterations will produce generally much better models, but the first round might give a 100% improvement, the second maybe 40%, the third maybe 10%, and so on. It's just not going to scale forever, because there's no way it compounds; it seems blatantly obvious it has diminishing returns. You can't start with human-quality material and human judgment and end up with something 10,000x better than any human ever can or could be. That doesn't make sense on a dozen different levels. AI is a probability machine: every time you push things closer to what is most probable, you necessarily also introduce a little noise with each iteration, until it can't even tell what is "better" and what isn't anymore.

It's like having an AI check its own work. Sure, a lot of the time it catches mistakes, but sometimes it still just says shit that's dumb as fuck, because that's what AI does sometimes.

nostr:nevent1qvzqqqqqqypzq3svyhng9ld8sv44950j957j9vchdktj7cxumsep9mvvjthc2pjuqqs8wqt6expe40xc5a0p8e674q0pd8kucfz2xk39ut99l9kxnwley6g6yehk9
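A minimal sketch of the arithmetic behind this claim, assuming each round's relative gain shrinks by a constant factor (the post's 100% → 40% → 10% sequence is roughly this shape) while a small amount of evaluation noise compounds per round. The decay rate and noise level here are hypothetical parameters chosen for illustration, not measurements of real training dynamics:

```python
# Illustrative only: models the post's diminishing-returns argument with
# hypothetical numbers (decay, noise), not actual model-training behavior.
import random

def self_improvement_curve(rounds=12, decay=0.4, noise=0.02, seed=0):
    """Each round multiplies quality by (1 + gain), where the gain shrinks
    geometrically (100%, 40%, 16%, ...) and per-round judgment noise grows."""
    rng = random.Random(seed)
    quality = 1.0  # human-level baseline
    gain = 1.0     # first self-training round: 100% improvement
    for i in range(1, rounds + 1):
        jitter = abs(rng.gauss(0.0, noise)) * i  # noise accumulates each round
        quality = quality * (1.0 + gain) - jitter
        print(f"round {i:2d}: gain {gain:6.1%}  quality {quality:6.3f}x baseline")
        gain *= decay  # diminishing returns: each gain is a fraction of the last
    return quality

if __name__ == "__main__":
    self_improvement_curve()
```

Under these assumptions the quality multiplier converges to a finite ceiling a few times the baseline (the gains form a convergent series, and the noise eats the tail) rather than compounding toward anything like 10,000x, which is the shape of the post's argument.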