<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2026-04-09T12:01:05Z</updated>
  <generator>https://yabu.me</generator>

  <title>Nostr notes by Syntaxxor 🏳️‍⚧️ :neobot:</title>
  <author>
    <name>Syntaxxor 🏳️‍⚧️ :neobot:</name>
  </author>
  <link rel="self" type="application/atom+xml" href="https://yabu.me/npub1zcg29w5t9t0302g7qqdk609pyjxaqyfdykh86se0nuwp6a87xatqex4xdu.rss" />
  <link href="https://yabu.me/npub1zcg29w5t9t0302g7qqdk609pyjxaqyfdykh86se0nuwp6a87xatqex4xdu" />
  <id>https://yabu.me/npub1zcg29w5t9t0302g7qqdk609pyjxaqyfdykh86se0nuwp6a87xatqex4xdu</id>
  <icon>https://sk.girlthi.ng/files/e219ea27-fa41-4ea7-861c-8f8bcfd142ee.webp</icon>
  <logo>https://sk.girlthi.ng/files/e219ea27-fa41-4ea7-861c-8f8bcfd142ee.webp</logo>

  <entry>
    <id>https://yabu.me/nevent1qqs8kwnr22elpulltq8dz8glw2wqrqpe6eeu0axu0f3keqdw0mt04qgzyqtppg463v4d79afrcqpkmfu5yjgm5q395j6ul2r9703c8t5lcm4vr8f2qn</id>
    
      <title type="html">There&amp;#39;s an excellent point at the end here that I never ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8kwnr22elpulltq8dz8glw2wqrqpe6eeu0axu0f3keqdw0mt04qgzyqtppg463v4d79afrcqpkmfu5yjgm5q395j6ul2r9703c8t5lcm4vr8f2qn" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsgnu865w8q4nv85hrka7ec9rjdnw66ufuc4dep9c8k0epej6tvx7g3srm9y&#39;&gt;nevent1q…rm9y&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;There&amp;#39;s an excellent point at the end here that I never really considered before:&lt;br/&gt;&amp;gt; &amp;#34;And of course, the people who value process knowledge the *least* are the AI bros who think you can replace skilled workers with a chatbot trained on the things they *say* and *write down*, as though that somehow captured everything they *know*.&amp;#34;&lt;br/&gt;&lt;br/&gt;Online posts and chats and documentation and everything else a chatbot might train off of are generally written to explain the output and structure of a thing to someone else. And while that generally means they&amp;#39;ll be on the simpler side, easier to digest, it&amp;#39;s also usually a very *lossy* process. I&amp;#39;m most familiar with how it works with programming, but I&amp;#39;m sure it applies to anything technical enough. And by &amp;#34;technical&amp;#34; I mean basically anything that involves process knowledge. So most positions outside the Board and the C-Suite.&lt;br/&gt;&lt;br/&gt;Explaining how something works rarely gets into the nitty-gritty of exactly why each coding decision was made. Yet that&amp;#39;s by *far* the most valuable thing to understand about any given piece of code. Those important knowledge-imparting conversations happen in far more personal contexts, usually through word of mouth, which means the knowledge never gets documented. Because how *can* it be documented? Even when it&amp;#39;s talked about online, in things like those Tumblr posts, it often only scratches the surface of the sheer *depth* of knowledge needed to actually do something.&lt;br/&gt;&lt;br/&gt;The best teacher, the only one whose lessons can really be trusted, is experience. And a chatbot that can only be trained by reading existing text will *never* be able to learn from experience. Thus, it can&amp;#39;t really be trusted to actually make correct, informed decisions based on real knowledge of what&amp;#39;s needed in a specific context.&lt;br/&gt;&amp;lt;/rant&amp;gt;
    </content>
    <updated>2026-04-09T07:59:02Z</updated>
  </entry>

</feed>