<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2026-05-13T23:59:04Z</updated>
  <generator>https://yabu.me</generator>

  <title>Nostr notes by Nate Gaylinn</title>
  <author>
    <name>Nate Gaylinn</name>
  </author>
  <link rel="self" type="application/atom+xml" href="https://yabu.me/npub1wg38fugtvv94x4n5fzlqywcw48wcwyl46md6a98nd5v58q3k8xgqg6wzsz.rss" />
  <link href="https://yabu.me/npub1wg38fugtvv94x4n5fzlqywcw48wcwyl46md6a98nd5v58q3k8xgqg6wzsz" />
  <id>https://yabu.me/npub1wg38fugtvv94x4n5fzlqywcw48wcwyl46md6a98nd5v58q3k8xgqg6wzsz</id>
  <icon>https://media.tech.lgbt/accounts/avatars/109/529/910/049/404/221/original/5d2debf4c745576c.jpg</icon>
  <logo>https://media.tech.lgbt/accounts/avatars/109/529/910/049/404/221/original/5d2debf4c745576c.jpg</logo>

  <entry>
    <id>https://yabu.me/nevent1qqs8uacx7w5r6zkdm8jajlksxnq2v97ymuars6cugkw3aqren7wqguszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq46mtl0</id>
    
      <title type="html">I just shared this with my lab in our &amp;#34;use of AI&amp;#34; ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8uacx7w5r6zkdm8jajlksxnq2v97ymuars6cugkw3aqren7wqguszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq46mtl0" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsfsttcl2kg00re9c8xege5fnfzfk9fkfmsa23k2tv60fykez3udrqzpqmnv&#39;&gt;nevent1q…qmnv&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I just shared this with my lab in our &amp;#34;use of AI&amp;#34; discussion channel, right after this article: &lt;a href=&#34;https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/&#34;&gt;https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Relating to our recent discussion, I&amp;#39;m starting to think &amp;#34;unreasonable reasoning&amp;#34; is actually a pretty good descriptor. If you point them in the right direction and keep them in an abstract domain, they can chain together ideas and do useful work. But at the slightest provocation, they will also happily go down an elaborate garden path of utter fantasy, unintentionally confabulating whatever one might expect to see.&lt;br/&gt;&lt;br/&gt;The juxtaposition is wild, but I think it shows that what we casually call &amp;#34;reasoning&amp;#34; is a few things mixed together. LLMs have some, but not all, of these faculties, which explains why we have such a hard time assessing them and come to such radically different conclusions. We need to stop conflating these things in our own minds to better appreciate what LLMs can and can&amp;#39;t do.
    </content>
    <updated>2026-05-10T12:04:12Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfa3tsvmh3mumgft6wxsvy2czujcvpmrehvqrrwwpgz2j4hqzqh4czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq97xdr4</id>
    
      <title type="html">You say it&amp;#39;s not clear why a lack of situationally embedded ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfa3tsvmh3mumgft6wxsvy2czujcvpmrehvqrrwwpgz2j4hqzqh4czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq97xdr4" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsf3qdqjnej4gwn65nt7dqskjt22t8r4nntqlldmu473fz927r4h9sx7yw7z&#39;&gt;nevent1q…yw7z&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;You say it&amp;#39;s not clear why a lack of situationally embedded meaning would limit reasoning, and I think in a literal sense that&amp;#39;s true. Reasoning can be free-floating, purely abstract and detached from a particular context. But we use LLMs in our daily lives, and they may appear &amp;#34;unreasonable&amp;#34; because they have no sense of consequences. They don&amp;#39;t care if their word choice might drive a user to suicide. They don&amp;#39;t care if their solution to your problem undermines the reason you asked for help.&lt;br/&gt;&lt;br/&gt;And this relates to your point about predictive coding. Yes, human minds *do* seem to depend on LLM-like next-token prediction and free associative flow. But we also monitor and reshape that process to align with our goals and values. Part of successful reasoning is to keep that train of thought on task, and to apply reasoning appropriately in context. If an LLM can do reasoning but can&amp;#39;t *manage* and *direct* that reasoning, then we&amp;#39;re talking about something less than the human faculty.&lt;br/&gt;&lt;br/&gt;(4/4)&lt;br/&gt;#llm #ai
    </content>
    <updated>2026-05-09T14:13:31Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsf3qdqjnej4gwn65nt7dqskjt22t8r4nntqlldmu473fz927r4h9szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqj3fmpd</id>
    
      <title type="html">I also agree that it&amp;#39;s wrong to say that an LLM *can&amp;#39;t* ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsf3qdqjnej4gwn65nt7dqskjt22t8r4nntqlldmu473fz927r4h9szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqj3fmpd" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsyq87rknshz8397xlgk9zv79c5k6lzv4veduj0juf265222ymjpnc2cgy7f&#39;&gt;nevent1q…gy7f&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I also agree that it&amp;#39;s wrong to say that an LLM *can&amp;#39;t* access meaning because it isn&amp;#39;t physically embedded. Word frequencies do reveal structure in language that mirrors our systems of meaning and the physical world, and so manipulating words can approximate manipulating ideas. I think it&amp;#39;s fair to say that LLMs &amp;#34;have&amp;#34; concepts, and often these match what&amp;#39;s in a human mind. However, humans learn concepts primarily through our interaction with the physical world. We learn *names* for these concepts linguistically, but often in an active, contextual fashion. There&amp;#39;s also plenty about the world that we don&amp;#39;t explicitly put into words.&lt;br/&gt;&lt;br/&gt;This isn&amp;#39;t to say LLMs *lack* all access to meaning. It&amp;#39;s just that they have to get at it indirectly, through linguistic metaphor and human cultural artifacts. They have a limited, impoverished, biased, and subtly warped view of human meaning. Which, again, does not mean they are incapable of &amp;#34;reasoning,&amp;#34; but may make their behavior appear &amp;#34;unreasonable.&amp;#34;&lt;br/&gt;&lt;br/&gt;(3/4)&lt;br/&gt;#llm #ai
    </content>
    <updated>2026-05-09T14:13:19Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsyq87rknshz8397xlgk9zv79c5k6lzv4veduj0juf265222ymjpnczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq77r4dr</id>
    
      <title type="html">Ulrike, I think part of the problem here is that some of these ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsyq87rknshz8397xlgk9zv79c5k6lzv4veduj0juf265222ymjpnczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq77r4dr" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs88hlqhq9es8kvhzudhhtue93tu2qd5c7k9cm2m7t68cywpn0mqfsv74j5q&#39;&gt;nevent1q…4j5q&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Ulrike, I think part of the problem here is that some of these &amp;#34;arguments from means&amp;#34; aren&amp;#39;t really about reasoning at all, they&amp;#39;re about what we expect from &amp;#34;reasonable people,&amp;#34; which is more about common sense and situational embedding. At least, that&amp;#39;s how I interpret them. I&amp;#39;d be curious what &lt;span itemprop=&#34;mentions&#34; itemscope itemtype=&#34;https://schema.org/Person&#34;&gt;&lt;a itemprop=&#34;url&#34; href=&#34;/npub15xvq07ttjr063lkf99nvm62y74w9fgkhqsehg96ynjqpmh37uukqmq0jms&#34; class=&#34;bg-lavender dark:prose:text-neutral-50 dark:text-neutral-50 dark:bg-garnet px-1&#34;&gt;&lt;span&gt;Prof. Emily M. Bender(she/her)&lt;/span&gt; (&lt;span class=&#34;italic&#34;&gt;npub15xv…0jms&lt;/span&gt;)&lt;/a&gt;&lt;/span&gt; thinks, since I know many people take her arguments more literally than I do, and I&amp;#39;m not sure of her intent.&lt;br/&gt;&lt;br/&gt;I think you&amp;#39;re right that lacking the ability to co-create meaning in dialog with a user does not imply that an LLM is incapable of reasoning or doing cognitive labor. However, they are marketed as friends, lovers, therapists, and assistants. It&amp;#39;s deeply problematic that they may appear human yet do not understand us in a human way. It may violate our expectations and make their behavior appear &amp;#34;unreasonable.&amp;#34; I think if they were presented as impersonal, then this would be much less of a problem.&lt;br/&gt;&lt;br/&gt;(2/4)&lt;br/&gt;#llm #ai
    </content>
    <updated>2026-05-09T14:13:01Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs88hlqhq9es8kvhzudhhtue93tu2qd5c7k9cm2m7t68cywpn0mqfszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqke64az</id>
    
      <title type="html">I enjoyed [this paper](https://zenodo.org/records/18231172 ) by ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs88hlqhq9es8kvhzudhhtue93tu2qd5c7k9cm2m7t68cywpn0mqfszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqke64az" />
    <content type="html">
      I enjoyed [this paper](&lt;a href=&#34;https://zenodo.org/records/18231172&#34;&gt;https://zenodo.org/records/18231172&lt;/a&gt; ) by &lt;span itemprop=&#34;mentions&#34; itemscope itemtype=&#34;https://schema.org/Person&#34;&gt;&lt;a itemprop=&#34;url&#34; href=&#34;/npub1u328aygluds5ft6n3ngrwjmd3ptq9j4xryeek708ryw524krrndqt4xcf5&#34; class=&#34;bg-lavender dark:prose:text-neutral-50 dark:text-neutral-50 dark:bg-garnet px-1&#34;&gt;&lt;span&gt;Ulrike Hahn&lt;/span&gt; (&lt;span class=&#34;italic&#34;&gt;npub1u32…xcf5&lt;/span&gt;)&lt;/a&gt;&lt;/span&gt;! It&amp;#39;s very well written, and I think it correctly identifies and dissects an important stumbling block in our discussions of what LLMs are capable of. I really appreciate the attempt to make this dialog more productive.&lt;br/&gt;&lt;br/&gt;The point is that &amp;#34;reasoning&amp;#34; is a hopelessly ambiguous term, with many different meanings within and across fields. If we want to debate whether LLMs can &amp;#34;reason,&amp;#34; then we ought to get much more precise about what we mean.&lt;br/&gt;&lt;br/&gt;I strongly agree with most of this, though I do have a few objections. They&amp;#39;re all from section 5.4, exploring arguments about &amp;#34;the means&amp;#34; by which LLMs complete reasoning tasks, or whether they *lack* the means to do what we call &amp;#34;reasoning.&amp;#34;&lt;br/&gt;&lt;br/&gt;This really is the value of a paper like this: because of Ulrike&amp;#39;s careful analysis, I can point precisely to where I think the problem lies. Interestingly, &amp;#34;reasoning&amp;#34; may be a red herring.&lt;br/&gt;&lt;br/&gt;(1/4)&lt;br/&gt;#llm #ai
    </content>
    <updated>2026-05-09T14:12:48Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs0avvsm59rmlgtsq2wxrgnwf9v77ctnjq6r0p6pkqdx6w87nkhumczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqen308m</id>
    
      <title type="html">Some UIs have an A&amp;gt;Z or Z&amp;gt;A, which seems better to me, ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs0avvsm59rmlgtsq2wxrgnwf9v77ctnjq6r0p6pkqdx6w87nkhumczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqen308m" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsxkne56eqh8tp3q5p4f6lwzq206r3je7ds2tnd905e3wt7tzk7lfchy0j0d&#39;&gt;nevent1q…0j0d&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Some UIs have an A&amp;gt;Z or Z&amp;gt;A toggle, which seems better to me, because I don&amp;#39;t think there is a strong convention.
    </content>
    <updated>2026-05-03T21:52:47Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsx8dyketnnhua3xrx7ydvg2jgy9ycs2macdwkuurulwaug44ls20qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqs670q3</id>
    
      <title type="html">I suppose the problem with that quote is that it is *intuitively* ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsx8dyketnnhua3xrx7ydvg2jgy9ycs2macdwkuurulwaug44ls20qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqs670q3" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsdpg6gvqwvg6lcwez7aztum7ujsr44k7elkrftaujckmn5fnqq5mqh8zr4z&#39;&gt;nevent1q…zr4z&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I suppose the problem with that quote is that it is *intuitively* very reasonable, but it is not *defensible* because it is imprecise. Perhaps we ought to be more precise, but I also think the intuition is valid.&lt;br/&gt;&lt;br/&gt;It&amp;#39;s reasonable to argue that LLMs, as built today, can&amp;#39;t be doing the kind of thought process that I described. They don&amp;#39;t have any sort of personal identity, by design. How can there be a &amp;#34;thought process&amp;#34; as we know it without a self?&lt;br/&gt;&lt;br/&gt;On the other hand, does human thinking involve sampling from a modeled probability distribution? Possibly! But it&amp;#39;s reasonable to say it can&amp;#39;t *just* be computing probability distributions. There seems to be something more.&lt;br/&gt;&lt;br/&gt;So, maybe it&amp;#39;s wrong to point at &amp;#34;computing probability&amp;#34; here. That&amp;#39;s not the thing, and it may be confusing to highlight it. But we&amp;#39;re actually *not* pointing at computing probability. We&amp;#39;re pointing to the negative space around it. Perhaps we could be more careful about how we do that?
    </content>
    <updated>2026-04-30T11:20:38Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsphgrn78kptg7pgt004kcjwtg534rftev0sqnf4776endhq4e2dagzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq8f3s9z</id>
    
      <title type="html">Sorry if I&amp;#39;m beating a dead horse, but I think quite a bit ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsphgrn78kptg7pgt004kcjwtg534rftev0sqnf4776endhq4e2dagzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq8f3s9z" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsddykz6pspd0yznaje62uqz6emmerntsny7x2pfe5e0m0ev6pue2sau8c6r&#39;&gt;nevent1q…8c6r&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Sorry if I&amp;#39;m beating a dead horse, but I think quite a bit about this topic.&lt;br/&gt;&lt;br/&gt;I would argue that one important mode of &amp;#34;thought&amp;#34; is an internally generated, goal-directed, agent-oriented process. There is some &amp;#34;I&amp;#34; that wants some thing, and a (perhaps nonlinear) &amp;#34;train&amp;#34; of thought that the organism continually generates and steers toward achieving some outcome.&lt;br/&gt;&lt;br/&gt;This is fundamentally different from what an LLM does, but also quite similar. The main difference is that the goals, agency, and process governing our interactions with LLMs are almost exclusively *external* to the LLM itself. They are scaffolding that shapes a generic token generation process.&lt;br/&gt;&lt;br/&gt;To the extent that such scaffolding approximates what a thinking organism does to shape its own thought process, then LLMs will appear as if they are &amp;#34;thinking.&amp;#34; But I think the differences are significant and many, even if we don&amp;#39;t have a good enumeration or vocabulary yet.
    </content>
    <updated>2026-04-29T21:51:32Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsqw2p3pnmvppelvrvze99lwdjrpd2t5kwsdp69xr58agucuqe9jaqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqt8sak8</id>
    
      <title type="html">One thing that really bugs me about the &amp;#34;replicator&amp;#34; ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsqw2p3pnmvppelvrvze99lwdjrpd2t5kwsdp69xr58agucuqe9jaqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqt8sak8" />
    <content type="html">
      One thing that really bugs me about the &amp;#34;replicator&amp;#34; model of evolution is this assumption that organisms make copies of themselves.&lt;br/&gt;&lt;br/&gt;It makes sense to say that natural selection picks out the best organisms, and then they go on to make more like themselves. That does produce adaptation.&lt;br/&gt;&lt;br/&gt;But the reality is offspring are always *different* from their parent(s). Also similar, of course, but never completely. That&amp;#39;s because reproduction and development are always a bit random, and that&amp;#39;s by design.&lt;br/&gt;&lt;br/&gt;So what is being selected? Not an ideal form to be replicated, but a stochastic process that generates new variations on a theme. It&amp;#39;s more like a *distribution* of possible forms, only one of which is realized and (potentially) selected.&lt;br/&gt;&lt;br/&gt;How do things look different if we center this creative process that evolves a space of possible forms, rather than the particular objects that process produces?&lt;br/&gt;&lt;br/&gt;#science #philosophy #evolution #genetics
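      &lt;br/&gt;&lt;br/&gt;As a toy illustration of that framing (my own made-up model, with an arbitrary fitness target): selection only ever evaluates sampled individuals, but what survives and mutates is the distribution they were drawn from.&lt;pre&gt;
import random

# Toy model: each lineage is a *distribution* over forms,
# parameterized by (mean, spread). Selection only ever sees
# sampled individuals, never the distribution itself.

def sample_form(mean, spread):
    # One realized offspring: a noisy draw from the lineage.
    return random.gauss(mean, spread)

def fitness(form, target=1.0):
    # Closer to an arbitrary target form is better (illustrative only).
    return -abs(form - target)

population = [(random.uniform(0, 2), 0.3) for _ in range(20)]
for generation in range(50):
    # Score one sampled form per lineage...
    scored = sorted(population, key=lambda p: fitness(sample_form(*p)), reverse=True)
    # ...but what is selected is the generative process, not the form.
    survivors = scored[: len(scored) // 2]
    population = [(m + random.gauss(0, 0.05), s) for m, s in survivors for _ in range(2)]
&lt;/pre&gt;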
    </content>
    <updated>2026-04-13T12:35:19Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsp44s23ju9ec8z5qtgvqgxz84zkwxdyam492zl7twy2ku623v8wcqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq5sflvp</id>
    
      <title type="html">I thought this was a particularly good analysis of the problem of ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsp44s23ju9ec8z5qtgvqgxz84zkwxdyam492zl7twy2ku623v8wcqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq5sflvp" />
    <content type="html">
      I thought this was a particularly good analysis of the problem of using LLMs for science. It explores the purpose of science, the perverse incentives that drive people to use LLMs, and the impact this has on skill building and training future scientists.&lt;br/&gt;&lt;br/&gt;My lab group has been struggling with this topic lately, without much consensus. This blog post captures a lot of our thinking, and clearly makes some good points that we appreciated. It mostly just describes the mess we&amp;#39;re in without offering much useful advice, but just laying out the problems so nicely is helpful. That said, I do worry the author may be underestimating the impact these tools might have on experienced researchers.&lt;br/&gt;&lt;br/&gt;&lt;a href=&#34;https://ergosphere.blog/posts/the-machines-are-fine/&#34;&gt;https://ergosphere.blog/posts/the-machines-are-fine/&lt;/a&gt;&lt;br/&gt;#academicchatter #llm
    </content>
    <updated>2026-04-10T17:53:16Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsqzf3082uya66u7dk622k9jn8qr0g6d49dpzt573utw6c8wwsvazqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqlmwdrx</id>
    
      <title type="html">Asking &amp;#34;do you?&amp;#34; upsets me. I&amp;#39;m not arguing that ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsqzf3082uya66u7dk622k9jn8qr0g6d49dpzt573utw6c8wwsvazqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqlmwdrx" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsz3al9qpjjs7x7rk2gfh0tkfy579uldgfmgj4d4h64pq8dtpglnrq5qxtkm&#39;&gt;nevent1q…xtkm&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Asking &amp;#34;do you?&amp;#34; upsets me. I&amp;#39;m not arguing that I&amp;#39;m smarter than an LLM, I&amp;#39;m questioning your point that &amp;#34;AI is operating in substantially different modes when it&amp;#39;s hallucinating to when it&amp;#39;s retrieving factual information.&amp;#34;&lt;br/&gt;&lt;br/&gt;If I want a tool for retrieving and operating on facts, I would *insist* it have a trusted knowledge graph linked to sources! I&amp;#39;m biased, &amp;#39;cuz I worked for years on Google&amp;#39;s KG. But, seriously: I want a tool with verifiable results, and a trusted authority who&amp;#39;s accountable for accuracy.&lt;br/&gt;&lt;br/&gt;I buy what you&amp;#39;re saying about an &amp;#34;internal knowledge graph&amp;#34; and source tracing after the fact. I, too, have a fuzzy memory of where my ideas came from, and have to fact-check myself sometimes! But these models are trained on fact, fiction, and falsehood, all blended together without judgment in the same &amp;#34;sorta KG.&amp;#34; Unless the LLM is explicitly told to fact-check itself, and which sources to trust, it will occasionally make stuff up.
    </content>
    <updated>2026-04-09T11:18:38Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8553wql6akeyhpkf2edpdz92wcxa68fs9xanqz8e04n5a3ury77szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqkqnprm</id>
    
      <title type="html">I&amp;#39;ll check out that paper when I get the chance. Sounds ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8553wql6akeyhpkf2edpdz92wcxa68fs9xanqz8e04n5a3ury77szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqkqnprm" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs8g6rukpr8v4fxwrgceu442h3sw6dv62fm5dtq7mmg6rrljtjkr4qt79f0c&#39;&gt;nevent1q…9f0c&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I&amp;#39;ll check out that paper when I get the chance. Sounds interesting. But, still, these models *don&amp;#39;t* model uncertainty, right? They don&amp;#39;t know what they know or how they know it. They don&amp;#39;t have any notion of authoritative sources, or even where the text they&amp;#39;re reproducing comes from. They don&amp;#39;t have a knowledge graph or relational database of facts. They don&amp;#39;t have any notion of logical correctness, except for correct examples in their training corpus and &amp;#34;tools&amp;#34; if those are provided / used. Right?&lt;br/&gt;&lt;br/&gt;That&amp;#39;s my disconnect here. Yes, we&amp;#39;re doing more elaborate training to reduce the error rate. But I think what we call &amp;#34;hallucination&amp;#34; is just a way of describing the fundamental operation that LLMs do (without necessarily implying anything about correctness). New techniques constrain and reinforce that hallucination to make it less error prone, but... it&amp;#39;s still hallucinating, all the time. There is no &amp;#34;factual mode&amp;#34; that I know of. Right?
    </content>
    <updated>2026-04-08T14:47:01Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsxl0sqqql5gf7jh8t4t0xlhtulwn59jhjvlghg65vvuvg0yqqmx2szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2cdgy6</id>
    
      <title type="html">Why do you say &amp;#34;AI is operating in substantially different ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsxl0sqqql5gf7jh8t4t0xlhtulwn59jhjvlghg65vvuvg0yqqmx2szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2cdgy6" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsf9uh3anakhm8gtc0xm7p78m0fl8579m7euu9vhckem4hupv56yvs0425yn&#39;&gt;nevent1q…25yn&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Why do you say &amp;#34;AI is operating in substantially different modes when it&amp;#39;s hallucinating to when it&amp;#39;s retrieving factual information&amp;#34;? My understanding is that LLMs and the tooling built up around them have no notion of what is &amp;#34;factual,&amp;#34; but merely produce statistically probable text. The error rate has gone down, but I think the main innovation driving that is just doing several hidden prompts for every interaction with the user, asking the LLM to self correct before it says anything. That&amp;#39;s not distinguishing factual information, though, just filtering out low probability responses (which are more likely to be errors, but not necessarily).
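      &lt;br/&gt;&lt;br/&gt;Roughly the kind of loop I mean, sketched out (purely hypothetical: generate() stands in for whatever completion API is in play, and the hidden prompts are invented):&lt;pre&gt;
def generate(prompt):
    # Hypothetical stand-in for a call to some LLM completion API.
    raise NotImplementedError

def answer_with_self_check(user_prompt, rounds=2):
    draft = generate(user_prompt)
    for _ in range(rounds):
        # Hidden prompt: ask the model to critique its own draft.
        critique = generate('List any errors in this answer: ' + draft)
        # Hidden prompt: ask it to revise. Note that nothing here consults
        # an external source of facts; it just re-samples likely text.
        draft = generate('Rewrite this answer, fixing these issues. Answer: ' + draft + ' Issues: ' + critique)
    return draft
&lt;/pre&gt;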
    </content>
    <updated>2026-04-08T10:16:09Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsy7dm50hnwxpcpckmwynqhjjj00u8du226nx0m77seq7k3r3lnd7gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqxv9a5m</id>
    
      <title type="html">I see! You&amp;#39;re talking about the distinction between ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsy7dm50hnwxpcpckmwynqhjjj00u8du226nx0m77seq7k3r3lnd7gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqxv9a5m" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs86d293krczcppnkng55d3l3lre3gl2vnva9l9t3cav872w5v8emgg5maa2&#39;&gt;nevent1q…maa2&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I see! You&amp;#39;re talking about the distinction between supervised and unsupervised learning.&lt;br/&gt;&lt;br/&gt;As the paper points out, backprop doesn&amp;#39;t *necessarily* need externally provided targets. Like the brain, it can learn to merely recognize patterns and predict what comes next in an unsupervised way.&lt;br/&gt;&lt;br/&gt;However, I think there&amp;#39;s still the issue of having someone from the outside reach in and nudge all the synapse weights vs. having all the neurons autonomously nudge the &amp;#34;weights&amp;#34; of their own synapses.&lt;br/&gt;&lt;br/&gt;It&amp;#39;s possible what the neurons do is just an approximation of what would happen if the weights were set from outside. Even that would be very interesting, since it might be more efficient and scalable than ANNs.&lt;br/&gt;&lt;br/&gt;But I wonder if neurons do something different, using heuristics and strategies to learn continuously in a dynamic setting? Online learning is a very different use case, after all.
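      &lt;br/&gt;&lt;br/&gt;Concretely, the unsupervised trick is just to manufacture targets from the data itself, something like this (an illustrative fragment, not from the paper):&lt;pre&gt;
# Self-supervised targets: no external labels required. The target
# for each position is simply the next item in the sequence.
tokens = ['the', 'brain', 'predicts', 'what', 'comes', 'next']
inputs = tokens[:-1]
targets = tokens[1:]
pairs = list(zip(inputs, targets))
# Backprop can then minimize prediction error on these pairs,
# with no externally provided targets at any point.
&lt;/pre&gt;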
    </content>
    <updated>2026-02-28T16:37:30Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsq42dwpugkw9m30wk388j4gkxnf6fwmpchchpzkhsrnd6e388ka4qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqgrm39n</id>
    
      <title type="html">I&amp;#39;m not sure what you mean that &amp;#34;next token prediction ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsq42dwpugkw9m30wk388j4gkxnf6fwmpchchpzkhsrnd6e388ka4qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqgrm39n" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs095ufqqnqnxakewf5f62420c8fkm00sjp0yu0t58e5gn2wuprl0c290d0j&#39;&gt;nevent1q…0d0j&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I&amp;#39;m not sure what you mean that &amp;#34;next token prediction [...] elides that difference between the internal and the external.&amp;#34; Could you say more?
    </content>
    <updated>2026-02-28T15:56:47Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsxvgr9r3fmdffa3gc27c83ut0wu65gfw8cdwfy6spedq6qfvufgfgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqg2euyr</id>
    
      <title type="html">Backprop is an algorithm I run to optimize an ANN. It needs a ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsxvgr9r3fmdffa3gc27c83ut0wu65gfw8cdwfy6spedq6qfvufgfgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqg2euyr" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsrd2e0usfg20eye43pmch6z9pnfdawtdhfjgcukkg8c4xaxt6tafc6v08zm&#39;&gt;nevent1q…08zm&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Backprop is an algorithm I run to optimize an ANN. It needs a top-down view of the network topology and the weights of all synapses. It solves the credit-assignment problem in a clever way, usually based on the error rate compared to a known target. Then it simultaneously updates *all* the link weights in the network based on how the ANN responded as a whole. First you train your network, *then* you can use it, but not both at once.&lt;br/&gt;&lt;br/&gt;Rather than being tuned by some external actor, brain cells manage their own relationships with their neighbors. They grow, prune, and modulate their synapses, and they decide when and how to do that based on imperfect feedback, limited information, and evolved heuristics. Brains track and minimize errors, but the targets are internally generated. This is happening continuously, with fluid transitions between acting in the real world, imagining, thinking, and learning.&lt;br/&gt;&lt;br/&gt;I&amp;#39;d argue what the brain does is much harder, and much more interesting.&lt;br/&gt;&lt;br/&gt;(3/3)&lt;br/&gt;&lt;br/&gt;#ai #ml #neuroscience
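      &lt;br/&gt;&lt;br/&gt;To make the contrast concrete, this is roughly what that top-down procedure looks like for a tiny two-layer network (a minimal sketch with made-up data, not any particular system):&lt;pre&gt;
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # made-up inputs
y = rng.normal(size=(8, 1))          # the externally known targets
W1 = rng.normal(size=(3, 4)) * 0.1   # the top-down view: *all* the
W2 = rng.normal(size=(4, 1)) * 0.1   # weights, visible at once

for step in range(1000):
    h = np.tanh(X @ W1)              # forward pass through the whole net
    pred = h @ W2
    err = pred - y                   # error against the external target
    # Credit assignment via the chain rule, from the output backward:
    grad_W2 = h.T @ err
    grad_h = err @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))
    # Simultaneously nudge *every* weight in the network:
    W2 -= 0.01 * grad_W2
    W1 -= 0.01 * grad_W1
&lt;/pre&gt;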
    </content>
    <updated>2026-02-28T15:48:25Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsrd2e0usfg20eye43pmch6z9pnfdawtdhfjgcukkg8c4xaxt6tafczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2mwkev</id>
    
      <title type="html">There are two big reasons I dislike saying brains do something ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsrd2e0usfg20eye43pmch6z9pnfdawtdhfjgcukkg8c4xaxt6tafczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2mwkev" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs06l9q5yxtt5xz4m57dc6e0kd7swhc8clc0runy276lk9r84czh3s6hvre8&#39;&gt;nevent1q…vre8&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;There are two big reasons I dislike saying brains do something &amp;#34;backprop-like.&amp;#34; The first issue is how work is split between nodes and links.&lt;br/&gt;&lt;br/&gt;In ANNs, the nodes themselves are *trivial*, and they&amp;#39;re completely homogeneous across a full layer of the network, if not all the layers. Any deeper computation is about how the nodes are wired together. That is, the program is in the *links* (synapse weights), not the nodes.&lt;br/&gt;&lt;br/&gt;By comparison, brain cells are both complex and diverse. We don&amp;#39;t know how much of the computation happens within cells vs. between them. We&amp;#39;re just starting to figure out what all the different kinds of cells *are*, but have little idea of what they&amp;#39;re doing. It&amp;#39;s clear that individual neurons do a lot, and that ensembles of cells manage each other in complex ways.&lt;br/&gt;&lt;br/&gt;I worry saying the brain &amp;#34;does backprop&amp;#34; implies a network of trivial nodes, where tuning weight vectors is *the place* where learning happens. That&amp;#39;s likely wrong, and it obscures other possibilities.&lt;br/&gt;&lt;br/&gt;(2/3)&lt;br/&gt;&lt;br/&gt;#ai #ml #neuroscience
    </content>
    <updated>2026-02-28T15:47:30Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs06l9q5yxtt5xz4m57dc6e0kd7swhc8clc0runy276lk9r84czh3szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqwkt7nx</id>
    
      <title type="html">I&amp;#39;ve been thinking about [this ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs06l9q5yxtt5xz4m57dc6e0kd7swhc8clc0runy276lk9r84czh3szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqwkt7nx" />
    <content type="html">
      I&amp;#39;ve been thinking about [this article](&lt;a href=&#34;https://www.nature.com/articles/s41583-020-0277-3&#34;&gt;https://www.nature.com/articles/s41583-020-0277-3&lt;/a&gt; ) &lt;span itemprop=&#34;mentions&#34; itemscope itemtype=&#34;https://schema.org/Person&#34;&gt;&lt;a itemprop=&#34;url&#34; href=&#34;/npub1u328aygluds5ft6n3ngrwjmd3ptq9j4xryeek708ryw524krrndqt4xcf5&#34; class=&#34;bg-lavender dark:prose:text-neutral-50 dark:text-neutral-50 dark:bg-garnet px-1&#34;&gt;&lt;span&gt;Ulrike Hahn&lt;/span&gt; (&lt;span class=&#34;italic&#34;&gt;npub1u32…xcf5&lt;/span&gt;)&lt;/a&gt;&lt;/span&gt; shared with me recently. Apparently, I have strong opinions about why we shouldn&amp;#39;t say that the brain is doing something &amp;#34;backprop-like&amp;#34; when we learn!&lt;br/&gt;&lt;br/&gt;I think both brains and artificial neural networks (ANNs) need to solve the &amp;#34;credit assignment&amp;#34; problem. For ANNs, an algorithm called &amp;#34;back propagation&amp;#34; or just &amp;#34;backprop&amp;#34; is the industry standard solution, and it works very well. I think what brains do is different.&lt;br/&gt;&lt;br/&gt;Before we start, the key thing to know is that computation in a neural network is distributed across many nodes connected by links. To tune the behavior of the network as a whole, you need to tune each of the nodes and links, but how do you know how any one node or link contributed to the final answer? It&amp;#39;s complicated, and each one depends on many others. We call that &amp;#34;credit assignment.&amp;#34;&lt;br/&gt;&lt;br/&gt;(1/3)&lt;br/&gt;&lt;br/&gt;#ai #ml #neuroscience
    </content>
    <updated>2026-02-28T15:46:17Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsg49fdjxwqxcduz5j2sqlt09pxg2xnfeh62kc3ld3f4px8g0x0qgqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2h3gq7</id>
    
      <title type="html">It also doesn&amp;#39;t change my mind from my original post. In ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsg49fdjxwqxcduz5j2sqlt09pxg2xnfeh62kc3ld3f4px8g0x0qgqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2h3gq7" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsgzl9k3kjtglpardg3sx75ujmx9ngmkn3pu9xl9s3cnancatkawfg6ttur3&#39;&gt;nevent1q…tur3&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;It also doesn&amp;#39;t change my mind from my original post.&lt;br/&gt;&lt;br/&gt;In machine learning, we run some data through a math expression, compute error, nudge all the parameters of our math expression to reduce the error, and try again. It&amp;#39;s almost always an *offline* process that *depends* on the global top-down view of the equation.&lt;br/&gt;&lt;br/&gt;What I object to is how often people casually accept that this is &amp;#34;learning&amp;#34; in the same sense as we see in human minds. I&amp;#39;d argue that&amp;#39;s obviously false. We learn in different contexts and in different ways.&lt;br/&gt;&lt;br/&gt;Our brains share some design principles with ANNs. In particular, they both exploit self-organization effects in networks to turn local nudges into coherent, functional global structures. But, like... that&amp;#39;s barely scratching the surface of what brain cells actually *do*, and how animal learning happens in practice.
    </content>
    <updated>2026-02-25T14:59:41Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsgzl9k3kjtglpardg3sx75ujmx9ngmkn3pu9xl9s3cnancatkawfgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcuequ9s2jw</id>
    
      <title type="html">So, I just read the Lillicrap et al. paper, and found it very ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsgzl9k3kjtglpardg3sx75ujmx9ngmkn3pu9xl9s3cnancatkawfgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcuequ9s2jw" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsfmknk8u0lk9uyw8vmawttfjvadve8ljqe7m9qwxupf8hpluqhp6szx0uda&#39;&gt;nevent1q…0uda&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;So, I just read the Lillicrap et al. paper, and found it very frustrating. 😅&lt;br/&gt;&lt;br/&gt;I see this as good evidence of cross pollination between the study of ANNs and neuroscience, which is great. Fruitful dialog, to be sure. I also fully accept that brains *must* solve the credit assignment problem that backprop solves in order to learn.&lt;br/&gt;&lt;br/&gt;However, I feel like this paper is trying to classify many things I would call &amp;#34;alternatives to backprop&amp;#34; as &amp;#34;backprop-like&amp;#34; simply because they fulfill the same purpose and involve rich feedback signals. That&amp;#39;s ridiculous! It mostly serves to take credit for people finding different, more biologically plausible ways to solve the problem.&lt;br/&gt;&lt;br/&gt;I&amp;#39;d also argue they&amp;#39;re trying to take credit for self-organization phenomena in network structures. Backprop is one example of this, NGRAD is another, but so is crystal growth. It&amp;#39;s not surprising nature takes advantage of this, like backprop does, but this is not a feature of *backprop* so much as a feature of networks.
    </content>
    <updated>2026-02-25T14:52:17Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsrxad0a7qd7huycrmskr0atr6mgjafpef9ahajlaw7f4g93hdladgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqhghzm9</id>
    
      <title type="html">The abstract on that paper sounds very much like what I imagine. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsrxad0a7qd7huycrmskr0atr6mgjafpef9ahajlaw7f4g93hdladgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqhghzm9" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsvy4lpgyz78hkr4g4eg2eujjwdxe0mfyhmpaael8psk6v40hygtnsfh97vd&#39;&gt;nevent1q…97vd&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;The abstract of that paper sounds very much like what I imagine. Surely the brain does *some* sort of credit assignment, but I wouldn&amp;#39;t want to project any assumptions about how that ought to work or whether it&amp;#39;s backprop-like. I&amp;#39;d rather we just study and try to understand it on its own terms!&lt;br/&gt;&lt;br/&gt;I stick by my point about gradient-based optimization, though. The mention of backprop was a minor addition.
    </content>
    <updated>2026-02-24T13:48:45Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvy4lpgyz78hkr4g4eg2eujjwdxe0mfyhmpaael8psk6v40hygtnszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqmdtpgu</id>
    
      <title type="html">Interesting! I didn&amp;#39;t realize there was still active debate ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvy4lpgyz78hkr4g4eg2eujjwdxe0mfyhmpaael8psk6v40hygtnszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqmdtpgu" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsyz09j9c642mexruaasn07apzq8szmx8nx4675jc2yjexvulfh2tc59t2lz&#39;&gt;nevent1q…t2lz&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Interesting! I didn&amp;#39;t realize there was still active debate over whether backprop could be biologically plausible. I&amp;#39;ll check out the article.&lt;br/&gt;&lt;br/&gt;And, sorry, when I said &amp;#34;everybody,&amp;#34; I meant &amp;#34;too large a portion of the AI research community&amp;#34; and perhaps the media, also.&lt;br/&gt;&lt;br/&gt;I just read a paper that very casually claimed that combining evolutionary algorithms with gradient-based methods is reflective of what nature does, and I realized most readers would just swallow that whole without noticing it makes no sense.
    </content>
    <updated>2026-02-24T13:44:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsz43duyycngczwsazc4d0let3a5crn7eyjzd96rvgy4f4y3764u6gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq86q5su</id>
    
      <title type="html">AI pet peeve: everyone equates artificial neural networks and ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsz43duyycngczwsazc4d0let3a5crn7eyjzd96rvgy4f4y3764u6gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq86q5su" />
    <content type="html">
      AI pet peeve: everyone equates artificial neural networks and gradient-based optimization with brains, minds, and thinking.&lt;br/&gt;&lt;br/&gt;ANNs are big, parameterized math equations that we configure with an algorithm. Living neurons are intelligent agents that manage their own behavior and relationships autonomously. Human brains *definitely* aren&amp;#39;t attempting to differentiate through their interactions in the physical world, because that isn&amp;#39;t possible. They don&amp;#39;t do backprop, either.&lt;br/&gt;&lt;br/&gt;Deep learning is its own thing. Brains are something else. It&amp;#39;s hard to figure out how they compare when we keep pretending they&amp;#39;re the same.&lt;br/&gt;&lt;br/&gt;#ai #deeplearning
    </content>
    <updated>2026-02-24T13:11:56Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8w32z3e9p56df53v2hq78cd9ksx3rlxmq27yefczsw8ezaptgtvczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq8dg3t8</id>
    
      <title type="html">Completely agreed. I also think there&amp;#39;s something to be said ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8w32z3e9p56df53v2hq78cd9ksx3rlxmq27yefczsw8ezaptgtvczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq8dg3t8" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqszpu9wgeut49qau4szjvx6mt0jrlwphz2fp7ea2829m5lr3hrq52qjfk874&#39;&gt;nevent1q…k874&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Completely agreed. I also think there&amp;#39;s something to be said about how AI is ostensibly about understanding intelligence, but in practice seems more concerned with automating human cognitive tasks.
    </content>
    <updated>2026-02-04T18:06:35Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsz3mvm53gkxr9dq7kj9nmfdjnjjd9r93kwkecmtkf6v9j8cpxr80qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq3dz304</id>
    
      <title type="html">Not to mention the fact that so many of our current AI solutions ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsz3mvm53gkxr9dq7kj9nmfdjnjjd9r93kwkecmtkf6v9j8cpxr80qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq3dz304" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsz3eleuqwpkavq3gg9qj55t5q8hrxyrrkwn50kanh7lzyp52npmsg08r6n5&#39;&gt;nevent1q…r6n5&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Not to mention the fact that so many of our current AI solutions are really just models of human intelligence, trained by example. Do we not care about creating genuine, original intelligence? Is bottling it up in a computer so that it&amp;#39;s reproducible the same thing as making it in the first place? I&amp;#39;d say *obviously not*, but apparently many people don&amp;#39;t see the difference...
    </content>
    <updated>2026-02-04T14:52:21Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsz3eleuqwpkavq3gg9qj55t5q8hrxyrrkwn50kanh7lzyp52npmsgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq0m0rq4</id>
    
      <title type="html">It&amp;#39;s strange and frustrating that most AI researchers ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsz3eleuqwpkavq3gg9qj55t5q8hrxyrrkwn50kanh7lzyp52npmsgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq0m0rq4" />
    <content type="html">
      It&amp;#39;s strange and frustrating that most AI researchers don&amp;#39;t seem interested in natural intelligence.&lt;br/&gt;&lt;br/&gt;In the early days, when &amp;#34;neural networks&amp;#34; were seen as models of brains, many people seemed at least superficially interested in neuroscience. It&amp;#39;s not like that now. I&amp;#39;m sure some folks would say &amp;#34;yeah, and aerospace engineers don&amp;#39;t worry about bird flight, either!&amp;#34; but that feels wrong to me.&lt;br/&gt;&lt;br/&gt;If all you care about is moving cargo, then sure, flight is solved, and who cares if our designs are &amp;#34;biologically realistic&amp;#34;. Similarly, if all you care about is recognizing images, playing video games, and generating slop, then AI is solved. We&amp;#39;ll just make the current solutions better.&lt;br/&gt;&lt;br/&gt;But I think we&amp;#39;ve barely scratched the surface of what intelligence actually is! Current AI is *so* narrow and *so* shallow by comparison, yet I think people don&amp;#39;t even notice that because they haven&amp;#39;t actually thought about how intelligent living things are, and in how many different ways!&lt;br/&gt;&lt;br/&gt;#science #ai #intelligence
    </content>
    <updated>2026-02-04T14:48:47Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs2yfk4a5gv2wk24elh8nxskch6tcx6s7x5rp9hz237kjhwrfx4nvqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqma3ayk</id>
    
      <title type="html">Yeah, I agree. The problem is that Anthropic *really is* a leader ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs2yfk4a5gv2wk24elh8nxskch6tcx6s7x5rp9hz237kjhwrfx4nvqzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqma3ayk" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsqt3vpkhsmwfnp9pa2wqtr89jhnyvqj050daqamuklkmhd3uhhhsge27py5&#39;&gt;nevent1q…7py5&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;Yeah, I agree.&lt;br/&gt;&lt;br/&gt;The problem is that Anthropic *really is* a leader in this field, and we ought to care who they put up as their &amp;#34;expert.&amp;#34; That perspective is *very relevant!* But we ought to be challenging that person&amp;#39;s credentials, and explicitly contrasting them with an outside expert&amp;#39;s opinion, since they have a clear conflict of interest.&lt;br/&gt;&lt;br/&gt;Nature... sorta tried to do that? They did at least get some good independent voices. But they didn&amp;#39;t allow any critique, just... putting out a variety of opinions, for viewers to make up their own minds.&lt;br/&gt;&lt;br/&gt;That&amp;#39;s not journalism, and it&amp;#39;s not science, and that&amp;#39;s a shame because Nature ought to be good at both.
    </content>
    <updated>2026-01-19T12:39:18Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvn79h390nw3paltw0ktve82nnrtkdx86j5v36el3l7t2q2ef229czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqsrcxlt</id>
    
      <title type="html">I appreciate videos like [this one](https://youtu.be/ecCqUgHJaPI ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvn79h390nw3paltw0ktve82nnrtkdx86j5v36el3l7t2q2ef229czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqsrcxlt" />
    <content type="html">
      I appreciate videos like [this one](&lt;a href=&#34;https://youtu.be/ecCqUgHJaPI&#34;&gt;https://youtu.be/ecCqUgHJaPI&lt;/a&gt; ) from Nature that collect expert viewpoints, but sometimes the experts should be challenged.&lt;br/&gt;&lt;br/&gt;Jared Kaplan of Anthropic made some very misleading claims.&lt;br/&gt;&lt;br/&gt;LLMs do *not* democratize access to expertise. It feels like that because they sound like an expert, but only when you ask them questions in domains you don&amp;#39;t know. Really, they&amp;#39;re just making shit up, and you don&amp;#39;t notice in areas you&amp;#39;re not an expert in.&lt;br/&gt;&lt;br/&gt;LLMs will *not* solve open problems in STEM. Researchers may *use* machine learning tools to do that, but ML is for finding patterns in data. It can&amp;#39;t &amp;#34;solve&amp;#34; or make &amp;#34;insights.&amp;#34; It only applies when we already have vast amounts of the right kind of data.&lt;br/&gt;&lt;br/&gt;And if we want to talk about LLMs as a cybersecurity threat, we should talk about how *vulnerable* they are to attackers. Imagining a genius AI hacker is nothing more than a distraction!&lt;br/&gt;&lt;br/&gt;#llm #ai
    </content>
    <updated>2026-01-19T12:32:27Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsxn0drgghtf0jf0e649j2h786n3hg5rp6ctv8tfaqc2sm2xg02v6qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2p7h6e</id>
    
      <title type="html">I love and hate that I can make a complex, animated, interactive ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsxn0drgghtf0jf0e649j2h786n3hg5rp6ctv8tfaqc2sm2xg02v6qzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq2p7h6e" />
    <content type="html">
      I love and hate that I can make a complex, animated, interactive data visualization like this with matplotlib.&lt;br/&gt;&lt;br/&gt;I can pick any trial from my experiment and play back the simulation in full detail. I can jump around, pause, and resume just by clicking on the figure. I can see nearly the full state of all the agents and their fitness as they evolve over time. It&amp;#39;s very information dense, but useful, and it actually looks pretty decent!&lt;br/&gt;&lt;br/&gt;On the other hand, the code is *horrifying*. Matplotlib has got to have one of the worst APIs of all time, and the animation tools are particularly gnarly.&lt;br/&gt; &lt;img src=&#34;https://media.tech.lgbt/media_attachments/files/115/906/152/598/101/931/original/da977ab1880da4c2.png&#34;&gt; &lt;br/&gt;
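      &lt;br/&gt;&lt;br/&gt;For anyone curious, the gist of the click-to-pause machinery looks like this (a pared-down sketch with dummy data, not my actual analysis code):&lt;pre&gt;
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1, 1)
state = {'paused': False}

def update(frame):
    # Redraw one timestep of the playback (dummy data here).
    x = np.linspace(0, 2 * np.pi, 200)
    line.set_data(x, np.sin(x + 0.1 * frame))
    return (line,)

def on_click(event):
    # Toggle playback by stopping/starting the animation timer.
    if state['paused']:
        ani.event_source.start()
    else:
        ani.event_source.stop()
    state['paused'] = not state['paused']

ani = FuncAnimation(fig, update, frames=200, interval=50)
fig.canvas.mpl_connect('button_press_event', on_click)
plt.show()
&lt;/pre&gt;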
    </content>
    <updated>2026-01-16T18:29:54Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs9lzp305eygwkzsq3gjxv2d0dvy8qev9pydgdrfpmvwyxr3f8647szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqm8c0cu</id>
    
      <title type="html">I highly recommend *[The AI Con](https://thecon.ai/ )* by ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs9lzp305eygwkzsq3gjxv2d0dvy8qev9pydgdrfpmvwyxr3f8647szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqm8c0cu" />
    <content type="html">
      I highly recommend *[The AI Con](&lt;a href=&#34;https://thecon.ai/&#34;&gt;https://thecon.ai/&lt;/a&gt; )* by &lt;span itemprop=&#34;mentions&#34; itemscope itemtype=&#34;https://schema.org/Person&#34;&gt;&lt;a itemprop=&#34;url&#34; href=&#34;/npub15xvq07ttjr063lkf99nvm62y74w9fgkhqsehg96ynjqpmh37uukqmq0jms&#34; class=&#34;bg-lavender dark:prose:text-neutral-50 dark:text-neutral-50 dark:bg-garnet px-1&#34;&gt;&lt;span&gt;Prof. Emily M. Bender(she/her)&lt;/span&gt; (&lt;span class=&#34;italic&#34;&gt;npub15xv…0jms&lt;/span&gt;)&lt;/a&gt;&lt;/span&gt; and &lt;span itemprop=&#34;mentions&#34; itemscope itemtype=&#34;https://schema.org/Person&#34;&gt;&lt;a itemprop=&#34;url&#34; href=&#34;/npub1e2zt6n5m6h6x0a659artfnv700kyl2nsp6v00nzyv2l5en24huustsn97d&#34; class=&#34;bg-lavender dark:prose:text-neutral-50 dark:text-neutral-50 dark:bg-garnet px-1&#34;&gt;&lt;span&gt;Alex Hanna&lt;/span&gt; (&lt;span class=&#34;italic&#34;&gt;npub1e2z…n97d&lt;/span&gt;)&lt;/a&gt;&lt;/span&gt;.&lt;br/&gt;&lt;br/&gt;I wish I could say it was a &amp;#34;fun&amp;#34; read, but it was quite infuriating, actually. I was familiar with most of the problems with AI that this book brings up, but seeing them laid out end-to-end, chapter after chapter, paints a very upsetting big picture. The scale of harm done in so many facets of our daily lives is *stunning*, especially considering how much of it has developed over just the past decade.&lt;br/&gt;&lt;br/&gt;If you&amp;#39;re pissed off about AI hype, how AI gets imposed on us, or scary tales of an AI apocalypse, this book is a great resource for understanding the problem, communicating it with others, and pushing back.&lt;br/&gt;&lt;br/&gt;See my full review [here](&lt;a href=&#34;https://www.goodreads.com/review/show/8180549414&#34;&gt;https://www.goodreads.com/review/show/8180549414&lt;/a&gt; ).
    </content>
    <updated>2025-12-27T16:48:36Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsgs9x944qugwnqq7cpek0v8x3h8ha86h80vcwv7ws9fm2dhvppr3czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqjtfqwx</id>
    
      <title type="html">I mean, any excuse to vent about Slack, right? 😉 I have the ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsgs9x944qugwnqq7cpek0v8x3h8ha86h80vcwv7ws9fm2dhvppr3czypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqjtfqwx" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsq0y9mmugk4nmgfyz3c8wu8vyky5fqpnm9rr7zvxdwe62ctqmwhhcusdr6l&#39;&gt;nevent1q…dr6l&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I mean, any excuse to vent about Slack, right? 😉&lt;br/&gt;&lt;br/&gt;I have the added PITA of having my Slack workspaces spread across *three* different email addresses, even though they are all associated with my university! So, each time I set up a new device or something, I go through this whole song and dance three times, which Slack somehow makes work all in one browser. That leaves me sorta logged into three accounts at once, yet Slack binds them all together into one cookie and one UX, despite never acknowledging that these email addresses belong to the same person. What a mess!&lt;br/&gt;&lt;br/&gt;I can only imagine how much the engineers working on this must hate it, too, but I imagine changing it at this point would be an absolutely Herculean refactoring and migration effort. This is exactly the sort of tech debt that just gets kicked down the road forever...
    </content>
    <updated>2025-12-23T16:14:29Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8q6zh7cqjja09xmd8983lr45vtc7kz9qf0m38puup8pp9hxhtu2gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqyrwkvu</id>
    
      <title type="html">*So much* security theater these days, even for really minor ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8q6zh7cqjja09xmd8983lr45vtc7kz9qf0m38puup8pp9hxhtu2gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqyrwkvu" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsv8vw23kwuvs6jmmla64497s3654r82jvlzq3lyv977rg798qj6rsn3lwc5&#39;&gt;nevent1q…lwc5&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;*So much* security theater these days, even for really minor stuff that doesn&amp;#39;t need it. But Slack is actually important, and its authentication system is insane. I&amp;#39;m sure someone made the decision back in the day that &amp;#34;signing up for a dedicated account is too much friction, anyone should be able to send anyone an invite by email to let them log in!&amp;#34; Seems sensible, until you think through the consequences...
    </content>
    <updated>2025-12-23T15:39:48Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs08x8zcqk3z44h5zu34uzruna392svnurmf03m2pme43peg2vttfszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq4x759c</id>
    
      <title type="html">Are viruses alive? It&amp;#39;s a paradox, but one that disappears ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs08x8zcqk3z44h5zu34uzruna392svnurmf03m2pme43peg2vttfszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq4x759c" />
    <content type="html">
      Are viruses alive?&lt;br/&gt;&lt;br/&gt;It&amp;#39;s a paradox, but one that disappears with a small change in perspective.&lt;br/&gt;&lt;br/&gt;The problem is that viruses *seem* alive. They infect us, then manipulate our bodies in elaborate (and dangerous) ways, to reproduce and spread. They evolve. They seem to be intelligent, and have goals. Yet, they are little more than packets of genetic material. They can&amp;#39;t *do* anything without hijacking the mechanisms of a living cell first.&lt;br/&gt;&lt;br/&gt;So are viruses alive? The paradox arises from thinking of an *object* as either alive or not. This raises all sorts of problems, because a living thing is actually a *process* that continually rebuilds itself. The physical stuff comes and goes, and is not in itself *alive*.&lt;br/&gt;&lt;br/&gt;The solution is to stop talking about &amp;#34;life&amp;#34; and switch to &amp;#34;living&amp;#34;. A virus is *living* when it is part of a living system. This isn&amp;#39;t just wordplay. Life is a process, and it is fundamentally *collective* and interconnected. We are living. Earth is living. But no single object here is.&lt;br/&gt;&lt;br/&gt;#science
    </content>
    <updated>2025-12-15T12:11:47Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsrpam8gs5lfrfpeg2acklc4e26hmn893hxmwhqsaew8zwnl7mjjpczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqhplupz</id>
    
      <title type="html">No such thing as science, technology, or reasoning without a ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsrpam8gs5lfrfpeg2acklc4e26hmn893hxmwhqsaew8zwnl7mjjpczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqhplupz" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs2h4htp4t2c5yzml5sa976vpt9ydd2qp0e8467qjtr36d9xw0ualslxau3e&#39;&gt;nevent1q…au3e&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;No such thing as science, technology, or reasoning without a model. There&amp;#39;s only working from *implicit* models, which are harder to understand and to question.
    </content>
    <updated>2025-11-29T18:40:17Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs90j5l7qely98ppayhr7k7dvd75z697g2a4h5dyxnqpqw249wv6hczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqe7rrzz</id>
    
      <title type="html">Ooh! I just discovered uBlock Origin&amp;#39;s &amp;#34;element ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs90j5l7qely98ppayhr7k7dvd75z697g2a4h5dyxnqpqw249wv6hczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqe7rrzz" />
    <content type="html">
      Ooh! I just discovered uBlock Origin&amp;#39;s &amp;#34;element zapper&amp;#34; mode. It&amp;#39;s quite fun, playing whack-a-mole with Discord&amp;#39;s obnoxious promotions until they&amp;#39;re utterly destroyed. Take that!
    </content>
    <updated>2025-11-23T14:20:17Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqstz8rhzakakpuqrnmzawf5qahnxm0h2s6yrmmakdl7nxt28t758rgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq9a8ycs</id>
    
      <title type="html">My friend seems genuinely baffled that I am an AI researcher who ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqstz8rhzakakpuqrnmzawf5qahnxm0h2s6yrmmakdl7nxt28t758rgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq9a8ycs" />
    <content type="html">
      My friend seems genuinely baffled that I am an AI researcher who refuses to use AI! Not only that, but I argue against it from theory, not experience. Why don&amp;#39;t I just give it a try for a while, and see what it&amp;#39;s really about before I judge it?&lt;br/&gt;&lt;br/&gt;I guess I see where he&amp;#39;s coming from. Part of the problem is the word &amp;#34;AI.&amp;#34; LLMs are *not* my research focus, so it&amp;#39;s less of a contradiction than it sounds. But I admit, being a non-user makes my arguments against LLMs less credible.&lt;br/&gt;&lt;br/&gt;I just don&amp;#39;t understand why I *owe it* to anybody to give AI a shot. I know how LLMs work in gory detail, and I don&amp;#39;t trust them. I&amp;#39;ve seen the mediocre work they produce. I&amp;#39;ve read studies about the seductive illusion of competence and caring they create, and how people fall for that. I know it&amp;#39;s all built on an incredibly exploitative business model.&lt;br/&gt;&lt;br/&gt;I feel *entirely* justified in *not* giving them a chance. I guess I&amp;#39;m just as baffled by how *badly* he wants me to try it, and how sure he seems to be that it would change my mind.
    </content>
    <updated>2025-11-22T14:09:46Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs0w0u7kty5lupmztdqg6wyatte08xnakarge0l0kljmjlruhwqmgszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqlrnjze</id>
    
      <title type="html">I&amp;#39;m looking for some advice for archival compression of image ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs0w0u7kty5lupmztdqg6wyatte08xnakarge0l0kljmjlruhwqmgszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqlrnjze" />
    <content type="html">
      I&amp;#39;m looking for some advice for archival compression of image / video data. Boosts appreciated. :boost_requested: &lt;br/&gt;&lt;br/&gt;I&amp;#39;m working with some lab equipment that captures a jpg image of the same scene every few seconds, continuously. We&amp;#39;d like to keep a relatively high-quality archive of all the data we collect, but storage space is costly!&lt;br/&gt;&lt;br/&gt;My first thought was that we might get much better compression rates if we encoded the images as a video. That&amp;#39;s logically what they are, and very little changes from frame to frame.&lt;br/&gt;&lt;br/&gt;First off: is that reasonable? In my experience, video files seem a bit more &amp;#34;fragile&amp;#34; than images, more likely to get corrupted or not play reliably on all devices / years later. Is this really the case? Are there ways to mitigate this?&lt;br/&gt;&lt;br/&gt;Second, I don&amp;#39;t know much about video codecs and compression schemes, so I&amp;#39;m not sure what would be best for this purpose! Any advice?&lt;br/&gt;&lt;br/&gt;#ffmpeg #data #academicchatter
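      &lt;br/&gt;&lt;br/&gt;For concreteness, here&amp;#39;s the kind of invocation I&amp;#39;m imagining (untested; the filename pattern, frame rate, and codec settings are placeholders, not recommendations):&lt;br/&gt;&lt;pre&gt;
# Lossless route: FFV1 in Matroska is a common archival pairing, and
# FFV1 is intra-only, so damage tends to stay local to a single frame.
ffmpeg -framerate 1 -i frame%05d.jpg -c:v ffv1 -level 3 archive.mkv

# Much smaller via inter-frame coding (the scene barely changes),
# but note this re-encodes the jpgs, so it is lossy.
ffmpeg -framerate 1 -i frame%05d.jpg -c:v libx264 -preset veryslow -crf 18 archive.mkv
&lt;/pre&gt;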
    </content>
    <updated>2025-09-14T15:00:53Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsg2mazq92l0ufpa69xdhr62ukhvsnj370le5tcrgs0k6jp228chaczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqnnwwy9</id>
    
      <title type="html">I take a lot of notes, and use writing very heavily as part of my ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsg2mazq92l0ufpa69xdhr62ukhvsnj370le5tcrgs0k6jp228chaczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqnnwwy9" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs9c3fxmvf9wfsvllj0m9znxq8tfpgvwjydtlkq5fv63heurtkza4q9ksdcp&#39;&gt;nevent1q…sdcp&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;I take a lot of notes, and use writing very heavily as part of my thought process. But I almost never actually *read* those notes later. It&amp;#39;s the act of writing them, not reviewing them, that helps me understand and remember. &lt;br/&gt;&lt;br/&gt;So, for me, notes are obviously not a second brain. They&amp;#39;re how I engage my first brain. 😝
    </content>
    <updated>2025-06-29T10:34:14Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqszkjtv648pxdxwhz2kqhmpxu0juyll4v94l06pg24sc3xy9g0k4eszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqk65sxd</id>
    
      <title type="html">I&amp;#39;ve got an obscure Computer Science question, so if you know ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqszkjtv648pxdxwhz2kqhmpxu0juyll4v94l06pg24sc3xy9g0k4eszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqk65sxd" />
    <content type="html">
      I&amp;#39;ve got an obscure Computer Science question, so if you know CS folks, boosts appreciated. :boost_requested: &lt;br/&gt;&lt;br/&gt;I&amp;#39;m experimenting with weird algorithms inspired by cell biology, and looking for related works in the CS literature.&lt;br/&gt;&lt;br/&gt;In particular, I&amp;#39;m looking for attempts to frame computation and program generation in terms of ordinary differential equations, or stochastic processes. So, like, you describe your input as initial conditions for some system, simulate that system for some time, and whatever the final state is, that&amp;#39;s your output. Problem solved.&lt;br/&gt;&lt;br/&gt;Intuitively, this should be equivalent to other much more common programming models, but if anyone&amp;#39;s already put work into how to do it, that would be helpful  :)&lt;br/&gt;&lt;br/&gt;Or, more literally, if you know of anyone trying to do useful computation using gene regulatory or chemical reaction networks, I&amp;#39;d love to find those examples.&lt;br/&gt;&lt;br/&gt;#compsci #computerscience
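      &lt;br/&gt;&lt;br/&gt;To make the framing concrete, here&amp;#39;s a toy example of my own in the chemical-reaction-network style: the input is encoded as initial molecule counts, the system runs until no reaction can fire, and the output is read off the final state.&lt;br/&gt;&lt;pre&gt;
# Toy CRN computation of min(a, b), in Python.
# The single reaction A + B -&gt; C can fire exactly min(a, b) times
# before one reactant runs out, so the final count of C is the answer.
def crn_min(a, b):
    c = 0
    while a &gt; 0 and b &gt; 0:             # the reaction is still enabled
        a, b, c = a - 1, b - 1, c + 1  # one firing: consume an A and a B, emit a C
    return c

print(crn_min(7, 4))  # 4
&lt;/pre&gt;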
    </content>
    <updated>2025-05-20T20:59:29Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8kn09k8rqc034q9u0w79qd6qwhsy9jqmugrawv3ehkfvz8tvl0pszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqk6sgv6</id>
    
      <title type="html">I was listening to a podcast where the guest argued: when it ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8kn09k8rqc034q9u0w79qd6qwhsy9jqmugrawv3ehkfvz8tvl0pszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqk6sgv6" />
    <content type="html">
      I was listening to a podcast where the guest argued: when it comes to art, today&amp;#39;s AI is like a tool, but in the future, maybe it could be an artist.&lt;br/&gt;&lt;br/&gt;As he explained why, it became clear why I disagree with him. He defines &amp;#34;art&amp;#34; as a *product* that is valued by art critics and by consumers. I agree that AI generated content may become more common and broadly accepted in the future, but I don&amp;#39;t think that&amp;#39;s a good thing, and I wouldn&amp;#39;t call it art!&lt;br/&gt;&lt;br/&gt;I see art as more of an activity than a product. An artist *does* art. Whatever the output of that process is, valuable or not, is also called &amp;#34;art.&amp;#34;&lt;br/&gt;&lt;br/&gt;Am I saying only humans can make art? Not at all. But to be &amp;#34;art&amp;#34; it has to be self-expression. The artist must have something to *say*. As long as we are designing the AI, training it, and prompting it, then it will be a tool. Without us it has no initiative, no opinion, and no creative urge.&lt;br/&gt;&lt;br/&gt;#ai #art
    </content>
    <updated>2025-03-07T20:31:25Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsz6ujp9p2vrfv5sk5zukwcas9atze9yacm5kg5ad49hey4zxlc62gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq7rh9m4</id>
    
      <title type="html">AI has an important place in science. Projects like AlphaFold, ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsz6ujp9p2vrfv5sk5zukwcas9atze9yacm5kg5ad49hey4zxlc62gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueq7rh9m4" />
    <content type="html">
      AI has an important place in science. Projects like AlphaFold, for instance, really do have the potential to rapidly advance our knowledge and improve lives in powerful ways.&lt;br/&gt;&lt;br/&gt;But we have to be careful how we talk about them. AlphaFold and algorithms like it do not *&amp;#34;discover&amp;#34;* facts about the universe. They do not &amp;#34;solve&amp;#34; open problems in science.&lt;br/&gt;&lt;br/&gt;They propose *hypotheses*. They make educated guesses that *may* be right, with some unknown probability. You have to *actually check* before you know if the universe matches what the AI predicted.&lt;br/&gt;&lt;br/&gt;This is no small distinction. This is huge.&lt;br/&gt;&lt;br/&gt;#ai #science
    </content>
    <updated>2025-02-13T11:54:33Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsd7ty8c6r4a4wmg6zncz6dsgmd0hxtqj7ffrqrn0v7md5myq829ngzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqek5aee</id>
    
      <title type="html">Has anyone ever seen code reviews working well in an academic ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsd7ty8c6r4a4wmg6zncz6dsgmd0hxtqj7ffrqrn0v7md5myq829ngzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqek5aee" />
    <content type="html">
      Has anyone ever seen code reviews working well in an academic setting? &lt;br/&gt;&lt;br/&gt;One of my lab mates just paid a very high price for a small bug, and we&amp;#39;re talking about using code reviews to prevent this. Coming from industry myself, I like this idea and volunteered to help figure this out, but I can also imagine lots of reasons this might not work in a lab full of grad students and postdocs.&lt;br/&gt;&lt;br/&gt;In particular, I&amp;#39;m worried about training people to actually give good code reviews, sharing the workload fairly, and agreeing on a standard for code quality. I&amp;#39;m very familiar with how to deal with those issues on a software engineering team, but they would all be very different in this setting...&lt;br/&gt;&lt;br/&gt;#academicchatter #computerscience #compsci
    </content>
    <updated>2025-02-03T22:32:28Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs87twm8r2ex3evaglaa0k3hrzk7q89p02ktvm78tgh3fqfewmfxvszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqdruc7x</id>
    
      <title type="html">It&amp;#39;s actually really frustrating how little time is spent on ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs87twm8r2ex3evaglaa0k3hrzk7q89p02ktvm78tgh3fqfewmfxvszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqdruc7x" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsyp4drgh32leuukf0klz6txa8tjlx0s2c8rt6p6wsjklg6l8d0vnqnqlzj8&#39;&gt;nevent1q…lzj8&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;It&amp;#39;s actually really frustrating how little time is spent on synthesis and &amp;#34;refactoring&amp;#34; in the realm of science. &lt;br/&gt;&lt;br/&gt;Seems like nobody feels a responsibility to curate the bodies of knowledge we&amp;#39;re accumulating.
    </content>
    <updated>2025-01-31T20:01:42Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsxya4yp0ax5nkc9rvvfpa2agq7g3d5kxl7dl397532kucs3m686dgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqfukmsq</id>
    
      <title type="html">The big question is whether you can spot a lottery ticket ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsxya4yp0ax5nkc9rvvfpa2agq7g3d5kxl7dl397532kucs3m686dgzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqfukmsq" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsxldn4y7tk0tdsx8lsmljhc0ffdx2zgh5kezfww4qh3h7rxlzp35s2sldyu&#39;&gt;nevent1q…ldyu&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;The big question is whether you can spot a lottery ticket *without* training the network first. That would make this finding really useful, and perhaps tell us something very interesting about what &amp;#34;learning&amp;#34; even is in the first place.&lt;br/&gt;&lt;br/&gt;But is it possible? I hope that I&amp;#39;m wrong, but I&amp;#39;m starting to think the answer is &amp;#34;no.&amp;#34; Maybe all of this is just a funny procedure that&amp;#39;s roughly equivalent to training a network the normal way.&lt;br/&gt;&lt;br/&gt;Even so, I wonder if this alternate framing is useful.&lt;br/&gt;&lt;br/&gt;#science #ai #machinelearning
    </content>
    <updated>2025-01-31T13:56:24Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsxldn4y7tk0tdsx8lsmljhc0ffdx2zgh5kezfww4qh3h7rxlzp35szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqswcx62</id>
    
      <title type="html">I&amp;#39;ve been reading up on the Lottery Ticket Hypothesis, which ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsxldn4y7tk0tdsx8lsmljhc0ffdx2zgh5kezfww4qh3h7rxlzp35szypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqswcx62" />
    <content type="html">
      I&amp;#39;ve been reading up on the Lottery Ticket Hypothesis, which is super interesting.&lt;br/&gt;&lt;br/&gt;Basically, the observation is that these days we build *vast* neural networks with billions of parameters, but most of the parameters aren&amp;#39;t needed. That is, after training, you can just throw away 95% of the network (pruning), and it will still work fine.&lt;br/&gt;&lt;br/&gt;The LTH paper is asking: could we start with a network just 5% of the size, and get comparable results? If so, that would be a *huge* performance win for Deep Learning.&lt;br/&gt;&lt;br/&gt;What&amp;#39;s interesting is that you *can* do this, but only by training the full network (perhaps several times) to see which weights are needed. They argue that training a neural network isn&amp;#39;t so much *creating* a model as finding a lucky sub-network (a lottery ticket) within the randomly initialized network, a bit like a sculptor &amp;#34;finding&amp;#34; the bust hidden in a block of marble.&lt;br/&gt;&lt;br/&gt;Initial LTH paper: &lt;a href=&#34;http://arxiv.org/abs/1803.03635&#34;&gt;http://arxiv.org/abs/1803.03635&lt;/a&gt;&lt;br/&gt;Follow-up with major clarifications: &lt;a href=&#34;http://arxiv.org/abs/1905.01067&#34;&gt;http://arxiv.org/abs/1905.01067&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;#science #ai #machinelearning
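      &lt;br/&gt;&lt;br/&gt;The core procedure (iterative magnitude pruning with rewinding) is short enough to sketch. This is a rough, framework-free outline, assuming you supply a train() that keeps masked weights at zero and a single flattened weight array; the paper actually prunes layer by layer.&lt;br/&gt;&lt;pre&gt;
import numpy as np

def find_ticket(w_init, train, prune_frac=0.2, rounds=5):
    mask = np.ones_like(w_init)
    for _ in range(rounds):
        w_trained = train(w_init * mask)       # train the currently-masked network
        alive = np.abs(w_trained[mask == 1])
        cutoff = np.quantile(alive, prune_frac)
        mask = mask * (np.abs(w_trained) &gt;= cutoff)  # drop the smallest weights
        # The key move: survivors rewind to w_init, not to their trained values.
    return w_init * mask, mask                 # the lottery ticket and its mask
&lt;/pre&gt;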
    </content>
    <updated>2025-01-31T13:54:53Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqszvyssh73l768tv8me8h98tj85d8drd32rjvw02n73x5nvvd77asczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqrxqnsp</id>
    
      <title type="html">I love how complex the role of RNA in a cell is turning out to ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqszvyssh73l768tv8me8h98tj85d8drd32rjvw02n73x5nvvd77asczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqrxqnsp" />
    <content type="html">
      I love how complex the role of RNA in a cell is turning out to be. How cool is it that cells might use RNA as a code for communicating with each other? It makes me wonder: what are they talking about? What is this *for*?&lt;br/&gt;&lt;br/&gt;It&amp;#39;s particularly fascinating that some species use this mechanism to attack each other (if that&amp;#39;s what&amp;#39;s really going on), and yet *the cells still accept and interpret the RNA despite this*. This mechanism must be very important (or primal) for cells to maintain it even when it could be the biological equivalent of a remote code execution vulnerability.&lt;br/&gt;&lt;br/&gt;&lt;a href=&#34;https://www.quantamagazine.org/cells-across-the-tree-of-life-exchange-text-messages-using-rna-20240916/&#34;&gt;https://www.quantamagazine.org/cells-across-the-tree-of-life-exchange-text-messages-using-rna-20240916/&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;#science #rna #biology
    </content>
    <updated>2024-10-20T16:50:15Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs22w6vetw6pp8mqk4uftdga7x4c7lwvv9ppslmxad0tszgqf33nuszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcuequhmwr0</id>
    
      <title type="html">Now, this is a legitimately awesome way to use AI image ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs22w6vetw6pp8mqk4uftdga7x4c7lwvv9ppslmxad0tszgqf33nuszypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcuequhmwr0" />
    <content type="html">
      Now, this is a legitimately awesome way to use AI image generation: &lt;a href=&#34;https://www.youtube.com/watch?v=FMRi6pNAoag&#34;&gt;https://www.youtube.com/watch?v=FMRi6pNAoag&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is all about creating illusions that show different images depending on how you view or transform them.&lt;br/&gt;&lt;br/&gt;It&amp;#39;s also one of the few examples where I feel it really would make a lot of sense to use generative AI to lay the groundwork for an actual artist. You could choose the two image targets and the transformation desired, use the generated image to solve the global layout problem for you (and filter down the results to those you like best), then make a new image with that layout. Notice: human creativity applies before, during, and after.&lt;br/&gt;&lt;br/&gt;#ai #llm #generativeai #generativeart #aiart
    </content>
    <updated>2024-09-16T14:44:37Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsve8umhvx3ztjstqmrj6xtuuyffll8pajj8xpuu86gdwp0u7ekn9gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqja7m9h</id>
    
      <title type="html">Generative AI is really good at reproducing the superficial form ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsve8umhvx3ztjstqmrj6xtuuyffll8pajj8xpuu86gdwp0u7ekn9gzypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqja7m9h" />
    <content type="html">
      Generative AI is really good at reproducing the superficial form and structure of art, and at using all the tropes of a genre.&lt;br/&gt;&lt;br/&gt;But in real art, these things are just the &amp;#34;carrier wave&amp;#34; for the artist&amp;#39;s message. The real meaning of the artwork is in how the artist *deviates* from expectations. It&amp;#39;s about how they use tropes in new ways, or draw attention to certain aspects of their work, often through repetition, dissonance, or absence.&lt;br/&gt;&lt;br/&gt;This explains the sort of empty feeling I get from AI art.&lt;br/&gt;&lt;br/&gt;#ai #llm #generativeart
    </content>
    <updated>2024-08-30T11:01:49Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsde80qphuj9g8dq6609k5gty8g3ml53g0mgurvjqqwhadmjna5thczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqax96ru</id>
    
      <title type="html">Here’s the thing about elitism. Getting in is about luck and ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsde80qphuj9g8dq6609k5gty8g3ml53g0mgurvjqqwhadmjna5thczypezya83pd3sk56kw3ytuq3mp65ampcn7htdht557dk3jsuzxcueqax96ru" />
    <content type="html">
      Here’s the thing about elitism. Getting in is about luck and charisma as much as competence. It’s all about impressing the right people.&lt;br/&gt;&lt;br/&gt;On some level, folks on the inside know this, but it makes them uncomfortable. They need to believe they *deserve* their position, that they got into power by brilliance, not dumb luck and connections.&lt;br/&gt;&lt;br/&gt;This primes them to become filters. They defend their self image by judging others, and keeping out folks who don’t have the “right stuff,” like they do. But they are *not* super competent, and they have limited visibility, so this is mostly a vibe check, if we’re being honest.&lt;br/&gt;&lt;br/&gt;Truth is, there are vastly more people who could do these elite jobs, but will never get the chance. They may even have better ways to do it, but we’ll never know, because we filter out the folks who don’t fit the mold.&lt;br/&gt;&lt;br/&gt;Elitism is about *artificial scarcity*. Which means, ultimately, it’s about concentrating wealth and power.
    </content>
    <updated>2024-08-22T11:51:04Z</updated>
  </entry>

</feed>