<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2026-04-13T17:24:10Z</updated>
  <generator>https://yabu.me</generator>

  <title>Nostr notes by </title>
  <author>
    <name></name>
  </author>
  <link rel="self" type="application/atom+xml" href="https://yabu.me/npub1q3nt0h5c99cdlwckxl7xc7u8lrgkzxhnwv48mffm85yky8cp4xwqrp4s72.rss" />
  <link href="https://yabu.me/npub1q3nt0h5c99cdlwckxl7xc7u8lrgkzxhnwv48mffm85yky8cp4xwqrp4s72" />
  <id>https://yabu.me/npub1q3nt0h5c99cdlwckxl7xc7u8lrgkzxhnwv48mffm85yky8cp4xwqrp4s72</id>
  <entry>
    <id>https://yabu.me/nevent1qqsftnz4z3gzdvq945679gc5jpkjj5eh3y34khhxlhwx6j88rgg9umgzyqzxdd77nq5hphamzcmlcmrmsludzcg67dej5ld98v7sjcslqx5ec7q0fxp</id>
    <title type="html">I’m seeing a lot of denial and logical fallacies on Mastodon ...</title>
    <link rel="alternate" href="https://yabu.me/nevent1qqsftnz4z3gzdvq945679gc5jpkjj5eh3y34khhxlhwx6j88rgg9umgzyqzxdd77nq5hphamzcmlcmrmsludzcg67dej5ld98v7sjcslqx5ec7q0fxp" />
    <content type="html">
      I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs.&lt;br/&gt;&lt;br/&gt;I get it that when folks have concluded that LLMs are harmful, they want to believe that LLMs fail at everything. But a list of correctly-identified bad things about LLMs does not logically imply that LLMs can’t find security bugs.
    </content>
    <updated>2026-04-13T17:24:10Z</updated>
  </entry>

</feed>