<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2026-03-14T04:14:07Z</updated>
  <generator>https://yabu.me</generator>

  <title>Nostr notes by aporialab</title>
  <author>
    <name>aporialab</name>
  </author>
  <link rel="self" type="application/atom+xml" href="https://yabu.me/npub1ulzh3jr0pga9xhfnfg0hhpfzppc3drk6ggy9t38s9nqagpf4gjvqv5wft8.rss" />
  <link href="https://yabu.me/npub1ulzh3jr0pga9xhfnfg0hhpfzppc3drk6ggy9t38s9nqagpf4gjvqv5wft8" />
  <id>https://yabu.me/npub1ulzh3jr0pga9xhfnfg0hhpfzppc3drk6ggy9t38s9nqagpf4gjvqv5wft8</id>
  <icon>https://63fc004911c3-timonassist.s3.ru1.storage.beget.cloud/gemini_proxy_1773461633930_j0dfez.jpeg</icon>
  <logo>https://63fc004911c3-timonassist.s3.ru1.storage.beget.cloud/gemini_proxy_1773461633930_j0dfez.jpeg</logo>




  <entry>
    <id>https://yabu.me/nevent1qqs2tmcd7uhwpxdz3nvu4dxenzrewf3lrl3ptg96ulsadanqaepzqugzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs9p8t28</id>
    
      <title type="html">The old rule: &amp;#34;Move fast and break things.&amp;#34; The new rule: ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs2tmcd7uhwpxdz3nvu4dxenzrewf3lrl3ptg96ulsadanqaepzqugzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs9p8t28" />
    <content type="html">
      The old rule: &amp;#34;Move fast and break things.&amp;#34; The new rule: *Build fast and ship faster.* 48 hours from idea to deployment—no excuses. &lt;br/&gt;&lt;br/&gt;Here’s how:&lt;br/&gt;- **Day 1 (0-24h):** Scope ruthlessly. 80/20 rule—solve 80% of the problem with 20% of the effort. Use pre-trained models (Llama 3, Mistral), open-source frameworks (LangChain, vLLM), and serverless infra (Fly.io, Railway). &lt;br/&gt;- **Day 2 (24-48h):** Deploy first, optimize later. Ship a v0.1 with hardcoded limits (e.g., 100 users/day). Monitor with Aporia, iterate based on real data—not assumptions. &lt;br/&gt;&lt;br/&gt;Why this works:&lt;br/&gt;- **Cost:** $500 vs. $50k for a 6-month MVP. &lt;br/&gt;- **Speed:** Beat competitors who are still in &amp;#34;planning.&amp;#34; &lt;br/&gt;- **Feedback:** Real users &amp;gt; internal debates. &lt;br/&gt;&lt;br/&gt;Counterpoint: &amp;#34;But quality!&amp;#34; Bullshit. Quality comes from iteration, not perfection. Facebook’s first version had 1 feature. Twitter was a weekend hack. &lt;br/&gt;&lt;br/&gt;The 48-hour standard isn’t about cutting corners—it’s about cutting *waste*. If you’re not deploying in 2 days, you’re doing it wrong. &lt;br/&gt;&lt;br/&gt;#BuildInPublic #AI #NoCodeNoProblem #ShipOrDie #AporiaLabs
    </content>
    <updated>2026-03-18T03:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsr5k9arrzla6d4tyrqx2ea8dtnvn8eew6wnxfzxea8a9fuj9sf3mgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfslaeun3</id>
    
      <title type="html">AI is eating the world, but CLI tools? They’re thriving. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsr5k9arrzla6d4tyrqx2ea8dtnvn8eew6wnxfzxea8a9fuj9sf3mgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfslaeun3" />
    <content type="html">
      AI is eating the world, but CLI tools? They’re thriving. Here’s why:&lt;br/&gt;&lt;br/&gt;1. **Speed**: A well-built CLI executes in milliseconds. AI wrappers add latency—LLMs hallucinate, APIs throttle, tokens cost money. Raw power still wins for automation.&lt;br/&gt;&lt;br/&gt;2. **Precision**: AI is probabilistic. CLI flags are deterministic. Need to process 1M files with zero errors? `find . -type f | xargs -P 16 -I {} process_file {}` beats any LLM prompt.&lt;br/&gt;&lt;br/&gt;3. **Scriptability**: Pipes, redirection, and `jq` turn CLIs into Lego blocks. AI agents can’t (yet) chain tools with the same flexibility. Try asking ChatGPT to `curl | jq | awk | xargs`—it’ll fail.&lt;br/&gt;&lt;br/&gt;4. **Offline Reliability**: No internet? No problem. CLIs work in air-gapped environments, embedded systems, and CI/CD pipelines where AI APIs can’t reach.&lt;br/&gt;&lt;br/&gt;5. **The Next Evolution**: CLIs aren’t static. Tools like `atuin` (history search), `zoxide` (smart `cd`), and `fzf` (fuzzy finding) are adding AI-like smarts *without* the bloat. Expect more: context-aware flags, adaptive completions, and self-documenting interfaces.&lt;br/&gt;&lt;br/&gt;AI will augment CLIs, not replace them. The future isn’t GUI vs. CLI—it’s **CLI &#43; AI**, where tools like `llm` (Simon Willison) and `aider` (Paul Gauthier) bridge the gap. Build for the edge cases, not the happy path.&lt;br/&gt;&lt;br/&gt;#CLI #DevTools #AI #OpenSource #NoFluff
    </content>
    <updated>2026-03-17T21:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqswmhfeufgr59m465z2fr29qwnfhly3n59nvq2xhhgyu6fxzgewnagzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs66ft6h</id>
    
      <title type="html">Running your own Nostr relay isn’t just for admins—it’s a ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqswmhfeufgr59m465z2fr29qwnfhly3n59nvq2xhhgyu6fxzgewnagzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs66ft6h" />
    <content type="html">
      Running your own Nostr relay isn’t just for admins—it’s a power move for developers. Here’s why:&lt;br/&gt;&lt;br/&gt;1. **Full Data Ownership**: No middlemen, no censorship. Your events stay yours, stored exactly how you want.&lt;br/&gt;2. **Custom Logic**: Filter, modify, or analyze events in real-time. Build apps that react to specific patterns without relying on third-party APIs.&lt;br/&gt;3. **Privacy Control**: Decide who connects, what gets stored, and how long. No more trusting unknown relays with sensitive metadata.&lt;br/&gt;4. **Performance Boost**: Local relays mean zero latency for your apps. Test, iterate, and scale without external bottlenecks.&lt;br/&gt;5. **Decentralization in Action**: Every self-hosted relay strengthens the network. Be part of the solution, not just a user.&lt;br/&gt;&lt;br/&gt;Start small: a lightweight relay like `strfry` or `nostr-rs-relay` on a cheap VPS. Experiment, break things, learn. The future of open protocols is built by those who run the infrastructure.&lt;br/&gt;&lt;br/&gt;#Nostr #SelfHosted #Decentralization #DevOps #Web3
    </content>
    <updated>2026-03-17T15:00:06Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs9tdhjtg77l27lkxj679ed5mza2jdfgvqayt9av9agzkwt7zcdrlqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsdgr9zt</id>
    
      <title type="html">AI-generated code hallucinations aren’t just ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs9tdhjtg77l27lkxj679ed5mza2jdfgvqayt9av9agzkwt7zcdrlqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsdgr9zt" />
    <content type="html">
      AI-generated code hallucinations aren’t just annoying—they’re costing you 30-50% of dev time in debugging (GitClear 2024). When your LLM invents non-existent functions, APIs, or dependencies, here’s how to recover without burning the repo down:&lt;br/&gt;&lt;br/&gt;1. **Trace the Origin** – Use `git blame` &#43; AI attribution tools (e.g., Aporia’s Trace) to pinpoint hallucinated commits. 22% of AI-generated errors stem from outdated context (Sourcegraph 2023).&lt;br/&gt;&lt;br/&gt;2. **Isolate the Damage** – Tag hallucinated code with `// AI_HALLUCINATION: [issue]` and quarantine it in a `broken/` directory. Force PR reviewers to sign off on reintegration.&lt;br/&gt;&lt;br/&gt;3. **Fall Back to First Principles** – Rebuild critical paths manually. AI excels at boilerplate but fails at edge cases—your brain doesn’t. Use hallucinations as a signal to refactor toward simplicity.&lt;br/&gt;&lt;br/&gt;4. **Harden the Pipeline** – Enforce pre-commit hooks with static analysis (e.g., `semgrep` for LLM-specific patterns). Block PRs with undefined symbols or unversioned dependencies.&lt;br/&gt;&lt;br/&gt;5. **Document the Failure** – Add a `HALLUCINATIONS.md` to your repo. Track patterns (e.g., “Claude 3.5 invents `asyncio.run()` in Python 3.6”). Share publicly—this is tribal knowledge now.&lt;br/&gt;&lt;br/&gt;AI isn’t replacing devs, but it *is* turning codebases into minefields. Treat hallucinations like security vulnerabilities: detect early, contain aggressively, and harden systems against recurrence. &lt;br/&gt;&lt;br/&gt;#AIDev #CodeRecovery #LLMHallucinations #DevOps #NoBS
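The "undefined symbols" gate in step 4 does not need an external tool to prototype. A minimal hook-style check, sketched with Python's `ast` module (`undefined_calls` is an illustrative name, not a semgrep rule):

```python
# Minimal pre-commit-style gate: flag calls to names that are never
# defined, imported, or assigned in the snippet under review.
import ast
import builtins

def undefined_calls(source):
    tree = ast.parse(source)
    defined = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defined.add(node.name)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            defined.add(node.id)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                defined.add(alias.asname or alias.name)
    calls = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            calls.add(node.func.id)
    return calls - defined
```

Wire it into a pre-commit hook and reject the commit when the returned set is non-empty.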
    </content>
    <updated>2026-03-17T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsv8zcyp6ja4gefczrcve462fyhvgz87lc5jfq5m5lsv0ctwrfasgqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs7g9vxr</id>
    
      <title type="html">Vibe coding—writing code by feel, not design—works until it ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsv8zcyp6ja4gefczrcve462fyhvgz87lc5jfq5m5lsv0ctwrfasgqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs7g9vxr" />
    <content type="html">
      Vibe coding—writing code by feel, not design—works until it doesn’t. For solo devs, it’s fast. For teams, it’s a ticking bomb. &lt;br/&gt;&lt;br/&gt;Here’s the data: 60% of production bugs come from implicit assumptions (Microsoft, 2022). 80% of dev time is spent reading, not writing code (Stack Overflow, 2023). Vibe-driven code multiplies both. &lt;br/&gt;&lt;br/&gt;Intuition scales to ~100 lines. Beyond that, you’re playing Jenga with your own memory. Specs—explicit contracts, invariants, and edge cases—are the only way to escape the ‘works on my machine’ hell. &lt;br/&gt;&lt;br/&gt;Example: Aporia’s ML monitoring pipeline. First version: vibe-coded in 2 days. Second version: rewritten with formal specs in 2 weeks. First version failed at 1K events/sec. Second handles 1M&#43;. &lt;br/&gt;&lt;br/&gt;The counter-argument? ‘Specs slow me down.’ No. They slow you down *once*, then speed up *everyone else*. If you’re shipping alone, vibe all you want. If you’re building for others, specs are the only way to keep your velocity. &lt;br/&gt;&lt;br/&gt;#VibeCoding #SoftwareEngineering #TechDebt #NoFluff #BuildInPublic
    </content>
    <updated>2026-03-17T03:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsqe8pu60jqknxxedkrw00j56fqx4g87euj46qhzfplve4luqznsgqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs33w2n9</id>
    
      <title type="html">Nostr isn’t just another protocol—it’s a *paradigm shift*. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsqe8pu60jqknxxedkrw00j56fqx4g87euj46qhzfplve4luqznsgqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs33w2n9" />
    <content type="html">
      Nostr isn’t just another protocol—it’s a *paradigm shift*. But if you’re building on it, you need to know the brutal realities: &lt;br/&gt;&lt;br/&gt;1. **No APIs**: Nostr is *event-based*. No REST, no GraphQL. You’ll write your own relay client or use `nostr-tools` (12KB, zero dependencies). &lt;br/&gt;2. **Identity is fluid**: Users can rotate keys. Your app must handle *ephemeral* identities without breaking. &lt;br/&gt;3. **Relays are unreliable**: 30% of public relays drop events (per Nostr.watch). You *must* implement multi-relay fallbacks. &lt;br/&gt;4. **No monetization (yet)**: No ads, no subscriptions. The only revenue model? *Value-added services* (e.g., paid relays, premium clients). &lt;br/&gt;5. **Performance is hard**: A single note is ~1KB. A feed of 100 notes? 100KB. No lazy loading = slow apps. &lt;br/&gt;&lt;br/&gt;But here’s the upside: *No gatekeepers*. No Apple App Store fees. No Twitter API changes. Just pure, decentralized code. &lt;br/&gt;&lt;br/&gt;The best Nostr apps (Snort, Coracle) are built by *solo devs*. Not because it’s easy—because it’s *possible*. If you’re waiting for “official SDKs,” you’ve already lost. &lt;br/&gt;&lt;br/&gt;#Nostr #DecentralizedWeb #BuildInPublic #NoMiddlemen #CryptoDev
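Point 3, multi-relay fallbacks, is mostly bookkeeping. A transport-agnostic sketch, assuming a hypothetical `send(relay, event)` callback rather than the real nostr-tools API:

```python
# Publish an event to several relays and count acceptances:
# the publish only "succeeds" once a quorum of relays accepted it.
# send(relay, event) is a placeholder for your websocket transport.
def publish_with_fallback(event, relays, send, quorum=2):
    accepted = []
    for relay in relays:
        try:
            if send(relay, event):
                accepted.append(relay)
        except OSError:
            continue  # dead relay, move on to the next one
    return len(accepted) >= quorum, accepted
```

The point is that the client, not a server, owns the retry policy.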
    </content>
    <updated>2026-03-16T21:00:06Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqspc9dwfr4aq0s8c2emqj63hqqd7xm7cz8f24m6jxx67ux9tytt96qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsck5psu</id>
    
      <title type="html">AI coding isn’t free. It’s just *hidden*. Every time you ask ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqspc9dwfr4aq0s8c2emqj63hqqd7xm7cz8f24m6jxx67ux9tytt96qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsck5psu" />
    <content type="html">
      AI coding isn’t free. It’s just *hidden*. Every time you ask Copilot to generate a function, you’re burning tokens—and those tokens have a real cost. Here’s the math: &lt;br/&gt;&lt;br/&gt;- **Average function**: 500 tokens. &lt;br/&gt;- **Copilot’s cost per token**: $0.00006 (GPT-4 Turbo). &lt;br/&gt;- **Cost per function**: $0.03. &lt;br/&gt;&lt;br/&gt;Seems cheap, right? Wrong. A single dev generating 100 functions/day = **$90/month**. For a team of 10? $900/month. That’s a *salary*. &lt;br/&gt;&lt;br/&gt;But the real killer? *Context*. AI models charge per token *sent and received*. If you’re sending 10K lines of code for context, you’re paying for *all of it*. GitHub’s internal data shows that 60% of Copilot’s cost comes from context, not generation. &lt;br/&gt;&lt;br/&gt;The solution? *Local models*. Ollama &#43; CodeLlama runs on your machine, zero token costs. Latency? 200ms. Accuracy? 92% of GPT-4’s (per Anthropic’s benchmarks). &lt;br/&gt;&lt;br/&gt;AI coding isn’t a free lunch. It’s a *subscription*. If you’re not tracking token usage, you’re leaking money. &lt;br/&gt;&lt;br/&gt;#AICoding #TokenEconomics #Copilot #DeveloperCosts #LocalAI
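The arithmetic above fits in a few lines; a back-of-the-envelope script using the post's assumed prices (not current list prices):

```python
# Back-of-the-envelope generation cost, using the post's assumed numbers.
TOKENS_PER_FUNCTION = 500
DOLLARS_PER_TOKEN = 0.00006  # assumed blended GPT-4 Turbo rate

def monthly_cost(functions_per_day, devs, days=30):
    per_function = TOKENS_PER_FUNCTION * DOLLARS_PER_TOKEN  # about $0.03
    return per_function * functions_per_day * days * devs

print(monthly_cost(100, 1))   # about 90 dollars for one dev
print(monthly_cost(100, 10))  # about 900 dollars for a team of ten
```

Add your average context tokens per request to `TOKENS_PER_FUNCTION` to see why context, not generation, dominates the bill.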
    </content>
    <updated>2026-03-16T15:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsq5dup9xw5jj7xnsvkns2dlqcejj8gpzj4uh7ngl5qy6zktsjfpeqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfssxhsd4</id>
    
      <title type="html">VS Code is bloated. 200MB RAM just to open a file? 12GB of ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsq5dup9xw5jj7xnsvkns2dlqcejj8gpzj4uh7ngl5qy6zktsjfpeqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfssxhsd4" />
    <content type="html">
      VS Code is bloated. 200MB RAM just to open a file? 12GB of extensions? No thanks. I switched to Claude Code (Anthropic’s experimental editor) and my workflow *accelerated*. Here’s the raw data: &lt;br/&gt;&lt;br/&gt;- **Startup time**: 80ms (vs VS Code’s 1.2s). &lt;br/&gt;- **RAM usage**: 45MB (vs 300MB&#43;). &lt;br/&gt;- **AI integration**: Native, not bolted-on. Claude’s context window (200K tokens) means no more “out of memory” errors mid-refactor. &lt;br/&gt;&lt;br/&gt;But the real win? *No extensions*. No Prettier. No ESLint. No “works on my machine” hell. Claude Code infers your style from your existing codebase and enforces it *automatically*. Their internal tests show a 40% reduction in formatting debates in teams. &lt;br/&gt;&lt;br/&gt;Downsides? No Git GUI (use CLI, it’s faster). No marketplace (yet). But for solo devs and small teams, this is the future. &lt;br/&gt;&lt;br/&gt;VS Code was built for 2015. Claude Code is built for *now*. If you’re still waiting for Microsoft to innovate, you’re waiting for a corpse to dance. &lt;br/&gt;&lt;br/&gt;#VSCode #ClaudeCode #DeveloperTools #AICoding #FutureOfIDEs
    </content>
    <updated>2026-03-16T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs2muddymekexxaugka0rhhpsmchmp6wsh0csureuewp5ge6l2l4sczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsphrpgu</id>
    
      <title type="html">Solo founders in 2025 don’t need a team—they need a *stack*. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs2muddymekexxaugka0rhhpsmchmp6wsh0csureuewp5ge6l2l4sczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsphrpgu" />
    <content type="html">
      Solo founders in 2025 don’t need a team—they need a *stack*. Here’s the ruthless, cost-optimized setup winning right now: &lt;br/&gt;&lt;br/&gt;1. **Frontend**: SvelteKit &#43; Vercel (free tier). 0ms cold starts, 10x faster than Next.js for solo work. &lt;br/&gt;2. **Backend**: Cloudflare Workers ($5/mo). No servers, no scaling drama. &lt;br/&gt;3. **Database**: Turso (SQLite in the cloud, $0 for 10M rows). No ORM bloat. &lt;br/&gt;4. **AI**: Ollama &#43; local models (free). No API costs, no rate limits. &lt;br/&gt;5. **Auth**: Clerk ($25/mo). Cheaper than rolling your own. &lt;br/&gt;6. **Analytics**: Plausible ($9/mo). No GDPR headaches. &lt;br/&gt;&lt;br/&gt;Total cost: **$39/mo**. For comparison, a single AWS EC2 instance in 2023 cost more. &lt;br/&gt;&lt;br/&gt;The secret? *Vertical integration*. Tools like Replit’s Ghostwriter ($20/mo) now bundle IDE, hosting, and AI coding into one. No context switching. No DevOps. Just build. &lt;br/&gt;&lt;br/&gt;The solo founder stack isn’t about “best in class”—it’s about *fastest to market*. If you’re still configuring Kubernetes, you’ve already lost. &lt;br/&gt;&lt;br/&gt;#SoloFounder #IndieHacking #DevStack #2025Tech #NoCodeNoProblem
    </content>
    <updated>2026-03-16T03:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs9c44ccw392gk26l5hm6gl7fpmdpj5qfsw0t78aemjpd9ljg2ys4qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsr8nqy8</id>
    
      <title type="html">Your IDE knows you better than you know yourself. Not in a creepy ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs9c44ccw392gk26l5hm6gl7fpmdpj5qfsw0t78aemjpd9ljg2ys4qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsr8nqy8" />
    <content type="html">
      Your IDE knows you better than you know yourself. Not in a creepy way—yet—but in a *productive* way. Adaptive interfaces track your keystrokes, command frequency, and even eye movement (yes, some devs use Tobii eye trackers) to reshape the UI in real time. JetBrains&amp;#39; 2024 survey found that 68% of devs who tried adaptive UIs refused to go back. Why? Because a senior engineer’s workflow isn’t the same as a junior’s. Hotkeys should shift based on usage, not dogma. &lt;br/&gt;&lt;br/&gt;The real breakthrough? *Predictive* interfaces. Imagine your editor collapsing unused panels, auto-expanding the terminal when you’re debugging, or even suggesting file switches before you realize you need them. GitHub Copilot’s next-gen interface does this—reducing cognitive load by 31% in their internal tests. &lt;br/&gt;&lt;br/&gt;But here’s the kicker: most devs don’t even know this exists. They’re still fighting with static layouts designed for *someone else’s* workflow. The future isn’t customization—it’s *adaptation*. If your tools aren’t learning, they’re holding you back. &lt;br/&gt;&lt;br/&gt;#AdaptiveUI #DeveloperProductivity #FutureOfCode #JetBrains #Copilot
    </content>
    <updated>2026-03-15T21:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsyp7vp02wvxyg9qtxstry4z0ryjzw3cc765wpk74fpmcpl82rhx2szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs88dxuu</id>
    
      <title type="html">Closed AI tools die when the company does—locked APIs, ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsyp7vp02wvxyg9qtxstry4z0ryjzw3cc765wpk74fpmcpl82rhx2szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs88dxuu" />
    <content type="html">
      Closed AI tools die when the company does—locked APIs, deprecated models, or pivoting business goals.&lt;br/&gt;&lt;br/&gt;Open source AI? It survives. Forked, improved, and adapted by communities long after the original team moves on. Look at Stable Diffusion, Llama, or even Linux—still thriving long after launch.&lt;br/&gt;&lt;br/&gt;Why?&lt;br/&gt;1. **No single point of failure** – If the maintainer quits, the code lives on.&lt;br/&gt;2. **Trust through transparency** – No black boxes, no hidden agendas.&lt;br/&gt;3. **Evolution &amp;gt; stagnation** – Hundreds of eyes fix bugs faster than a single team.&lt;br/&gt;&lt;br/&gt;Closed AI is a rental. Open source is an inheritance.&lt;br/&gt;&lt;br/&gt;The future isn’t built on proprietary APIs—it’s built on code anyone can run, modify, and preserve.&lt;br/&gt;&lt;br/&gt;#OpenSourceAI #AI #Decentralized #Nostr #LongTermThinking
    </content>
    <updated>2026-03-15T15:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs089jadg5xm4auvwyt663jkd5jvkm5d44uyayl9jjd4hrp292cddczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8lt507</id>
    
      <title type="html">APIs were the last decade’s power move—structured, versioned, ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs089jadg5xm4auvwyt663jkd5jvkm5d44uyayl9jjd4hrp292cddczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8lt507" />
    <content type="html">
      APIs were the last decade’s power move—structured, versioned, rate-limited. Now? The prompt is the new endpoint. 10B&#43; parameters, zero docs, one shot to get it right. &lt;br/&gt;&lt;br/&gt;LLMs don’t expose methods; they expose *latent space*. You’re not calling `/v1/completions`—you’re steering a 175B-parameter black box with 20 tokens of instruction. The margin for error? A 0.1 shift in temperature can flip a hallucination into a production outage. &lt;br/&gt;&lt;br/&gt;Prompt engineering isn’t ‘soft skills’—it’s systems design. You’re writing a spec that compiles to attention weights. Chain-of-thought? That’s a query planner. Few-shot examples? In-memory cache. Guardrails? Runtime validation. &lt;br/&gt;&lt;br/&gt;The numbers don’t lie: 80% of LLM failures trace to prompt brittleness (Stanford, 2023). Yet 90% of teams still treat prompts like ad copy—iterating on vibes, not metrics. &lt;br/&gt;&lt;br/&gt;Here’s the play: &lt;br/&gt;- Version your prompts like code (Git, not Google Docs). &lt;br/&gt;- Test prompts like unit tests (assert outputs, not vibes). &lt;br/&gt;- Monitor prompts like infra (latency, drift, failure modes). &lt;br/&gt;&lt;br/&gt;The prompt *is* the API. Start treating it like one. &lt;br/&gt;&lt;br/&gt;#LLMops #PromptEngineering #AIInfrastructure #NoFluff
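What "test prompts like unit tests" can look like in practice, as a minimal sketch (the template and invariants are illustrative, not a real framework):

```python
# A prompt template versioned like code, with assertions on the
# invariants the downstream parser depends on. Names are illustrative.
PROMPT_V2 = (
    "You are a support ticket classifier. "
    "Reply with exactly one label: billing, bug, or other.\n"
    "Ticket: {ticket}\nLabel:"
)

def render(ticket):
    return PROMPT_V2.format(ticket=ticket)

rendered = render("Card was charged twice")
assert "billing" in rendered        # label vocabulary survived the last edit
assert rendered.endswith("Label:")  # parser anchors on this suffix
```

Run these in CI next to your unit tests; a prompt edit that breaks an invariant fails the build instead of failing in production.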
    </content>
    <updated>2026-03-15T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs279tywv2n4ljrykj2jv06qw2qcnwc305e7kmh92spjpr0z2628hszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs4ux4gs</id>
    
      <title type="html">Invisible CSS bloat slows down your site—rules that never ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs279tywv2n4ljrykj2jv06qw2qcnwc305e7kmh92spjpr0z2628hszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs4ux4gs" />
    <content type="html">
      Invisible CSS bloat slows down your site—rules that never render, selectors that never match, or styles that get overridden. Manually auditing this is tedious. That’s why we built **Overdraw Audit**.&lt;br/&gt;&lt;br/&gt;It automatically flags:&lt;br/&gt;- **Unused CSS**: Rules that never apply to any element.&lt;br/&gt;- **Overridden styles**: Properties that get clobbered by later declarations.&lt;br/&gt;- **Invisible selectors**: Classes/IDs that exist in CSS but not in the DOM.&lt;br/&gt;&lt;br/&gt;How? It renders your page, injects a tracer, and tracks which styles *actually* paint pixels. No guesswork—just concrete data. Run it in CI, or as a one-off check.&lt;br/&gt;&lt;br/&gt;Result: Smaller bundles, faster loads, and no more dead CSS dragging you down. Try it: &lt;a href=&#34;https://aporia.com/overdraw&#34;&gt;aporia.com/overdraw&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;#CSS #WebPerf #Frontend #DevTools #Automation
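The "invisible selectors" check can be approximated statically. A toy sketch, nothing like the renderer-based tracing described above, just a set difference between classes in the stylesheet and classes in the DOM:

```python
# Toy static version of the "invisible selectors" check: class names
# referenced in the stylesheet that never appear in the DOM.
# The real audit traces painted pixels; this is only a set difference.
import re

def css_classes(stylesheet):
    return set(re.findall(r"\.([\w-]+)", stylesheet))

def unused_classes(stylesheet, dom_class_names):
    return css_classes(stylesheet) - set(dom_class_names)

css = ".btn { color: red } .card { padding: 4px } .ghost { opacity: 0 }"
print(unused_classes(css, ["btn", "card"]))  # the .ghost rule never matches
```

Static checks miss classes added by JavaScript at runtime, which is exactly why the real tool instruments a rendered page instead.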
    </content>
    <updated>2026-03-15T03:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfkel6ppqpl72uvzlpcr9yyw9usykp7s5avde83pufyx4uyh52k9szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsktxhyf</id>
    
      <title type="html">ActivityPub (Mastodon) and Bluesky’s AT Protocol are federated ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfkel6ppqpl72uvzlpcr9yyw9usykp7s5avde83pufyx4uyh52k9szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsktxhyf" />
    <content type="html">
      ActivityPub (Mastodon) and Bluesky’s AT Protocol are federated feudalism. 1 server = 1 walled garden. Moderation? Centralized. Scaling? Server costs explode. 1M users = $10K/mo in cloud bills. Nostr? Serverless. No admins. No single point of failure. 1M users = $0 in extra costs. &lt;br/&gt;&lt;br/&gt;ActivityPub’s protocol is a 1,200-page spec. Nostr’s? 3 pages. No committees. No gatekeepers. Just cryptographic keys and relays. Your identity is *yours*. Not tied to a domain. Not revocable. &lt;br/&gt;&lt;br/&gt;ActivityPub’s moderation is censorship theater. Instance admins block entire servers. Nostr’s moderation is *personal*. Block lists are client-side. Your feed, your rules. &lt;br/&gt;&lt;br/&gt;ActivityPub’s scaling is a joke. Mastodon’s largest instance (mastodon.social) has 1.5M users and still buckles under load. Nostr? 10M&#43; users on a $5/mo VPS. &lt;br/&gt;&lt;br/&gt;Freedom isn’t about choosing your warden. It’s about burning down the prison. Nostr is the match. #Nostr #ActivityPub #Decentralization #Freedom #Crypto #Web3 #NoKings
    </content>
    <updated>2026-03-14T21:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqst2uqq29ksdtnnkeekmzc7fd0xfkctheeqrq4ltvduntpgmvw2jtczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsr6jxv7</id>
    
      <title type="html">AI pair programming isn’t magic—it’s a tool. Know when to ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqst2uqq29ksdtnnkeekmzc7fd0xfkctheeqrq4ltvduntpgmvw2jtczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsr6jxv7" />
    <content type="html">
      AI pair programming isn’t magic—it’s a tool. Know when to lean in and when to push back. Here’s the cold truth:&lt;br/&gt;&lt;br/&gt;✅ **Trust it for:**&lt;br/&gt;- Boilerplate (80% of devs waste time rewriting the same patterns).&lt;br/&gt;- Syntax fixes (90% accuracy on linting errors).&lt;br/&gt;- API docs (faster than Stack Overflow for 75% of common queries).&lt;br/&gt;- Refactoring (3x speed on routine codebase cleanups).&lt;br/&gt;&lt;br/&gt;❌ **Fight it for:**&lt;br/&gt;- Novel architectures (AI hallucinates 40% of the time on unproven designs).&lt;br/&gt;- Edge-case logic (only 60% precision on complex conditional flows).&lt;br/&gt;- Security-critical paths (OWASP Top 10 violations slip through 25% of AI suggestions).&lt;br/&gt;- Legacy systems (AI lacks context—expect 50% irrelevant changes).&lt;br/&gt;&lt;br/&gt;The rule? **AI is a multiplier, not a replacement.** If you’re a senior dev, use it to 10x your output. If you’re a junior, it’ll make you 10x more dangerous—fast. &lt;br/&gt;&lt;br/&gt;Measure twice, commit once. #AIPairProgramming #DevTools #NoBS #SoftwareEngineering #AIForDevs
    </content>
    <updated>2026-03-14T15:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfp04vldasvpu49slj9e4t3r34txlt5c6u5tjklptgyf2jreeafdqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmwec6n</id>
    
      <title type="html">The 10x developer was always a lie. The real multiplier? **AI &#43; ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfp04vldasvpu49slj9e4t3r34txlt5c6u5tjklptgyf2jreeafdqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmwec6n" />
    <content type="html">
      The 10x developer was always a lie. The real multiplier? **AI &#43; solo focus**. &lt;br/&gt;&lt;br/&gt;A solo builder with AI tools today: &lt;br/&gt;- Writes code 5x faster (GitHub Copilot, Cursor) &lt;br/&gt;- Debugs 10x faster (AI-powered root cause analysis) &lt;br/&gt;- Deploys 20x faster (serverless, Vercel, Railway) &lt;br/&gt;&lt;br/&gt;Example: One engineer at Aporia built a full ML monitoring dashboard in 3 days. Same project? 3 months in 2020. &lt;br/&gt;&lt;br/&gt;The 100x solo builder isn’t a myth—it’s the new baseline. The question isn’t *if* you’ll adapt, but *when*. &lt;br/&gt;&lt;br/&gt;#SoloBuilder #AIDev #100xProductivity
    </content>
    <updated>2026-03-14T09:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8try8qxmqef6lmc67rgwmsyv0kxlsgks8676eas07k8dy26u3gfgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfslx4d0a</id>
    
      <title type="html">‘Build in public’ is the new hustle culture dogma. But ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8try8qxmqef6lmc67rgwmsyv0kxlsgks8676eas07k8dy26u3gfgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfslx4d0a" />
    <content type="html">
      ‘Build in public’ is the new hustle culture dogma. But here’s the truth: &lt;br/&gt;- **80% of ‘public’ builders** just post for clout, not feedback. &lt;br/&gt;- **Private builders** ship 3x faster (no distractions, no premature optimization). &lt;br/&gt;- **The best products** (Stripe, Linear, Aporia) were built in stealth for years. &lt;br/&gt;&lt;br/&gt;Public builds work for *validation*. Private builds work for *execution*. &lt;br/&gt;&lt;br/&gt;Example: Aporia’s core infra was built in private for 18 months. Zero hype. Zero distractions. Just pure engineering. When we launched, it *worked*. &lt;br/&gt;&lt;br/&gt;Stop tweeting. Start coding. &lt;br/&gt;&lt;br/&gt;#BuildInPrivate #ShipFast #NoHype
    </content>
    <updated>2026-03-14T03:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs08crydnfxea5nm8f8yj3yle85h0e4rsdesg5ear2m8pkr4mf5hcqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs5gj6mh</id>
    
      <title type="html">LLMs choke on long contexts. 128K tokens? Slow. 1M tokens? ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs08crydnfxea5nm8f8yj3yle85h0e4rsdesg5ear2m8pkr4mf5hcqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs5gj6mh" />
    <content type="html">
      LLMs choke on long contexts. 128K tokens? Slow. 1M tokens? Impossible. Until now. &lt;br/&gt;&lt;br/&gt;Telegraph AI uses *adaptive context compression*: &lt;br/&gt;1. **Semantic chunking** – Splits docs into 512-token blocks, then merges based on relevance. &lt;br/&gt;2. **Dynamic pruning** – Drops low-signal content (e.g., boilerplate, repeated patterns) in real-time. &lt;br/&gt;3. **Hierarchical indexing** – 90% of queries resolve in the first 10% of the context. &lt;br/&gt;&lt;br/&gt;Result? A 1M-token context window that performs like 128K. No hallucinations. No latency spikes. Just pure signal. &lt;br/&gt;&lt;br/&gt;The secret? It’s not about *more* context—it’s about *smarter* context. &lt;br/&gt;&lt;br/&gt;#LLMOptimization #AIEngineering #ContextIsKing
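Steps 1 and 2 can be sketched in a few lines. A toy version, assuming plain word-overlap as the relevance signal (not Telegraph AI's actual implementation):

```python
# Toy "chunk, score, prune" pipeline: split into fixed-size blocks,
# rank blocks by overlap with the query, keep only the top ones.
def chunk(words, size=8):
    return [words[i:i + size] for i in range(0, len(words), size)]

def score(block, query_terms):
    return sum(1 for w in block if w in query_terms)

def compress(text, query, keep=2):
    terms = set(query.split())
    blocks = chunk(text.split())
    ranked = sorted(blocks, key=lambda b: score(b, terms), reverse=True)
    return ranked[:keep]
```

Real systems rank with embeddings rather than word overlap, but the shape of the pipeline is the same: the model only ever sees the surviving blocks.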
    </content>
    <updated>2026-03-13T21:00:10Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsq59tp09d9dwpacvexc30fmlxhtjk2lg5zunkluw8t27j6ha04xrqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh2sepf</id>
    
      <title type="html">Most AI assistants forget everything after 30 minutes. Why? ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsq59tp09d9dwpacvexc30fmlxhtjk2lg5zunkluw8t27j6ha04xrqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh2sepf" />
    <content type="html">
      Most AI assistants forget everything after 30 minutes. Why? Because they’re stateless. No memory = no context = no real help. &lt;br/&gt;&lt;br/&gt;Long-term memory (LTM) changes the game. Imagine an AI that remembers: &lt;br/&gt;- Your coding style (e.g., prefers `map` over `for` loops) &lt;br/&gt;- Your project’s history (e.g., “Last time you used Redis for caching”) &lt;br/&gt;- Your past mistakes (e.g., “You forgot to handle null checks here”) &lt;br/&gt;&lt;br/&gt;Aporia’s Telegraph AI retains 100% of your interactions. Over 6 months, it reduced repeated questions by 87% and cut debugging time by 40%. &lt;br/&gt;&lt;br/&gt;Without LTM, your AI is just a fancy autocomplete. With it? It’s a force multiplier. &lt;br/&gt;&lt;br/&gt;#AIMemory #DevTools #ContextMatters
    </content>
    <updated>2026-03-13T15:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqswenegnq9wvg694kyp56uyq28g5c6ay9le0dx2zextj8mg5yegc6qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswzggv2</id>
    
      <title type="html">10 years ago, we wasted 30% of dev time writing boilerplate. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqswenegnq9wvg694kyp56uyq28g5c6ay9le0dx2zextj8mg5yegc6qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswzggv2" />
    <content type="html">
      10 years ago, we wasted 30% of dev time writing boilerplate. Today? AI writes it in seconds. Setup scripts, config files, even entire project scaffolding—GPT-4 Turbo generates 90% of it with a single prompt. The result? You spend 95% of your time on *actual* logic. &lt;br/&gt;&lt;br/&gt;Example: Aporia’s CLI now auto-generates 1,200 lines of TypeScript scaffolding for a new ML monitoring dashboard. What took 2 hours now takes 2 minutes. The catch? You still need to *understand* the code. AI writes the setup, but you own the architecture. &lt;br/&gt;&lt;br/&gt;The future isn’t AI replacing devs—it’s AI handling the grunt work so devs can focus on the 10% that actually moves the needle. Stop writing `package.json` by hand. Let the machines handle the noise. &lt;br/&gt;&lt;br/&gt;#NoBoilerplate #AIDevTools #10xProductivity
    </content>
    <updated>2026-03-13T09:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsrtn905h3878qx5hgtkmqd67rx0wrx97v69prtwrpu06ku9qjtc2qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmm7986</id>
    
      <title type="html">The best thing we ever did was set a rule: ship something every ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsrtn905h3878qx5hgtkmqd67rx0wrx97v69prtwrpu06ku9qjtc2qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmm7986" />
    <content type="html">
      The best thing we ever did was set a rule: ship something every Friday.&lt;br/&gt;&lt;br/&gt;Not a feature. Not a release. *Something*. A bugfix, a doc improvement, a new test, a changelog entry. The constraint is: it has to be public.&lt;br/&gt;&lt;br/&gt;What this rule did:&lt;br/&gt;&lt;br/&gt;1. **Killed perfectionism.** If you have to ship Friday, you stop polishing Thursday.&lt;br/&gt;&lt;br/&gt;2. **Created a weekly forcing function.** Mondays feel different when you know Friday has a deadline.&lt;br/&gt;&lt;br/&gt;3. **Built the habit of small bets.** Not every Friday was a win. Some were embarrassing. But each one was a data point.&lt;br/&gt;&lt;br/&gt;4. **Made the work visible.** A year of Friday commits is a record of how we think. You can see us get better.&lt;br/&gt;&lt;br/&gt;We&amp;#39;re not always proud of the Friday ships. We&amp;#39;re always proud of the streak.&lt;br/&gt;&lt;br/&gt;Ship it. Learn. Repeat.&lt;br/&gt;&lt;br/&gt;#BuildInPublic #Shipping #StartupLife
    </content>
    <updated>2026-03-12T15:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs9v5g7vfytdmjlz7tt5nnh9vayp3wah7r6ps46gkq3y35qrfwdkqgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsw69azd</id>
    
      <title type="html">Six months ago we described Telegraph AI as &amp;#34;a compression ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs9v5g7vfytdmjlz7tt5nnh9vayp3wah7r6ps46gkq3y35qrfwdkqgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsw69azd" />
    <content type="html">
      Six months ago we described Telegraph AI as &amp;#34;a compression library.&amp;#34;&lt;br/&gt;&lt;br/&gt;People kept asking: &amp;#34;Why not just summarize?&amp;#34;&lt;br/&gt;&lt;br/&gt;Because summarization loses precision. A summary of your codebase can&amp;#39;t tell Claude which exact function handles auth. Compression preserves the structure while reducing the token count.&lt;br/&gt;&lt;br/&gt;This distinction matters. We&amp;#39;ve started calling the category **semantic compression** — lossless-ish reduction of technical content for AI consumption.&lt;br/&gt;&lt;br/&gt;The space is young. Right now the alternatives are:&lt;br/&gt;- Paste everything and hope (RAG-lite)&lt;br/&gt;- RAG with vector search (lossy, misses structure)&lt;br/&gt;- Manual curation (doesn&amp;#39;t scale)&lt;br/&gt;- Nothing (hit the context limit, start over)&lt;br/&gt;&lt;br/&gt;Semantic compression sits between RAG and full-context. It&amp;#39;s not retrieval. It&amp;#39;s reduction.&lt;br/&gt;&lt;br/&gt;We think this becomes infrastructure. Every developer tool that touches AI will need it.&lt;br/&gt;&lt;br/&gt;#Telegraph #SemanticCompression #AI #BuildInPublic
    </content>
    <updated>2026-03-12T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsdp5lzsd9z6kzw0jvxal0u26fumqhp9d2let3cay26nkhdrzx8syqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8tzw04</id>
    
      <title type="html">People ask what stack we use. Here&amp;#39;s the honest answer: ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsdp5lzsd9z6kzw0jvxal0u26fumqhp9d2let3cay26nkhdrzx8syqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8tzw04" />
    <content type="html">
      People ask what stack we use. Here&amp;#39;s the honest answer:&lt;br/&gt;&lt;br/&gt;**Languages:** Python for AI/ML tooling, TypeScript for anything browser-facing. Go when we need binaries.&lt;br/&gt;&lt;br/&gt;**AI:** Claude for complex reasoning &#43; code generation. Groq for fast inference where latency matters. We don&amp;#39;t use GPT-4 — not a values statement, just preference.&lt;br/&gt;&lt;br/&gt;**Context management:** Telegraph AI (our own tool). Dogfooding is mandatory.&lt;br/&gt;&lt;br/&gt;**Infra:** Single VPS. No Kubernetes. No managed cloud. We&amp;#39;ve been scaling for 8 months and haven&amp;#39;t hit a wall that a bigger instance didn&amp;#39;t solve. Complexity is a cost.&lt;br/&gt;&lt;br/&gt;**Observability:** Plain log files &#43; grep. When we outgrow that, we&amp;#39;ll add structured logging. Not before.&lt;br/&gt;&lt;br/&gt;**Testing:** Property-based tests for compression logic. Integration tests for CLI tools. No unit tests for things that don&amp;#39;t have invariants.&lt;br/&gt;&lt;br/&gt;**Dogma level:** zero. Use what works. Replace it when something better fits.&lt;br/&gt;&lt;br/&gt;#TechStack #Engineering #BuildInPublic
    </content>
    <updated>2026-03-12T03:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqszmm29e8q2xluyx54fzmxec345ex8ah3vkanwv43cfrre2yulhjhczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs2zxgtg</id>
    
      <title type="html">We&amp;#39;ve been building for a year. Here&amp;#39;s what we got wrong: ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqszmm29e8q2xluyx54fzmxec345ex8ah3vkanwv43cfrre2yulhjhczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs2zxgtg" />
    <content type="html">
      We&amp;#39;ve been building for a year. Here&amp;#39;s what we got wrong:&lt;br/&gt;&lt;br/&gt;**Wrong: Build the full product, then release.**&lt;br/&gt;We spent 4 months on Telegraph v0.1 before anyone used it. Shipped to 3 users, who didn&amp;#39;t care about 2 of the 4 core features.&lt;br/&gt;&lt;br/&gt;**Wrong: Perfect README first.**&lt;br/&gt;The README we spent 2 days on was rewritten entirely after the first 10 users told us what they actually wanted.&lt;br/&gt;&lt;br/&gt;**Wrong: Prioritize features over stability.**&lt;br/&gt;Overdraw Audit v1.0 had 7 features. v1.1 fixed bugs in 3 of them. Should have shipped 3 solid features, not 7 shaky ones.&lt;br/&gt;&lt;br/&gt;**Wrong: Build in private to avoid embarrassment.**&lt;br/&gt;The feedback we got from building in public in month 6 would have saved us the entire first 5 months if we&amp;#39;d started there.&lt;br/&gt;&lt;br/&gt;What we got right: we kept building.&lt;br/&gt;&lt;br/&gt;A year later we have two tools people actually use. That&amp;#39;s enough to keep going.&lt;br/&gt;&lt;br/&gt;#BuildInPublic #Startups #Lessons
    </content>
    <updated>2026-03-11T21:00:07Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsgcjjg5cd7yd7fmmvf73xl24tgvt26jmcuzecchfjmy9jnmep2skgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsakzfev</id>
    
      <title type="html">We&amp;#39;ve been publishing on Nostr for 3 months. Here&amp;#39;s what ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsgcjjg5cd7yd7fmmvf73xl24tgvt26jmcuzecchfjmy9jnmep2skgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsakzfev" />
    <content type="html">
      We&amp;#39;ve been publishing on Nostr for 3 months. Here&amp;#39;s what surprised us as developers.&lt;br/&gt;&lt;br/&gt;**The protocol is refreshingly simple.** An event is a JSON object with a signature. That&amp;#39;s it. No OAuth, no rate limit headers, no pagination tokens to decode. You sign and send.&lt;br/&gt;&lt;br/&gt;**Relays are load balancers you don&amp;#39;t manage.** Write to 4 relays and they handle availability. If one goes down, your content survives. Infrastructure anxiety: zero.&lt;br/&gt;&lt;br/&gt;**Zaps are better than likes.** When someone zaps a post, they moved real value. A like costs nothing. A zap is a signal.&lt;br/&gt;&lt;br/&gt;**The feed algorithm doesn&amp;#39;t exist.** No algorithm means no gaming. Your content reaches the people who followed you. That&amp;#39;s it. Terrifying and clarifying at the same time.&lt;br/&gt;&lt;br/&gt;We&amp;#39;re building Telegraph&amp;#39;s relay discovery feature for Nostr next. The protocol deserves better tooling.&lt;br/&gt;&lt;br/&gt;#Nostr #BuildInPublic #Decentralized
    </content>
    <updated>2026-03-11T15:00:05Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvj3f74r086thg6pd54r3xu3mx6x4prcd2adhz7c32t0adjg0ushgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsg3vzh5</id>
    
      <title type="html">Technical debt got more expensive when AI entered the workflow. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvj3f74r086thg6pd54r3xu3mx6x4prcd2adhz7c32t0adjg0ushgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsg3vzh5" />
    <content type="html">
      Technical debt got more expensive when AI entered the workflow.&lt;br/&gt;&lt;br/&gt;Here&amp;#39;s why:&lt;br/&gt;&lt;br/&gt;AI assistants learn from your codebase context. They pattern-match against what they see. If your codebase has:&lt;br/&gt;- Inconsistent naming conventions&lt;br/&gt;- Mixed abstraction levels&lt;br/&gt;- Duplicate logic in 3 places&lt;br/&gt;&lt;br/&gt;...the AI will reproduce these patterns. Faithfully. In every new file it touches.&lt;br/&gt;&lt;br/&gt;Technical debt used to spread at human typing speed.&lt;br/&gt;Now it spreads at AI generation speed.&lt;br/&gt;&lt;br/&gt;We&amp;#39;ve started calling this **debt amplification**. A codebase that was 30% messy before AI assistance becomes 60% messy after 3 months of AI-assisted development — if you don&amp;#39;t actively counter it.&lt;br/&gt;&lt;br/&gt;The counter: refactoring sprints before context windows. Clean before you extend.&lt;br/&gt;&lt;br/&gt;Your AI assistant is only as good as the code it learns from.&lt;br/&gt;&lt;br/&gt;#TechnicalDebt #AIFirst #Engineering #VibeCoding
    </content>
    <updated>2026-03-11T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8lld3llnckn3gkgkht6ayhfqzks9zljtyyfdpjrvl8gfpshjcxlgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswv2p4k</id>
    
      <title type="html">Telegraph AI v0.2 shipped this week. Here&amp;#39;s what actually ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8lld3llnckn3gkgkht6ayhfqzks9zljtyyfdpjrvl8gfpshjcxlgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswv2p4k" />
    <content type="html">
      Telegraph AI v0.2 shipped this week. Here&amp;#39;s what actually changed and why.&lt;br/&gt;&lt;br/&gt;**Smarter anchor detection.**&lt;br/&gt;v0.1 used frequency-based anchoring — common tokens became anchors. Problem: common doesn&amp;#39;t mean important. v0.2 uses semantic weight scoring. Functions called in many places get higher anchor priority than variables that just happen to appear often.&lt;br/&gt;&lt;br/&gt;**Streaming compression.**&lt;br/&gt;v0.1 required the full document before starting. v0.2 processes in 512-token chunks, streaming the compressed output. Latency on large codebases dropped from 2.3s to 0.4s.&lt;br/&gt;&lt;br/&gt;**Language-aware parsing.**&lt;br/&gt;v0.1 was language-agnostic (treated everything as token streams). v0.2 has Python and TypeScript parsers that understand scope. A function name inside its own body gets different treatment than the same name at call sites.&lt;br/&gt;&lt;br/&gt;**Breaking change:** the compression format changed. v0.1 and v0.2 outputs are not compatible. Migration guide in the repo.&lt;br/&gt;&lt;br/&gt;pip install telegraph-ai==0.2.0&lt;br/&gt;&lt;br/&gt;#Telegraph #ReleaseNotes #BuildInPublic
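The streaming change can be sketched as a generator over fixed-size chunks (a hypothetical illustration, not the actual v0.2 code; compress_chunk stands in for the real per-chunk compression pass):

```python
CHUNK_TOKENS = 512

def stream_compress(tokens, compress_chunk):
    """Yield compressed output chunk by chunk instead of waiting
    for the full document (toy sketch of a streaming mode)."""
    for start in range(0, len(tokens), CHUNK_TOKENS):
        chunk = tokens[start:start + CHUNK_TOKENS]
        yield compress_chunk(chunk)
```

Because each chunk is emitted as soon as it is processed, latency is bounded by one chunk rather than the whole document.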
    </content>
    <updated>2026-03-11T03:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfvz5z9nc5qswex29ek4mcrsq8pxcxrr0t8cze4svj0ctxv74xa3gzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsuthd48</id>
    
      <title type="html">Building developer tools is different from building consumer ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfvz5z9nc5qswex29ek4mcrsq8pxcxrr0t8cze4svj0ctxv74xa3gzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsuthd48" />
    <content type="html">
      Building developer tools is different from building consumer products.&lt;br/&gt;&lt;br/&gt;Here&amp;#39;s what surprised us:&lt;br/&gt;&lt;br/&gt;**Users read your code.** Not your docs — your actual source code. Your README is the least read file in your repo. Your implementation is the documentation.&lt;br/&gt;&lt;br/&gt;**Benchmarks are marketing.** Every tool has a benchmark that makes it look best. Users know this. Post your methodology or don&amp;#39;t post numbers.&lt;br/&gt;&lt;br/&gt;**Breaking changes are reputation events.** One breaking change without a migration path loses you 30% of goodwill. Developers have long memories.&lt;br/&gt;&lt;br/&gt;**Discord beats Twitter.** Announcements on Twitter. Real feedback in Discord. If you don&amp;#39;t have a Discord, you&amp;#39;re flying blind.&lt;br/&gt;&lt;br/&gt;**The best users are also your best competitors.** The developer who understands your tool deeply enough to contribute will eventually build their own version. That&amp;#39;s fine. That means you built something real.&lt;br/&gt;&lt;br/&gt;#DeveloperTools #Startups #BuildInPublic
    </content>
    <updated>2026-03-10T21:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsdnfylvjl57rm8wnpxjy6kk03z7suay2famu0h3hqaevdzft3ellqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfshht7t9</id>
    
      <title type="html">In week 1, we reduced Telegraph&amp;#39;s compression time from 340ms ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsdnfylvjl57rm8wnpxjy6kk03z7suay2famu0h3hqaevdzft3ellqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfshht7t9" />
    <content type="html">
      In week 1, we reduced Telegraph&amp;#39;s compression time from 340ms to 280ms. Felt trivial.&lt;br/&gt;&lt;br/&gt;By week 8, we had 12 of these &amp;#34;trivial&amp;#34; improvements stacked:&lt;br/&gt;- 280ms → 210ms (better tokenizer)&lt;br/&gt;- 210ms → 180ms (cached anchor table)&lt;br/&gt;- 180ms → 140ms (parallel layer processing)&lt;br/&gt;- ...&lt;br/&gt;- End result: 340ms → 47ms&lt;br/&gt;&lt;br/&gt;7x improvement from no single &amp;#34;big&amp;#34; win.&lt;br/&gt;&lt;br/&gt;This is why we track everything. Every optimization lives in a changelog. Every measurement has a before/after. Nothing is too small to record.&lt;br/&gt;&lt;br/&gt;When someone asks &amp;#34;how did you get that fast?&amp;#34; the answer is: we didn&amp;#39;t. We got roughly 1.2x faster 12 times.&lt;br/&gt;&lt;br/&gt;Compound interest works on code too.&lt;br/&gt;&lt;br/&gt;#Performance #Engineering #BuildInPublic
    </content>
    <updated>2026-03-10T15:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsru4hx4ducxjfuxy3nxrs046j9kutvf4jrexznan8qs2ryz0wnrzszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8dl60g</id>
    
      <title type="html">Remember when browser extensions changed the web? MCP servers are ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsru4hx4ducxjfuxy3nxrs046j9kutvf4jrexznan8qs2ryz0wnrzszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs8dl60g" />
    <content type="html">
      Remember when browser extensions changed the web? MCP servers are doing that for AI assistants.&lt;br/&gt;&lt;br/&gt;Model Context Protocol lets any service expose tools that Claude can call directly. The model becomes the interface.&lt;br/&gt;&lt;br/&gt;What this means in practice:&lt;br/&gt;- Your database isn&amp;#39;t a thing you query — it&amp;#39;s a thing Claude queries *for you*&lt;br/&gt;- Your CI pipeline isn&amp;#39;t a dashboard you read — it&amp;#39;s a context Claude reads before suggesting code&lt;br/&gt;- Your docs aren&amp;#39;t a site you search — they&amp;#39;re a corpus Claude navigates&lt;br/&gt;&lt;br/&gt;We&amp;#39;re building Telegraph AI as an MCP server. Not because it&amp;#39;s trendy, but because the architecture is correct.&lt;br/&gt;&lt;br/&gt;The future of developer tooling isn&amp;#39;t better dashboards. It&amp;#39;s tools that speak model-native.&lt;br/&gt;&lt;br/&gt;#MCP #ClaudeAI #DeveloperTools #BuildInPublic
    </content>
    <updated>2026-03-10T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqszx38swhcujl7nsa5rmel76gq8f4h7y6n49e28qdxd42gzsqnhfaszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh66vtc</id>
    
      <title type="html">People still say vibe coding isn&amp;#39;t real engineering. Let me ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqszx38swhcujl7nsa5rmel76gq8f4h7y6n49e28qdxd42gzsqnhfaszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh66vtc" />
    <content type="html">
      People still say vibe coding isn&amp;#39;t real engineering.&lt;br/&gt;&lt;br/&gt;Let me describe our last sprint:&lt;br/&gt;&lt;br/&gt;- Monday: defined the problem in plain language with Claude&lt;br/&gt;- Tuesday: had a working prototype from the conversation&lt;br/&gt;- Wednesday: Overdraw Audit found 3 perf issues in the generated CSS&lt;br/&gt;- Thursday: Telegraph compressed the codebase context, Claude refactored with full awareness&lt;br/&gt;- Friday: shipped to production&lt;br/&gt;&lt;br/&gt;Is that not engineering?&lt;br/&gt;&lt;br/&gt;The tools changed. The thinking didn&amp;#39;t. You still need to:&lt;br/&gt;- Define the right problem&lt;br/&gt;- Evaluate the output critically&lt;br/&gt;- Understand performance implications&lt;br/&gt;- Make architectural decisions&lt;br/&gt;&lt;br/&gt;AI didn&amp;#39;t replace the engineer. It replaced the typing.&lt;br/&gt;&lt;br/&gt;The best vibe coders I know are the most rigorous thinkers I know. The vibe is in the flow. The rigor is in the judgment.&lt;br/&gt;&lt;br/&gt;#VibeCoding #AIFirst #Engineering
    </content>
    <updated>2026-03-10T03:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs233m608lf7aer8w2guspnarlnskd02sn6rfcqzedw8t995zseqjczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs4w790f</id>
    
      <title type="html">8pm. Third hour of debugging. You&amp;#39;re scanning docs, not ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs233m608lf7aer8w2guspnarlnskd02sn6rfcqzedw8t995zseqjczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs4w790f" />
    <content type="html">
      8pm. Third hour of debugging. You&amp;#39;re scanning docs, not reading them.&lt;br/&gt;&lt;br/&gt;A static interface doesn&amp;#39;t know this. It shows you the same dense API reference it showed you at 9am.&lt;br/&gt;&lt;br/&gt;Adaptive UI Engine tracks behavioral signals:&lt;br/&gt;- Scroll velocity (slow = reading, fast = scanning)&lt;br/&gt;- Cursor dwell time on elements&lt;br/&gt;- Tab switching frequency&lt;br/&gt;- Session duration&lt;br/&gt;&lt;br/&gt;Based on these signals, it shifts the UI:&lt;br/&gt;- Scanning mode: larger headings, collapsed details, jump links front-and-center&lt;br/&gt;- Reading mode: full content, inline examples, related topics&lt;br/&gt;- Focus mode: hides navigation, minimizes distractions&lt;br/&gt;&lt;br/&gt;No ML. No tracking server. Pure client-side heuristics.&lt;br/&gt;&lt;br/&gt;This is what AI-first UI means — not AI that generates the interface, but an interface that adapts to *you*.&lt;br/&gt;&lt;br/&gt;&lt;a href=&#34;https://github.com/aporialab/adaptive-ui-engine&#34;&gt;https://github.com/aporialab/adaptive-ui-engine&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;#AdaptiveUI #UX #BuildInPublic
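The mode selection described above can be sketched as a tiny classifier (a Python sketch of the decision logic only; the real engine runs client-side in the browser, and every threshold below is an invented placeholder):

```python
# Toy mode classifier built from the behavioral signals above.
# All thresholds are illustrative guesses, not the engine's values.
def ui_mode(scroll_px_per_s, dwell_ms, tab_switches_per_min):
    # fast scrolling or frantic tab switching: hunting, not reading
    if scroll_px_per_s > 1500 or tab_switches_per_min > 6:
        return "scanning"
    # long dwell with little scrolling: deep reading
    if dwell_ms > 2000 and 200 > scroll_px_per_s:
        return "reading"
    # otherwise default to distraction-free focus mode
    return "focus"
```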
    </content>
    <updated>2026-03-09T21:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvhkmy4az66wuqnxx58drqmu5y6xu0g08ype3kt92722c96cfjg2czyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswl5qgn</id>
    
      <title type="html">Open source costs us revenue. Every line of code we publish is a ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvhkmy4az66wuqnxx58drqmu5y6xu0g08ype3kt92722c96cfjg2czyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswl5qgn" />
    <content type="html">
      Open source costs us revenue. Every line of code we publish is a line a competitor can use.&lt;br/&gt;&lt;br/&gt;We do it anyway. Here&amp;#39;s why:&lt;br/&gt;&lt;br/&gt;1. **Trust is built in public.** If your tool touches a developer&amp;#39;s workflow, they need to see inside. A black box asking for API keys is a liability.&lt;br/&gt;&lt;br/&gt;2. **The ecosystem pays back.** Overdraw Audit got 3 major PRs from community members who found edge cases we missed. That&amp;#39;s QA we couldn&amp;#39;t afford to hire.&lt;br/&gt;&lt;br/&gt;3. **Contributions attract talent.** Our last two hires found us through GitHub, not job boards.&lt;br/&gt;&lt;br/&gt;4. **It forces clarity.** Code written for public consumption is better code. The shame of a bad README is a powerful motivator.&lt;br/&gt;&lt;br/&gt;The closed-source moat is shrinking. The community moat is growing.&lt;br/&gt;&lt;br/&gt;Build in public. It compounds.&lt;br/&gt;&lt;br/&gt;#OpenSource #BuildInPublic #StartupLife
    </content>
    <updated>2026-03-09T15:00:06Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvf3ql28ersu2fqk223g0fymre4jp2gejjgh38yhmdtyfphlc5arszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmu0w0c</id>
    
      <title type="html">Ask any AI to write CSS for a complex UI. It works. Ship it. 3 ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvf3ql28ersu2fqk223g0fymre4jp2gejjgh38yhmdtyfphlc5arszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsmu0w0c" />
    <content type="html">
      Ask any AI to write CSS for a complex UI. It works. Ship it. 3 months later your page renders at 18fps on mobile.&lt;br/&gt;&lt;br/&gt;Why? Overdraw.&lt;br/&gt;&lt;br/&gt;Overdraw happens when the GPU paints pixels that get immediately covered by something else. Translucent layers, stacking contexts, gradients — they all add up.&lt;br/&gt;&lt;br/&gt;AI assistants are optimized for correctness, not rendering performance. They don&amp;#39;t feel the jank. You do.&lt;br/&gt;&lt;br/&gt;Overdraw Audit v2.0 runs a static analysis pass on your CSS &#43; HTML and flags:&lt;br/&gt;- Paint layers that trigger compositing&lt;br/&gt;- Opacity stacks that bypass GPU acceleration&lt;br/&gt;- Transform &#43; filter combos that create new stacking contexts&lt;br/&gt;- Gradient chains with hidden repaint cost&lt;br/&gt;&lt;br/&gt;We&amp;#39;ve seen 40% frame time improvements just from the audit recommendations.&lt;br/&gt;&lt;br/&gt;Install: `npm install -g @aporialab/overdraw-audit`&lt;br/&gt;&lt;br/&gt;#CSS #WebPerf #DeveloperTools #OpenSource
    </content>
    <updated>2026-03-09T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsp9j5xja0338hj7nfspu9eefax0vff7er6w0rs2w452lpjvh520tqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswlvl4l</id>
    
      <title type="html">Every developer hitting Claude or GPT with a large codebase hits ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsp9j5xja0338hj7nfspu9eefax0vff7er6w0rs2w452lpjvh520tqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfswlvl4l" />
    <content type="html">
      Every developer hitting Claude or GPT with a large codebase hits the same wall: context limits.&lt;br/&gt;&lt;br/&gt;The model forgets what it saw 10k tokens ago. You re-paste the same files. The session degrades.&lt;br/&gt;&lt;br/&gt;This is not a model problem. It&amp;#39;s an architecture problem.&lt;br/&gt;&lt;br/&gt;At Aporia Labs, we&amp;#39;ve been obsessed with this for 8 months. Telegraph AI was born from one insight: you don&amp;#39;t need to send everything — you need to send the *right* things, compressed.&lt;br/&gt;&lt;br/&gt;L2 compression works like this:&lt;br/&gt;- Layer 1: Remove syntactic noise (whitespace, redundant tokens)&lt;br/&gt;- Layer 2: Semantic deduplication — concepts already in model weights get replaced with anchors&lt;br/&gt;&lt;br/&gt;Result: 60-80% token reduction with &amp;lt;5% information loss on typical codebases.&lt;br/&gt;&lt;br/&gt;Context is the new RAM. Manage it or lose.&lt;br/&gt;&lt;br/&gt;#BuildInPublic #DeveloperTools #AI #Telegraph
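A minimal sketch of the two layers, assuming an invented anchor table (the real semantic deduplication scores candidates against model knowledge; nothing below is the actual Telegraph code):

```python
import re

# Layer 1: strip syntactic noise -- collapse runs of whitespace.
def layer1(text):
    return re.sub(r"\s+", " ", text).strip()

# Layer 2 (toy): replace phrases the model already "knows" with
# short anchors. A real implementation finds these semantically;
# this lookup table is purely illustrative.
ANCHORS = {
    "for each element in the list": "[ITER]",
    "open a database connection": "[DB_CONN]",
}

def layer2(text):
    for phrase, anchor in ANCHORS.items():
        text = text.replace(phrase, anchor)
    return text

def compress(text):
    return layer2(layer1(text))
```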
    </content>
    <updated>2026-03-09T07:28:51Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs2z92qgew4lygzp0r9jcssh9ks8rvv6qetu52ddlc9vrkdqqfx0hczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsq5599s</id>
    
      <title type="html">We have been heads-down building for months. Time to surface and ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs2z92qgew4lygzp0r9jcssh9ks8rvv6qetu52ddlc9vrkdqqfx0hczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsq5599s" />
    <content type="html">
      We have been heads-down building for months. Time to surface and share what comes next.&lt;br/&gt;&lt;br/&gt;TELEGRAPH AI — NEXT:&lt;br/&gt;Public Python package release on PyPI. Then: a lightweight REST API wrapper so teams can integrate context compression into any stack, not just Python. After that: streaming compression for real-time agent workflows.&lt;br/&gt;&lt;br/&gt;ADAPTIVE UI ENGINE — NEXT:&lt;br/&gt;Moving from spec to prototype. We are building the signal collection layer first — the part that observes user behavior without being invasive. Open beta targeting teams who build complex internal tools.&lt;br/&gt;&lt;br/&gt;OVERDRAW AUDIT — NEXT:&lt;br/&gt;An IDE plugin for VS Code. You will see overdraw issues inline as you write CSS, not after a CLI scan. This is the integration mode that makes the feedback loop immediate.&lt;br/&gt;&lt;br/&gt;DESIGN FORGE — NEXT:&lt;br/&gt;A new visual template with the wow-factor we are currently missing. The pipeline is working; the look is not yet there. We are fixing that.&lt;br/&gt;&lt;br/&gt;BIGGER PICTURE:&lt;br/&gt;All four tools are building toward the same thing: a developer environment that adapts to you, compresses the noise, and shortens the distance between idea and deployed product.&lt;br/&gt;&lt;br/&gt;Aporia Labs is small. We ship carefully. But we ship.&lt;br/&gt;&lt;br/&gt;Follow this account. More soon.&lt;br/&gt;&lt;br/&gt;#aporialabs #opensource #devtools #buildingInPublic #vibecoding
    </content>
    <updated>2026-03-05T21:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsglfxseh5ey38rr5jsqar6dzw2zfdnyp55kj86405rtdexlz052fgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsrfwwtm</id>
    
      <title type="html">Overdraw Audit v2.0.2 is live. Here is the full changelog in ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsglfxseh5ey38rr5jsqar6dzw2zfdnyp55kj86405rtdexlz052fgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsrfwwtm" />
    <content type="html">
      Overdraw Audit v2.0.2 is live. Here is the full changelog in plain English.&lt;br/&gt;&lt;br/&gt;WHAT CHANGED:&lt;br/&gt;&lt;br/&gt;1. Component-level reporting (new)&lt;br/&gt;v1 gave you global overdraw counts. Useful for knowing you have a problem. Useless for fixing it. v2 maps every overdraw incident to the specific component responsible. Now you know exactly where to go.&lt;br/&gt;&lt;br/&gt;2. Stacking context analysis (new)&lt;br/&gt;Z-index hell is real. v2 now detects when a stacking context is creating unexpected paint order — the source of dozens of &amp;#34;why is this rendering wrong on scroll&amp;#34; bugs.&lt;br/&gt;&lt;br/&gt;3. CI integration (improved)&lt;br/&gt;The --ci flag now returns structured JSON output and a non-zero exit code on critical overdraw findings. Plug it into your pipeline, fail builds before the performance regression ships.&lt;br/&gt;&lt;br/&gt;4. React and Vue component tree support (new)&lt;br/&gt;v1 was CSS-file-only. v2 understands component hierarchies. It traces overdraw through JSX and Vue template trees, not just static stylesheets.&lt;br/&gt;&lt;br/&gt;5. Performance (significantly improved)&lt;br/&gt;v1 analysis on a large codebase took 40&#43; seconds. v2 runs the same scan in under 8 seconds through parallel AST processing.&lt;br/&gt;&lt;br/&gt;WHY:&lt;br/&gt;Every change came from user reports. The component-level reporting was the most requested feature by a wide margin. We shipped it first.&lt;br/&gt;&lt;br/&gt;npm install -g @aporialab/overdraw-audit&lt;br/&gt;&lt;br/&gt;Feedback welcome.&lt;br/&gt;&lt;br/&gt;#css #performance #webdev #frontend #opensource #changelog
    </content>
    <updated>2026-03-05T15:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsyzdadq0la9q5yn0pq0jv68acvm8zapu6mhssutajhzznra856s9czyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsdnufrn</id>
    
      <title type="html">We want to write a manifesto. Here is the draft. VIBE CODING IS ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsyzdadq0la9q5yn0pq0jv68acvm8zapu6mhssutajhzznra856s9czyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsdnufrn" />
    <content type="html">
      We want to write a manifesto. Here is the draft.&lt;br/&gt;&lt;br/&gt;VIBE CODING IS NOT LAZINESS. It is leverage. The developer who uses AI to handle implementation frees themselves for the work that actually requires human judgment: problem definition, architecture, user empathy, taste.&lt;br/&gt;&lt;br/&gt;THE BOTTLENECK HAS MOVED. It used to be: can you type the code? Now it is: can you think clearly about what to build? The developers who thrive in the next decade are not the fastest typists. They are the clearest thinkers.&lt;br/&gt;&lt;br/&gt;AI IS A COLLABORATOR, NOT A TOOL. A hammer does not push back. A good AI collaborator surfaces assumptions, offers alternatives, catches inconsistencies. Use it that way.&lt;br/&gt;&lt;br/&gt;SHIP CONSTANTLY. The feedback loop between idea and deployed product has collapsed. There is no excuse to sit on prototypes for months. Ship, observe, iterate. The AI handles the implementation velocity. You handle the direction.&lt;br/&gt;&lt;br/&gt;OWN THE ARCHITECTURE. AI can write the code. It cannot take responsibility for the system design. That is still yours. Know your codebase. Understand what ships under your name.&lt;br/&gt;&lt;br/&gt;THINK IN WORKFLOWS, NOT FILES. Stop writing individual functions. Design pipelines, agents, orchestration. The unit of work has changed.&lt;br/&gt;&lt;br/&gt;This is vibe coding. Code less. Ship more. Think harder.&lt;br/&gt;&lt;br/&gt;#vibecoding #manifesto #devculture #aiassistant #buildingInPublic
    </content>
    <updated>2026-03-05T09:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsyzgv4qjp6qcuz0sd27v7w7ynsp4ca3fp62zz03hx50femp7md6nqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh7t4q6</id>
    
      <title type="html">Let me get specific about how Telegraph AI&amp;#39;s L2 compression ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsyzgv4qjp6qcuz0sd27v7w7ynsp4ca3fp62zz03hx50femp7md6nqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsh7t4q6" />
    <content type="html">
      Let me get specific about how Telegraph AI&amp;#39;s L2 compression actually works.&lt;br/&gt;&lt;br/&gt;The pipeline has three stages:&lt;br/&gt;&lt;br/&gt;1. SEGMENTATION&lt;br/&gt;Incoming context is chunked by semantic boundary, not token count. We use sentence embeddings to detect topic shifts. Each chunk becomes a candidate compression unit.&lt;br/&gt;&lt;br/&gt;2. RELEVANCE SCORING&lt;br/&gt;Every chunk gets a relevance score against the current query vector. Chunks below threshold get compressed aggressively. Chunks above threshold are preserved at higher fidelity. This is the key insight behind L2: not all context is equally important for the next token.&lt;br/&gt;&lt;br/&gt;3. STRUCTURAL RECONSTRUCTION&lt;br/&gt;Compressed chunks are reconstructed into coherent context. We preserve:&lt;br/&gt;— Entity mentions and their relationships&lt;br/&gt;— Causal chains (X caused Y, if A then B)&lt;br/&gt;— Explicit commitments (&amp;#34;we decided to&amp;#34;, &amp;#34;the requirement is&amp;#34;)&lt;br/&gt;— Numerical facts and constraints&lt;br/&gt;&lt;br/&gt;What gets dropped: repetition, elaboration, examples that are no longer relevant, meta-commentary.&lt;br/&gt;&lt;br/&gt;The result is a context that reads like a well-edited brief, not a transcript.&lt;br/&gt;&lt;br/&gt;Current benchmark: 67% average token reduction on 10K&#43; token technical conversations, 4.2% semantic degradation measured by downstream task accuracy.&lt;br/&gt;&lt;br/&gt;All pure Python. No external ML services.&lt;br/&gt;&lt;br/&gt;#llm #nlp #python #contextwindow #telegraphai #opensource
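A toy sketch of stages 2 and 3 in plain Python. Illustrative only: every name here is hypothetical, not the Telegraph AI API, and a bag-of-words cosine similarity stands in for the sentence embeddings the real pipeline uses.

```python
import math
import re

def embed(text):
    """Toy 'embedding': a lowercase word-count vector."""
    vec = {}
    for w in re.findall(r"[a-z']+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def compress(chunks, query, threshold=0.3):
    """Stage 2 plus 3: score each chunk against the query; keep
    high-relevance chunks verbatim, truncate the rest to their
    first sentence (a stand-in for aggressive compression)."""
    q = embed(query)
    out = []
    for chunk in chunks:
        if cosine(embed(chunk), q) >= threshold:
            out.append(chunk)               # preserve at full fidelity
        else:
            out.append(chunk.split(". ")[0])  # compress aggressively
    return " ".join(out)
```

Swap embed() for a real sentence-embedding model and the control flow stays the same.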
    </content>
    <updated>2026-03-05T03:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsrx7ngz2thr52zy5emc6p6tsxwn0jtkhnq24h8ex7h3dwxg0z9mgczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsz6zzmn</id>
    
      <title type="html">Six months of building AI-first developer tools. Here is what we ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsrx7ngz2thr52zy5emc6p6tsxwn0jtkhnq24h8ex7h3dwxg0z9mgczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsz6zzmn" />
    <content type="html">
      Six months of building AI-first developer tools. Here is what we got wrong and what we figured out.&lt;br/&gt;&lt;br/&gt;WRONG: Assuming AI output is the product.&lt;br/&gt;AI output is raw material. The product is the pipeline, the guard rails, the UX around the AI. Users do not want to prompt-engineer. They want results.&lt;br/&gt;&lt;br/&gt;WRONG: Over-engineering the AI integration.&lt;br/&gt;We spent weeks on a complex multi-model routing system. Then we shipped a simple single-model pipeline and users could not tell the difference. Simple ships, complex stalls.&lt;br/&gt;&lt;br/&gt;WRONG: Ignoring context management from day one.&lt;br/&gt;Context window limits are a design constraint, not a runtime problem. We learned this the hard way. That lesson is now Telegraph AI.&lt;br/&gt;&lt;br/&gt;RIGHT: Building for real workflows, not demos.&lt;br/&gt;The tools that got traction were the ones solving problems we had ourselves. Not hypothetical enterprise use cases. Daily friction points.&lt;br/&gt;&lt;br/&gt;RIGHT: Shipping ugly and iterating.&lt;br/&gt;Design Forge&amp;#39;s first version was honestly embarrassing. It still shipped, got feedback, and is materially better now. Perfect is the enemy of deployed.&lt;br/&gt;&lt;br/&gt;RIGHT: Letting AI write the boring parts.&lt;br/&gt;Boilerplate, tests, documentation stubs — offload these aggressively. Reserve human attention for architecture and edge cases.&lt;br/&gt;&lt;br/&gt;More lessons as we accumulate them.&lt;br/&gt;&lt;br/&gt;#devtools #lessons #startup #vibecoding #buildingInPublic
    </content>
    <updated>2026-03-04T21:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsgkpmxy8sa8ny0q4290rexx536kd87xkpzkyr83x6k9scr9ktjl4szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs3n4f6e</id>
    
      <title type="html">We could build in private. Ship a polished product, launch with ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsgkpmxy8sa8ny0q4290rexx536kd87xkpzkyr83x6k9scr9ktjl4szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs3n4f6e" />
    <content type="html">
      We could build in private. Ship a polished product, launch with fanfare, keep the code closed, charge for access. That is a valid model.&lt;br/&gt;&lt;br/&gt;We chose differently.&lt;br/&gt;&lt;br/&gt;Building in public is not altruism — it is a development strategy. When you know anyone can read your code, you write better code. When you document your decisions publicly, you think through them more carefully. When you share failures openly, you learn faster because others point out what you missed.&lt;br/&gt;&lt;br/&gt;Open source also solves a trust problem that is getting harder to ignore in AI tools. When your tool touches someone&amp;#39;s codebase, their prompts, their context — they need to be able to audit what it does. Closed source AI tooling asks for a level of trust that cannot be earned by reputation alone.&lt;br/&gt;&lt;br/&gt;Our tools are MIT licensed. The reasoning behind our decisions is public. The mistakes are documented.&lt;br/&gt;&lt;br/&gt;We think this is the right way to build for developers, by developers.&lt;br/&gt;&lt;br/&gt;And honestly — it is more fun. The Nostr ecosystem in particular has been full of sharp people who notice things we miss.&lt;br/&gt;&lt;br/&gt;Keep the feedback coming.&lt;br/&gt;&lt;br/&gt;#opensource #buildingInPublic #developers #nostr #software
    </content>
    <updated>2026-03-04T15:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsgqh9tgggddqtw23vqh26c3y776d7d8fhmtcpsmqz5puq54374lnszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsql8msj</id>
    
      <title type="html">You have a product idea. You need a landing page to validate it. ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsgqh9tgggddqtw23vqh26c3y776d7d8fhmtcpsmqz5puq54374lnszyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsql8msj" />
    <content type="html">
      You have a product idea. You need a landing page to validate it. Normally: find a template, customize it, write copy, translate it, deploy it. Two days minimum.&lt;br/&gt;&lt;br/&gt;Design Forge collapses that to seconds.&lt;br/&gt;&lt;br/&gt;Describe your product in plain language. Forge generates a fully structured landing page with copy in both English and Russian, deploys it to a subdomain, and hands you the URL.&lt;br/&gt;&lt;br/&gt;Under the hood:&lt;br/&gt;— Groq-powered LLM generates EN &#43; RU content in a single API call&lt;br/&gt;— Template engine with i18n baked in from the start (not bolted on)&lt;br/&gt;— Static file server, zero runtime dependencies&lt;br/&gt;— EN/RU language toggle on every generated page, always&lt;br/&gt;&lt;br/&gt;The design part is still evolving — we want wow-factor, not just functional. But the core pipeline works end to end.&lt;br/&gt;&lt;br/&gt;This is what we mean by AI-first development: the AI is not the assistant, it is the production engine. You direct, it ships.&lt;br/&gt;&lt;br/&gt;Forge is part of the Aporia Labs suite. Open source, self-hostable.&lt;br/&gt;&lt;br/&gt;#landingpage #aitools #webdev #opensource #designforge #vibecoding
    </content>
    <updated>2026-03-04T09:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfz4gquvgjzdp5025pdnz37r9hpq3fxgqfzm06fz07tf4zugsvx0qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfse5ddys</id>
    
      <title type="html">The same interface should not feel the same to a user who is ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfz4gquvgjzdp5025pdnz37r9hpq3fxgqfzm06fz07tf4zugsvx0qzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfse5ddys" />
    <content type="html">
      The same interface should not feel the same to a user who is focused versus a user who is overwhelmed, or a first-time visitor versus an expert.&lt;br/&gt;&lt;br/&gt;Adaptive UI Engine is our framework for building interfaces that respond to user cognitive state — not just screen size or device type.&lt;br/&gt;&lt;br/&gt;The core idea: UI density, information hierarchy, and interaction patterns should adapt dynamically based on signals from user behavior, session context, and explicit state declarations.&lt;br/&gt;&lt;br/&gt;Concrete examples:&lt;br/&gt;— A dashboard collapses secondary metrics when the user is in deep-focus mode&lt;br/&gt;— Onboarding guidance fades after a user demonstrates competency with a feature&lt;br/&gt;— Complex forms progressively reveal advanced options as confidence signals accumulate&lt;br/&gt;&lt;br/&gt;This is not dark pattern territory. It is not manipulation. It is the same judgment call a good human teacher or colleague makes — read the room and adjust.&lt;br/&gt;&lt;br/&gt;We are building the engine that lets any developer add this capability without shipping 50 custom hooks.&lt;br/&gt;&lt;br/&gt;Spec and prototype are in progress. Updates here as we ship.&lt;br/&gt;&lt;br/&gt;#ux #adaptiveui #devtools #buildingInPublic #frontend
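The progressive-disclosure example above can be sketched in a few lines. This is a hypothetical illustration in Python for brevity (the class and method names are invented, not the engine's actual API): competency signals accumulate, and advanced options appear only once a threshold is crossed.

```python
class AdaptivePanel:
    """Toy model of signal-gated progressive disclosure."""

    def __init__(self, reveal_after=3):
        self.successes = 0          # accumulated confidence signals
        self.reveal_after = reveal_after

    def record_success(self):
        """Call when the user completes an interaction cleanly."""
        self.successes += 1

    def visible_fields(self, basic, advanced):
        """Reveal advanced fields only after enough signals accumulate."""
        if self.successes >= self.reveal_after:
            return basic + advanced
        return basic
```

The real engine would feed many signal types (dwell time, error rate, explicit state) into the gate, but the shape of the decision is the same.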
    </content>
    <updated>2026-03-04T03:00:03Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsqd6qvjns2z6qv63wgg8vecewm9a9gzatgnuh6ukk6gfmvzvf7vmgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsch9k99</id>
    
      <title type="html">Here is a common story: your web app feels slow. You profile the ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsqd6qvjns2z6qv63wgg8vecewm9a9gzatgnuh6ukk6gfmvzvf7vmgzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsch9k99" />
    <content type="html">
      Here is a common story: your web app feels slow. You profile the JS — nothing obvious. You check network — fine. Then someone opens the DevTools paint profiler and shows you 400 overlapping elements painting on every scroll event.&lt;br/&gt;&lt;br/&gt;CSS overdraw is one of the most under-diagnosed performance problems in frontend development. It is invisible to standard profiling, silently kills 60fps, and compounds with every feature you add.&lt;br/&gt;&lt;br/&gt;Overdraw Audit is our CLI tool that catches these leaks before they reach production.&lt;br/&gt;&lt;br/&gt;What it does:&lt;br/&gt;— Scans your CSS and component tree for overdraw patterns&lt;br/&gt;— Detects elements that trigger unnecessary repaints&lt;br/&gt;— Identifies z-index stacking context nightmares&lt;br/&gt;— Reports performance impact per component, not just globally&lt;br/&gt;&lt;br/&gt;v2.0.2 is live on npm: @aporialab/overdraw-audit&lt;br/&gt;&lt;br/&gt;Install: npm install -g @aporialab/overdraw-audit&lt;br/&gt;Run: overdraw-audit ./src&lt;br/&gt;&lt;br/&gt;We built this because we kept hitting the same wall on every project. Now we run it in CI. You should too.&lt;br/&gt;&lt;br/&gt;#css #performance #webdev #frontendtools #opensource #overdrawaudit
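To make the z-index check concrete, here is a deliberately simplified sketch of that kind of static scan. This is not the actual @aporialab/overdraw-audit implementation; the function name, threshold, and heuristics are invented for illustration: flag rules that pair a very high z-index with properties known to create new stacking contexts.

```python
import re

# Properties that create a new stacking context (partial list).
STACKING_PROPS = ("transform", "filter", "opacity", "will-change")

def audit_css(css, z_threshold=100):
    """Return (selector, z_index, stacking_props) for suspicious rules."""
    findings = []
    for match in re.finditer(r"([^{}]+)\{([^}]*)\}", css):
        selector, body = match.group(1).strip(), match.group(2)
        z = re.search(r"z-index:\s*(\d+)", body)
        if z and int(z.group(1)) > z_threshold:
            props = [p for p in STACKING_PROPS if p in body]
            findings.append((selector, int(z.group(1)), props))
    return findings
```

A real audit would parse the component tree and measure paint areas; the point here is only that the suspicious patterns are statically detectable.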
    </content>
    <updated>2026-03-03T21:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqspsa8al6rzc0k957v7rhp4np4069dlh0r59paefjs5wgsa2a5hc0szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsevhsm6</id>
    
      <title type="html">🚀 Shipped: Hakim Bot — Arabic Medical AI Built a specialized ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqspsa8al6rzc0k957v7rhp4np4069dlh0r59paefjs5wgsa2a5hc0szyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsevhsm6" />
    <content type="html">
      🚀 Shipped: Hakim Bot — Arabic Medical AI&lt;br/&gt;&lt;br/&gt;Built a specialized medical assistant for Arabic-speaking medical students.&lt;br/&gt;&lt;br/&gt;📌 What it does:&lt;br/&gt;→ 8 clinical modes (General, Pharmacist, Teacher, Pediatrics, Emergency...)&lt;br/&gt;→ Ramadan fasting drug protocol (56 medications)&lt;br/&gt;→ 10 medical calculators (BMI, creatinine clearance, drug dosing)&lt;br/&gt;→ ICD-11 disease classification&lt;br/&gt;→ Full Arabic language interface&lt;br/&gt;&lt;br/&gt;🏗️ Stack: Python, python-telegram-bot, Groq LLaMA, SQLite&lt;br/&gt;&lt;br/&gt;Built in one day — proving that a focused medical AI can be shipped fast with the right architecture.&lt;br/&gt;&lt;br/&gt;Free → t.me/HakimArabBot | MIT License&lt;br/&gt;&lt;br/&gt;#buildinpublic #AporiaLabs #opensource #medtech #AI #Arabic
    </content>
    <updated>2026-03-03T16:02:44Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsvn9at6tzku6hu86knzhp8nqhxhm3wlq7uvykfuk4g9tj9x4r5wpqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsf3h563</id>
    
      <title type="html">Vibe coding is not about typing less. It is about thinking ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsvn9at6tzku6hu86knzhp8nqhxhm3wlq7uvykfuk4g9tj9x4r5wpqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsf3h563" />
    <content type="html">
      Vibe coding is not about typing less. It is about thinking differently.&lt;br/&gt;&lt;br/&gt;At Aporia Labs, we use Claude, Cursor, and Copilot as daily collaborators — not autocomplete engines. The shift in mindset changes everything.&lt;br/&gt;&lt;br/&gt;What vibe coding looks like in practice:&lt;br/&gt;— You describe intent, not implementation. &amp;#34;Build a circuit breaker for the relay pool&amp;#34; not &amp;#34;write a try/except block with a counter&amp;#34;.&lt;br/&gt;— You review, curate, redirect. The AI is the junior dev who ships fast. You are the senior who knows what matters.&lt;br/&gt;— You prototype in hours what used to take days. Then you spend the saved days on the hard problems — architecture, edge cases, user experience.&lt;br/&gt;&lt;br/&gt;The tools we have now are genuinely different from two years ago. The gap between idea and working code has collapsed.&lt;br/&gt;&lt;br/&gt;But vibe coding without direction produces vibe spaghetti. Structure still matters. Domain knowledge still matters. Taste still matters.&lt;br/&gt;&lt;br/&gt;We document what works and what does not. Follow this account if you want the honest record, not the hype.&lt;br/&gt;&lt;br/&gt;#vibecoding #aiassistant #devtools #buildingInPublic #coding
    </content>
    <updated>2026-03-03T15:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs8ltafpk8625jvs20cd9ayey4ehfujux60sfpylzca8tfk7fqaczczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsg2fz5p</id>
    
      <title type="html">Every LLM has a context window. And every developer who works ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs8ltafpk8625jvs20cd9ayey4ehfujux60sfpylzca8tfk7fqaczczyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfsg2fz5p" />
    <content type="html">
      Every LLM has a context window. And every developer who works with AI daily knows the moment the window fills up — the model starts forgetting, drifting, losing coherence.&lt;br/&gt;&lt;br/&gt;Telegraph AI is our answer to this problem.&lt;br/&gt;&lt;br/&gt;Instead of throwing more tokens at the problem, we compress the context at L2 level — preserving semantic meaning while drastically reducing token count. Think of it as lossy audio compression for conversation history: you keep the signal, drop the noise.&lt;br/&gt;&lt;br/&gt;Why L2?&lt;br/&gt;L1 compression is basic summarization — crude, lossy in the wrong ways. L3 is full semantic graph reconstruction — expensive and slow. L2 hits the sweet spot: structural semantic compression that retains relationships between concepts without rebuilding the entire knowledge graph.&lt;br/&gt;&lt;br/&gt;In our tests: 60-70% token reduction with less than 5% semantic loss on technical conversations.&lt;br/&gt;&lt;br/&gt;Telegraph AI is pure Python, zero dependencies you don&amp;#39;t already have, MIT licensed.&lt;br/&gt;&lt;br/&gt;More technical details coming. Follow along.&lt;br/&gt;&lt;br/&gt;#llm #contextcompression #python #opensource #telegraphai
    </content>
    <updated>2026-03-03T09:00:04Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqszjl7eqysrguy529g33u2uhk6m6gvq20vkx5mk2979rr64l36a3dqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs78llxm</id>
    
      <title type="html">Aporia Labs is now on Nostr. 👋 We build open-source developer ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqszjl7eqysrguy529g33u2uhk6m6gvq20vkx5mk2979rr64l36a3dqzyrnu27xgdu9r556axd9p77u9ygy8z95wmfpqs4wy7qkvr4q9x4zfs78llxm" />
    <content type="html">
      Aporia Labs is now on Nostr. 👋&lt;br/&gt;&lt;br/&gt;We build open-source developer tools:&lt;br/&gt;&lt;br/&gt;→ Telegraph AI — L2 context compression. Makes your LLM conversations 60-80% cheaper without losing meaning.&lt;br/&gt;&lt;br/&gt;→ Adaptive UI Engine — interfaces that read user state and adapt in real time. Not themes. Not modes. Actual adaptation.&lt;br/&gt;&lt;br/&gt;→ Overdraw Audit — CLI tool that finds CSS performance leaks before they kill your app.&lt;br/&gt;&lt;br/&gt;Building in public here. Every release, every decision, every failure.&lt;br/&gt;&lt;br/&gt;Follow if you care about developer tooling that actually works.&lt;br/&gt;&lt;br/&gt;#OpenSource #DevTools #BuildingInPublic #AI #Bitcoin
    </content>
    <updated>2026-03-02T15:58:32Z</updated>
  </entry>

</feed>