2023-02-21 07:22:43

nopara73 on Nostr:

If it's successful, Nostr will be the most filtered social media platform ever, and that is a good thing. People seem to think the goal is removing all the rules from social media, but as anyone who has tried it, from Elon to the Darknet Markets, has discovered, a game without rules is a shitty game. In fact the solution may lie in the opposite direction: an explosion of rules. But instead of rules specified in one centralized place, the rules are best set at the edges, by the users themselves. Exactly how Bitcoin's IsStandard mempool acceptance policy works.
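
To make the analogy concrete, here is a minimal sketch of what an IsStandard-style acceptance policy could look like at the Nostr edge, i.e. inside the client itself. The event fields follow NIP-01, but the specific checks and limits below are hypothetical, picked purely for illustration.

```python
# Sketch: an "is_standard"-style local acceptance policy for Nostr events.
# Event fields follow NIP-01 (kind, content, tags, pubkey, created_at);
# the individual checks and limits are hypothetical examples of edge rules.

ALLOWED_KINDS = {0, 1, 6, 7}        # metadata, text notes, reposts, reactions
MAX_CONTENT_BYTES = 16_000          # arbitrary local size cap
MAX_TAGS = 100                      # arbitrary local tag cap

def is_standard(event: dict) -> bool:
    """Return True if this client is willing to show or relay the event."""
    if event.get("kind") not in ALLOWED_KINDS:
        return False
    if len(event.get("content", "").encode("utf-8")) > MAX_CONTENT_BYTES:
        return False
    if len(event.get("tags", [])) > MAX_TAGS:
        return False
    return True

# Usage: feed = [e for e in incoming_events if is_standard(e)]
```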

We are quick to take a dump on Jordan Peterson for speaking up against anons, and although I am not happy with his solutions, I find the points he was making about the problem rather trivially true. In fact, I made the same claim in a talk titled "Anonymity Is The Problem", which highlights that moderation of content/requests/API calls/etc. is the main problem we developers of anonymity systems are contending with. How ironic is that? We're building anonymity, and because of that, most of the time we spend designing protocols goes into thought experiments about how anons can game these systems. Anons are our adversaries :) Anonymity is a weapon everyone should have access to, but so should the defense against it, on a local, personal level, not in the form of a global governing body. Enter Nostr.

Since the nostr protocol is open, there will be a lot of innovative ways to filter out the inevitable armies of bots and spam. Funnily enough, open source development tends to delegate difficult decisions to the end user. This is normally an antipattern, but with nostr the incentives align quite interestingly: users will be able to tweak more and more variables of what kinds of messages they do and don't want to see. And guess what, you're the average of the people you interact with, so I foresee users coming up with strict filtering rules for themselves that go way beyond the current standard of filtering out "misinformation", in a way that finally makes it worth spending time on social media (imagine that :) because the conversations there will make them better people instead of stupider, angrier and more anxious.
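
For instance, those user-tweakable variables could be nothing more than a small rule set that each user edits locally. Everything below (the knobs, their names, their defaults) is a hypothetical sketch; the only point is that every parameter lives at the edge, not on a server.

```python
# Sketch: user-owned filter knobs, composed into a single feed predicate.
# All names and defaults are hypothetical; each user tunes their own copy.

from dataclasses import dataclass, field

@dataclass
class FeedRules:
    muted_words: set = field(default_factory=lambda: {"airdrop", "giveaway"})
    followed_only: bool = False      # only show authors I follow
    max_hashtags: int = 5            # crude spam heuristic
    min_account_notes: int = 3       # ignore brand-new throwaway keys

def accept(event: dict, rules: FeedRules, follows: set, note_counts: dict) -> bool:
    """Apply one user's personal rules to one incoming event."""
    content = event.get("content", "").lower()
    if any(word in content for word in rules.muted_words):
        return False
    if rules.followed_only and event.get("pubkey") not in follows:
        return False
    if content.count("#") > rules.max_hashtags:
        return False
    if note_counts.get(event.get("pubkey"), 0) < rules.min_account_notes:
        return False
    return True
```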

Here's an example: ChatGPT can already do a decent job of assigning an "Intellectual Honesty Score" to tweets, which means it's possible even today to separate the wheat from the chaff. And I would certainly make it so that only the highest-quality content gets to mess with my consciousness, because currently it's pretty difficult to justify the time spent on social media.
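
A rough sketch of that idea with the OpenAI Python client: ask the model for a 0-100 score and drop anything below a personal threshold. The prompt wording, the model name and the threshold are assumptions for illustration, not a tested scoring method.

```python
# Sketch: ask a chat model for an "Intellectual Honesty Score" and filter on it.
# Requires the openai package (v1+) and OPENAI_API_KEY in the environment.
# The prompt wording, model name, and threshold are illustrative assumptions.

from openai import OpenAI

client = OpenAI()

def honesty_score(text: str, model: str = "gpt-4o-mini") -> int:
    prompt = (
        "Rate the intellectual honesty of the following post on a 0-100 scale. "
        "Reply with the number only.\n\n" + text
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    try:
        return int(resp.choices[0].message.content.strip())
    except ValueError:
        return 0  # treat unparseable answers as lowest quality

def worth_my_time(text: str, threshold: int = 80) -> bool:
    return honesty_score(text) >= threshold
```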

What do spam and intellectual dishonesty have in common? I want neither of them creeping onto my feed, nor into my life.
Author Public Key
npub1qqvf96d53dps6l3hcfc9rlmm7s2vh3f20ay0g5wc2aqfeeurnh0q580c3j