2023-05-03 11:11:19

s3x_JAY on Nostr:

Yesterday Rabble put a new "NIP-68" and a redraft of NIP-69 into the PR that was originally started two weeks ago.

https://github.com/nostr-protocol/nips/pull/457/commits/dd967e52211e6245a3c4db9998b31069cb2b628e

NIP-68 deals with labeling. It can be used for everything from reviews to scientific labeling to stock ticker symbols. It allows both structured and unstructured labels to be attached to _any_ applicable event. With NIP-68, authors can update and correct the labeling of their events after initial publication. It also allows third parties to add labels. (It is expected that client apps will restrict the visibility of third-party labels to people in the labeler's "network" or otherwise trusted.)
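To make the mechanics concrete, here is a rough sketch of what a third-party label event might look like. The kind number, namespace, and tag names below are illustrative assumptions on my part, not quoted from the PR, so check the NIP text for the actual format:

```typescript
// Hypothetical label event: a third party attaching a structured
// review label to someone else's note. The kind number, namespace,
// and tag layout are assumptions for illustration only.
const labelEvent = {
  kind: 1985,                                 // assumed "label" event kind
  pubkey: "<labeler's hex pubkey>",           // the labeler, not the note's author
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["e", "<id of the event being labeled>"],  // target event
    ["L", "social.example.review"],            // structured label namespace
    ["l", "5-stars", "social.example.review"], // structured label value
  ],
  content: "Unstructured, free-form label text can go here.",
};
```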

NIP-69 was largely rewritten. It is now based on NIP-68 labels. It specifies two "vocabularies" that can be used for content moderation. One vocabulary is fairly fixed and rigid and covers the types of moderation issues most likely to arise on Nostr. The other is completely organic and open, intended for things like regional moderation issues (e.g. insulting the Thai king). Client apps can use as much or as little of the vocabularies as they like.
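As a sketch of how the two vocabularies might sit side by side (the vocabulary names and label values here are my own placeholders, not the PR's):

```typescript
// Two labels on the same event, one from each vocabulary.
// Namespace and value strings are placeholders, not from the PR.
const fixedVocabTags = [
  ["e", "<target event id>"],
  ["L", "MOD"],                        // assumed fixed moderation vocabulary
  ["l", "spam", "MOD"],                // category from the rigid, predefined list
];

const openVocabTags = [
  ["e", "<target event id>"],
  ["L", "#region.th"],                 // assumed ad-hoc regional vocabulary
  ["l", "lese-majeste", "#region.th"], // e.g. insulting the Thai king
];
```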

NIP-69 tries to establish a model where content moderation isn't black and white but has many shades of gray. People can recommend anything from showing the content, to adding a content warning, to hiding the content, to actual deletion.
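In client code, that spectrum might map to something like the following (the action names are my invention; the NIP may use different terms):

```typescript
// The recommendation values below are illustrative, not quoted from
// NIP-69; the point is that moderation is a spectrum, not a boolean.
type ModerationAction = "show" | "warn" | "hide" | "delete";

function applyModeration(action: ModerationAction): string {
  switch (action) {
    case "show":   return "render the note normally";
    case "warn":   return "render the note behind a content warning";
    case "hide":   return "omit the note from the feed";
    case "delete": return "relay operators may delete the event outright";
  }
}
```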

Another "shades of gray" factor is that our approach to content moderation is based on the idea that everyone can be a moderator - it's just some moderators are trusted by more people than others. Moderators that are trusted by relay owners will obviously have the biggest impact since only relays can actually delete events. It's a bottom-up approach where people pick their own moderators. (The next step will be a NIP for "Trust Lists" so people can specify whose reports can filter their feed.) Given that censorship is an act of power and control where someone imposes their preferences on someone else, this approach to content moderation is highly censorship-resistant since it's a voluntary, opt-in scenario.

Author public key: npub1veeqt66jt2j209y9nv4a90w3dej074lavqygq5q0c57dharkgw0qk738wt