Relay URL,Latitude,Longitude
wot.nostr.party,36.1627,-86.7816
nostr.simplex.icu,50.1109,8.68213
relay.snort.social,53.3498,-6.26031
nostr.stakey.net,52.3676,4.90414
relay.mccormick.cx,52.3563,4.95714
nostr.overmind.lol,43.6532,-79.3832
nostriches.club,43.6532,-79.3832
kanagrovv-pyramid.kozow.com,43.4305,-83.9638
relay.toastr.net,40.8054,-74.0241
relay.primal.net,43.6532,-79.3832
r.bitcoinhold.net,43.6532,-79.3832
wot.dtonon.com,43.6532,-79.3832
santo.iguanatech.net,40.8302,-74.1299
relay2.ngengine.org,43.6532,-79.3832
nostr.oxtr.dev,50.4754,12.3683
simplex.icu,50.1109,8.68213
bitchat.nostr1.com,38.6327,-90.1961
relay.nostu.be,40.4167,-3.70329
nostr.mifen.me,43.6532,-79.3832
relay.vantis.ninja,43.6532,-79.3832
nostr-kyomu-haskell.onrender.com,37.7775,-122.397
nostr-relay.amethyst.name,39.0067,-77.4291
relay.bitcoindistrict.org,43.6532,-79.3832
nos.xmark.cc,50.6924,3.20113
relay01.lnfi.network,35.6764,139.65
nostr-dev.wellorder.net,45.5201,-122.99
nostr.rblb.it,43.7094,10.6582
offchain.pub,39.1585,-94.5728
wot.nostr.place,32.7767,-96.797
fanfares.nostr1.com,38.6327,-90.1961
ephemeral.snowflare.cc,43.6532,-79.3832
relay.visionfusen.org,43.6532,-79.3832
nostr.carroarmato0.be,51.0368,3.21186
social.amanah.eblessing.co,48.1046,11.6002
relay.cyphernomad.com,60.4032,25.0321
nostr-rs-relay-qj1h.onrender.com,37.7775,-122.397
relay.ngengine.org,43.6532,-79.3832
relay.nosto.re,51.1792,5.89444
rusty-uat.siberian-albacore.ts.net:8443,35.6764,139.65
nostr.luisschwab.net,43.6532,-79.3832
nostr.aruku.ovh,1.27994,103.849
srtrelay.c-stellar.net,43.6532,-79.3832
nostr.tagomago.me,3.139,101.687
relay.threenine.services,51.5222,-0.62916
nostr.robosats.org,64.1476,-21.9392
nostr.2b9t.xyz,34.0549,-118.243
strfry.apps3.slidestr.net,40.4167,-3.70329
kitchen.zap.cooking,43.6532,-79.3832
relay.homeinhk.xyz,45.5152,-122.678
relay.seq1.net,43.6532,-79.3832
relay.bornheimer.app,50.1109,8.68213
relay.vrtmrz.net,43.6532,-79.3832
nostr.red5d.dev,43.6532,-79.3832
relay-freeharmonypeople.space,38.7223,-9.13934
aaa-api.freefrom.space/v1/ws,43.6532,-79.3832
dev.relay.edufeed.org,49.4521,11.0767
nostr.myshosholoza.co.za,52.3913,4.66545
nostr.plantroon.com,50.1013,8.62643
wot.dergigi.com,64.1476,-21.9392
npub1spxdug4m3y24hpx5crm0el4zhkk0wafs8kp6m0xu0wecygqej2xqq8gyhx.fips.network,43.6532,-79.3832
nostrelay.circum.space,52.3676,4.90414
relay.dreamith.to,43.6532,-79.3832
relay.nostriches.club,43.6532,-79.3832
nostr-relayrs.gateway.in.th,15.5163,103.194
nostr.chrissexton.org,43.6532,-79.3832
relay.flashapp.me,43.652,-79.3633
nostr-relay-1.trustlessenterprise.com,43.6532,-79.3832
strfry.shock.network,39.0438,-77.4874
relay.snotr.nl:49999,52.0195,4.42946
spatia-arcana.com,34.0362,-118.443
nostr.computingcache.com,34.0356,-118.442
nostr.self-determined.de,53.5,10.25
relay.sharegap.net,43.6532,-79.3832
spookstr2.nostr1.com,38.6327,-90.1961
v-relay.d02.vrtmrz.net,34.6937,135.502
bridge.tagomago.me,3.139,101.687
antiprimal.net,43.6532,-79.3832
relay.nostrdice.com,-33.8688,151.209
purpura.cloud,43.6532,-79.3832
espelho.girino.org,43.6532,-79.3832
relay.mostro.network,40.8302,-74.1299
temp.iris.to,43.6532,-79.3832
pyramid.self-determined.de,53.5,10.25
relay.jeffg.fyi,43.6532,-79.3832
nostr.aruku.kro.kr,37.3589,127.115
chat-relay.zap-work.com,43.6532,-79.3832
relay.islandbitcoin.com,12.8498,77.6545
nostr.zoracle.org,45.6018,-121.185
relay-dev.satlantis.io,40.8302,-74.1299
relay.earthly.city,34.0362,-118.443
speakeasy.cellar.social,49.4543,11.0746
relay.bnos.space,43.6532,-79.3832
relay.henryxplace.eu.org:9988,31.2304,121.474
relay.bitmacro.cloud,43.6532,-79.3832
nostr.thebiglake.org,32.71,-96.6745
relay.lab.rytswd.com,49.4543,11.0746
nostr.nadajnik.org,50.1109,8.68213
relay.evanverma.com,40.8302,-74.1299
relay2.angor.io,48.1046,11.6002
prl.plus,55.7628,37.5983
myvoiceourstory.org,37.3598,-121.981
relay.arx-ccn.com,50.4754,12.3683
wot.sudocarlos.com,43.6532,-79.3832
relayrs.notoshi.win,43.6532,-79.3832
nostrcity-club.fly.dev,48.8575,2.35138
relay.tagayasu.xyz,45.4215,-75.6972
nostr.blankfors.se,60.1699,24.9384
nrs-01.darkcloudarcade.com,39.1008,-94.5811
relay.lightning.pub,39.0438,-77.4874
nostr-02.yakihonne.com,1.32123,103.695
relay.nostrverse.net,43.6532,-79.3832
nostr.wecsats.io,43.6532,-79.3832
relay.illuminodes.com,47.6062,-122.332
api.freefrom.space/v1/ws,43.6532,-79.3832
nostr-relay.psfoundation.info,39.0438,-77.4874
relay.samt.st,40.8302,-74.1299
nostr-relay.cbrx.io,43.6532,-79.3832
inbox.mycelium.social,38.627,-90.1994
relay.anmore.me,49.281,-123.117
no.str.cr,10.074,-84.2155
nstr.a0a1.space,52.3563,4.95714
relay.typedcypher.com,51.5072,-0.127586
relay.bitmacro.pro,43.6532,-79.3832
relay.nostrzh.org,43.6532,-79.3832
ynostr.yael.at,60.1699,24.9384
nostr-relay.zeabur.app,25.0797,121.234
dynasty.libretechsystems.xyz,55.4724,9.87335
nostr.bitcoiner.social,47.6743,-117.112
nostr.girino.org,43.6532,-79.3832
nostr2.girino.org,43.6532,-79.3832
nostr-verified.wellorder.net,45.5201,-122.99
relay.fundstr.me,42.3601,-71.0589
relay.mapboss.co.th,13.7234,100.784
relay.qstr.app,51.5072,-0.127586
nostr-rs-relay-ishosta.phamthanh.me,43.6532,-79.3832
relay.klabo.world,47.674,-122.122
relay.minibolt.info,43.6532,-79.3832
x.kojira.io,43.6532,-79.3832
relay-dev.gulugulu.moe,43.6532,-79.3832
relay.nostriot.com,41.5695,-83.9786
relayone.soundhsa.com,39.1008,-94.5811
nr.yay.so,46.2126,6.1154
relay.bithome.site,52.3563,4.95714
relay.damus.io,43.6532,-79.3832
nostr.mikoshi.de,50.1109,8.68213
nostr.defucc.me,50.1109,8.68213
relay.malxte.de,52.52,13.405
relay.orangepill.ovh,49.1689,-0.358841
bbw-nostr.xyz,41.5284,-87.4237
kasztanowa.bieda.it,43.6532,-79.3832
bitcoiner.social,47.6743,-117.112
relay.lacompagniemaximus.com,45.3147,-73.8785
relay.mostr.pub,43.6532,-79.3832
relay.lanavault.space,60.1699,24.9384
kotukonostr.onrender.com,37.7775,-122.397
relay.ditto.pub,43.6532,-79.3832
relay.erybody.com,41.4513,-81.7021
nostr.dlcdevkit.com,40.0992,-83.1141
ribo.us.nostria.app,41.5868,-93.625
relay.paulstephenborile.com,49.4543,11.0746
testnet-relay.samt.st,40.8302,-74.1299
relay.purplefrog.cloud,35.6916,139.768
relay.agorist.space,52.3734,4.89406
nostr-relay.zimage.com,34.0549,-118.243
nostr.azzamo.net,52.2633,21.0283
strfry.elswa-dev.online,50.1109,8.68213
wot.shaving.kiwi,43.6532,-79.3832
okn.czas.plus,50.1109,8.68213
bcast.seutoba.com.br,43.6532,-79.3832
relay.sigit.io,50.4754,12.3683
syb.lol,34.0549,-118.243
relay.libernet.app,43.6532,-79.3832
relay.angor.io,48.1046,11.6002
relay.staging.commonshub.brussels,49.4543,11.0746
strfry.atlantislabs.space,43.6532,-79.3832
nostr.wom.wtf,43.6532,-79.3832
nostrride.io,37.3986,-121.964
nostr.dpinkerton.com,39.1008,-94.5811
r.0kb.io,32.789,-96.7989
nostr.hekster.org,37.3986,-121.964
satsage.xyz,37.3986,-121.964
nostr.islandarea.net,35.4669,-97.6473
ve.agorawlc.com,50.4754,12.3683
relay.openfarmtools.org,60.1699,24.9384
top.testrelay.top,43.6532,-79.3832
relay-rpi.edufeed.org,49.4521,11.0767
pyramid.cult.cash,32.9483,-96.7299
relay.edino.net,56.6268,47.9193
nostr.snowbla.de,60.1699,24.9384
relay.wavefunc.live,39.7392,-104.99
tenex.chat,50.4754,12.3683
relay.getsafebox.app,43.6532,-79.3832
nostr.bond,50.1109,8.68213
nostrelites.org,41.8781,-87.6298
relay.plebeian.market,50.1109,8.68213
relay.laantungir.net,-19.4692,-42.5315
relay.decentnewsroom.com,50.4754,12.3683
nostr-relay.nextblockvending.com,47.2343,-119.853
relay.spacetomatoes.net,42.3601,-71.0589
nostrbtc.com,43.6532,-79.3832
relay.puresignal.news,43.6532,-79.3832
relay-testnet.k8s.layer3.news,37.3387,-121.885
relay.binaryrobot.com,43.6532,-79.3832
relay.wavlake.com,41.2619,-95.8608
inbox.scuba323.com,40.8218,-74.45
nostr.spaceshell.xyz,43.6532,-79.3832
relay.nostr.place,32.7767,-96.797
holland-excited-charming-experiencing.trycloudflare.com,43.6532,-79.3832
theoutpost.life,64.1476,-21.9392
relay.fckstate.net,59.3293,18.0686
bcast.girino.org,43.6532,-79.3832
discovery.us.nostria.app,52.3676,4.90414
relay.bullishbounty.com,43.6532,-79.3832
nostr.88mph.life,51.5072,-0.127586
nostr.tadryanom.me,43.6532,-79.3832
nostr.sathoarder.com,48.5734,7.75211
relay.nostr.net,43.6532,-79.3832
zw.agorawlc.com,50.4754,12.3683
relay.internationalright-wing.org,-22.5022,-48.7114
nostr.vulpem.com,49.4543,11.0746
wot.codingarena.top,50.4754,12.3683
reraw.pbla2fish.cc,43.6532,-79.3832
plebchain.club,43.6532,-79.3832
orly-relay.imwald.eu,48.8575,2.35138
relay.satnam.pub,43.6532,-79.3832
cs-relay.nostrdev.com,50.4754,12.3683
schnorr.me,43.6532,-79.3832
nostr-relay.online,43.6532,-79.3832
relay.routstr.com,43.6532,-79.3832
relay.ohstr.com,43.6532,-79.3832
relay.lanacoin-eternity.com,40.8302,-74.1299
wot.nostr.net,43.6532,-79.3832
nostr.ps1829.com,33.8851,130.883
yabu.me,35.6092,139.73
soloco.nl,43.6532,-79.3832
librerelay.aaroniumii.com,43.6532,-79.3832
relay.mmwaves.de,48.8575,2.35138
relay.artx.market,43.6548,-79.3885
nostr.jerrynya.fun,31.2304,121.474
relay-arg.zombi.cloudrodion.com,1.35208,103.82
relay.edufeed.org,49.4521,11.0767
discovery.eu.nostria.app,52.3676,4.90414
relay.layer.systems,49.0291,8.35695
nostr-rs-relay.dev.fedibtc.com,39.0438,-77.4874
relay.0xchat.com,43.6532,-79.3832
nos.lol,50.4754,12.3683
lightning.red,53.3498,-6.26031
slick.mjex.me,39.0418,-77.4744
relay.boredvictor.xyz,41.3888,2.15899
nostr.rtvslawenia.com,49.4543,11.0746
relay.mitchelltribe.com,39.0438,-77.4874
nostr.4rs.nl,49.0291,8.35696
relay.olas.app,50.4754,12.3683
memlay.v0l.io,53.3498,-6.26031
nostr-01.yakihonne.com,1.29524,103.79
relay.satmaxt.xyz,43.6532,-79.3832
nostrcheck.tnsor.network,43.6532,-79.3832
relay.guggero.org,46.0037,8.95105
ai.techunder.tech:56711,22.5429,114.06
premium.primal.net,43.6532,-79.3832
nostr.tac.lol,47.4748,-122.273
relay.zone667.com,60.1699,24.9384
nostr-relay.gateway.in.th,15.5163,103.194
vault.iris.to,43.6532,-79.3832
strfry.bonsai.com,37.8716,-122.273
ribo.eu.nostria.app,52.3676,4.90414
relay.wellorder.net,45.5201,-122.99
relay.tapestry.ninja,40.8054,-74.0241
relay.dwadziesciajeden.pl,52.2297,21.0122
relay.satlantis.io,32.8769,-80.0114
nostr.pbfs.io,50.4754,12.3683
freelay.sovbit.host,64.1476,-21.9392
articles.layer3.news,37.3387,-121.885
nostr.na.social,43.6532,-79.3832
relay.fountain.fm,43.6532,-79.3832
dev.relay.stream,43.6532,-79.3832
nostr.n7ekb.net,36.1527,-95.9902
relay5.bitransfer.org,43.6532,-79.3832
relay.og.coop,43.6532,-79.3832
nostr-server-production.up.railway.app,45.5019,-73.5674
bucket.coracle.social,37.7775,-122.397
relay.gulugulu.moe,43.6532,-79.3832
relay.nostr-check.me,43.6532,-79.3832
nostr.faultables.net,43.6532,-79.3832
strfry.openhoofd.nl,51.9229,4.40833
nostr.rblb.it:7777,43.7094,10.6582
relay.nostrcheck.me,43.6532,-79.3832
0x-nostr-relay.fly.dev,48.8575,2.35138
nostr.thalheim.io,60.1699,24.9384
relay-nl.zombi.cloudrodion.com,50.8943,6.06237
relay.shadowbip.com,51.5072,-0.127586
nostr-relay.corb.net,38.8353,-104.822
purplerelay.com,43.6532,-79.3832
nostr-pub.wellorder.net,45.5201,-122.99
herbstmeister.com,34.0549,-118.243
nostrcheck.me,43.6532,-79.3832
pyramid.nostr.technology,52.3947,4.66399
nostr.spicyz.io,43.6532,-79.3832
nrs-02.darkcloudarcade.com,39.9526,-75.1652
nestr.nedao.ch,47.0151,6.98832
nostr.nodesmap.com,59.3327,18.0656
nittom.nostr1.com,38.6327,-90.1961
public.crostr.com,43.6532,-79.3832
relay.cypherflow.ai,48.8575,2.35138
nostr.bitczat.pl,60.1699,24.9384
relayone.geektank.ai,39.1008,-94.5811
testrelay.era21.space,43.6532,-79.3832
relay.npubhaus.com,43.6532,-79.3832
relay.bitmacro.io,48.8566,2.35222
nostr.data.haus,50.4754,12.3683
relay.credenso.cafe,43.3601,-80.3127
relay.ru.ac.th,13.7607,100.627
relay-fra.zombi.cloudrodion.com,48.8566,2.35222
nostr.chaima.info,50.1109,8.68213
nostr.mom,50.4754,12.3683
use std::process::Command;
use std::path::PathBuf;
#[cfg(feature = "nostr")]
use nostr_sdk::prelude::*;
#[cfg(feature = "nostr")]
use serde_json::json;
#[cfg(feature = "nostr")]
use csv::ReaderBuilder;
#[cfg(feature = "nostr")]
use ::url::Url;
#[cfg(feature = "nostr")]
pub use frost_secp256k1_tr as frost;
#[cfg(feature = "nostr")]
use frost::keys::{KeyPackage, PublicKeyPackage, SecretShare};
#[cfg(feature = "nostr")]
use frost::round1::{SigningCommitments, SigningNonces};
#[cfg(feature = "nostr")]
use frost::round2::SignatureShare;
#[cfg(feature = "nostr")]
use frost::SigningPackage;
#[cfg(feature = "nostr")]
use rand::thread_rng;
#[cfg(feature = "nostr")]
pub use frost_secp256k1_tr as frost_bip340;
pub mod frost_mailbox_logic;
#[cfg(feature = "nostr")]
use std::collections::BTreeMap;
pub const DUMMY_BUILD_MANIFEST_ID_STR: &str = "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
pub const DEFAULT_GNOSTR_KEY: &str = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
pub const DEFAULT_PICTURE_URL: &str = "https://avatars.githubusercontent.com/u/135379339?s=400&u=11cb72cccbc2b13252867099546074c50caef1ae&v=4";
pub const DEFAULT_BANNER_URL: &str = "https://raw.githubusercontent.com/gnostr-org/gnostr-icons/refs/heads/master/banner/1024x341.png";
#[cfg(feature = "nostr")]
const ONLINE_RELAYS_GPS_CSV: &[u8] = include_bytes!("online_relays_gps.csv");
#[cfg(feature = "nostr")]
pub fn get_relay_urls() -> Vec<String> {
let content = String::from_utf8_lossy(ONLINE_RELAYS_GPS_CSV);
let mut rdr = ReaderBuilder::new()
.has_headers(true)
.from_reader(content.as_bytes());
rdr.records()
.filter_map(|result| {
match result {
Ok(record) => {
record.get(0).and_then(|url_str| {
let full_url_str = if url_str.contains("://") {
url_str.to_string()
} else {
format!("wss://{}", url_str)
};
match Url::parse(&full_url_str) {
Ok(url) if url.scheme() == "wss" => Some(url.to_string()),
_ => {
eprintln!("Warning: Invalid or unsupported relay URL scheme: {}", full_url_str);
None
}
}
})
},
Err(e) => {
eprintln!("Error reading CSV record: {}", e);
None
}
}
})
.collect()
}
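The scheme-normalization rule inside `get_relay_urls` can be isolated as a small std-only sketch. `normalize_relay_url` is an illustrative helper, not part of this crate's API:

```rust
// Std-only sketch of the normalization applied by get_relay_urls:
// bare hostnames from the CSV get a "wss://" prefix, while entries
// that already carry a scheme pass through unchanged.
// `normalize_relay_url` is illustrative and not part of this crate.
fn normalize_relay_url(raw: &str) -> String {
    if raw.contains("://") {
        raw.to_string()
    } else {
        format!("wss://{}", raw)
    }
}

fn main() {
    assert_eq!(normalize_relay_url("relay.damus.io"), "wss://relay.damus.io");
    assert_eq!(normalize_relay_url("wss://nos.lol"), "wss://nos.lol");
    // Host:port entries are also only prefixed, never rewritten.
    assert_eq!(
        normalize_relay_url("relay.snotr.nl:49999"),
        "wss://relay.snotr.nl:49999"
    );
    println!("ok");
}
```

Note that the real function additionally rejects anything whose parsed scheme is not `wss` (e.g. `ws://` or `https://` entries are dropped with a warning).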
/// Computes the SHA-256 hash of the specified file.
///
/// This macro takes a string literal representing a file path, embeds the file's bytes
/// at compile time via `include_bytes!`, then computes the SHA-256 hash at runtime and
/// returns it as a hex-encoded `String`. `Sha256` must be in scope at the call site.
///
/// # Examples
///
/// ```rust
/// use get_file_hash_core::get_file_hash;
/// use sha2::{Digest, Sha256};
///
/// let hash = get_file_hash!("lib.rs");
/// println!("Hash: {}", hash);
/// ```
#[macro_export]
macro_rules! get_file_hash {
($file_path:expr) => {{
let bytes = include_bytes!($file_path);
let mut hasher = Sha256::new();
hasher.update(bytes);
let result = hasher.finalize();
// Convert the GenericArray to a hex string
result
.iter()
.map(|b| format!("{:02x}", b))
.collect::<String>()
}};
}
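The hex-encoding step at the end of `get_file_hash!` can be sketched with std only. `to_hex` is an illustrative name, not part of this crate:

```rust
// Std-only sketch of the final step of get_file_hash!: each digest byte
// becomes two lowercase hex digits ("{:02x}"), and the pairs are
// concatenated into one String. `to_hex` is illustrative only.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    // Zero-padding matters: 0x00 must render as "00", not "0".
    assert_eq!(to_hex(&[0x00, 0xff]), "00ff");
    assert_eq!(to_hex(&[0xde, 0xad, 0xbe, 0xef]), "deadbeef");
    println!("ok");
}
```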
/// Computes the SHA-256 hash of the specified file and uses it as a Nostr secret key.
///
/// This macro takes a string literal representing a file path, computes the file's
/// SHA-256 hash via `get_file_hash!`, and returns a `nostr_sdk::Keys` object derived
/// from that hash.
///
/// # Examples
///
/// ```rust
/// use get_file_hash_core::file_hash_as_nostr_private_key;
/// use sha2::{Digest, Sha256};
/// use nostr_sdk::prelude::ToBech32;
///
/// let keys = file_hash_as_nostr_private_key!("lib.rs");
/// println!("Public Key: {}", keys.public_key().to_bech32().unwrap());
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! file_hash_as_nostr_private_key {
($file_path:expr) => {{
let hash_hex = $crate::get_file_hash!($file_path);
nostr_sdk::Keys::parse(&hash_hex).expect("Failed to create Nostr Keys from file hash")
}};
}
/// Publishes a NIP-34 repository announcement event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, project details, a clone URL, and a file path.
/// It computes the SHA-256 hash of the file at compile time to use as the "earliest unique commit" (EUC),
/// and then publishes a Kind 30617 event.
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::repository_announcement;
/// use get_file_hash_core::get_file_hash;
/// use nostr_sdk::Keys;
/// use sha2::{Digest, Sha256};
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let project_name = "my-awesome-repo";
/// let description = "A fantastic new project.";
/// let clone_url = "git@github.com:user/my-awesome-repo.git";
///
/// repository_announcement!(
/// &keys,
/// &relay_urls,
/// project_name,
/// description,
/// clone_url,
/// "../Cargo.toml", // Use a known file in your project
/// None
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! repository_announcement {
($keys:expr, $relay_urls:expr, $project_name:expr, $description:expr, $clone_url:expr, $file_for_euc:expr) => {{
let euc_hash = $crate::get_file_hash!($file_for_euc);
// The 'd' tag value should be unique for the repository. Using the project_name for simplicity.
let d_tag_value = $project_name;
$crate::publish_repository_announcement_event(
$keys,
$relay_urls,
$project_name,
$description,
$clone_url,
&euc_hash,
d_tag_value,
None,
).await;
}};
($keys:expr, $relay_urls:expr, $project_name:expr, $description:expr, $clone_url:expr, $file_for_euc:expr, $build_manifest_event_id:expr) => {{
let euc_hash = $crate::get_file_hash!($file_for_euc);
let d_tag_value = $project_name;
$crate::publish_repository_announcement_event(
$keys,
$relay_urls,
$project_name,
$description,
$clone_url,
&euc_hash,
d_tag_value,
$build_manifest_event_id, // Option<&EventId>, passed through unchanged
).await;
}};
}
/// Publishes a NIP-34 patch event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, the repository's d-tag value,
/// the commit ID the patch applies to, and the path to the patch file.
/// The content of the patch file is included directly in the event.
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::publish_patch;
/// use nostr_sdk::Keys;
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let d_tag = "my-awesome-repo";
/// let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"; // Example commit ID
///
/// publish_patch!(
/// &keys,
/// &relay_urls,
/// d_tag,
/// commit_id,
/// "lib.rs" // Use an existing file for the patch content
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! publish_patch {
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $commit_id:expr, $patch_file_path:expr) => {{
let patch_content = include_str!($patch_file_path);
$crate::publish_patch_event(
$keys,
$relay_urls,
$d_tag_value,
$commit_id,
patch_content,
None, // Pass None for build_manifest_event_id
).await;
}};
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $commit_id:expr, $patch_file_path:expr, $build_manifest_event_id:expr) => {{
let patch_content = include_str!($patch_file_path);
$crate::publish_patch_event(
$keys,
$relay_urls,
$d_tag_value,
$commit_id,
patch_content,
$build_manifest_event_id, // Option<&EventId>, passed through unchanged
).await;
}};
}
/// Publishes a NIP-34 pull request event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, the repository's d-tag value,
/// the commit ID of the pull request, a clone URL where the work can be fetched,
/// and an optional title for the pull request.
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::publish_pull_request;
/// use nostr_sdk::Keys;
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let d_tag = "my-awesome-repo";
/// let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0";
/// let clone_url = "git@github.com:user/my-feature-branch.git";
/// let title = Some("Feat: Add new awesome feature");
///
/// publish_pull_request!(
/// &keys,
/// &relay_urls,
/// d_tag,
/// commit_id,
/// clone_url,
/// title,
/// None
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! publish_pull_request {
// 5 args: No title, no build_manifest_event_id
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $commit_id:expr, $clone_url:expr) => {{
$crate::publish_pull_request_event(
$keys, $relay_urls, $d_tag_value, $commit_id, $clone_url,
None, // title: Option<&str>
None, // build_manifest_event_id: Option<&EventId>
).await;
}};
// 6 args: With title (Option<&str>), no build_manifest_event_id
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $commit_id:expr, $clone_url:expr, $title:expr) => {{
$crate::publish_pull_request_event(
$keys, $relay_urls, $d_tag_value, $commit_id, $clone_url,
$title, // title: Option<&str>
None, // build_manifest_event_id: Option<&EventId>
).await;
}};
// 7 args: With title (Option<&str>) and build_manifest_event_id (Option<&EventId>).
// `None` is a valid `$title:expr`, so callers without a title simply pass `None`
// here; no separate literal-`None` arm is needed (the previous `_none_title:tt`
// arm was invalid macro syntax and unreachable, since this arm matches first).
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $commit_id:expr, $clone_url:expr, $title:expr, $build_manifest_event_id:expr) => {{
$crate::publish_pull_request_event(
$keys, $relay_urls, $d_tag_value, $commit_id, $clone_url,
$title, // title: Option<&str>
$build_manifest_event_id, // build_manifest_event_id: Option<&EventId>
).await;
}};
}
/// Publishes a NIP-34 PR update event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, the repository's d-tag value,
/// the event ID of the original pull request, the new commit ID,
/// and the new clone URL.
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::publish_pr_update;
/// use nostr_sdk::Keys;
/// use nostr_sdk::EventId;
/// use std::str::FromStr;
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let d_tag = "my-awesome-repo";
/// let pr_event_id = EventId::from_str("f6e4d6a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9").unwrap(); // Example PR Event ID
/// let updated_commit_id = "b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1"; // Example commit ID (hex)
/// let updated_clone_url = "git@github.com:user/my-feature-branch-v2.git";
///
/// publish_pr_update!(
/// &keys,
/// &relay_urls,
/// d_tag,
/// &pr_event_id,
/// updated_commit_id,
/// updated_clone_url
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! publish_pr_update {
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $pr_event_id:expr, $updated_commit_id:expr, $updated_clone_url:expr) => {{
$crate::publish_pr_update_event(
$keys,
$relay_urls,
$d_tag_value,
$pr_event_id,
$updated_commit_id,
$updated_clone_url,
None, // Pass None for build_manifest_event_id
).await;
}};
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $pr_event_id:expr, $updated_commit_id:expr, $updated_clone_url:expr, $build_manifest_event_id:expr) => {{
$crate::publish_pr_update_event(
$keys,
$relay_urls,
$d_tag_value,
$pr_event_id,
$updated_commit_id,
$updated_clone_url,
$build_manifest_event_id,
).await;
}};
}
/// Publishes a NIP-34 repository state event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, the repository's d-tag value,
/// the branch name, and the commit ID for that branch.
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::publish_repository_state;
/// use nostr_sdk::Keys;
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let d_tag = "my-awesome-repo";
/// let branch_name = "main";
/// let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0";
///
/// publish_repository_state!(
/// &keys,
/// &relay_urls,
/// d_tag,
/// branch_name,
/// commit_id
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! publish_repository_state {
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $branch_name:expr, $commit_id:expr) => {{
$crate::publish_repository_state_event(
$keys,
$relay_urls,
$d_tag_value,
$branch_name,
$commit_id,
).await;
}};
}
/// Publishes a NIP-34 issue event to Nostr relays.
///
/// This macro takes Nostr keys, relay URLs, the repository's d-tag value,
/// a unique issue ID, the issue's title, and its content (markdown).
///
/// # Examples
///
/// ```no_run
/// use get_file_hash_core::publish_issue;
/// use nostr_sdk::Keys;
///
/// #[tokio::main]
/// async fn main() {
/// let keys = Keys::generate();
/// let relay_urls = vec!["wss://relay.damus.io".to_string()];
/// let d_tag = "my-awesome-repo";
/// let issue_id = "123";
/// let title = "Bug: Fix authentication flow";
/// let content = "The authentication flow is currently broken when users try to log in with invalid credentials. It crashes instead of showing an error message.";
///
/// publish_issue!(
/// &keys,
/// &relay_urls,
/// d_tag,
/// issue_id,
/// title,
/// content
/// );
/// }
/// ```
#[cfg(feature = "nostr")]
#[macro_export]
macro_rules! publish_issue {
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $issue_id:expr, $title:expr, $content:expr) => {{
$crate::publish_issue_event(
$keys,
$relay_urls,
$d_tag_value,
$issue_id,
$title,
$content,
None, // Pass None for build_manifest_event_id
).await;
}};
($keys:expr, $relay_urls:expr, $d_tag_value:expr, $issue_id:expr, $title:expr, $content:expr, $build_manifest_event_id:expr) => {{
$crate::publish_issue_event(
$keys,
$relay_urls,
$d_tag_value,
$issue_id,
$title,
$content,
$build_manifest_event_id, // Pass Option<&EventId> directly
).await;
}};
}
pub fn get_git_tracked_files(dir: &PathBuf) -> Vec<String> {
match Command::new("git")
.arg("ls-files")
.current_dir(dir)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::null())
.output()
{
Ok(output) if output.status.success() && !output.stdout.is_empty() => {
String::from_utf8_lossy(&output.stdout)
.lines()
.map(String::from)
.collect()
}
Ok(output) => {
println!("cargo:warning=git ls-files failed or returned empty. Status: {:?}, Stderr: {}",
output.status, String::from_utf8_lossy(&output.stderr));
Vec::new()
}
Err(e) => {
println!("cargo:warning=Failed to execute git ls-files: {}", e);
Vec::new()
}
}
}
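The stdout-parsing step of `get_git_tracked_files` can be exercised in isolation; `git ls-files` emits one tracked path per line. `parse_ls_files` and the sample output are illustrative only:

```rust
// Std-only sketch of how get_git_tracked_files turns `git ls-files`
// stdout into a Vec<String>: split on newlines, one path per entry.
// `parse_ls_files` and the sample stdout are hypothetical.
fn parse_ls_files(stdout: &str) -> Vec<String> {
    stdout.lines().map(String::from).collect()
}

fn main() {
    let sample = "Cargo.toml\nsrc/lib.rs\n";
    // The trailing newline does not produce an empty final entry.
    assert_eq!(parse_ls_files(sample), vec!["Cargo.toml", "src/lib.rs"]);
    assert_eq!(parse_ls_files(""), Vec::<String>::new());
    println!("ok");
}
```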
#[cfg(feature = "nostr")]
pub async fn publish_metadata_event(
keys: &Keys,
relay_urls: &[String],
picture_url: &str,
banner_url: &str,
file_path_str: &str,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for metadata {}: {}", relay_url, e);
}
}
client.connect().await;
let metadata_json = json!({
"picture": picture_url,
"banner": banner_url,
"name": file_path_str,
"about": format!("Metadata for file event: {}", file_path_str),
});
let metadata = serde_json::from_str::<nostr_sdk::Metadata>(&metadata_json.to_string())
.expect("Failed to parse metadata JSON");
match client.send_event_builder(EventBuilder::metadata(&metadata)).await {
Ok(event_id) => {
println!("cargo:warning=Published Nostr metadata event for {}: {:?}", file_path_str, event_id);
}
Err(e) => {
println!("cargo:warning=Failed to publish Nostr metadata event for {}: {}", file_path_str, e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_repository_announcement_event(
keys: &Keys,
relay_urls: &[String],
project_name: &str,
description: &str,
clone_url: &str,
euc: &str, // Earliest Unique Commit hash
d_tag_value: &str, // d-tag value
build_manifest_event_id: Option<&EventId>,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for repository announcement {}: {}", relay_url, e);
}
}
client.connect().await;
let mut tags = vec![
Tag::parse(["name", project_name]).expect("Failed to create name tag"),
Tag::parse(["description", description]).expect("Failed to create description tag"),
Tag::parse(["clone", clone_url]).expect("Failed to create clone tag"),
Tag::custom("euc".into(), vec![euc.to_string()]),
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // NIP-33 d-tag
];
if let Some(event_id) = build_manifest_event_id {
tags.push(Tag::event(*event_id));
}
let event_builder = EventBuilder::new(
Kind::Custom(30617), // NIP-34 Repository Announcement kind
"", // Content is empty for repository announcement
).tags(tags);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 Repository Announcement for {}. Event ID (raw): {:?}, Event ID (bech32): {}", project_name, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 Repository Announcement for {}: {}", project_name, e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_patch_event(
keys: &Keys,
relay_urls: &[String],
d_tag_value: &str,
commit_id: &str,
patch_content: &str,
build_manifest_event_id: Option<&EventId>,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for patch {}: {}", relay_url, e);
}
}
client.connect().await;
let mut tags = vec![
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // Repository d-tag
Tag::parse(["commit", commit_id]).expect("Failed to create commit tag"),
];
if let Some(event_id) = build_manifest_event_id {
tags.push(Tag::event(*event_id));
}
let event_builder = EventBuilder::new(
Kind::Custom(1617), // NIP-34 Patch kind
patch_content,
).tags(tags);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 Patch event for commit {}. Event ID (raw): {:?}, Event ID (bech32): {}", commit_id, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 Patch event for commit {}: {}", commit_id, e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_pull_request_event(
keys: &Keys,
relay_urls: &[String],
d_tag_value: &str,
commit_id: &str,
clone_url: &str,
title: Option<&str>,
build_manifest_event_id: Option<&EventId>,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for pull request {}: {}", relay_url, e);
}
}
client.connect().await;
let mut tags = vec![
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // Repository d-tag
Tag::parse(["commit", commit_id]).expect("Failed to create commit tag"),
Tag::parse(["clone", clone_url]).expect("Failed to create clone tag"),
];
if let Some(t) = title {
tags.push(Tag::parse(["title", t]).expect("Failed to create title tag"));
}
if let Some(event_id) = build_manifest_event_id {
tags.push(Tag::event(*event_id));
}
let event_builder = EventBuilder::new(
Kind::Custom(1618), // Pull Request kind (NIP-34 extension; not among the base NIP-34 kinds)
"", // Content can be empty or a description for the PR
).tags(tags);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 Pull Request event for commit {}. Event ID (raw): {:?}, Event ID (bech32): {}", commit_id, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 Pull Request event for commit {}: {}", commit_id, e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_pr_update_event(
keys: &Keys,
relay_urls: &[String],
d_tag_value: &str,
pr_event_id: &EventId,
updated_commit_id: &str,
updated_clone_url: &str,
build_manifest_event_id: Option<&EventId>,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for PR update {}: {}", relay_url, e);
}
}
client.connect().await;
let mut tags = vec![
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // Repository d-tag
Tag::event(*pr_event_id), // "e" tag referencing the PR event being updated ("p" tags are for pubkeys)
Tag::parse(["commit", updated_commit_id]).expect("Failed to create updated commit ID tag"),
Tag::parse(["clone", updated_clone_url]).expect("Failed to create updated clone URL tag"),
];
if let Some(event_id) = build_manifest_event_id {
tags.push(Tag::event(*event_id));
}
let event_builder = EventBuilder::new(
Kind::Custom(1619), // PR Update kind (NIP-34 extension; not among the base NIP-34 kinds)
"", // Content is empty for PR update
).tags(tags);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 PR Update event for PR {} (raw: {:?}). Event ID (raw): {:?}, Event ID (bech32): {}", pr_event_id.to_bech32().unwrap(), pr_event_id, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 PR Update event for PR {}: {}", pr_event_id.to_string(), e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_repository_state_event(
keys: &Keys,
relay_urls: &[String],
d_tag_value: &str,
branch_name: &str,
commit_id: &str,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for repository state {}: {}", relay_url, e);
}
}
client.connect().await;
let event_builder = EventBuilder::new(
Kind::Custom(30618), // NIP-34 Repository State kind
"", // Content is empty for repository state
).tags(vec![
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // Repository d-tag
Tag::parse(["name", branch_name]).expect("Failed to create branch name tag"),
Tag::parse(["commit", commit_id]).expect("Failed to create commit ID tag"),
]);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 Repository State event for branch {} (commit {}). Event ID (raw): {:?}, Event ID (bech32): {}", branch_name, commit_id, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 Repository State event for branch {} (commit {}): {}", branch_name, commit_id, e);
}
}
}
#[cfg(feature = "nostr")]
pub async fn publish_issue_event(
keys: &Keys,
relay_urls: &[String],
d_tag_value: &str,
issue_id: &str, // Unique identifier for the issue
title: &str,
content: &str,
build_manifest_event_id: Option<&EventId>,
) {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay for issue {}: {}", relay_url, e);
}
}
client.connect().await;
let mut tags = vec![
Tag::custom("d".into(), vec![d_tag_value.to_string()]), // Repository d-tag
Tag::parse(["i", issue_id]).expect("Failed to create issue ID tag"),
Tag::parse(["title", title]).expect("Failed to create title tag"),
];
if let Some(event_id) = build_manifest_event_id {
tags.push(Tag::event(*event_id));
}
let event_builder = EventBuilder::new(
Kind::Custom(1621), // NIP-34 Issue kind
content,
).tags(tags);
match client.send_event_builder(event_builder).await {
Ok(event_id) => {
println!("cargo:warning=Published NIP-34 Issue event for issue {} ({}). Event ID (raw): {:?}, Event ID (bech32): {}", issue_id, title, event_id, event_id.to_bech32().unwrap());
}
Err(e) => {
println!("cargo:warning=Failed to publish NIP-34 Issue event for issue {} ({}): {}", issue_id, title, e);
}
}
}
#[cfg(feature = "nostr")]
pub fn generate_frost_keys(
max_signers: u16,
min_signers: u16,
) -> Result<(BTreeMap<frost::Identifier, SecretShare>, PublicKeyPackage), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
Ok((shares, pubkey_package))
}
#[cfg(feature = "nostr")]
pub fn create_frost_commitment(
secret_share: &SecretShare,
) -> (SigningNonces, SigningCommitments) {
let mut rng = thread_rng();
frost::round1::commit(secret_share.signing_share(), &mut rng)
}
#[cfg(feature = "nostr")]
pub fn create_signing_package(
commitments: BTreeMap<frost::Identifier, SigningCommitments>,
message: &[u8],
) -> SigningPackage {
frost::SigningPackage::new(commitments, message)
}
#[cfg(feature = "nostr")]
pub fn generate_signature_share(
signing_package: &SigningPackage,
nonces: &SigningNonces,
secret_share: &SecretShare,
) -> Result<SignatureShare, Box<dyn std::error::Error>> {
let key_package: KeyPackage = secret_share.clone().try_into()?;
Ok(frost::round2::sign(signing_package, nonces, &key_package)?)
}
#[cfg(feature = "nostr")]
pub fn aggregate_signature_shares(
signing_package: &SigningPackage,
signature_shares: &BTreeMap<frost::Identifier, SignatureShare>,
pubkey_package: &PublicKeyPackage,
) -> Result<frost_secp256k1_tr::Signature, Box<dyn std::error::Error>> {
Ok(frost::aggregate(signing_package, signature_shares, pubkey_package)?)
}
#[cfg(feature = "nostr")]
pub fn verify_frost_signature(
group_public_key: &frost_secp256k1_tr::VerifyingKey,
message: &[u8],
signature: &frost_secp256k1_tr::Signature,
) -> Result<(), Box<dyn std::error::Error>> {
Ok(group_public_key.verify(message, signature)?)
}
#[cfg(test)]
mod tests {
use serial_test::serial;
use std::collections::BTreeMap;
use std::fs::File;
use std::io::Write;
use sha2::{Digest, Sha256};
use tempfile;
use super::get_git_tracked_files;
#[cfg(feature = "nostr")]
use super::frost;
use std::process::Command;
#[cfg(feature = "nostr")]
use nostr_sdk::EventId;
#[cfg(feature = "nostr")]
use std::str::FromStr;
// Test for get_file_hash! macro
#[test]
fn test_get_file_hash() {
// The macro takes a string literal resolved at compile time (it expands to
// include_bytes!), so we cannot hash a runtime temp file here; instead we
// verify it against files known to exist in the crate.
let macro_hash = get_file_hash!("lib.rs");
// We will assert on a known file within the crate.
let bytes = include_bytes!("lib.rs");
let mut hasher_manual = Sha256::new();
hasher_manual.update(bytes);
let expected_hash_lib_rs = hasher_manual.finalize()
.iter()
.map(|b| format!("{:02x}", b))
.collect::<String>();
assert_eq!(macro_hash, expected_hash_lib_rs);
// Test with another known file, e.g., Cargo.toml of the core crate
let cargo_toml_hash = get_file_hash!("../Cargo.toml");
let cargo_toml_bytes = include_bytes!("../Cargo.toml");
let mut cargo_toml_hasher = Sha256::new();
cargo_toml_hasher.update(cargo_toml_bytes);
let expected_cargo_toml_hash = cargo_toml_hasher.finalize()
.iter()
.map(|b| format!("{:02x}", b))
.collect::<String>();
assert_eq!(cargo_toml_hash, expected_cargo_toml_hash);
}
#[test]
fn test_get_git_tracked_files() {
let dir = tempfile::tempdir().unwrap();
let repo_path = dir.path();
// Initialize a git repository
let _ = Command::new("git")
.arg("init")
.current_dir(repo_path)
.stdout(std::process::Stdio::null())
.stderr(std::process::Stdio::null())
.output()
.expect("Failed to initialize git repo");
// Create some files
let file1_path = repo_path.join("file1.txt");
File::create(&file1_path).unwrap().write_all(b"content1").unwrap();
let file2_path = repo_path.join("file2.txt");
File::create(&file2_path).unwrap().write_all(b"content2").unwrap();
// Add and commit files
let _ = Command::new("git")
.arg("add")
.arg(".")
.current_dir(repo_path)
.stdout(std::process::Stdio::null())
.stderr(std::process::Stdio::null())
.output()
.expect("Failed to git add files");
let _ = Command::new("git")
.arg("commit")
.arg("-m")
.arg("Initial commit")
.current_dir(repo_path)
.stdout(std::process::Stdio::null())
.stderr(std::process::Stdio::null())
.output()
.expect("Failed to git commit");
let tracked_files = get_git_tracked_files(&repo_path.to_path_buf());
assert_eq!(tracked_files.len(), 2);
assert!(tracked_files.contains(&"file1.txt".to_string()));
assert!(tracked_files.contains(&"file2.txt".to_string()));
}
// #[cfg(feature = "nostr")]
// #[test]
// fn test_file_hash_as_nostr_private_key() {
// use super::file_hash_as_nostr_private_key;
// // use std::fs::{File, remove_file};
// // use std::io::Write;
// // use tempfile::tempdir; // Not needed as we're using a literal path
// use nostr_sdk::prelude::ToBech32;
// let file_path = PathBuf::from("test_nostr_file_for_macro.txt");
// let content = "Nostr test content!";
// File::create(&file_path).unwrap().write_all(content.as_bytes()).unwrap();
// let keys = file_hash_as_nostr_private_key!("test_nostr_file_for_macro.txt");
// assert!(!keys.public_key().to_bech32().unwrap().is_empty());
// remove_file(&file_path).unwrap();
// }
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_metadata_event_tr() {
use super::publish_metadata_event;
use nostr_sdk::Keys;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let picture_url = super::DEFAULT_PICTURE_URL;
let banner_url = super::DEFAULT_BANNER_URL;
let file_path_str = "test_file.txt";
// This test primarily checks that the function doesn't panic
// and goes through its execution path.
// Actual publishing success depends on external network conditions.
let relay_urls = super::get_relay_urls();
publish_metadata_event(
&keys,
&relay_urls,
picture_url,
banner_url,
file_path_str,
).await;
}
#[cfg(feature = "nostr")]
#[tokio::test]
#[serial]
async fn test_repository_announcement_event_tr() {
use super::get_relay_urls;
use nostr_sdk::{Keys, EventId};
use std::str::FromStr;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let project_name = "test-nip34-repo";
let description = "A test repository for NIP-34 announcements.";
let clone_url = "git@example.com:test/test-nip34-repo.git";
let _dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
let _file_for_euc = "Cargo.toml"; // Use a known file in the project, as required by include_bytes!
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_repo_announcement_picture.jpg",
"https://example.com/test_repo_announcement_banner.jpg",
"test_repository_announcement_event_metadata",
).await;
let dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
repository_announcement!(
&keys,
&relay_urls,
project_name,
description,
clone_url,
"../Cargo.toml", // Pass the string literal directly, correcting path for include_bytes!
Some(&dummy_build_manifest_id)
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_patch_event_tr() {
use super::get_relay_urls;
use nostr_sdk::Keys;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-patch";
let commit_id = "fedcba9876543210fedcba9876543210fedcba98"; // 40-char hex placeholder commit id
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_patch_picture.jpg",
"https://example.com/test_patch_banner.jpg",
"test_publish_patch_event_metadata",
).await;
let dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
publish_patch!(
&keys,
&relay_urls,
d_tag,
commit_id,
"lib.rs", // Use an existing file for the patch content
Some(&dummy_build_manifest_id)
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_pull_request_event_tr() {
use super::get_relay_urls;
use nostr_sdk::Keys;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-pr";
let commit_id = "0123456789abcdef0123456789abcdef01234567";
let clone_url = "git@example.com:test/pr-branch.git";
let title = Some("Feat: Implement NIP-34 PR");
let dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_pr_picture.jpg",
"https://example.com/test_pr_banner.jpg",
"test_publish_pull_request_event_metadata",
).await;
// Test with a title
publish_pull_request!(
&keys,
&relay_urls,
d_tag,
commit_id,
clone_url,
Some(title.unwrap()),
Some(&dummy_build_manifest_id)
);
// Test without a title
publish_pull_request!(
&keys,
&relay_urls,
d_tag,
commit_id,
clone_url
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_pr_update_event_tr() {
use super::get_relay_urls;
use nostr_sdk::{Keys, EventId};
use std::str::FromStr;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-pr-update";
let pr_event_id = EventId::from_str("f6e4d6a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9").unwrap(); // Placeholder EventId
let updated_commit_id = "9876543210fedcba9876543210fedcba98765432"; // 40-char hex placeholder commit id
let updated_clone_url = "git@example.com:test/pr-branch-updated.git";
let dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_pr_update_picture.jpg",
"https://example.com/test_pr_update_banner.jpg",
"test_publish_pr_update_event_metadata",
).await;
publish_pr_update!(
&keys,
&relay_urls,
d_tag,
&pr_event_id, // Pass a reference to pr_event_id
updated_commit_id,
updated_clone_url,
Some(&dummy_build_manifest_id)
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_repository_state_event() {
use super::get_relay_urls;
use nostr_sdk::Keys;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-state";
let branch_name = "main";
let commit_id = "abcde12345abcde12345abcde12345abcde12345";
use nostr_sdk::EventId;
use std::str::FromStr;
let _dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_repo_state_picture.jpg",
"https://example.com/test_repo_state_banner.jpg",
"test_publish_repository_state_event_metadata",
).await;
publish_repository_state!(
&keys,
&relay_urls,
d_tag,
branch_name,
commit_id
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
#[serial]
async fn test_publish_repository_state_event_tr() {
use super::get_relay_urls;
use nostr_sdk::Keys;
use nostr_sdk::secp256k1::SecretKey as NostrSecretKey;
// 1. Generate FROST keys (2-of-2 for this test) and derive a Nostr key from one share
let (shares, _pubkey_package) = super::generate_frost_keys(2, 2).unwrap();
let signer_id = frost::Identifier::try_from(1 as u16).unwrap();
let secret_share = shares.get(&signer_id).unwrap();
// Convert FROST secret share's scalar to a Nostr SecretKey
let frost_secp_secret_key = secret_share.signing_share().to_scalar();
let nostr_secret_key = NostrSecretKey::from_slice(&frost_secp_secret_key.to_bytes()).unwrap();
let keys = Keys::new(nostr_secret_key.into());
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-state";
let branch_name = "main";
let commit_id = "abcde12345abcde12345abcde12345abcde12345";
use nostr_sdk::EventId;
use std::str::FromStr;
let _dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_repo_state_picture.jpg",
"https://example.com/test_repo_state_banner.jpg",
"test_publish_repository_state_event_metadata",
).await;
publish_repository_state!(
&keys,
&relay_urls,
d_tag,
branch_name,
commit_id
);
}
#[cfg(feature = "nostr")]
#[tokio::test]
async fn test_publish_issue_event_tr() {
use super::get_relay_urls;
use nostr_sdk::Keys;
use nostr_sdk::EventId;
use std::str::FromStr;
let keys = Keys::parse(super::DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr Keys from DEFAULT_GNOSTR_KEY");
let relay_urls = get_relay_urls();
let d_tag = "test-repo-for-issue";
let issue_id = "456";
let title = "Feature: Implement NIP-34 Issues";
let content = "This is a test issue to verify the NIP-34 issue macro implementation.";
let dummy_build_manifest_id = EventId::from_str(super::DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// This test primarily checks that the macro and function compile and execute without panicking.
// Actual publishing success depends on external network conditions.
super::publish_metadata_event(
&keys,
&relay_urls,
"https://example.com/test_issue_picture.jpg",
"https://example.com/test_issue_banner.jpg",
"test_publish_issue_event_metadata",
).await;
publish_issue!(
&keys,
&relay_urls,
d_tag,
issue_id,
title,
content,
Some(&dummy_build_manifest_id)
);
}
#[cfg(feature = "nostr")]
#[test]
fn test_frost_signature_flow_tr() {
let max_signers = 3;
let min_signers = 2;
let message = b"This is a test message for FROST signing";
// 1. Key Generation
let (shares, pubkey_package) = super::generate_frost_keys(max_signers, min_signers).unwrap();
let mut commitments = BTreeMap::new();
let mut nonces_map = BTreeMap::new();
let mut signature_shares_map = BTreeMap::new();
// 2. Commitment Phase (simulated for two signers)
let signer1_id = frost::Identifier::try_from(1 as u16).unwrap();
let (nonces1, comms1) = super::create_frost_commitment(&shares[&signer1_id]);
commitments.insert(signer1_id, comms1);
nonces_map.insert(signer1_id, nonces1);
let signer2_id = frost::Identifier::try_from(2 as u16).unwrap();
let (nonces2, comms2) = super::create_frost_commitment(&shares[&signer2_id]);
commitments.insert(signer2_id, comms2);
nonces_map.insert(signer2_id, nonces2);
// 3. Signing Package Creation
let signing_package = super::create_signing_package(commitments, message);
// 4. Signature Share Generation
let share1 = super::generate_signature_share(&signing_package, &nonces_map[&signer1_id], &shares[&signer1_id]).unwrap();
signature_shares_map.insert(signer1_id, share1);
let share2 = super::generate_signature_share(&signing_package, &nonces_map[&signer2_id], &shares[&signer2_id]).unwrap();
signature_shares_map.insert(signer2_id, share2);
// 5. Aggregation
let group_signature = super::aggregate_signature_shares(&signing_package, &signature_shares_map, &pubkey_package).unwrap();
// 6. Verification
let group_public_key = pubkey_package.verifying_key();
super::verify_frost_signature(group_public_key, message, &group_signature).unwrap();
}
}
#![cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
use frost::keys::PublicKeyPackage;
use frost::round2::SignatureShare;
use frost::SigningPackage;
use hex;
use rand::thread_rng;
use std::collections::BTreeMap;
use sha2::Sha256;
use serde_json;
use sha2::Digest;
pub fn process_relay_share(
relay_payload_hex: &str,
signer_id_u16: u16,
_signing_package: &SigningPackage,
_pubkey_package: &PublicKeyPackage,
) -> Result<(), Box<dyn std::error::Error>> {
// In a real scenario, this function would deserialize the share, perform
// individual verification, and store it for aggregation.
// For this example, we'll just acknowledge receipt.
let _share_bytes = hex::decode(relay_payload_hex)?;
let _share = SignatureShare::deserialize(&_share_bytes)?;
let _identifier = frost::Identifier::try_from(signer_id_u16)?;
println!("✅ Share from Signer {} processed (simplified).", signer_id_u16);
Ok(())
}
pub fn simulate_frost_mailbox_coordinator() -> Result<(), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let (max_signers, min_signers) = (2, 2);
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
let signer1_id = frost::Identifier::try_from(1 as u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2 as u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
let message = b"BIP-64MOD: Anchor Data Proposal v1";
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
let signing_package = frost::SigningPackage::new(session_commitments.clone(), message);
let share1 = frost::round2::sign(&signing_package, &nonces1, &key_package1)?;
let share1_hex = hex::encode(share1.serialize());
let share2 = frost::round2::sign(&signing_package, &nonces2, &key_package2)?;
let share2_hex = hex::encode(share2.serialize());
println!("Coordinator listening for Nostr events (simulated)...");
process_relay_share(&share1_hex, 1_u16, &signing_package, &pubkey_package)?;
process_relay_share(&share2_hex, 2_u16, &signing_package, &pubkey_package)?;
println!("All required shares processed. Coordinator would now aggregate.");
Ok(())
}
/// Simulates a Signer producing a FROST signature share and preparing a Nostr event
/// to be sent to a coordinator via a "mailbox" relay.
///
/// In a real ROAST setup, signers would generate their share and post it
/// encrypted (e.g., using NIP-44) to a coordinator's "mailbox" on a Nostr relay.
/// This function demonstrates the creation of the signature share and the
/// construction of a *simplified* Nostr event JSON.
///
/// # Arguments
///
/// * `_identifier` - The FROST identifier of the signer. (Currently unused in this specific function body).
/// * `signing_package` - The FROST signing package received from the coordinator.
/// * `nonces` - The signer's nonces generated in Round 1.
/// * `key_package` - The signer's FROST key package.
/// * `coordinator_pubkey` - The hex-encoded public key of the ROAST coordinator,
/// used to tag the Nostr event.
///
/// # Returns
///
/// A `Result` containing the JSON string of the Nostr event if successful,
/// or a `Box<dyn std::error::Error>` if an error occurs.
pub fn create_signer_event(
_identifier: frost::Identifier,
signing_package: &frost::SigningPackage,
nonces: &frost::round1::SigningNonces,
key_package: &frost::keys::KeyPackage,
coordinator_pubkey: &str, // The Hex pubkey of the ROAST coordinator
) -> Result<String, Box<dyn std::error::Error>> {
// 1. Generate the partial signature share (Round 2 of FROST)
// This share is the core cryptographic output from the signer.
let share = frost::round2::sign(signing_package, nonces, key_package)?;
let share_bytes = share.serialize();
let share_hex = hex::encode(share_bytes);
// 2. Create a Session ID to tag the event
// This ID is derived from the signing package hash, allowing the coordinator
// to correlate shares belonging to the same signing session.
let mut hasher = Sha256::new();
hasher.update(signing_package.serialize()?);
let session_id = hex::encode(hasher.finalize());
// 3. Construct the Nostr Event JSON (Simplified)
// This JSON represents the event that a signer would post to a relay.
// In a production ROAST system, the 'content' field (the signature share)
// would be encrypted for the coordinator using NIP-44.
let event = serde_json::json!({
"kind": 4, // Example: Using Kind 4 (Private Message), though custom Kinds could be used for Sovereign Stack.
"pubkey": hex::encode(key_package.verifying_key().serialize()?.as_slice()), // Signer's public key
"created_at": 1712050000, // Example timestamp
"tags": [
["p", coordinator_pubkey], // 'p' tag: Directs the event to the coordinator.
["i", session_id], // 'i' tag: Provides a session identifier for filtering/requests.
["t", "frost-signature-share"] // 't' tag: A searchable label for the event type.
],
"content": share_hex, // The actual signature share (would be encrypted in production).
"id": "...", // Event ID (filled by relay upon publishing)
"sig": "..." // Event signature (filled by relay upon publishing)
});
Ok(event.to_string())
}
pub fn simulate_frost_mailbox_post_signer() -> Result<(), Box<dyn std::error::Error>> {
use rand::thread_rng;
use std::collections::BTreeMap;
use frost_secp256k1_tr as frost;
// This example simulates a single signer's role in a ROAST mailbox post workflow.
// The general workflow is:
// 1. Coordinator sends a request for signatures (e.g., on a BIP-64MOD proposal).
// 2. Signers receive the proposal, perform local verification.
// 3. Each signer generates their signature share and posts it (encrypted) to a
// Nostr relay, targeting the coordinator's mailbox.
// 4. The coordinator collects enough shares to aggregate the final signature.
let mut rng = thread_rng();
// For this example, we simulate a 2-of-2 threshold for simplicity.
let (max_signers, min_signers) = (2, 2);
////////////////////////////////////////////////////////////////////////////
// 1. Key Generation (Simulated Trusted Dealer)
////////////////////////////////////////////////////////////////////////////
// In a real distributed setup, this would be DKG. Here, a "trusted dealer"
// generates the shares and public key package.
let (shares, _pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
// For a 2-of-2 scheme, we have two signers. Let's pick signer 1.
let signer1_id = frost::Identifier::try_from(1 as u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2 as u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
// The message that is to be signed (e.g., a hash of a Git commit or a Nostr event ID).
let message = b"This is a test message for ROAST mailbox post.";
////////////////////////////////////////////////////////////////////////////
// 2. Round 1: Commitment Phase (Signer's role)
////////////////////////////////////////////////////////////////////////////
// Each signer generates nonces and commitments.
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
// The coordinator collects these commitments. Here, we simulate by putting them in a BTreeMap.
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
////////////////////////////////////////////////////////////////////////////
// 3. Signing Package Creation (Coordinator's role, simulated for context)
////////////////////////////////////////////////////////////////////////////
// The coordinator combines the collected commitments and the message to be signed
// into a signing package, which is then sent back to the signers.
let signing_package = frost::SigningPackage::new(session_commitments, message);
// Dummy coordinator public key. In a real scenario, this would be the
// actual public key of the ROAST coordinator, used for event tagging
// and encryption (NIP-44).
let coordinator_pubkey_hex = "0000000000000000000000000000000000000000000000000000000000000001";
////////////////////////////////////////////////////////////////////////////
// 4. Create the Signer Event (Signer's role)
////////////////////////////////////////////////////////////////////////////
// We demonstrate for signer 1. Signer 2 would perform a similar action.
let event_json_signer1 = create_signer_event(
signer1_id,
&signing_package,
&nonces1,
&key_package1,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 1 Mailbox Post:
{}", event_json_signer1);
// Similarly, Signer 2 would generate their event:
let event_json_signer2 = create_signer_event(
signer2_id,
&signing_package,
&nonces2,
&key_package2,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 2 Mailbox Post:
{}", event_json_signer2);
Ok(())
}
#[cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
#[cfg(feature = "nostr")]
use rand::thread_rng;
#[cfg(feature = "nostr")]
use std::collections::BTreeMap;
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let max_signers = 3;
let min_signers = 2;
////////////////////////////////////////////////////////////////////////////
// Round 0: Key Generation (Trusted Dealer)
////////////////////////////////////////////////////////////////////////////
// In a real P2P setup, you'd use Distributed Key Generation (DKG).
// For local testing/simulations, the trusted dealer is faster.
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
// Verifying the public key exists
let group_public_key = pubkey_package.verifying_key();
println!("Group Public Key: {:?}", group_public_key);
////////////////////////////////////////////////////////////////////////////
// Round 1: Commitment
////////////////////////////////////////////////////////////////////////////
let message = b"BIP-64MOD Consensus Proposal";
let mut signing_commitments = BTreeMap::new();
let mut participant_nonces = BTreeMap::new();
// Participants 1 and 2 decide to sign
for i in 1..=min_signers {
let identifier = frost::Identifier::try_from(i as u16)?;
// Generate nonces and commitments
let (nonces, commitments) = frost::round1::commit(
shares[&identifier].signing_share(),
&mut rng,
);
signing_commitments.insert(identifier, commitments);
participant_nonces.insert(identifier, nonces);
}
////////////////////////////////////////////////////////////////////////////
// Round 2: Signing
////////////////////////////////////////////////////////////////////////////
let mut signature_shares = BTreeMap::new();
let signing_package = frost::SigningPackage::new(signing_commitments, message);
for i in 1..=min_signers {
let identifier = frost::Identifier::try_from(i as u16)?;
let nonces = &participant_nonces[&identifier];
// Each participant produces a signature share
let key_package: frost::keys::KeyPackage = shares[&identifier].clone().try_into()?;
let share = frost::round2::sign(&signing_package, nonces, &key_package)?;
signature_shares.insert(identifier, share);
}
////////////////////////////////////////////////////////////////////////////
// Finalization: Aggregation
////////////////////////////////////////////////////////////////////////////
let group_signature = frost::aggregate(
&signing_package,
&signature_shares,
&pubkey_package,
)?;
// Verification
group_public_key.verify(message, &group_signature)?;
println!("Threshold signature verified successfully!");
Ok(())
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example trusted-dealer --features nostr");
}
//! A command-line tool that renders the project `README.md` to stdout,
//! embedding compile-time SHA-256 hashes of key project files (including
//! its own source file).
//!
//! This utility demonstrates how to use the `get_file_hash!` macro to obtain
//! the hash of a specified file at compile time and incorporate it into
//! runtime logic.
use get_file_hash::{BUILD_HASH, CARGO_TOML_HASH, LIB_HASH};
use get_file_hash_core::get_file_hash;
use sha2::{Digest, Sha256};
const README_TEMPLATE_PART0: &str = r##"# `get_file_hash` macro
This project provides a Rust procedural macro, `get_file_hash!`, designed to compute the SHA-256 hash of a specified file at compile time. This hash is then embedded directly into your compiled executable. This feature is invaluable for:
* **Integrity Verification:** Ensuring the deployed code hasn't been tampered with.
* **Versioning:** Embedding a unique identifier linked to the exact source code version.
* **Cache Busting:** Generating unique names for assets based on their content.
## Project Structure
* `get_file_hash_core`: A foundational crate containing the `get_file_hash!` macro definition.
* `get_file_hash`: The main library crate that re-exports the macro.
* `src/bin/get_file_hash.rs`: An example executable demonstrating the macro's usage by hashing its own source file and updating this `README.md`.
* `build.rs`: A build script that also utilizes the `get_file_hash!` macro to hash `Cargo.toml` during the build process.
## Usage of `get_file_hash!` Macro
To use the `get_file_hash!` macro, ensure you have `get_file_hash` (or `get_file_hash_core` for direct usage) as a dependency in your `Cargo.toml`.
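For example, with a local workspace checkout (the path below is an assumption; a published release would use a version requirement instead):
```toml
[dependencies]
get_file_hash = { path = "../get_file_hash" }
sha2 = "0.10"
```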
### Example
```rust
use get_file_hash::get_file_hash;
use get_file_hash::CARGO_TOML_HASH;
use sha2::{Digest, Sha256};
fn main() {
// The macro resolves the path relative to CARGO_MANIFEST_DIR
let readme_hash = get_file_hash!("src/bin/readme.rs");
let lib_hash = get_file_hash!("src/lib.rs");
println!("The SHA-256 hash of src/lib.rs is: {}", lib_hash);
println!("The SHA-256 hash of src/bin/readme.rs is: {}", readme_hash);
println!("The SHA-256 hash of Cargo.toml is: {}", CARGO_TOML_HASH);
}
```
"##;
const README_TEMPLATE_PART1: &str = r"## Release
## [`README.md`](./README.md)
```bash
cargo run --bin readme > README.md
```
## [`src/bin/readme.rs`](src/bin/readme.rs)
* **Target File:** `src/bin/readme.rs`
";
const README_TEMPLATE_PART2: &str = r"##
## [`build.rs`](build.rs)
* **Target File:** `build.rs`
";
const README_TEMPLATE_PART3: &str = r"##
## [`Cargo.toml`](Cargo.toml)
* **Target File:** `Cargo.toml`
";
const README_TEMPLATE_PART4: &str = r"##
## [`src/lib.rs`](src/lib.rs)
* **Target File:** `src/lib.rs`
";
const README_TEMPLATE_PART_NIP34: &str = r"## NIP-34 Integration: Git Repository Events on Nostr
This library provides a set of powerful macros and functions for integrating Git repository events with the Nostr protocol, adhering to the [NIP-34: Git Repositories on Nostr](https://github.com/nostr-protocol/nips/blob/master/34.md) specification.
These tools allow you to publish various Git-related events to Nostr relays, enabling decentralized tracking and collaboration for your code repositories.
### Available NIP-34 Macros
Each macro provides a convenient way to publish specific NIP-34 event kinds:
* [`repository_announcement!`](#repository_announcement)
* Publishes a `Repository Announcement` event (Kind 30617) to announce a new or updated Git repository.
* [`publish_patch!`](#publish_patch)
* Publishes a `Patch` event (Kind 1617) containing a Git patch (diff) for a specific commit.
* [`publish_pull_request!`](#publish_pull_request)
* Publishes a `Pull Request` event (Kind 1618) to propose changes and facilitate code review.
* [`publish_pr_update!`](#publish_pr_update)
* Publishes a `Pull Request Update` event (Kind 1619) to update an existing pull request.
* [`publish_repository_state!`](#publish_repository_state)
* Publishes a `Repository State` event (Kind 1620) to announce the current state of a branch (e.g., its latest commit).
* [`publish_issue!`](#publish_issue)
* Publishes an `Issue` event (Kind 1621) to report bugs, request features, or track tasks.
### Running NIP-34 Examples
To see these macros in action, navigate to the `examples/` directory and run each example individually with the `nostr` feature enabled:
```bash
cargo run --example repository_announcement --features nostr
cargo run --example publish_patch --features nostr
cargo run --example publish_pull_request --features nostr
cargo run --example publish_pr_update --features nostr
cargo run --example publish_repository_state --features nostr
cargo run --example publish_issue --features nostr
```
";
/// The main entry point of the application.
///
/// This function obtains SHA-256 hashes of the project's key files
/// (`readme.rs`, `build.rs`, `Cargo.toml`, and `src/lib.rs`) via the
/// `get_file_hash!` macro, performs a basic integrity check on each, and
/// prints the assembled `README.md` to the console.
fn main() {
// Calculate the SHA-256 hash of the current file (`readme.rs`) at
// compile time. The `get_file_hash!` macro reads the file content and
// computes its hash.
let self_hash = get_file_hash!("readme.rs");
// The SHA-256 digest of empty input begins with e3b0c442..., so this
// prefix indicates the hashed file was empty. The trailing period is
// supplied by the "{}." format strings below.
let status_message = if self_hash.starts_with("e3b0") {
"Warning: This hash represents an empty file"
} else {
"Integrity Verified"
};
let build_message = if BUILD_HASH.starts_with("e3b0") {
"Warning: This hash represents an empty file"
} else {
"Integrity Verified"
};
let cargo_message = if CARGO_TOML_HASH.starts_with("e3b0") {
"Warning: This hash represents an empty file"
} else {
"Integrity Verified"
};
let lib_message = if LIB_HASH.starts_with("e3b0") {
"Warning: This hash represents an empty file"
} else {
"Integrity Verified"
};
print!("{}{}{}", README_TEMPLATE_PART0, README_TEMPLATE_PART1, README_TEMPLATE_PART_NIP34);
println!("* **SHA-256 Hash:** {}", self_hash);
println!("* **Status:** {}.\n", status_message);
//
print!("{}", README_TEMPLATE_PART2);
println!("* **SHA-256 Hash:** {}", BUILD_HASH);
println!("* **Status:** {}.\n", build_message);
//
print!("{}", README_TEMPLATE_PART3);
println!("* **SHA-256 Hash:** {}", CARGO_TOML_HASH);
println!("* **Status:** {}.\n", cargo_message);
//
print!("{}", README_TEMPLATE_PART4);
println!("* **SHA-256 Hash:** {}", LIB_HASH);
println!("* **Status:** {}.\n", lib_message);
}
#[tokio::main]
#[cfg(feature = "nostr")]
#[allow(unused_imports)]
async fn main() {
use get_file_hash_core::repository_announcement;
use get_file_hash_core::get_file_hash;
use nostr_sdk::Keys;
use sha2::{Digest, Sha256};
use nostr_sdk::EventId;
use std::str::FromStr;
let keys = Keys::generate();
let relay_urls = get_file_hash_core::get_relay_urls();
let project_name = "my-awesome-repo-example";
let description = "A fantastic new project example.";
let clone_url = "git@github.com:user/my-awesome-repo-example.git";
// Dummy EventId for examples that require a build_manifest_event_id
const DUMMY_BUILD_MANIFEST_ID_STR: &str = "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// Example 1: Without build_manifest_event_id
println!("Publishing repository announcement without build_manifest_event_id...");
repository_announcement!(
&keys,
&relay_urls,
project_name,
description,
clone_url,
"../Cargo.toml" // Use a known file in your project
);
println!("Repository announcement without build_manifest_event_id published.");
// Example 2: With build_manifest_event_id
println!("Publishing repository announcement with build_manifest_event_id...");
repository_announcement!(
&keys,
&relay_urls,
project_name,
description,
clone_url,
"../Cargo.toml", // Use a known file in your project
Some(&dummy_build_manifest_id)
);
println!("Repository announcement with build_manifest_event_id published.");
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example repository_announcement --features nostr");
}
#[tokio::main]
#[cfg(feature = "nostr")]
async fn main() {
use get_file_hash_core::publish_patch;
use nostr_sdk::Keys;
use nostr_sdk::EventId;
use std::str::FromStr;
let keys = Keys::generate();
let relay_urls = get_file_hash_core::get_relay_urls();
let d_tag = "my-awesome-repo-example";
let commit_id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0"; // Example commit ID
// Dummy EventId for examples that require a build_manifest_event_id
const DUMMY_BUILD_MANIFEST_ID_STR: &str = "f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0";
let dummy_build_manifest_id = EventId::from_str(DUMMY_BUILD_MANIFEST_ID_STR).unwrap();
// Example 1: Without build_manifest_event_id
println!("Publishing patch without build_manifest_event_id...");
publish_patch!(
&keys,
&relay_urls,
d_tag,
commit_id,
"../Cargo.toml" // Use an existing file for the patch content
);
println!("Patch without build_manifest_event_id published.");
// Example 2: With build_manifest_event_id
println!("Publishing patch with build_manifest_event_id...");
publish_patch!(
&keys,
&relay_urls,
d_tag,
commit_id,
"../Cargo.toml", // Use an existing file for the patch content
Some(&dummy_build_manifest_id)
);
println!("Patch with build_manifest_event_id published.");
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example publish_patch --features nostr");
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}
#![cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
use frost::keys::PublicKeyPackage;
use frost::round2::SignatureShare;
use frost::SigningPackage;
use hex;
use rand::thread_rng;
use std::collections::BTreeMap;
use sha2::Sha256;
use serde_json;
use sha2::Digest;
pub fn process_relay_share(
relay_payload_hex: &str,
signer_id_u16: u16,
_signing_package: &SigningPackage,
_pubkey_package: &PublicKeyPackage,
) -> Result<(), Box<dyn std::error::Error>> {
// In a real scenario, this function would deserialize the share, perform
// individual verification, and store it for aggregation.
// For this example, we'll just acknowledge receipt.
let _share_bytes = hex::decode(relay_payload_hex)?;
let _share = SignatureShare::deserialize(&_share_bytes)?;
let _identifier = frost::Identifier::try_from(signer_id_u16)?;
println!("✅ Share from Signer {} processed (simplified).", signer_id_u16);
Ok(())
}
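Before a share can be deserialized, the relay payload has to be valid hex of even length. The decoding that `process_relay_share` delegates to the `hex` crate can be sketched with only the standard library (the helper name `decode_hex` is illustrative, not part of the crate):

```rust
// Minimal std-only hex decoding sketch, mirroring the validation the
// `hex` crate performs on a relay payload before share deserialization.
fn decode_hex(s: &str) -> Option<Vec<u8>> {
    // Reject odd-length or non-ASCII input up front so byte slicing is safe.
    if s.len() % 2 != 0 || !s.is_ascii() {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect() // Iterator<Item = Option<u8>> -> Option<Vec<u8>>
}

fn main() {
    assert_eq!(decode_hex("aa11"), Some(vec![0xaa, 0x11]));
    assert_eq!(decode_hex("abc"), None); // odd length
    assert_eq!(decode_hex("zz00"), None); // non-hex digits
    println!("hex decode sketch ok");
}
```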
pub fn simulate_frost_mailbox_coordinator() -> Result<(), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let (max_signers, min_signers) = (2, 2);
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
let signer1_id = frost::Identifier::try_from(1u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
let message = b"BIP-64MOD: Anchor Data Proposal v1";
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
let signing_package = frost::SigningPackage::new(session_commitments.clone(), message);
let share1 = frost::round2::sign(&signing_package, &nonces1, &key_package1)?;
let share1_hex = hex::encode(share1.serialize());
let share2 = frost::round2::sign(&signing_package, &nonces2, &key_package2)?;
let share2_hex = hex::encode(share2.serialize());
println!("Coordinator listening for Nostr events (simulated)...");
process_relay_share(&share1_hex, 1_u16, &signing_package, &pubkey_package)?;
process_relay_share(&share2_hex, 2_u16, &signing_package, &pubkey_package)?;
println!("All required shares processed. Coordinator would now aggregate.");
Ok(())
}
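The aggregation trigger the coordinator would apply can be sketched without any FROST types: shares are buffered per session id, and a session becomes ready once `min_signers` distinct signers have posted. The function name `ready_sessions` and the payloads below are illustrative, not part of the crate:

```rust
use std::collections::{BTreeMap, HashMap};

// Buffer incoming (session_id, signer_id, share_hex) posts; report each
// session the moment it reaches the signing threshold.
fn ready_sessions(posts: &[(&str, u16, &str)], min_signers: usize) -> Vec<String> {
    let mut mailbox: HashMap<String, BTreeMap<u16, String>> = HashMap::new();
    let mut ready = Vec::new();
    for (session, signer, share_hex) in posts {
        let entry = mailbox.entry(session.to_string()).or_default();
        entry.insert(*signer, share_hex.to_string()); // BTreeMap dedupes per signer
        if entry.len() == min_signers {
            ready.push(session.to_string());
        }
    }
    ready
}

fn main() {
    let posts = [
        ("sess-a", 1u16, "aa11"),
        ("sess-b", 1u16, "cc33"),
        ("sess-a", 2u16, "bb22"),
    ];
    // Only sess-a collects 2 shares, so only it is ready for aggregation.
    assert_eq!(ready_sessions(&posts, 2), vec!["sess-a".to_string()]);
    println!("mailbox sketch ok");
}
```

In the real coordinator, "ready" would hand the collected `SignatureShare`s to `frost::aggregate` together with the signing package and public key package.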
/// Simulates a Signer producing a FROST signature share and preparing a Nostr event
/// to be sent to a coordinator via a "mailbox" relay.
///
/// In a real ROAST setup, signers would generate their share and post it
/// encrypted (e.g., using NIP-44) to a coordinator's "mailbox" on a Nostr relay.
/// This function demonstrates the creation of the signature share and the
/// construction of a *simplified* Nostr event JSON.
///
/// # Arguments
///
/// * `_identifier` - The FROST identifier of the signer. (Currently unused in this specific function body).
/// * `signing_package` - The FROST signing package received from the coordinator.
/// * `nonces` - The signer's nonces generated in Round 1.
/// * `key_package` - The signer's FROST key package.
/// * `coordinator_pubkey` - The hex-encoded public key of the ROAST coordinator,
/// used to tag the Nostr event.
///
/// # Returns
///
/// A `Result` containing the JSON string of the Nostr event if successful,
/// or a `Box<dyn std::error::Error>` if an error occurs.
pub fn create_signer_event(
_identifier: frost::Identifier,
signing_package: &frost::SigningPackage,
nonces: &frost::round1::SigningNonces,
key_package: &frost::keys::KeyPackage,
coordinator_pubkey: &str, // The Hex pubkey of the ROAST coordinator
) -> Result<String, Box<dyn std::error::Error>> {
// 1. Generate the partial signature share (Round 2 of FROST)
// This share is the core cryptographic output from the signer.
let share = frost::round2::sign(signing_package, nonces, key_package)?;
let share_bytes = share.serialize();
let share_hex = hex::encode(share_bytes);
// 2. Create a Session ID to tag the event
// This ID is derived from the signing package hash, allowing the coordinator
// to correlate shares belonging to the same signing session.
let mut hasher = Sha256::new();
hasher.update(signing_package.serialize()?);
let session_id = hex::encode(hasher.finalize());
// 3. Construct the Nostr Event JSON (Simplified)
// This JSON represents the event that a signer would post to a relay.
// In a production ROAST system, the 'content' field (the signature share)
// would be encrypted for the coordinator using NIP-44.
let event = serde_json::json!({
"kind": 4, // Example: Kind 4 (NIP-04 encrypted DM, now deprecated); a custom Kind would suit Sovereign Stack better.
"pubkey": hex::encode(key_package.verifying_key().serialize()?.as_slice()), // Group verifying key used as a placeholder; a real signer would publish under its own Nostr pubkey.
"created_at": 1712050000, // Example timestamp (Unix seconds)
"tags": [
["p", coordinator_pubkey], // 'p' tag: Directs the event to the coordinator.
["i", session_id], // 'i' tag: Provides a session identifier for filtering/requests.
["t", "frost-signature-share"] // 't' tag: A searchable label for the event type.
],
"content": share_hex, // The actual signature share (would be encrypted in production).
"id": "...", // Event ID (SHA-256 of the serialized event, computed client-side before publishing)
"sig": "..." // Schnorr signature over the event ID (created by the signer's client, not the relay)
});
Ok(event.to_string())
}
pub fn simulate_frost_mailbox_post_signer() -> Result<(), Box<dyn std::error::Error>> {
use rand::thread_rng;
use std::collections::BTreeMap;
use frost_secp256k1_tr as frost;
// This example simulates a single signer's role in a ROAST mailbox post workflow.
// The general workflow is:
// 1. Coordinator sends a request for signatures (e.g., on a BIP-64MOD proposal).
// 2. Signers receive the proposal, perform local verification.
// 3. Each signer generates their signature share and posts it (encrypted) to a
// Nostr relay, targeting the coordinator's mailbox.
// 4. The coordinator collects enough shares to aggregate the final signature.
let mut rng = thread_rng();
// For this example, we simulate a 2-of-2 threshold for simplicity.
let (max_signers, min_signers) = (2, 2);
////////////////////////////////////////////////////////////////////////////
// 1. Key Generation (Simulated Trusted Dealer)
////////////////////////////////////////////////////////////////////////////
// In a real distributed setup, this would be DKG. Here, a "trusted dealer"
// generates the shares and public key package.
let (shares, _pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
// For a 2-of-2 scheme, we have two signers. Let's pick signer 1.
let signer1_id = frost::Identifier::try_from(1u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
// The message that is to be signed (e.g., a hash of a Git commit or a Nostr event ID).
let message = b"This is a test message for ROAST mailbox post.";
////////////////////////////////////////////////////////////////////////////
// 2. Round 1: Commitment Phase (Signer's role)
////////////////////////////////////////////////////////////////////////////
// Each signer generates nonces and commitments.
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
// The coordinator collects these commitments. Here, we simulate by putting them in a BTreeMap.
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
////////////////////////////////////////////////////////////////////////////
// 3. Signing Package Creation (Coordinator's role, simulated for context)
////////////////////////////////////////////////////////////////////////////
// The coordinator combines the collected commitments and the message to be signed
// into a signing package, which is then sent back to the signers.
let signing_package = frost::SigningPackage::new(session_commitments, message);
// Dummy coordinator public key. In a real scenario, this would be the
// actual public key of the ROAST coordinator, used for event tagging
// and encryption (NIP-44).
let coordinator_pubkey_hex = "0000000000000000000000000000000000000000000000000000000000000001";
////////////////////////////////////////////////////////////////////////////
// 4. Create the Signer Event (Signer's role)
////////////////////////////////////////////////////////////////////////////
// We demonstrate for signer 1. Signer 2 would perform a similar action.
let event_json_signer1 = create_signer_event(
signer1_id,
&signing_package,
&nonces1,
&key_package1,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 1 Mailbox Post:\n{}", event_json_signer1);
// Similarly, Signer 2 would generate their event:
let event_json_signer2 = create_signer_event(
signer2_id,
&signing_package,
&nonces2,
&key_package2,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 2 Mailbox Post:\n{}", event_json_signer2);
Ok(())
}
// deterministic nostr event build example (build.rs)
use get_file_hash_core::get_file_hash;
#[cfg(all(not(debug_assertions), feature = "nostr"))]
use get_file_hash_core::{get_git_tracked_files, DEFAULT_GNOSTR_KEY, DEFAULT_PICTURE_URL, DEFAULT_BANNER_URL};
#[cfg(all(not(debug_assertions), feature = "nostr"))]
use nostr_sdk::{EventBuilder, Keys, EventId, Tag, SecretKey, JsonUtil, Kind, Event};
#[cfg(all(not(debug_assertions), feature = "nostr"))]
use std::fs;
use std::path::PathBuf;
use sha2::{Digest, Sha256};
#[cfg(all(not(debug_assertions), feature = "nostr"))]
use ::hex;
#[cfg(all(not(debug_assertions), feature = "nostr"))]
use std::io::Write;
#[cfg(all(not(debug_assertions), feature = "nostr"))]
fn should_remove_relay(error_msg: &str) -> bool {
error_msg.contains("relay not connected") ||
error_msg.contains("not in web of trust") ||
error_msg.contains("blocked: not authorized") ||
error_msg.contains("timeout") ||
error_msg.contains("blocked: spam not permitted") ||
error_msg.contains("relay experienced an error trying to publish the latest event") ||
error_msg.contains("duplicate: event already broadcast")
}
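`should_remove_relay` is a substring predicate over relay error messages; the publishing code below feeds it the `(relay, error)` pairs from a failed send and prunes the relay list with `Vec::retain`. A standalone sketch of that flow (relay URLs and error strings here are made up for illustration):

```rust
// Same predicate shape as should_remove_relay above, reduced to two patterns.
fn should_remove_relay(error_msg: &str) -> bool {
    error_msg.contains("timeout") || error_msg.contains("blocked: not authorized")
}

fn main() {
    let mut relay_urls = vec![
        "wss://relay-a.example".to_string(),
        "wss://relay-b.example".to_string(),
    ];
    // Hypothetical (relay, error) pairs, as reported in `event_output.failed`.
    let failed = [("wss://relay-a.example", "timeout")];
    let to_remove: Vec<String> = failed
        .iter()
        .filter(|(_, msg)| should_remove_relay(msg))
        .map(|(url, _)| url.to_string())
        .collect();
    // Drop every relay flagged as unresponsive or unfriendly.
    relay_urls.retain(|url| !to_remove.contains(url));
    assert_eq!(relay_urls, vec!["wss://relay-b.example".to_string()]);
    println!("pruned to {} relay(s)", relay_urls.len());
}
```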
#[cfg(all(not(debug_assertions), feature = "nostr"))]
fn write_event_json_to_file(
output_dir: &PathBuf,
filename: &str,
event: &Event,
) -> Option<()> {
let file_path = output_dir.join(filename);
if let Some(parent) = file_path.parent() {
if let Err(e) = fs::create_dir_all(parent) {
println!("cargo:warning=Failed to create parent directories for {}: {}", file_path.display(), e);
return None;
}
}
if let Err(e) = fs::File::create(&file_path).and_then(|mut file| write!(file, "{}", event.as_json())) {
println!("cargo:warning=Failed to write event JSON to file {}: {}", file_path.display(), e);
None
} else {
println!("cargo:warning=Successfully wrote event JSON to {}", file_path.display());
Some(())
}
}
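`write_event_json_to_file` receives a `filename` that already contains slashes, so each event lands under a nested layout of the form `{file-or-kind}/{hash}/{pubkey}/{event_id}.json` beneath the output directory. A std-only sketch of that path construction (all values are placeholders):

```rust
use std::path::PathBuf;

fn main() {
    // Placeholder values standing in for the real hash, pubkey, and event id.
    let output_dir = PathBuf::from(".gnostr/build/0.1.0");
    let (file, hash, pubkey, event_id) = ("src/lib.rs", "deadbeef", "npubhex", "eventhex");
    // Matches the format! layout used before calling write_event_json_to_file.
    let filename = format!("{}/{}/{}/{}.json", file, hash, pubkey, event_id);
    let full = output_dir.join(&filename);
    // The file path itself ("src/lib.rs") becomes a directory prefix.
    assert!(full.ends_with("src/lib.rs/deadbeef/npubhex/eventhex.json"));
    println!("{}", full.display());
}
```

This is why the writer calls `fs::create_dir_all` on the parent first: every component of the nested layout must exist before the JSON file is created.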
#[cfg(all(not(debug_assertions), feature = "nostr"))]
async fn publish_nostr_event_if_release(
hash: String,
keys: Keys,
event_builder: EventBuilder,
mut relay_urls: Vec<String>,
file_path_str: &str,
output_dir: &PathBuf,
) -> Option<EventId> {
let client = nostr_sdk::Client::new(keys.clone());
let public_key = keys.public_key().to_string();
for relay_url in &relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay {}: {}", relay_url, e);
}
}
println!("cargo:warning=Added {} relays", relay_urls.len());
client.connect().await;
println!("cargo:warning=Connected to {} relays", relay_urls.len());
let event = client.sign_event_builder(event_builder).await.unwrap();
match client.send_event(&event).await { Ok(event_output) => {
println!("cargo:warning=Published Nostr event for {}: {}", file_path_str, event_output.val);
// Print successful relays
for relay_url in event_output.success.iter() {
println!("cargo:warning=Successfully published to relay: {}", relay_url);
}
// Print failed relays and remove "unfriendly" relays from the list
let mut relays_to_remove: Vec<String> = Vec::new();
for (relay_url, error_msg) in event_output.failed.iter() {
if should_remove_relay(error_msg) {
relays_to_remove.push(relay_url.to_string());
}
}
// Remove failed relays from the list
relay_urls.retain(|url| !relays_to_remove.contains(url));
if !relays_to_remove.is_empty() {
println!("cargo:warning=Removed {} unresponsive relays from the list.", relays_to_remove.len());
}
let filename = format!("{}/{}/{}/{}.json", file_path_str, hash, public_key, event_output.val);
write_event_json_to_file(output_dir, &filename, &event);
Some(event_output.val)
},
Err(e) => {
println!("cargo:warning=Failed to publish Nostr event for {}: {}", file_path_str, e);
None
},
}
}
#[cfg(all(not(debug_assertions), feature = "nostr"))]
pub async fn get_repo_announcement_event(
keys: &Keys,
relay_urls: &Vec<String>,
repo_url: &str,
repo_name: &str,
repo_description: &str,
git_commit_hash: &str,
git_branch: &str,
output_dir: &PathBuf,
public_key_hex: &str,
) -> Option<EventId> {
let client = nostr_sdk::Client::new(keys.clone());
for relay_url in relay_urls {
if let Err(e) = client.add_relay(relay_url).await {
println!("cargo:warning=Failed to add relay {}: {}", relay_url, e);
}
}
client.connect().await;
let tags = vec![
Tag::parse(["r", repo_url].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
Tag::parse(["name", repo_name].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
Tag::parse(["description", repo_description].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
Tag::parse(["commit", git_commit_hash].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
Tag::parse(["branch", git_branch].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
];
let event_builder = EventBuilder::new(Kind::Custom(30617), repo_description).tags(tags);
let event = client.sign_event_builder(event_builder).await.unwrap();
match client.send_event(&event).await {
Ok(event_output) => {
println!("cargo:warning=Published Nostr Repository Announcement for {}: {}", repo_name, event_output.val);
let filename = format!("30617/{}/{}/{}.json", repo_name, public_key_hex, event_output.val.to_string());
write_event_json_to_file(output_dir, &filename, &event);
Some(event_output.val)
},
Err(e) => {
println!("cargo:warning=Failed to publish Nostr Repository Announcement for {}: {}", repo_name, e);
None
},
}
}
#[tokio::main]
async fn main() {
let manifest_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
let is_git_repo = std::path::Path::new(&manifest_dir).join(".git").exists();
println!("cargo:rustc-env=CARGO_PKG_NAME={}", env!("CARGO_PKG_NAME"));
println!("cargo:rustc-env=CARGO_PKG_VERSION={}", env!("CARGO_PKG_VERSION"));
if is_git_repo {
let git_commit_hash_output = std::process::Command::new("git")
.args(&["rev-parse", "HEAD"])
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.output()
.expect("Failed to execute git command for commit hash");
let git_commit_hash_str = if git_commit_hash_output.status.success() && !git_commit_hash_output.stdout.is_empty() {
String::from_utf8(git_commit_hash_output.stdout).unwrap().trim().to_string()
} else {
println!("cargo:warning=Git commit hash command failed or returned empty. Status: {:?}, Stderr: {}",
git_commit_hash_output.status, String::from_utf8_lossy(&git_commit_hash_output.stderr));
String::new()
};
println!("cargo:rustc-env=GIT_COMMIT_HASH={}", git_commit_hash_str);
let git_branch_output = std::process::Command::new("git")
.args(&["rev-parse", "--abbrev-ref", "HEAD"])
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.output()
.expect("Failed to execute git command for branch name");
let git_branch_str = if git_branch_output.status.success() && !git_branch_output.stdout.is_empty() {
String::from_utf8(git_branch_output.stdout).unwrap().trim().to_string()
} else {
println!("cargo:warning=Git branch command failed or returned empty. Status: {:?}, Stderr: {}",
git_branch_output.status, String::from_utf8_lossy(&git_branch_output.stderr));
String::new()
};
println!("cargo:rustc-env=GIT_BRANCH={}", git_branch_str);
} else {
println!("cargo:rustc-env=GIT_COMMIT_HASH=");
println!("cargo:rustc-env=GIT_BRANCH=");
}
println!("cargo:rerun-if-changed=.git/HEAD");
#[cfg(all(not(debug_assertions), feature = "nostr"))]
let relay_urls = get_file_hash_core::get_relay_urls();
let cargo_toml_hash = get_file_hash!("Cargo.toml");
println!("cargo:rustc-env=CARGO_TOML_HASH={}", cargo_toml_hash);
let lib_hash = get_file_hash!("src/lib.rs");
println!("cargo:rustc-env=LIB_HASH={}", lib_hash);
let build_hash = get_file_hash!("build.rs");
println!("cargo:rustc-env=BUILD_HASH={}", build_hash);
println!("cargo:rerun-if-changed=Cargo.toml");
println!("cargo:rerun-if-changed=src/lib.rs");
println!("cargo:rerun-if-changed=build.rs");
let online_relays_csv_path = PathBuf::from(&manifest_dir).join("src/get_file_hash_core/src/online_relays_gps.csv");
if online_relays_csv_path.exists() {
println!("cargo:rerun-if-changed={}", online_relays_csv_path.to_str().unwrap());
}
#[cfg(all(not(debug_assertions), feature = "nostr"))]
if cfg!(not(debug_assertions)) { // Redundant with the cfg attribute above; kept as a defensive runtime guard.
println!("cargo:warning=Nostr feature enabled: Build may take longer due to network operations (publishing events to relays).");
// This code only runs in release builds
let package_version = std::env::var("CARGO_PKG_VERSION").unwrap();
let output_dir = PathBuf::from(format!(".gnostr/build/{}", package_version));
if let Err(e) = fs::create_dir_all(&output_dir) {
println!("cargo:warning=Failed to create output directory {}: {}", output_dir.display(), e);
}
let files_to_publish: Vec<String> = get_git_tracked_files(&PathBuf::from(&manifest_dir));
let mut published_event_ids: Vec<Tag> = Vec::new();
for file_path_str in &files_to_publish {
println!("cargo:warning=Processing file: {}", file_path_str);
match fs::read(file_path_str) {
Ok(bytes) => {
let mut hasher = Sha256::new();
hasher.update(&bytes);
let result = hasher.finalize();
let file_hash_hex = hex::encode(result);
match SecretKey::from_hex(&file_hash_hex.clone()) {
Ok(secret_key) => {
let keys = Keys::new(secret_key);
let content = String::from_utf8_lossy(&bytes).into_owned();
let tags = vec![
Tag::parse(["file", file_path_str].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
Tag::parse(["version", &package_version].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
];
let event_builder = EventBuilder::text_note(content).tags(tags);
if let Some(event_id) = publish_nostr_event_if_release(file_hash_hex, keys.clone(), event_builder, relay_urls.clone(), file_path_str, &output_dir).await {
published_event_ids.push(Tag::event(event_id));
}
// Publish metadata event
get_file_hash_core::publish_metadata_event(
&keys,
&relay_urls,
DEFAULT_PICTURE_URL,
DEFAULT_BANNER_URL,
file_path_str,
).await;
}
Err(e) => {
println!("cargo:warning=Failed to derive Nostr secret key for {}: {}", file_path_str, e);
}
}
}
Err(e) => {
println!("cargo:warning=Failed to read file {}: {}", file_path_str, e);
}
}
}
// Create and publish the build_manifest
if !published_event_ids.is_empty() {
//TODO this will be either the default or detected from env vars PRIVATE_KEY
let keys = Keys::new(SecretKey::from_hex(DEFAULT_GNOSTR_KEY).expect("Failed to create Nostr keys from DEFAULT_GNOSTR_KEY"));
let cloned_keys = keys.clone();
let content = format!("Build manifest for get_file_hash v{}", package_version);
let mut tags = vec![
Tag::parse(["build_manifest", &package_version].iter().map(ToString::to_string).collect::<Vec<String>>()).unwrap(),
];
tags.extend(published_event_ids);
let event_builder = EventBuilder::text_note(content.clone()).tags(tags);
if let Some(event_id) = publish_nostr_event_if_release(
hex::encode(Sha256::digest(content.as_bytes())),
keys,
event_builder,
relay_urls.clone(),
"build_manifest.json",
&output_dir,
).await {
let build_manifest_event_id = Some(event_id);
// Publish metadata event for the build manifest
get_file_hash_core::publish_metadata_event(
&cloned_keys, // Use reference to cloned keys here
&relay_urls,
DEFAULT_PICTURE_URL,
DEFAULT_BANNER_URL,
&format!("build_manifest:{}", package_version),
).await;
let git_commit_hash = std::env::var("GIT_COMMIT_HASH").unwrap_or_default();
let git_branch = std::env::var("GIT_BRANCH").unwrap_or_default();
let repo_url = std::env::var("CARGO_PKG_REPOSITORY").unwrap();
let repo_name = std::env::var("CARGO_PKG_NAME").unwrap();
let repo_description = std::env::var("CARGO_PKG_DESCRIPTION").unwrap();
let output_dir = PathBuf::from(format!(".gnostr/build/{}", package_version));
if let Err(e) = fs::create_dir_all(&output_dir) {
println!("cargo:warning=Failed to create output directory {}: {}", output_dir.display(), e);
}
let announcement_keys = Keys::new(SecretKey::from_hex(build_manifest_event_id.unwrap().to_hex().as_str()).expect("Failed to create Nostr keys from build_manifest_event_id"));
let announcement_pubkey_hex = announcement_keys.public_key().to_string();
// Publish NIP-34 Repository Announcement
if let Some(_event_id) = get_repo_announcement_event(
&announcement_keys,
&relay_urls,
&repo_url,
&repo_name,
&repo_description,
&git_commit_hash,
&git_branch,
&output_dir,
&announcement_pubkey_hex
).await {
// Successfully published announcement
}
}
}
}
}
# `get_file_hash` macro
This project provides a Rust procedural macro, `get_file_hash!`, that computes the SHA-256 hash of a specified file at compile time and embeds the result directly into your compiled executable. This is useful for:
* **Integrity Verification:** Ensuring the deployed code hasn't been tampered with.
* **Versioning:** Embedding a unique identifier linked to the exact source code version.
* **Cache Busting:** Generating unique names for assets based on their content.
## Project Structure
* `get_file_hash_core`: A foundational crate containing the `get_file_hash!` macro definition.
* `get_file_hash`: The main library crate that re-exports the macro.
* `src/bin/get_file_hash.rs`: An example executable demonstrating the macro's usage by hashing its own source file and updating this `README.md`.
* `build.rs`: A build script that also utilizes the `get_file_hash!` macro to hash `Cargo.toml` during the build process.
## Usage of `get_file_hash!` Macro
To use the `get_file_hash!` macro, ensure you have `get_file_hash` (or `get_file_hash_core` for direct usage) as a dependency in your `Cargo.toml`.
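For example (the version below is illustrative; pin to the release you actually use):

```toml
[dependencies]
get_file_hash = "0.2"
```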
### Example
```rust
use get_file_hash::get_file_hash;
use get_file_hash::CARGO_TOML_HASH;
use sha2::{Digest, Sha256};
fn main() {
    // The macro resolves the path relative to CARGO_MANIFEST_DIR.
    let readme_hash = get_file_hash!("src/bin/readme.rs");
    let lib_hash = get_file_hash!("src/lib.rs");
    println!("The SHA-256 hash of src/lib.rs is: {}", lib_hash);
    println!("The SHA-256 hash of src/bin/readme.rs is: {}", readme_hash);
    println!("The SHA-256 hash of Cargo.toml is: {}", CARGO_TOML_HASH);
}
```
## Release
## [`README.md`](./README.md)
```bash
cargo run --bin readme > README.md
```
## [`src/bin/readme.rs`](src/bin/readme.rs)
* **Target File:** `src/bin/readme.rs`
## NIP-34 Integration: Git Repository Events on Nostr
This library provides a set of powerful macros and functions for integrating Git repository events with the Nostr protocol, adhering to the [NIP-34: Git Repositories on Nostr](https://github.com/nostr-protocol/nips/blob/master/34.md) specification.
These tools allow you to publish various Git-related events to Nostr relays, enabling decentralized tracking and collaboration for your code repositories.
### Available NIP-34 Macros
Each macro provides a convenient way to publish specific NIP-34 event kinds:
* [`repository_announcement!`](#repository_announcement)
* Publishes a `Repository Announcement` event (Kind 30617) to announce a new or updated Git repository.
* [`publish_patch!`](#publish_patch)
* Publishes a `Patch` event (Kind 1617) containing a Git patch (diff) for a specific commit.
* [`publish_pull_request!`](#publish_pull_request)
* Publishes a `Pull Request` event (Kind 1618) to propose changes and facilitate code review.
* [`publish_pr_update!`](#publish_pr_update)
* Publishes a `Pull Request Update` event (Kind 1619) to update an existing pull request.
* [`publish_repository_state!`](#publish_repository_state)
* Publishes a `Repository State` event (Kind 1620) to announce the current state of a branch (e.g., its latest commit).
* [`publish_issue!`](#publish_issue)
* Publishes an `Issue` event (Kind 1621) to report bugs, request features, or track tasks.
### Running NIP-34 Examples
To see these macros in action, navigate to the `examples/` directory and run each example individually with the `nostr` feature enabled:
```bash
cargo run --example repository_announcement --features nostr
cargo run --example publish_patch --features nostr
cargo run --example publish_pull_request --features nostr
cargo run --example publish_pr_update --features nostr
cargo run --example publish_repository_state --features nostr
cargo run --example publish_issue --features nostr
```
* **SHA-256 Hash:** 6c6325c5a4c14f44cbda6ca53179ab3d6666ce7c916365668c6dd1d79215db59
* **Status:** Integrity Verified.
## [`build.rs`](build.rs)
* **Target File:** `build.rs`
* **SHA-256 Hash:** 20c958c8cbb5c77cf5eb3763b6da149b61241d328df52d39b7aa97903305c889
* **Status:** Integrity Verified.
## [`Cargo.toml`](Cargo.toml)
* **Target File:** `Cargo.toml`
* **SHA-256 Hash:** e3f392bf23b5fb40902acd313a8c76d1943060b6805ea8615de62f9baf0c6513
* **Status:** Integrity Verified.
## [`src/lib.rs`](src/lib.rs)
* **Target File:** `src/lib.rs`
* **SHA-256 Hash:** 591593482a6c9aac8793aa1e488e613f52a4effb1ec3465fd9d6a54537f2b123
* **Status:** Integrity Verified.
[workspace]
members = [".", "src/get_file_hash_core"]
[workspace.package]
version = "0.2.9"
edition = "2024"
license = "MIT"
authors = ["gnostr admin@gnostr.org"]
documentation = "https://github.com/gnostr-org/get_file_hash#readme"
homepage = "https://github.com/gnostr-org/get_file_hash"
repository = "https://github.com/gnostr-org/get_file_hash"
description = "A utility crate providing a procedural macro to compute and embed file hashes at compile time."
[package]
name = "get_file_hash"
version.workspace = true
edition.workspace = true
description.workspace = true
repository.workspace = true
homepage.workspace = true
authors.workspace = true
[package.metadata.wix]
upgrade-guid = "DED69220-26E3-4406-B564-7F2B58C56F57"
path-guid = "8DB39A25-8B99-4C25-8CF5-4704353C7C6E"
license = false
eula = false
[features]
nostr = ["dep:nostr", "dep:nostr-sdk", "dep:hex"]
[workspace.dependencies]
get_file_hash_core = { features = ["nostr"], path = "src/get_file_hash_core" }
sha2 = "0.11.0"
nostr = "0.44.2"
nostr-sdk = "0.44.0"
hex = "0.4.2"
tokio = "1"
serde_json = "1.0"
csv = { version = "1.3.0", default-features = false }
url = "2.5.0"
reqwest = { version = "0.12.0", default-features = false }
tempfile = "3.27.0"
rand = "0.8"
frost-secp256k1-tr = "3.0.0-rc.0"
serial_test = { version = "3.4.0", features = ["test_logging"] }
log = "0.4"
[dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
nostr = { workspace = true, optional = true }
nostr-sdk = { workspace = true, optional = true }
hex = { workspace = true, optional = true }
tokio = { workspace = true, features = ["full"] }
frost-secp256k1-tr = { workspace = true }
rand = { workspace = true }
serde_json.workspace = true
[build-dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
nostr = { workspace = true }
nostr-sdk = { workspace = true }
hex = { workspace = true }
# The profile that 'dist' will build with
[profile.dist]
inherits = "release"
lto = "thin"
[dev-dependencies]
serial_test = { workspace = true }
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}
#![cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
use frost::keys::PublicKeyPackage;
use frost::round2::SignatureShare;
use frost::SigningPackage;
use hex;
use rand::thread_rng;
use serde_json;
use sha2::{Digest, Sha256};
use std::collections::BTreeMap;
pub fn process_relay_share(
    relay_payload_hex: &str,
    signer_id_u16: u16,
    _signing_package: &SigningPackage,
    _pubkey_package: &PublicKeyPackage,
) -> Result<(), Box<dyn std::error::Error>> {
    // In a real scenario, this function would deserialize the share, perform
    // individual verification, and store it for aggregation.
    // For this example, we'll just acknowledge receipt.
    let share_bytes = hex::decode(relay_payload_hex)?;
    let _share = SignatureShare::deserialize(&share_bytes)?;
    let _identifier = frost::Identifier::try_from(signer_id_u16)?;
    println!("✅ Share from Signer {} processed (simplified).", signer_id_u16);
    Ok(())
}
pub fn simulate_frost_mailbox_coordinator() -> Result<(), Box<dyn std::error::Error>> {
    let mut rng = thread_rng();
    let (max_signers, min_signers) = (2, 2);
    let (shares, pubkey_package) = frost::keys::generate_with_dealer(
        max_signers,
        min_signers,
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;
    let signer1_id = frost::Identifier::try_from(1u16)?;
    let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
    let signer2_id = frost::Identifier::try_from(2u16)?;
    let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
    let message = b"BIP-64MOD: Anchor Data Proposal v1";
    let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
    let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
    let mut session_commitments = BTreeMap::new();
    session_commitments.insert(signer1_id, comms1);
    session_commitments.insert(signer2_id, comms2);
    let signing_package = frost::SigningPackage::new(session_commitments.clone(), message);
    let share1 = frost::round2::sign(&signing_package, &nonces1, &key_package1)?;
    let share1_hex = hex::encode(share1.serialize());
    let share2 = frost::round2::sign(&signing_package, &nonces2, &key_package2)?;
    let share2_hex = hex::encode(share2.serialize());
    println!("Coordinator listening for Nostr events (simulated)...");
    process_relay_share(&share1_hex, 1_u16, &signing_package, &pubkey_package)?;
    process_relay_share(&share2_hex, 2_u16, &signing_package, &pubkey_package)?;
    println!("All required shares processed. Coordinator would now aggregate.");
    Ok(())
}
/// Simulates a Signer producing a FROST signature share and preparing a Nostr event
/// to be sent to a coordinator via a "mailbox" relay.
///
/// In a real ROAST setup, signers would generate their share and post it
/// encrypted (e.g., using NIP-44) to a coordinator's "mailbox" on a Nostr relay.
/// This function demonstrates the creation of the signature share and the
/// construction of a *simplified* Nostr event JSON.
///
/// # Arguments
///
/// * `_identifier` - The FROST identifier of the signer. (Currently unused in this specific function body).
/// * `signing_package` - The FROST signing package received from the coordinator.
/// * `nonces` - The signer's nonces generated in Round 1.
/// * `key_package` - The signer's FROST key package.
/// * `coordinator_pubkey` - The hex-encoded public key of the ROAST coordinator,
/// used to tag the Nostr event.
///
/// # Returns
///
/// A `Result` containing the JSON string of the Nostr event if successful,
/// or a `Box<dyn std::error::Error>` if an error occurs.
pub fn create_signer_event(
    _identifier: frost::Identifier,
    signing_package: &frost::SigningPackage,
    nonces: &frost::round1::SigningNonces,
    key_package: &frost::keys::KeyPackage,
    coordinator_pubkey: &str, // The hex pubkey of the ROAST coordinator
) -> Result<String, Box<dyn std::error::Error>> {
    // 1. Generate the partial signature share (Round 2 of FROST).
    // This share is the core cryptographic output from the signer.
    let share = frost::round2::sign(signing_package, nonces, key_package)?;
    let share_bytes = share.serialize();
    let share_hex = hex::encode(share_bytes);
    // 2. Create a session ID to tag the event.
    // This ID is derived from the signing package hash, allowing the coordinator
    // to correlate shares belonging to the same signing session.
    let mut hasher = Sha256::new();
    hasher.update(signing_package.serialize()?);
    let session_id = hex::encode(hasher.finalize());
    // 3. Construct the Nostr event JSON (simplified).
    // This JSON represents the event that a signer would post to a relay.
    // In a production ROAST system, the 'content' field (the signature share)
    // would be encrypted for the coordinator using NIP-44.
    let event = serde_json::json!({
        "kind": 4, // Example: Kind 4 (Encrypted Direct Message), though custom kinds could be used for Sovereign Stack.
        "pubkey": hex::encode(key_package.verifying_key().serialize()?.as_slice()), // Signer's public key
        "created_at": 1712050000, // Example timestamp
        "tags": [
            ["p", coordinator_pubkey], // 'p' tag: directs the event to the coordinator.
            ["i", session_id], // 'i' tag: session identifier for filtering/requests.
            ["t", "frost-signature-share"] // 't' tag: a searchable label for the event type.
        ],
        "content": share_hex, // The actual signature share (would be encrypted in production).
        "id": "...", // Event ID: in a real event, computed by the signer as the hash of the serialized event.
        "sig": "..." // Event signature: produced by the signer's key before publishing, not by the relay.
    });
    Ok(event.to_string())
}
pub fn simulate_frost_mailbox_post_signer() -> Result<(), Box<dyn std::error::Error>> {
    use frost_secp256k1_tr as frost;
    use rand::thread_rng;
    use std::collections::BTreeMap;
    // This example simulates a single signer's role in a ROAST mailbox post workflow.
    // The general workflow is:
    // 1. The coordinator sends a request for signatures (e.g., on a BIP-64MOD proposal).
    // 2. Signers receive the proposal and perform local verification.
    // 3. Each signer generates their signature share and posts it (encrypted) to a
    //    Nostr relay, targeting the coordinator's mailbox.
    // 4. The coordinator collects enough shares to aggregate the final signature.
    let mut rng = thread_rng();
    // For this example, we simulate a 2-of-2 threshold for simplicity.
    let (max_signers, min_signers) = (2, 2);
    ////////////////////////////////////////////////////////////////////////////
    // 1. Key Generation (Simulated Trusted Dealer)
    ////////////////////////////////////////////////////////////////////////////
    // In a real distributed setup, this would be DKG. Here, a "trusted dealer"
    // generates the shares and public key package.
    let (shares, _pubkey_package) = frost::keys::generate_with_dealer(
        max_signers,
        min_signers,
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;
    // For a 2-of-2 scheme, we have two signers. Let's pick signer 1.
    let signer1_id = frost::Identifier::try_from(1u16)?;
    let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
    let signer2_id = frost::Identifier::try_from(2u16)?;
    let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
    // The message to be signed (e.g., a hash of a Git commit or a Nostr event ID).
    let message = b"This is a test message for ROAST mailbox post.";
    ////////////////////////////////////////////////////////////////////////////
    // 2. Round 1: Commitment Phase (Signer's role)
    ////////////////////////////////////////////////////////////////////////////
    // Each signer generates nonces and commitments.
    let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
    let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
    // The coordinator collects these commitments; here we simulate that with a BTreeMap.
    let mut session_commitments = BTreeMap::new();
    session_commitments.insert(signer1_id, comms1);
    session_commitments.insert(signer2_id, comms2);
    ////////////////////////////////////////////////////////////////////////////
    // 3. Signing Package Creation (Coordinator's role, simulated for context)
    ////////////////////////////////////////////////////////////////////////////
    // The coordinator combines the collected commitments and the message to be signed
    // into a signing package, which is then sent back to the signers.
    let signing_package = frost::SigningPackage::new(session_commitments, message);
    // Dummy coordinator public key. In a real scenario, this would be the
    // actual public key of the ROAST coordinator, used for event tagging
    // and encryption (NIP-44).
    let coordinator_pubkey_hex = "0000000000000000000000000000000000000000000000000000000000000001";
    ////////////////////////////////////////////////////////////////////////////
    // 4. Create the Signer Event (Signer's role)
    ////////////////////////////////////////////////////////////////////////////
    // We demonstrate for signer 1. Signer 2 would perform a similar action.
    let event_json_signer1 = create_signer_event(
        signer1_id,
        &signing_package,
        &nonces1,
        &key_package1,
        coordinator_pubkey_hex,
    )?;
    println!("Generated Nostr Event for Signer 1 Mailbox Post:\n{}", event_json_signer1);
    // Similarly, Signer 2 would generate their event:
    let event_json_signer2 = create_signer_event(
        signer2_id,
        &signing_package,
        &nonces2,
        &key_package2,
        coordinator_pubkey_hex,
    )?;
    println!("Generated Nostr Event for Signer 2 Mailbox Post:\n{}", event_json_signer2);
    Ok(())
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}
[workspace]
members = [".", "src/get_file_hash_core"]
[workspace.package]
version = "0.2.9"
edition = "2024"
license = "MIT"
authors = ["gnostr admin@gnostr.org"]
documentation = "https://github.com/gnostr-org/get_file_hash#readme"
homepage = "https://github.com/gnostr-org/get_file_hash"
repository = "https://github.com/gnostr-org/get_file_hash"
description = "A utility crate providing a procedural macro to compute and embed file hashes at compile time."
[package]
name = "get_file_hash"
version.workspace = true
edition.workspace = true
description.workspace = true
repository.workspace = true
homepage.workspace = true
authors.workspace = true
[package.metadata.wix]
upgrade-guid = "DED69220-26E3-4406-B564-7F2B58C56F57"
path-guid = "8DB39A25-8B99-4C25-8CF5-4704353C7C6E"
license = false
eula = false
[features]
nostr = ["dep:nostr", "dep:nostr-sdk", "dep:hex"]
[workspace.dependencies]
get_file_hash_core = { features = ["nostr"], path = "src/get_file_hash_core" }
sha2 = "0.11.0"
nostr = "0.44.2"
nostr-sdk = "0.44.0"
hex = "0.4.2"
tokio = "1"
serde_json = "1.0"
csv = { version = "1.3.0", default-features = false }
url = "2.5.0"
reqwest = { version = "0.12.0", default-features = false }
tempfile = "3.27.0"
rand = "0.8"
frost-secp256k1-tr = "3.0.0-rc.0"
serial_test = { version = "3.4.0", features = ["test_logging"] }
log = "0.4"
[dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
nostr = { workspace = true, optional = true }
nostr-sdk = { workspace = true, optional = true }
hex = { workspace = true, optional = true }
tokio = { workspace = true, features = ["full"] }
frost-secp256k1-tr = { workspace = true }
rand = { workspace = true }
serde_json.workspace = true
[build-dependencies]
get_file_hash_core = { workspace = true, features = ["nostr"] }
sha2 = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["full"] }
nostr = { workspace = true }
nostr-sdk = { workspace = true }
hex = { workspace = true }
# The profile that 'dist' will build with
[profile.dist]
inherits = "release"
lto = "thin"
[dev-dependencies]
serial_test = { workspace = true }
#![cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
use frost::keys::PublicKeyPackage;
use frost::round2::SignatureShare;
use frost::SigningPackage;
use hex;
use rand::thread_rng;
use std::collections::BTreeMap;
use sha2::Sha256;
use serde_json;
use sha2::Digest;
pub fn process_relay_share(
relay_payload_hex: &str,
signer_id_u16: u16,
_signing_package: &SigningPackage,
_pubkey_package: &PublicKeyPackage,
) -> Result<(), Box<dyn std::error::Error>> {
// In a real scenario, this function would deserialize the share, perform
// individual verification, and store it for aggregation.
// For this example, we'll just acknowledge receipt.
let _share_bytes = hex::decode(relay_payload_hex)?;
let _share = SignatureShare::deserialize(&_share_bytes)?;
let _identifier = frost::Identifier::try_from(signer_id_u16)?;
println!("✅ Share from Signer {} processed (simplified).", signer_id_u16);
Ok(())
}
pub fn simulate_frost_mailbox_coordinator() -> Result<(), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let (max_signers, min_signers) = (2, 2);
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
let signer1_id = frost::Identifier::try_from(1 as u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2 as u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
let message = b"BIP-64MOD: Anchor Data Proposal v1";
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
let signing_package = frost::SigningPackage::new(session_commitments.clone(), message);
let share1 = frost::round2::sign(&signing_package, &nonces1, &key_package1)?;
let share1_hex = hex::encode(share1.serialize());
let share2 = frost::round2::sign(&signing_package, &nonces2, &key_package2)?;
let share2_hex = hex::encode(share2.serialize());
println!("Coordinator listening for Nostr events (simulated)...");
process_relay_share(&share1_hex, 1_u16, &signing_package, &pubkey_package)?;
process_relay_share(&share2_hex, 2_u16, &signing_package, &pubkey_package)?;
println!("All required shares processed. Coordinator would now aggregate.");
Ok(())
}
/// Simulates a Signer producing a FROST signature share and preparing a Nostr event
/// to be sent to a coordinator via a "mailbox" relay.
///
/// In a real ROAST setup, signers would generate their share and post it
/// encrypted (e.g., using NIP-44) to a coordinator's "mailbox" on a Nostr relay.
/// This function demonstrates the creation of the signature share and the
/// construction of a *simplified* Nostr event JSON.
///
/// # Arguments
///
/// * `_identifier` - The FROST identifier of the signer. (Currently unused in this specific function body).
/// * `signing_package` - The FROST signing package received from the coordinator.
/// * `nonces` - The signer's nonces generated in Round 1.
/// * `key_package` - The signer's FROST key package.
/// * `coordinator_pubkey` - The hex-encoded public key of the ROAST coordinator,
/// used to tag the Nostr event.
///
/// # Returns
///
/// A `Result` containing the JSON string of the Nostr event if successful,
/// or a `Box<dyn std::error::Error>` if an error occurs.
pub fn create_signer_event(
_identifier: frost::Identifier,
signing_package: &frost::SigningPackage,
nonces: &frost::round1::SigningNonces,
key_package: &frost::keys::KeyPackage,
coordinator_pubkey: &str, // The Hex pubkey of the ROAST coordinator
) -> Result<String, Box<dyn std::error::Error>> {
// 1. Generate the partial signature share (Round 2 of FROST)
// This share is the core cryptographic output from the signer.
let share = frost::round2::sign(signing_package, nonces, key_package)?;
let share_bytes = share.serialize();
let share_hex = hex::encode(share_bytes);
// 2. Create a Session ID to tag the event
// This ID is derived from the signing package hash, allowing the coordinator
// to correlate shares belonging to the same signing session.
let mut hasher = Sha256::new();
hasher.update(signing_package.serialize()?);
let session_id = hex::encode(hasher.finalize());
// 3. Construct the Nostr Event JSON (Simplified)
// This JSON represents the event that a signer would post to a relay.
// In a production ROAST system, the 'content' field (the signature share)
// would be encrypted for the coordinator using NIP-44.
let event = serde_json::json!({
"kind": 4, // Example: Using Kind 4 (Private Message), though custom Kinds could be used for Sovereign Stack.
"pubkey": hex::encode(key_package.verifying_key().serialize()?.as_slice()), // Signer's public key
"created_at": 1712050000, // Example timestamp
"tags": [
["p", coordinator_pubkey], // 'p' tag: Directs the event to the coordinator.
["i", session_id], // 'i' tag: Provides a session identifier for filtering/requests.
["t", "frost-signature-share"] // 't' tag: A searchable label for the event type.
],
"content": share_hex, // The actual signature share (would be encrypted in production).
"id": "...", // Event ID (filled by relay upon publishing)
"sig": "..." // Event signature (filled by relay upon publishing)
});
Ok(event.to_string())
}
pub fn simulate_frost_mailbox_post_signer() -> Result<(), Box<dyn std::error::Error>> {
use rand::thread_rng;
use std::collections::BTreeMap;
use frost_secp256k1_tr as frost;
// This example simulates a single signer's role in a ROAST mailbox post workflow.
// The general workflow is:
// 1. Coordinator sends a request for signatures (e.g., on a BIP-64MOD proposal).
// 2. Signers receive the proposal, perform local verification.
// 3. Each signer generates their signature share and posts it (encrypted) to a
// Nostr relay, targeting the coordinator's mailbox.
// 4. The coordinator collects enough shares to aggregate the final signature.
let mut rng = thread_rng();
// For this example, we simulate a 2-of-2 threshold for simplicity.
let (max_signers, min_signers) = (2, 2);
////////////////////////////////////////////////////////////////////////////
// 1. Key Generation (Simulated Trusted Dealer)
////////////////////////////////////////////////////////////////////////////
// In a real distributed setup, this would be DKG. Here, a "trusted dealer"
// generates the shares and public key package.
let (shares, _pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
// For a 2-of-2 scheme, we have two signers. Let's pick signer 1.
let signer1_id = frost::Identifier::try_from(1 as u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2 as u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
// The message that is to be signed (e.g., a hash of a Git commit or a Nostr event ID).
let message = b"This is a test message for ROAST mailbox post.";
////////////////////////////////////////////////////////////////////////////
// 2. Round 1: Commitment Phase (Signer's role)
////////////////////////////////////////////////////////////////////////////
// Each signer generates nonces and commitments.
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
// The coordinator collects these commitments. Here, we simulate by putting them in a BTreeMap.
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
////////////////////////////////////////////////////////////////////////////
// 3. Signing Package Creation (Coordinator's role, simulated for context)
////////////////////////////////////////////////////////////////////////////
// The coordinator combines the collected commitments and the message to be signed
// into a signing package, which is then sent back to the signers.
let signing_package = frost::SigningPackage::new(session_commitments, message);
// Dummy coordinator public key. In a real scenario, this would be the
// actual public key of the ROAST coordinator, used for event tagging
// and encryption (NIP-44).
let coordinator_pubkey_hex = "0000000000000000000000000000000000000000000000000000000000000001";
////////////////////////////////////////////////////////////////////////////
// 4. Create the Signer Event (Signer's role)
////////////////////////////////////////////////////////////////////////////
// We demonstrate for signer 1. Signer 2 would perform a similar action.
let event_json_signer1 = create_signer_event(
signer1_id,
&signing_package,
&nonces1,
&key_package1,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 1 Mailbox Post:\n{}", event_json_signer1);
// Similarly, Signer 2 would generate their event:
let event_json_signer2 = create_signer_event(
signer2_id,
&signing_package,
&nonces2,
&key_package2,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 2 Mailbox Post:\n{}", event_json_signer2);
Ok(())
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}
#![cfg(feature = "nostr")]
use frost_secp256k1_tr as frost;
use frost::keys::PublicKeyPackage;
use frost::round2::SignatureShare;
use frost::SigningPackage;
use hex;
use rand::thread_rng;
use std::collections::BTreeMap;
use sha2::Sha256;
use serde_json;
use sha2::Digest;
pub fn process_relay_share(
relay_payload_hex: &str,
signer_id_u16: u16,
_signing_package: &SigningPackage,
_pubkey_package: &PublicKeyPackage,
) -> Result<(), Box<dyn std::error::Error>> {
// In a real scenario, this function would deserialize the share, perform
// individual verification, and store it for aggregation.
// For this example, we'll just acknowledge receipt.
let _share_bytes = hex::decode(relay_payload_hex)?;
let _share = SignatureShare::deserialize(&_share_bytes)?;
let _identifier = frost::Identifier::try_from(signer_id_u16)?;
println!("✅ Share from Signer {} processed (simplified).", signer_id_u16);
Ok(())
}
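// Sketch of the coordinator-side bookkeeping that process_relay_share alludes
// to (hypothetical helper, not part of the original example): ROAST moves on to
// aggregation once `min_signers` distinct, individually accepted shares have
// been collected for a session.
#[allow(dead_code)]
pub fn shares_ready(
    collected: &BTreeMap<frost::Identifier, SignatureShare>,
    min_signers: u16,
) -> bool {
    // BTreeMap keys are unique, so len() counts distinct signers.
    collected.len() >= min_signers as usize
}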
pub fn simulate_frost_mailbox_coordinator() -> Result<(), Box<dyn std::error::Error>> {
let mut rng = thread_rng();
let (max_signers, min_signers) = (2, 2);
let (shares, pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
let signer1_id = frost::Identifier::try_from(1u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
let message = b"BIP-64MOD: Anchor Data Proposal v1";
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
let signing_package = frost::SigningPackage::new(session_commitments.clone(), message);
let share1 = frost::round2::sign(&signing_package, &nonces1, &key_package1)?;
let share1_hex = hex::encode(share1.serialize());
let share2 = frost::round2::sign(&signing_package, &nonces2, &key_package2)?;
let share2_hex = hex::encode(share2.serialize());
println!("Coordinator listening for Nostr events (simulated)...");
process_relay_share(&share1_hex, 1_u16, &signing_package, &pubkey_package)?;
process_relay_share(&share2_hex, 2_u16, &signing_package, &pubkey_package)?;
println!("All required shares processed. Coordinator would now aggregate.");
Ok(())
}
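// Sketch of the aggregation step that simulate_frost_mailbox_coordinator stops
// short of (names follow the frost-core 1.x API re-exported by
// `frost_secp256k1_tr`; verify against the pinned crate version before relying
// on this). On failure, `frost::aggregate` reports the misbehaving signer via
// `Error::InvalidSignatureShare { culprit }`, which is what lets a ROAST
// coordinator evict a faulty participant and retry with a fresh signer set.
#[allow(dead_code)]
pub fn aggregate_and_verify(
    signing_package: &SigningPackage,
    signature_shares: &BTreeMap<frost::Identifier, SignatureShare>,
    pubkey_package: &PublicKeyPackage,
    message: &[u8],
) -> Result<(), Box<dyn std::error::Error>> {
    // Combine the collected shares into a single Schnorr signature.
    let group_signature = frost::aggregate(signing_package, signature_shares, pubkey_package)?;
    // Anyone holding the group verifying key can check the result.
    pubkey_package.verifying_key().verify(message, &group_signature)?;
    println!("✅ Aggregated group signature verified.");
    Ok(())
}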
/// Simulates a Signer producing a FROST signature share and preparing a Nostr event
/// to be sent to a coordinator via a "mailbox" relay.
///
/// In a real ROAST setup, signers would generate their share and post it
/// encrypted (e.g., using NIP-44) to a coordinator's "mailbox" on a Nostr relay.
/// This function demonstrates the creation of the signature share and the
/// construction of a *simplified* Nostr event JSON.
///
/// # Arguments
///
/// * `_identifier` - The FROST identifier of the signer. (Currently unused in this specific function body).
/// * `signing_package` - The FROST signing package received from the coordinator.
/// * `nonces` - The signer's nonces generated in Round 1.
/// * `key_package` - The signer's FROST key package.
/// * `coordinator_pubkey` - The hex-encoded public key of the ROAST coordinator,
/// used to tag the Nostr event.
///
/// # Returns
///
/// A `Result` containing the JSON string of the Nostr event if successful,
/// or a `Box<dyn std::error::Error>` if an error occurs.
pub fn create_signer_event(
_identifier: frost::Identifier,
signing_package: &frost::SigningPackage,
nonces: &frost::round1::SigningNonces,
key_package: &frost::keys::KeyPackage,
coordinator_pubkey: &str, // The Hex pubkey of the ROAST coordinator
) -> Result<String, Box<dyn std::error::Error>> {
// 1. Generate the partial signature share (Round 2 of FROST)
// This share is the core cryptographic output from the signer.
let share = frost::round2::sign(signing_package, nonces, key_package)?;
let share_bytes = share.serialize();
let share_hex = hex::encode(share_bytes);
// 2. Create a Session ID to tag the event
// This ID is derived from the signing package hash, allowing the coordinator
// to correlate shares belonging to the same signing session.
let mut hasher = Sha256::new();
hasher.update(signing_package.serialize()?);
let session_id = hex::encode(hasher.finalize());
// 3. Construct the Nostr Event JSON (Simplified)
// This JSON represents the event that a signer would post to a relay.
// In a production ROAST system, the 'content' field (the signature share)
// would be encrypted for the coordinator using NIP-44.
let event = serde_json::json!({
"kind": 4, // Example: Kind 4 (NIP-04 encrypted direct message), though custom kinds could be defined for Sovereign Stack.
"pubkey": hex::encode(key_package.verifying_key().serialize()?.as_slice()), // Group verifying key, used here as a stand-in; a real signer would publish under their own Nostr pubkey.
"created_at": 1712050000, // Example timestamp
"tags": [
["p", coordinator_pubkey], // 'p' tag: Directs the event to the coordinator.
["i", session_id], // 'i' tag: Provides a session identifier for filtering/requests.
["t", "frost-signature-share"] // 't' tag: A searchable label for the event type.
],
"content": share_hex, // The actual signature share (would be encrypted in production).
"id": "...", // Event ID: per NIP-01, the sha256 of the serialized event, computed by the signer before publishing.
"sig": "..." // Event signature: a Schnorr signature over the ID, produced with the signer's Nostr key before publishing.
});
Ok(event.to_string())
}
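// Sketch of how the "id" placeholder above would really be filled in
// (hypothetical helper, not part of the original example). Per NIP-01, the
// event ID is the sha256 of the canonical JSON array
// [0, pubkey, created_at, kind, tags, content]; "sig" is then a BIP-340
// Schnorr signature over that ID made with the signer's Nostr key (signing
// itself is out of scope here).
#[allow(dead_code)]
pub fn compute_nostr_event_id(
    pubkey_hex: &str,
    created_at: u64,
    kind: u32,
    tags: &serde_json::Value,
    content: &str,
) -> String {
    // serde_json renders this array without extra whitespace, matching the
    // NIP-01 canonical form.
    let canonical = serde_json::json!([0, pubkey_hex, created_at, kind, tags, content]);
    let mut hasher = Sha256::new();
    hasher.update(canonical.to_string().as_bytes());
    hex::encode(hasher.finalize())
}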
pub fn simulate_frost_mailbox_post_signer() -> Result<(), Box<dyn std::error::Error>> {
use rand::thread_rng;
use std::collections::BTreeMap;
use frost_secp256k1_tr as frost;
// This example simulates a single signer's role in a ROAST mailbox post workflow.
// The general workflow is:
// 1. Coordinator sends a request for signatures (e.g., on a BIP-64MOD proposal).
// 2. Signers receive the proposal, perform local verification.
// 3. Each signer generates their signature share and posts it (encrypted) to a
// Nostr relay, targeting the coordinator's mailbox.
// 4. The coordinator collects enough shares to aggregate the final signature.
let mut rng = thread_rng();
// For this example, we simulate a 2-of-2 threshold for simplicity.
let (max_signers, min_signers) = (2, 2);
////////////////////////////////////////////////////////////////////////////
// 1. Key Generation (Simulated Trusted Dealer)
////////////////////////////////////////////////////////////////////////////
// In a real distributed setup, this would be DKG. Here, a "trusted dealer"
// generates the shares and public key package.
let (shares, _pubkey_package) = frost::keys::generate_with_dealer(
max_signers,
min_signers,
frost::keys::IdentifierList::Default,
&mut rng,
)?;
// For a 2-of-2 scheme, we have two signers. Let's pick signer 1.
let signer1_id = frost::Identifier::try_from(1u16)?;
let key_package1: frost::keys::KeyPackage = shares[&signer1_id].clone().try_into()?;
let signer2_id = frost::Identifier::try_from(2u16)?;
let key_package2: frost::keys::KeyPackage = shares[&signer2_id].clone().try_into()?;
// The message that is to be signed (e.g., a hash of a Git commit or a Nostr event ID).
let message = b"This is a test message for ROAST mailbox post.";
////////////////////////////////////////////////////////////////////////////
// 2. Round 1: Commitment Phase (Signer's role)
////////////////////////////////////////////////////////////////////////////
// Each signer generates nonces and commitments.
let (nonces1, comms1) = frost::round1::commit(key_package1.signing_share(), &mut rng);
let (nonces2, comms2) = frost::round1::commit(key_package2.signing_share(), &mut rng);
// The coordinator collects these commitments. Here, we simulate by putting them in a BTreeMap.
let mut session_commitments = BTreeMap::new();
session_commitments.insert(signer1_id, comms1);
session_commitments.insert(signer2_id, comms2);
////////////////////////////////////////////////////////////////////////////
// 3. Signing Package Creation (Coordinator's role, simulated for context)
////////////////////////////////////////////////////////////////////////////
// The coordinator combines the collected commitments and the message to be signed
// into a signing package, which is then sent back to the signers.
let signing_package = frost::SigningPackage::new(session_commitments, message);
// Dummy coordinator public key. In a real scenario, this would be the
// actual public key of the ROAST coordinator, used for event tagging
// and encryption (NIP-44).
let coordinator_pubkey_hex = "0000000000000000000000000000000000000000000000000000000000000001";
////////////////////////////////////////////////////////////////////////////
// 4. Create the Signer Event (Signer's role)
////////////////////////////////////////////////////////////////////////////
// We demonstrate for signer 1. Signer 2 would perform a similar action.
let event_json_signer1 = create_signer_event(
signer1_id,
&signing_package,
&nonces1,
&key_package1,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 1 Mailbox Post:\n{}", event_json_signer1);
// Similarly, Signer 2 would generate their event:
let event_json_signer2 = create_signer_event(
signer2_id,
&signing_package,
&nonces2,
&key_package2,
coordinator_pubkey_hex,
)?;
println!("Generated Nostr Event for Signer 2 Mailbox Post:\n{}", event_json_signer2);
Ok(())
}
#[cfg(feature = "nostr")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
get_file_hash_core::frost_mailbox_logic::simulate_frost_mailbox_post_signer()
}
#[cfg(not(feature = "nostr"))]
fn main() {
println!("This example requires the 'nostr' feature. Please run with: cargo run --example frost_mailbox_post --features nostr");
}