{"type":"rich","version":"1.0","title":"Olaoluwa Osuntokun [ARCHIVE] wrote","author_name":"Olaoluwa Osuntokun [ARCHIVE] (npub19h…zkvn4)","author_url":"https://yabu.me/npub19helcfnqgk2jrwzjex2aflq6jwfc8zd9uzzkwlgwhve7lykv23mq5zkvn4","provider_name":"njump","provider_url":"https://yabu.me","html":"📅 Original date posted:2022-06-29\n📝 Original message:\nHi Rusty,\n\nThanks for the feedback!\n\n\u003e This is over-design: if you fail to get reliable gossip, your routing will\n\u003e suffer anyway.  Nothing new here.\n\nIdk, it's pretty simple: you're already watching for closes, so if a close\nlooks a certain way, it's a splice. When you see that, you can even take\nnote of the _new_ channel size (funds added/removed) and update your\npathfinding/blindedpaths/hophints accordingly.\n\nIf this is an over-designed solution, then I'd categorize _only_ waiting N\nblocks as wishful thinking, given we have effectively no guarantees w.r.t\nhow long it'll take a message to propagate.\n\nIf by routing you mean a routing node then: no, a routing node doesn't even\nreally need the graph at all to do its job.\n\nIf by routing you mean a sender, then imo still no: you don't necessarily\nneed _all_ gossip, just the latest policies of the nodes you route most\nfrequently to. On top of that, since you can get the latest policy each time\nyou incur a routing failure, as you make payments, you'll get the latest\npolicies of the nodes you care about over time. Also consider that you might\nfail to get \"reliable\" gossip, simply due to your peer neighborhood\naggressively rate limiting gossip (they only allow 1 update a day for a\nnode, you updated your fee, oops, no splice msg for you).\n\nSo it appears you don't agree that the \"wait N blocks before you close your\nchannels\" approach falls short of being foolproof? Why 12 blocks, why not 15?
Or 144?\n\nFrom my PoV, the whole point of even signalling that a splice is ongoing\nis for the senders/receivers: they can continue to send/recv payments over\nthe channel while the splice is in progress. It isn't that a node isn't\ngetting any gossip, it's that if the node fails to obtain the gossip message\nwithin the N block period of time, then the channel has effectively closed\nfrom their PoV, and it may be an hour+ until it's seen as a usable (new)\nchannel again.\n\nIf there isn't a 100% reliable way to signal that a splice is in progress,\nthen this disincentivizes its use, as routers can lose out on potential fee\nrevenue, and senders/receivers may grow to favor only very long-lived\nchannels. IMO _only_ having a gossip message simply isn't enough: there are\nno real guarantees w.r.t _when_ all relevant parties will get your gossip\nmessage. So why not give them a 100% reliable on-chain signal that says:\nsomething is in progress here, stay tuned for the gossip message, whenever\nyou receive it.\n\n-- Laolu\n\n\nOn Tue, Jun 28, 2022 at 6:40 PM Rusty Russell \u003crusty at rustcorp.com.au\u003e wrote:\n\n\u003e Hi Roasbeef,\n\u003e\n\u003e This is over-design: if you fail to get reliable gossip, your routing\n\u003e will suffer anyway.  Nothing new here.\n\u003e\n\u003e And if you *know* you're missing gossip, you can simply delay onchain\n\u003e closures for longer: since nodes should respect the old channel ids for\n\u003e a while anyway.\n\u003e\n\u003e Matt's proposal to simply defer treating onchain closes is elegant and\n\u003e minimal.  We could go further and relax requirements to detect onchain\n\u003e closes at all, and optionally add a perm close message.\n\u003e\n\u003e Cheers,\n\u003e Rusty.\n\u003e\n\u003e Olaoluwa Osuntokun \u003claolu32 at gmail.com\u003e writes:\n\u003e \u003e Hi y'all,\n\u003e \u003e\n\u003e \u003e This mail was inspired by this [1] spec PR from Lisa.
At a high level, it\n\u003e \u003e proposes that nodes add a delay between the time they see a channel\n\u003e closed on\n\u003e \u003e chain, to when they remove it from their local channel graph. The motive\n\u003e \u003e here is to give the gossip message that indicates a splice is in progress\n\u003e \u003e \"enough\" time to propagate through the network. If a node can see this\n\u003e \u003e message before/during the splicing operation, then they'll be able to relate\n\u003e \u003e the old and the new channels, meaning it's usable again by\n\u003e senders/receivers\n\u003e \u003e _before_ the entire chain of transactions confirms on chain.\n\u003e \u003e\n\u003e \u003e IMO, this sort of arbitrary delay (expressed in blocks) won't actually\n\u003e \u003e address the issue in practice. The proposal suffers from the following\n\u003e \u003e issues:\n\u003e \u003e\n\u003e \u003e   1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement\n\u003e \u003e   takes longer than 2 hours to reach the \"economic majority\" of\n\u003e \u003e   senders/receivers, then the channel won't be able to mask the splicing\n\u003e \u003e   downtime.\n\u003e \u003e\n\u003e \u003e   2. Gossip propagation delay and offline peers. These days most nodes\n\u003e \u003e   throttle gossip pretty aggressively. As a result, a pair of nodes doing\n\u003e \u003e   several in-flight splices (inputs become double-spent or something, so\n\u003e \u003e   they need to try a bunch) might end up being rate limited within the\n\u003e \u003e   network, causing the splice update msg to be lost or delayed\n\u003e significantly\n\u003e \u003e   (IIRC CLN resets these values after 24 hours).
On top of that, if a\n\u003e peer\n\u003e \u003e   is offline for too long (think mobile senders), then they may miss the\n\u003e \u003e   update altogether as most nodes don't do a full historical\n\u003e \u003e   _channel_update_ dump anymore.\n\u003e \u003e\n\u003e \u003e In order to resolve these issues, I think instead we need to rely on the\n\u003e \u003e primary splicing signal being sourced from the chain itself. In other\n\u003e words,\n\u003e \u003e if I see a channel close, and a closing transaction \"looks\" a certain\n\u003e way,\n\u003e \u003e then I know it's a splice. This would be used in concert w/ any new\n\u003e gossip\n\u003e \u003e messages, as the chain signal is a 100% foolproof way of letting an aware\n\u003e \u003e peer know that a splice is actually happening (not a normal close). A\n\u003e chain\n\u003e \u003e signal doesn't suffer from any of the gossip/time related issues above,\n\u003e as\n\u003e \u003e the signal is revealed at the same time a peer learns of a channel\n\u003e \u003e close/splice.\n\u003e \u003e\n\u003e \u003e Assuming we agree that a chain signal has some sort of role in the\n\u003e ultimate\n\u003e \u003e plans for splicing, we'd need to decide on exactly _what_ such a signal\n\u003e \u003e looks like. Off the top, a few options are:\n\u003e \u003e\n\u003e \u003e   1. Stuff something in the annex. Works in theory, but not in practice,\n\u003e as\n\u003e \u003e   bitcoind (being the dominant full node implementation on the p2p\n\u003e network,\n\u003e \u003e   as well as what all the miners use) treats annexes as non-standard.\n\u003e Also\n\u003e \u003e   the annex itself might have some fundamental issues that get in the\n\u003e way of\n\u003e \u003e   its use altogether [2].\n\u003e \u003e\n\u003e \u003e   2. Re-use the anchors for this purpose. Anchors are nice as they allow\n\u003e for\n\u003e \u003e   1st/2nd/3rd party CPFP.
As a splice might have several inputs and\n\u003e outputs,\n\u003e \u003e   both sides will want to make sure it gets confirmed in a timely manner.\n\u003e \u003e   Ofc, RBF can be used here, but that requires both sides to be online to\n\u003e \u003e   make adjustments. Pre-signing can work too, but the effectiveness\n\u003e \u003e   (minimizing chain cost while expediting confirmation) would be\n\u003e dependent\n\u003e \u003e   on the fee step size.\n\u003e \u003e\n\u003e \u003e   In this case, we'd use a different multi-sig output (both sides can\n\u003e rotate\n\u003e \u003e   keys if they want to), and then roll the anchors into this splicing\n\u003e \u003e   transaction. Given that all nodes on the network know what the anchor\n\u003e size\n\u003e \u003e   is (assuming feature bit understanding), they're able to realize that\n\u003e it's\n\u003e \u003e   actually a splice, and they don't need to remove it from the channel\n\u003e graph\n\u003e \u003e   (yet).\n\u003e \u003e\n\u003e \u003e   3. Related to the above: just re-use the same multi-sig output. If\n\u003e nodes\n\u003e \u003e   don't care all that much about rotating these keys, then they can just\n\u003e use\n\u003e \u003e   the same output. This is trivially recognizable by nodes, as they\n\u003e already\n\u003e \u003e   know the funding keys used, since they're in the channel_announcement.\n\u003e \u003e\n\u003e \u003e   4. OP_RETURN (yeh, I had to list it). Self-explanatory: push some\n\u003e bytes in\n\u003e \u003e   an OP_RETURN and use that as the marker.\n\u003e \u003e\n\u003e \u003e   5. Fiddle w/ the locktime+sequence somehow to make it identifiable to\n\u003e \u003e   verifiers. This might run into some unintended interactions if the\n\u003e inputs\n\u003e \u003e   provided have either relative or absolute lock times.
There might also\n\u003e be\n\u003e \u003e   some interaction w/ the main construction for eltoo (uses the\n\u003e locktime).\n\u003e \u003e\n\u003e \u003e Of all the options, I think #2 makes the most sense: we already use\n\u003e anchors\n\u003e \u003e to be able to do fee bumping after-the-fact for closing transactions, so\n\u003e why\n\u003e \u003e not inherit them here. They make the splicing transaction slightly\n\u003e larger,\n\u003e \u003e so maybe #3 (or something else) is a better choice.\n\u003e \u003e\n\u003e \u003e The design space for splicing is preeetty large, so I figure the most\n\u003e \u003e productive route might be discussing isolated aspects of it at a time.\n\u003e \u003e Personally, I'm not suuuper caught up w/ what the latest design drafts\n\u003e are\n\u003e \u003e (aside from convos at the recent LN Dev Summit), but from my PoV, how to\n\u003e \u003e communicate the splice to other peers has been an outstanding design\n\u003e \u003e question.\n\u003e \u003e\n\u003e \u003e [1]: https://github.com/lightning/bolts/pull/1004\n\u003e \u003e [2]:\n\u003e \u003e\n\u003e https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020045.html\n\u003e \u003e\n\u003e \u003e -- Laolu\n\u003e \u003e _______________________________________________\n\u003e \u003e Lightning-dev mailing list\n\u003e \u003e Lightning-dev at lists.linuxfoundation.org\n\u003e \u003e https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev\n\u003e\n"}
