<oembed><type>rich</type><version>1.0</version><title>Olaoluwa Osuntokun [ARCHIVE] wrote</title><author_name>Olaoluwa Osuntokun [ARCHIVE] (npub19h…zkvn4)</author_name><author_url>https://yabu.me/npub19helcfnqgk2jrwzjex2aflq6jwfc8zd9uzzkwlgwhve7lykv23mq5zkvn4</author_url><provider_name>njump</provider_name><provider_url>https://yabu.me</provider_url><html>📅 Original date posted:2022-06-29&#xA;📝 Original message:&#xA;Hi Rusty,&#xA;&#xA;Thanks for the feedback!&#xA;&#xA;&gt; This is over-design: if you fail to get reliable gossip, your routing will&#xA;&gt; suffer anyway.  Nothing new here.&#xA;&#xA;Idk, it&#39;s pretty simple: you&#39;re already watching for closes, so if a close&#xA;looks a certain way, it&#39;s a splice. When you see that, you can even take&#xA;note of the _new_ channel size (funds added/removed) and update your&#xA;pathfinding/blindedpaths/hophints accordingly.&#xA;&#xA;If this is an over-designed solution, then I&#39;d categorize _only_ waiting N&#xA;blocks as wishful thinking, given we have effectively no guarantees w.r.t&#xA;how long it&#39;ll take a message to propagate.&#xA;&#xA;If by routing you mean a routing node then: no, a routing node doesn&#39;t even&#xA;really need the graph at all to do their job.&#xA;&#xA;If by routing you mean a sender, then imo still no: you don&#39;t necessarily&#xA;need _all_ gossip, just the latest policies of the nodes you route most&#xA;frequently to. On top of that, since you can get the latest policy each time&#xA;you incur a routing failure, as you make payments you&#39;ll get the latest&#xA;policies of the nodes you care about over time. 
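To make the "a close that looks a certain way is a splice" idea concrete, here's a minimal Python sketch. It assumes the variant (discussed further down this thread) where the splice transaction pays back into the channel's known funding script; all names are illustrative, not from any real LN implementation:

```python
# Hypothetical sketch (not from any real LN implementation): classify a
# spend of a channel's funding output as a splice vs. a true close. The
# assumed chain signal is that a splice transaction reuses the channel's
# known funding script, so a graph-watching node can spot the splice and
# note the _new_ channel size instead of pruning the channel.

from dataclasses import dataclass

@dataclass
class TxOut:
    script_pubkey: bytes  # scriptPubKey, e.g. P2WSH of the 2-of-2 funding script
    value_sats: int

@dataclass
class Tx:
    outputs: list

def classify_funding_spend(funding_script: bytes, spend: Tx):
    """If any output of `spend` pays back to the known funding script,
    treat it as a splice and return the new capacity; otherwise it's a
    plain close and the channel can be dropped from the graph."""
    for out in spend.outputs:
        if out.script_pubkey == funding_script:
            return "splice", out.value_sats
    return "close", None

# Example: a 1M-sat channel is spliced into a 1.5M-sat one.
funding = bytes.fromhex("0020") + b"\x11" * 32   # fake P2WSH funding script
change = bytes.fromhex("0014") + b"\x22" * 20    # unrelated P2WPKH output
splice_tx = Tx(outputs=[TxOut(funding, 1_500_000), TxOut(change, 50_000)])
close_tx = Tx(outputs=[TxOut(change, 990_000)])

print(classify_funding_spend(funding, splice_tx))  # ('splice', 1500000)
print(classify_funding_spend(funding, close_tx))   # ('close', None)
```

A sender running something like this could keep the spliced channel in its graph with the updated capacity, rather than treating the spend as a permanent close while waiting on gossip.
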
Also consider that you might&#xA;fail to get &#34;reliable&#34; gossip simply due to your peer neighborhood&#xA;aggressively rate limiting gossip (they only allow 1 update a day for a&#xA;node, you updated your fee, oops, no splice msg for you).&#xA;&#xA;So it appears you consider &#34;wait N blocks before you close your&#xA;channels&#34; a foolproof solution? Why 12 blocks, why not 15? Or 144?&#xA;&#xA;From my PoV, the whole point of even signalling that a splice is ongoing&#xA;is for the senders/receivers: they can continue to send/recv payments over&#xA;the channel while the splice is in process. It isn&#39;t that a node isn&#39;t&#xA;getting any gossip, it&#39;s that if the node fails to obtain the gossip message&#xA;within the N-block window, then the channel has effectively closed&#xA;from their PoV, and it may be an hour+ until it&#39;s seen as a usable (new)&#xA;channel again.&#xA;&#xA;If there isn&#39;t a 100% reliable way to signal that a splice is in progress,&#xA;then this disincentivizes its usage, as routers can lose out on potential fee&#xA;revenue, and senders/receivers may grow to favor only very long-lived&#xA;channels. IMO _only_ having a gossip message simply isn&#39;t enough: there are&#xA;no real guarantees w.r.t _when_ all relevant parties will get your gossip&#xA;message. So why not give them a 100% reliable on-chain signal that says:&#xA;something is in progress here, stay tuned for the gossip message, whenever&#xA;you receive it.&#xA;&#xA;-- Laolu&#xA;&#xA;&#xA;On Tue, Jun 28, 2022 at 6:40 PM Rusty Russell &lt;rusty at rustcorp.com.au&gt; wrote:&#xA;&#xA;&gt; Hi Roasbeef,&#xA;&gt;&#xA;&gt; This is over-design: if you fail to get reliable gossip, your routing&#xA;&gt; will suffer anyway.  
Nothing new here.&#xA;&gt;&#xA;&gt; And if you *know* you&#39;re missing gossip, you can simply delay onchain&#xA;&gt; closures for longer: since nodes should respect the old channel ids for&#xA;&gt; a while anyway.&#xA;&gt;&#xA;&gt; Matt&#39;s proposal to simply defer treating onchain closes is elegant and&#xA;&gt; minimal.  We could go further and relax requirements to detect onchain&#xA;&gt; closes at all, and optionally add a perm close message.&#xA;&gt;&#xA;&gt; Cheers,&#xA;&gt; Rusty.&#xA;&gt;&#xA;&gt; Olaoluwa Osuntokun &lt;laolu32 at gmail.com&gt; writes:&#xA;&gt; &gt; Hi y&#39;all,&#xA;&gt; &gt;&#xA;&gt; &gt; This mail was inspired by this [1] spec PR from Lisa. At a high level, it&#xA;&gt; &gt; proposes that nodes add a delay between the time they see a channel closed&#xA;&gt; &gt; on chain and the time they remove it from their local channel graph. The&#xA;&gt; &gt; motive here is to give the gossip message that indicates a splice is in&#xA;&gt; &gt; process &#34;enough&#34; time to propagate through the network. If a node can see&#xA;&gt; &gt; this message before/during the splicing operation, then they&#39;ll be able to&#xA;&gt; &gt; relate the old and the new channels, meaning it&#39;s usable again by&#xA;&gt; &gt; senders/receivers _before_ the entire chain of transactions confirms on&#xA;&gt; &gt; chain.&#xA;&gt; &gt;&#xA;&gt; &gt; IMO, this sort of arbitrary delay (expressed in blocks) won&#39;t actually&#xA;&gt; &gt; address the issue in practice. The proposal suffers from the following&#xA;&gt; &gt; issues:&#xA;&gt; &gt;&#xA;&gt; &gt;   1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement&#xA;&gt; &gt;   takes longer than 2 hours to reach the &#34;economic majority&#34; of&#xA;&gt; &gt;   senders/receivers, then the channel won&#39;t be able to mask the splicing&#xA;&gt; &gt;   downtime.&#xA;&gt; &gt;&#xA;&gt; &gt;   2. Gossip propagation delay and offline peers. 
These days most nodes&#xA;&gt; &gt;   throttle gossip pretty aggressively. As a result, a pair of nodes doing&#xA;&gt; &gt;   several in-flight splices (inputs become double-spent or something, so&#xA;&gt; &gt;   they need to try a bunch) might end up being rate limited within the&#xA;&gt; &gt;   network, causing the splice update msg to be lost or delayed&#xA;&gt; &gt;   significantly (IIRC CLN resets these values after 24 hours). On top of&#xA;&gt; &gt;   that, if a peer is offline for too long (think mobile senders), then they&#xA;&gt; &gt;   may miss the update altogether as most nodes don&#39;t do a full historical&#xA;&gt; &gt;   _channel_update_ dump anymore.&#xA;&gt; &gt;&#xA;&gt; &gt; In order to resolve these issues, I think instead we need to rely on the&#xA;&gt; &gt; primary splicing signal being sourced from the chain itself. In other&#xA;&gt; &gt; words, if I see a channel close, and the closing transaction &#34;looks&#34; a&#xA;&gt; &gt; certain way, then I know it&#39;s a splice. This would be used in concert w/&#xA;&gt; &gt; any new gossip messages, as the chain signal is a 100% foolproof way of&#xA;&gt; &gt; letting an aware peer know that a splice is actually happening (not a&#xA;&gt; &gt; normal close). A chain signal doesn&#39;t suffer from any of the&#xA;&gt; &gt; gossip/time-related issues above, as the signal is revealed at the same&#xA;&gt; &gt; time a peer learns of a channel close/splice.&#xA;&gt; &gt;&#xA;&gt; &gt; Assuming we agree that a chain signal has some sort of role in the&#xA;&gt; &gt; ultimate plans for splicing, we&#39;d need to decide on exactly _what_ such a&#xA;&gt; &gt; signal looks like. Off the top, a few options are:&#xA;&gt; &gt;&#xA;&gt; &gt;   1. Stuff something in the annex. 
Works in theory, but not in practice, as&#xA;&gt; &gt;   bitcoind (being the dominant full node implementation on the p2p&#xA;&gt; &gt;   network, as well as what all the miners use) treats annexes as&#xA;&gt; &gt;   non-standard. Also the annex itself might have some fundamental issues&#xA;&gt; &gt;   that get in the way of its use altogether [2].&#xA;&gt; &gt;&#xA;&gt; &gt;   2. Re-use the anchors for this purpose. Anchors are nice as they allow&#xA;&gt; &gt;   for 1st/2nd/3rd party CPFP. As a splice might have several inputs and&#xA;&gt; &gt;   outputs, both sides will want to make sure it gets confirmed in a timely&#xA;&gt; &gt;   manner. Ofc, RBF can be used here, but that requires both sides to be&#xA;&gt; &gt;   online to make adjustments. Pre-signing can work too, but the&#xA;&gt; &gt;   effectiveness (minimizing chain cost while expediting confirmation)&#xA;&gt; &gt;   would be dependent on the fee step size.&#xA;&gt; &gt;&#xA;&gt; &gt;   In this case, we&#39;d use a different multi-sig output (both sides can&#xA;&gt; &gt;   rotate keys if they want to), and then roll the anchors into this&#xA;&gt; &gt;   splicing transaction. Given that all nodes on the network know what the&#xA;&gt; &gt;   anchor size is (assuming feature bit understanding), they&#39;re able to&#xA;&gt; &gt;   realize that it&#39;s actually a splice, and they don&#39;t need to remove it&#xA;&gt; &gt;   from the channel graph (yet).&#xA;&gt; &gt;&#xA;&gt; &gt;   3. Related to the above: just re-use the same multi-sig output. If nodes&#xA;&gt; &gt;   don&#39;t care all that much about rotating these keys, then they can just&#xA;&gt; &gt;   use the same output. This is trivially recognizable by nodes, as they&#xA;&gt; &gt;   already know the funding keys used, as they&#39;re in the&#xA;&gt; &gt;   channel_announcement.&#xA;&gt; &gt;&#xA;&gt; &gt;   4. 
OP_RETURN (yeh, I had to list it). Self-explanatory: push some bytes in&#xA;&gt; &gt;   an OP_RETURN and use that as the marker.&#xA;&gt; &gt;&#xA;&gt; &gt;   5. Fiddle w/ the locktime+sequence somehow to make it identifiable to&#xA;&gt; &gt;   verifiers. This might run into some unintended interactions if the&#xA;&gt; &gt;   inputs provided have either relative or absolute lock times. There might&#xA;&gt; &gt;   also be some interaction w/ the main construction for eltoo (uses the&#xA;&gt; &gt;   locktime).&#xA;&gt; &gt;&#xA;&gt; &gt; Of all the options, I think #2 makes the most sense: we already use&#xA;&gt; &gt; anchors to be able to do fee bumping after-the-fact for closing&#xA;&gt; &gt; transactions, so why not inherit them here. They make the splicing&#xA;&gt; &gt; transaction slightly larger, so maybe #3 (or something else) is a better&#xA;&gt; &gt; choice.&#xA;&gt; &gt;&#xA;&gt; &gt; The design space for splicing is preeetty large, so I figure the most&#xA;&gt; &gt; productive route might be discussing isolated aspects of it at a time.&#xA;&gt; &gt; Personally, I&#39;m not suuuper caught up w/ what the latest design drafts&#xA;&gt; &gt; are (aside from convos at the recent LN Dev Summit), but from my PoV, how&#xA;&gt; &gt; to communicate the splice to other peers has been an outstanding design&#xA;&gt; &gt; question.&#xA;&gt; &gt;&#xA;&gt; &gt; [1]: https://github.com/lightning/bolts/pull/1004&#xA;&gt; &gt; [2]: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020045.html&#xA;&gt; &gt;&#xA;&gt; &gt; -- Laolu&#xA;&gt; &gt; _______________________________________________&#xA;&gt; &gt; Lightning-dev mailing list&#xA;&gt; &gt; Lightning-dev at lists.linuxfoundation.org&#xA;&gt; &gt; https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev</html></oembed>