<oembed><type>rich</type><version>1.0</version><title>Carla Kirk-Cohen [ARCHIVE] wrote</title><author_name>Carla Kirk-Cohen [ARCHIVE] (npub17x…anw36)</author_name><author_url>https://yabu.me/npub17xugd458km0nm8edu8u2efuqmxzft3tmu92j3tyc0fa4gxdk9mkqmanw36</author_url><provider_name>njump</provider_name><provider_url>https://yabu.me</provider_url><html>📅 Original date posted:2023-07-19&#xA;🗒️ Summary of this message: The text is a summary of the annual specification meeting held in NYC. It includes discussions on package relay, commitment transactions, and zero fee commitments.&#xA;📝 Original message:&#xA;Hi List,&#xA;&#xA;At the end of June we got together in NYC for the annual specification&#xA;meeting. This time around we made an attempt at taking transcript-style&#xA;notes which are available here:&#xA;https://docs.google.com/document/d/1MZhAH82YLEXWz4bTnSQcdTQ03FpH4JpukK9Pm7V02bk/edit?usp=sharing&#xA;.&#xA;To decrease our dependence on my google drive I&#39;ve also included the&#xA;full set of notes at the end of this email (no promises about the&#xA;formatting however).&#xA;&#xA;We made a semi-successful attempt at recording larger group topics, so&#xA;these notes roughly follow the structure of the discussions that we had&#xA;at the summit (rather than being a summary). 
Speakers are not&#xA;attributed, and any mistakes are my own.&#xA;&#xA;Thanks to everyone who traveled far, Wolf for hosting us in style in&#xA;NYC and to Michael Levin for helping out with notes &lt;3&#xA;&#xA;# LN Summit - NYC 2023&#xA;&#xA;## Day One&#xA;&#xA;### Package Relay&#xA;- The current proposal for package relay is ancestor package relay:&#xA;  - One child can have up to 24 ancestors.&#xA;  - Right now, we only score mempool transactions by ancestry anyway, so&#xA;there isn’t much point in other types of packages.&#xA;- For base package relay, commitment transactions will still need to have&#xA;the minimum relay fee.&#xA;  - No batch bumping is allowed, because it can open up pinning attacks.&#xA;  - With one anchor, we can package RBF.&#xA;- Once we have package relay, it will be easier to get things into the&#xA;mempool.&#xA;- Once we have V3 transactions, we can drop minimum relay fees because we&#xA;are restricted to one child pays for one parent transaction:&#xA;  - The size of these transactions is limited.&#xA;  - You can’t arbitrarily attach junk to pin them.&#xA;- If we want to get rid of 330 sat anchors, we will need ephemeral anchors:&#xA;  - If there is an OP_TRUE output, it can be any value including zero.&#xA;  - It must be spent by a child in the same package.&#xA;  - Only one child can spend the anchor (because it’s one output).&#xA;  - The parent must be zero fee because we never want it in a block on its&#xA;own, or relayed without the child.&#xA;  - If the child is evicted, we really want the parent to be evicted as&#xA;well (there are some odd edge cases at the bottom of the mempool, so zero&#xA;ensures that we’ll definitely be evicted).&#xA;- The bigger change is with HTLCs:&#xA;  - With SIGHASH_ANYONECANPAY, your counterparty can inflate the size of&#xA;your transaction - eg: HTLC success, anyone can attach junk.&#xA;  - How much do we want to change here?&#xA;- So, we can get to zero fee commitment transactions and one 
(ephemeral)&#xA;anchor per transaction.&#xA;  - With zero fee commitments, where do we put trimmed HTLCs?&#xA;     - You can just drop it in an OP_TRUE, and reasonably expect the miner&#xA;to take it.&#xA;- In a commitment with to_self and to_remote and an ephemeral anchor that&#xA;must be spent, you can drop the 1 block CSV in the transaction that does&#xA;not have a revocation path (ie, you can drop it for to_remote).&#xA;  - Any spend of this transaction must spend the one anchor in the same&#xA;block.&#xA;  - No other output is eligible to have a tx attached to it, so we don’t&#xA;need the delay anymore.&#xA;     - Theoretically, your counterparty could get hold of your signed local&#xA;copy, but then you can just RBF.&#xA;- Since these will be V3 transactions, the size of the child must be small&#xA;so you can’t pin it.&#xA;  - Both parties can RBF the child spending the ephemeral anchor.&#xA;  - This isn’t specifically tailored to lightning, it’s a more general&#xA;concept.&#xA;  - Child transactions of V3 are implicitly RBF, so we won’t have to worry&#xA;about it not being replaceable (no inheritance bug).&#xA;- In general, when we’re changing the mempool policy we want to make the&#xA;minimal relaxation that allows the best improvement.&#xA;- We need “top of block” mempool / cluster mempool:&#xA;  - The mempool can have clusters of transactions (parents/children&#xA;arranged in various topologies) and the whole mempool could even be a&#xA;single cluster (think a trellis of parents and children “zigzagging”).&#xA;  - The mining algorithm will pick one “vertical” using ancestor fee rate.&#xA;  - There are some situations where you add a single transaction and it&#xA;would completely change the order in which our current selection picks&#xA;things.&#xA;  - Cluster mempool groups transactions into “clusters” that make this&#xA;easier to sort and reason about.&#xA;  - It can be expensive (which introduces a risk of denial of service), but&#xA;if we have limits 
on cluster sizes then we can limit this.&#xA;  - This is the only way to get package RBF.&#xA;- How far along is all of this?&#xA;  - BIP331: the P2P part that allows different package types is moving&#xA;along and implementation is happening. This is done in a way that would&#xA;allow us to add different types of packages in future if we need them.&#xA;     - There are some improvements being made to core’s orphan pool,&#xA;because we need to make sure that peers can’t knock orphans out of your&#xA;pool that you may later need to retrieve as package ancestors.&#xA;     - We reserve some spots with tokens, and if you have a token we keep&#xA;some space for you.&#xA;  - V3 transactions: implemented on top of package relay, since they don’t&#xA;really make sense without it. This is an opt-in regime where you make&#xA;things easier to RBF.&#xA;  - Ephemeral Anchors: on top of package relay and V3.&#xA;  - Cluster Mempool: This is further out, but there’s been some progress.&#xA;Right now people are working on the linearization algorithm.&#xA;- If these changes don’t suit lightning’s use case, now is the time to&#xA;speak because it’s all being worked on.&#xA;  - In what way does it not fit LN, as it’s currently designed?&#xA;     - With the V3 paradigm, to stop all pinning vectors, HTLC transactions&#xA;will need to get anchors and you will have to drop ANYONECANPAY (to use&#xA;anchors instead).&#xA;     - Could we fix this with additional restrictions on V3 (or a V4)?&#xA;     - You could do something where V4 means that you can have no ancestors&#xA;and no descendants (in the mempool).&#xA;     - V3 restricts the child, you could also restrict the parent further.&#xA;     - You could have a heuristic on the number of inputs and outputs, but&#xA;anyone can pay or add inputs.&#xA;     - You could commit to the whole thing being some size by setting a&#xA;series of bits in sequence to mark maximum size but that would involve&#xA;running script (which is annoying).&#xA; 
    - The history of V3 was to allow a parent of any size because&#xA;commitment transactions can be any size.&#xA;- What about keeping HTLCs the way they are today?&#xA;  - A lot of other things are less pinnable, maybe that’s ok? It’s a step&#xA;forward.&#xA;  - The long term solution is to change relay policy to top of mempool. We&#xA;shouldn’t be doing things until the long term solution is clear, and we&#xA;shouldn’t change the protocol in a way that doesn’t fit with the long term.&#xA;  - We already do a lot of arbitrary things, if we were starting from day&#xA;one we wouldn’t do HTLCs with anchors (it’s too much bloat), and being&#xA;unable to RBF is more bloat because you overpay.&#xA;  - If the remote commitment is broadcast, you can spend the HTLC with V2.&#xA;You can’t add a V3 child (it won’t get in the mempool).&#xA;  - If you want to be consistent, even when it’s the remote commit you need&#xA;a presigned transaction, we should just use it as designed today.&#xA;- What is the “top of mempool” assumption?&#xA;  - If large transactions are at the top of your mempool, you (a miner)&#xA;want transactions with higher fee rates to increase your total revenue.&#xA;  - Today, you can’t really easily answer or reason about these questions.&#xA;  - We want to accept transactions in our mempool in a denial of service&#xA;resistant way that will strictly increase our miner fees.&#xA;  - If a transaction is at the top of your mempool, you may be more willing&#xA;to accept a replacement fee. If it’s way down in the mempool, you would&#xA;probably replace more slowly.&#xA;  - It’s conceivable that we could get here, but it’s years off.&#xA;     - Once we have this, pinning only happens with large transactions at&#xA;the bottom of the mempool. 
If you just slow roll these transactions, you&#xA;can just relay what you’ve got.&#xA;     - If somebody comes and replaces the bottom with something small&#xA;that’ll be mined in the next two blocks, you clearly want to accept that.&#xA;- Does cluster mempool fix rule 3?&#xA;  - No, but it helps.&#xA;  - Today when you get a transaction you don’t know which block it’s going&#xA;in - we don’t have a preference ordering.&#xA;  - Miners don’t make blocks as they go - with cluster mempool you can make&#xA;very fast block templates.&#xA;  - When talking about relay, you can ignore block making and just get a&#xA;very good estimate.&#xA;- The question is: when do we jump?&#xA;  - Wait until V3? Package Relay?&#xA;     - Package relay will help. You still negotiate fees but they don’t&#xA;matter as much.&#xA;     - We’d like to kill update fee and have a magic number for fees, can we&#xA;do that when we get package relay?&#xA;     - Package relay is the hard part, V3 should be relatively easier after&#xA;that.&#xA;     - When we get V3 we can drop to zero fee.&#xA;- Is there a future where miners don’t care about policy at all?&#xA;  - Say, the block that I’m mining has V3 transactions. They’re just&#xA;maximizing fees so they’ll accept anything out of band, ignoring these new&#xA;policy rules.&#xA;  - Accelerators are already starting to emerge today - how are they doing&#xA;it?&#xA;  - We’re carving out a small space in which mempool rules work, and it’s&#xA;okay to work around it.&#xA;  - If a miner mines it, great - that’s what we want. 
The problem is not&#xA;getting mined (ie pinning).&#xA;- Figuring this out is not simple:&#xA;  - Today there are times where we accept replacements when we should&#xA;reject them and times where we reject replacements when we should accept&#xA;them.&#xA;  - There can be situations where miners will mine things that aren’t&#xA;incentive compatible (ie, not the best block template).&#xA;- What if somebody pays out of band not to mine?&#xA;- Ephemeral anchors are interesting, they never enter the UTXO set - if the&#xA;child gets evicted, you get evicted.&#xA;  - The first transaction being zero is optimal, want to be sure it’ll be&#xA;evicted (there are some mempool quirks).&#xA;  - It must be zero fee so that it will be evicted.&#xA;- Should we add trimmed HTLCs to the ephemeral anchor?&#xA;  - We don’t want to get above min relay fee because then we could hang&#xA;around in the mempool.&#xA;  - The eltoo implementation currently does this.&#xA;  - You can’t keep things in OP_TRUE because they’ll be taken.&#xA;  - You can also just put it in fees as before.&#xA;- More on cluster mempools:&#xA;  - It can be used for block selection, but it’s currently focused on&#xA;mempool selection.&#xA;  - You can simulate template selection with it.&#xA;  - When you’re mining, you have to pick from an ancestor downwards.&#xA;  - First we linearize the transactions to flatten the structure&#xA;intelligently (by topology and fee rate).&#xA;  - Then we figure out the best ordering of this flat structure.&#xA;  - Each chunk in a cluster is less than 75 transactions, and a cluster has&#xA;chunks of transactions.&#xA;  - If you would include a transaction with the ancestor, it goes in the&#xA;same chunk. 
Otherwise it goes in the next chunk.&#xA;  - Miners select the highest fee rate chunks, lower fee rate ones can&#xA;safely be evicted.&#xA;  - For replacement, you can just check replacement for a single cluster,&#xA;rechunk it and then resort the chunks.&#xA;     - It must beat the fee rate of the chunks to get in.&#xA;     - You can check how much it beats the chunk by, whether it would go in&#xA;the next block(s) and then decide.&#xA;  - Chunk ordering takes into account transaction size, beyond 25&#xA;transactions we just do ancestor set feerate.&#xA;  - One of the limits is going away, one is sticking around. We still have&#xA;sibling eviction issues.&#xA;  - Is one of the issues with chunks that they’re limited by transaction&#xA;weight, 101 kilo-v-bytes, which is the maximum package size?&#xA;     - We should be okay, these limits are higher than present.&#xA;     - You can bound chunk size pretty easily.&#xA;  - If we get chunk fee rate replacement then you can do batch fee bumping&#xA;(eg, a bunch of ephemeral anchors that are all batched together).&#xA;- Are there long term policy implications for privacy for ephemeral anchors?&#xA;  - You’re essentially opting into transaction sponsors?&#xA;  - If everyone uses V3 eventually there’s no issue.&#xA;  - For right now, it would be nice if V3 is only for unilateral closes.&#xA;  - It’s also useful in a custodial wallet setting, where you have one team&#xA;creating on chain transactions and the other attaching fees. This is a&#xA;common accounting headache. 
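As a rough illustration of the chunking rule described above (a transaction that raises its ancestors' combined feerate joins their chunk, otherwise it starts the next chunk; miners then take chunks from the highest feerate down), here is a small sketch. This is illustrative only, not Bitcoin Core's actual cluster mempool implementation; the input format and numbers are assumptions.

```python
# Sketch of chunking a linearized cluster: walk the linearization and
# merge a tail chunk backwards whenever it pays a higher feerate than
# the chunk before it (a fee-bumping child drags ancestors into its chunk).

def feerate(fee, size):
    return fee / size

def chunk(linearized):
    """linearized: list of (fee_sats, size_vbytes) in a valid topological order."""
    chunks = []
    for fee, size in linearized:
        chunks.append([fee, size])
        while len(chunks) >= 2 and feerate(*chunks[-1]) > feerate(*chunks[-2]):
            f, s = chunks.pop()
            chunks[-1][0] += f
            chunks[-1][1] += s
    return [(f, s, f / s) for f, s in chunks]

# A zero-fee parent bumped by a small high-fee child lands in one chunk:
print(chunk([(0, 1000), (5000, 200)]))   # one chunk at ~4.17 sat/vB
```

In the example, the zero-fee commitment plus its anchor spend are evaluated together, which is exactly why a lower-feerate chunk can be safely evicted as a unit.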
People can also non-interactively fee bump.&#xA;&#xA;### Taproot&#xA;- Spec is still in draft right now, and the code is a bit ahead of the test&#xA;vectors.&#xA;- The biggest change that has been made is around anchors, which become&#xA;more complicated with taproot:&#xA;  - The revocation path on to_local takes the script path, so now we need&#xA;to reveal data for the anchor.&#xA;  - The downside is that revocation is more expensive - 32 more bytes in&#xA;the control block.&#xA;- The to_remote has a NUMS point, previously we just had multisig keys:&#xA;  - If you’re doing a rescan, you won’t know those keys.&#xA;  - Now you just need to know the NUMS point and you can always rescan.&#xA;  - The NUMS point itself is pretty verifiable, you just start with a&#xA;string and hash it.&#xA;  - It is constant, you just use it randomized with your key.&#xA;- The internal pubkey is constant on to_remote.&#xA;- Co-op close [broke into several different discussions]:&#xA;  - The idea here is to remove negotiation, since we’ve had disagreement&#xA;issues in the past.&#xA;  - In this version, the initiator just accepts the fee.&#xA;  - Do we want negotiation?&#xA;     - In the past we’ve had bugs with fee rate estimates that won’t budge.&#xA;  - Why don’t we just pick our fees and send two sets of signatures?&#xA;     - This would no longer be symmetric.&#xA;     - What if nobody wants to close first? We’d need to work through the&#xA;game theory.&#xA;     - The person who wants their funds has an incentive.&#xA;  - Why is RBF so hard for co-op close?&#xA;     - Closing should be marked as RBF, there’s no reason not to.&#xA;     - Just pick your fee rate, pay it and then come back and RBF if you’d&#xA;like. 
You can bump/broadcast as much as you like.&#xA;     - If we’re going to have something like that where we iterate, why&#xA;don’t we just do the simple version where we pick a fee and sign?&#xA;     - If you have to pay the whole fee, you have less incentive to sign.&#xA;  - Why is this linked to taproot work?&#xA;     - It needs to change anyway, and we need to add nonces.&#xA;  - What about, whoever wants to close sends a fee rate (paying the fees)&#xA;and the responder just sends a signature?&#xA;     - If you don’t have enough balance, you can’t close. But why do you&#xA;care anyway, you have no funds?&#xA;     - We can do this as many times as we want.&#xA;  - Shutdown is still useful to clear the air on the channel.&#xA;  - When you reconnect, you start a new interaction completely.&#xA;  - TL;DR:&#xA;     - Shutdown message stays.&#xA;     - You send a signature with a fee rate.&#xA;     - The remote party signs it.&#xA;     - If they disagree, you do it again.&#xA;     - It’s all RBF-able.&#xA;     - You must retransmit shutdown, and you must respond with a shutdown.&#xA;     - You can send nonces at any time:&#xA;     - In revoke and ack.&#xA;     - On channel re-establish.&#xA;- For taproot/musig2 we need nonces:&#xA;  - Today we store the commitment signature from the remote party. 
We don’t&#xA;need to store our own signature - we can sign at time of broadcast.&#xA;  - To be able to sign you need the verification nonce - you could remember&#xA;it, or you could use a counter:&#xA;     - Counter based:&#xA;     - We re-use shachain and then just use it to generate nonces.&#xA;     - Start with a seed, derive from that, use it to generate nonces.&#xA;     - This way you don’t need to remember state, since it can always be&#xA;generated from what you already have.&#xA;     - Why is this safe?&#xA;     - We never re-use nonces.&#xA;     - The remote party never sees your partial signature.&#xA;     - The message always stays the same (the dangerous re-use case is&#xA;using the same nonce for different messages).&#xA;     - If we used the same nonce for different messages we could leak our&#xA;key.&#xA;     - You can combine the sighash + nonce to make it unique - this also&#xA;binds more.&#xA;     - Remote party will only see the full signature on chain, never your&#xA;partial one.&#xA;  - Each party has sign and verify nonces, 4 total.&#xA;  - Co-op close only has 2 because it’s symmetric.&#xA;&#xA;### Gossip V1.5 vs V2&#xA;- How much do we care about script binding?&#xA;  - If it’s loose, it can be any script - you can advertise any UTXO.&#xA;  - You reveal less information, just providing a full signature with the&#xA;full taproot public key.&#xA;  - If it’s tight, you have to provide two keys and then use the BIP 86&#xA;tweak to check that it’s a 2-of-2 multisig.&#xA;- Should we fully bind to the script, or just allow any taproot output?&#xA;  - Don’t see why we’d want additional overhead.&#xA;  - Any taproot output can be a channel - let people experiment.&#xA;  - We shouldn’t have cared in the first place, so it doesn’t matter what&#xA;it’s bound to.&#xA;  - It’s just there for anti-DOS, just need to prove that you can sign.&#xA;- Let every taproot output be a lightning channel, amen.&#xA;- We’re going to decouple:&#xA;  - You still need a 
UTXO but it doesn’t matter what it looks like.&#xA;  - This also allows other channel types in future.&#xA;  - We send:&#xA;     - UTXO: unspent and in the UTXO set&#xA;     - Two node pubkeys&#xA;     - One signature&#xA;- How much do we care about amount binding?&#xA;  - Today it is exact.&#xA;     - People use it for capacity graphs.&#xA;     - Graph go up.&#xA;     - We can watch the chain for spends when we know which UTXO to watch&#xA;per-channel.&#xA;  - Is there an impact on pathfinding if we over-advertise?&#xA;     - We use capacity to pathfind.&#xA;     - What’s the worst case if people lie? We don’t use them.&#xA;  - If we’ve already agreed that this can be a UTXO that isn’t a channel,&#xA;then it shouldn’t matter.&#xA;  - If you allow value magnification, we can use a single UTXO to claim for&#xA;multiple channels. Even in the most naive version (say 5x), you’re only&#xA;revealing 20% of your UTXOs.&#xA;  - How much leverage can we allow? The only limit is denial of service.&#xA;  - There’s the potential for a market for UTXOs.&#xA;- There’s a privacy trade-off:&#xA;  - If you one-to-one map them, then there’s no privacy gain.&#xA;  - Do we know that you get substantial privacy?&#xA;     - Even if you have two UTXOs and two channels, those UTXOs are now not&#xA;linked (because you can just use the first one to advertise).&#xA;     - This is only assuming that somebody implements it / infrastructure is&#xA;built out.&#xA;     - People could create more elaborate things over time, even if&#xA;implementations do the “dumb” way.&#xA;- Gossip 1.5 (ie, with amount binding) fits in the current flow, V2 (ie,&#xA;without amount binding) has a very different scope.&#xA;  - It’s a big step, and you don’t truly know until you implement it.&#xA;  - What about things like: a very large node and a very small node, whose&#xA;announced UTXO do you use?&#xA;- We decided not to put UTXOs in node announcement, so we’d put it in&#xA;channel announcement:&#xA;  - Sometimes 
there’s a UTXO, sometimes there isn’t.&#xA;  - You look at a node’s previous channels to see if they still have&#xA;“quota”.&#xA;  - If you don’t have “quota” left, you have to include a signature TLV.&#xA;- With the goal of publicly announcing taproot channels, 1.5 gets us there&#xA;and is a much smaller code change.&#xA;- We’ve talked a lot about capacity for pathfinding, but we haven’t really&#xA;touched on control valves like max HTLC:&#xA;  - Currently we don’t use these valves to tune our pathfinding, people&#xA;don’t use it.&#xA;  - If we get better here, we won’t need capacity.&#xA;  - This value is already &lt; 50% anyway.&#xA;- If we don’t un-bind amounts now, when will we do it?&#xA;  - It’s always a lower priority and everyone is busy.&#xA;  - If we allow overcommitting by some factor now, it’s not unrealistic&#xA;that it will allow some degree of privacy.&#xA;  - Between these features, we have opened the door to leasing UTXOs:&#xA;     - Before we do more over-commitment, let’s see if anybody uses it?&#xA;- We add channel capacity to the channel announcement with a feature bit:&#xA;  - If we turn the feature off, we are one-to-one mapped.&#xA;  - But a node can’t use the upgraded version until everyone is upgraded?&#xA;     - Our current “get everyone upgraded” cycle is 18 months (or a CVE).&#xA;     - If you’re upgrading to 2x multiplier, nobody on 1x will accept that&#xA;gossip.&#xA;     - People will not pay for privacy (via lost revenue of people not&#xA;seeing their gossip).&#xA;     - This is additive to defeating chain analysis.&#xA;     - Private UTXO management is already complicated, we don’t know the&#xA;ideal that we’re working towards.&#xA;- What about if we just set it to 2 today?&#xA;  - Is 2 qualitatively better than 1 (without script binding) today?&#xA;  - Will a marketplace magically emerge if we allow over-commitment?&#xA;- We don’t know the implications of setting a global multiplier for routing&#xA;or denial of service, and we 
don’t have a clear view of what privacy would&#xA;look like (other than “some” improvement).&#xA;- We agree that adding a multiplier doesn’t break the network.&#xA;  - People with a lower value will see a subnet when we upgrade.&#xA;- We’re going to go with gossip “1.75”:&#xA;  - Bind to amount but not script.&#xA;  - We include a TLV cut out that paves the way to overcommitment.&#xA;&#xA;### Multi-Sig Channel Parties&#xA;- There are a few paths we could take to get multi-sig for one channel&#xA;party:&#xA;  - Script: just do it on the script level for the UTXO, but it’s heavy&#xA;handed.&#xA;  - FROSTy: you end up having to do a bunch of things around fault&#xA;tolerance which require a more intense setup. You also may not want the&#xA;shachain to be known by all of the parties in the setup (we have an ugly&#xA;solution for this, we think).&#xA;  - Recursive musig:&#xA;- Context: you have one key in the party, but you actually want it to be&#xA;multiple keys under the hood. You don’t want any single party to know the&#xA;revocation secrets, so you have to each have a part and combine them.&#xA;- Ugliest solution: just create distinct values and store them.&#xA;- Less ugly solution uses multiple shachains:&#xA;  - Right now we have a shachain, and we reveal two leaves of it.&#xA;  - You have 8 shachains, and you XOR them all together.&#xA;     - Why do we need 8? That’ll serve a 5-of-7.&#xA;     - Maybe we need 21? 
7 choose 5, we’re not sure.&#xA;  - How does this work with K-of-N?&#xA;     - Each party has a piece, and you can always combine them in different&#xA;combinations (of K pieces) to get to the secret you’re using.&#xA;&#xA;### PTLCs&#xA;- We can do PTLCs in two ways:&#xA;  - Regular musig&#xA;  - Adaptor signatures&#xA;- Do they work with trampoline as defined today?&#xA;  - The sender picks all the blinding factors today, so it’s fine.&#xA;- There’s a paper called: splitting locally while routing&#xA;interdimensionally:&#xA;  - You can let intermediate nodes do splitting because they know all the&#xA;local values.&#xA;  - They can generate new blinding factors and split out to the next node.&#xA;  - Adaptor signatures could possibly be combined for PTLCs that get fanned&#xA;out then combined.&#xA;- There are a few options for redundant overpayment (ie, “stuckless”&#xA;payments):&#xA;  - Boomerang:&#xA;     - Preimages are coefficients of a polynomial, and you commit to the&#xA;polynomial itself.&#xA;     - If you have a degree P polynomial and you take P+1 shares, then you&#xA;can claim in the other direction.&#xA;     - Quite complex.&#xA;     - You have to agree on the number of splits in advance.&#xA;  - Spear:&#xA;     - H2TLC: there are two payment hashes per-HTLC, one is from the sender&#xA;and one is from the invoice.&#xA;     - When you send a payment, the sender only reveals the right number of&#xA;sender preimages.&#xA;     - This also gives us HTLC acknowledgement, which is nice.&#xA;     - You can concurrently split, and then spray and pray.&#xA;     - Interaction is required to get preimages.&#xA;- Do we want to add redundant overpayment with PTLCs?&#xA;  - We’ll introduce a communication requirement.&#xA;  - Do we need to decide that before we do PTLCs?&#xA;     - Can we go for the simplest possible option first and then add it?&#xA;     - For spear, yes - the intermediate nodes don’t know that it’s two&#xA;hashes.&#xA;     - We could build them out 
without thinking about overpayment, and then&#xA;mix in a sender secret so that we can claim a subset.&#xA;  - We’ll have more round trips, but you also have the ability to ACK HTLCs&#xA;that have arrived.&#xA;  - Spray and pray uses the same amount as you would with our current “send&#xA;/ wait / send”, it’s just concurrent, not serial.&#xA;- Is it a problem that HTLCs that aren’t settled don’t pay fees?&#xA;  - You’re paying the fastest routes.&#xA;  - Even if it’s random, you still get a constant factor of what we should&#xA;have gotten otherwise.&#xA;- It makes sense to use onion messages if they’re available to us.&#xA;- Are we getting payment acknowledgement? Seems so!&#xA;&#xA;## Day Two&#xA;&#xA;### Hybrid Approach to Channel Jamming&#xA;- We’ve been talking about jamming for 8 years, back and forth on the&#xA;mailing list.&#xA;- We’d like to find a way to move forward so that we can get something&#xA;done.&#xA;- Generally when we think about jamming, there are three “classes” of&#xA;mitigations:&#xA;  - Monetary: unconditional fees, implemented in various ways.&#xA;  - Reputation: locally assessed (global is terrible).&#xA;  - Scarce Resources: POW, stake, tokens.&#xA;- The problem is that none of these solutions work in isolation.&#xA;  - Monetary: the cost that will deter an attacker is unreasonable for an&#xA;honest user, and the cost that is reasonable for an honest user is too low&#xA;for an attacker.&#xA;  - Reputation: any system needs to define some threshold that is&#xA;considered good behavior, and an attacker can aim to fall just under it.&#xA;Eg: if you need a payment to resolve in 1 minute, you can fall just under&#xA;that bar.&#xA;  - Scarce resources: like with monetary, pricing doesn’t work out.&#xA;     - Paper: proof of work doesn’t work for email spam, as an example.&#xA;     - Since scarce resources can be purchased, they could be considered a&#xA;subset of monetary.&#xA;- There is no silver bullet for jamming mitigation.&#xA;- 
Combination of unconditional fees and reputation:&#xA;  - Good behavior grants access to more resources, bad behavior loses it.&#xA;  - If you want to fall just below that threshold, we close the gap with&#xA;unconditional fees.&#xA;- Looking at these three classes implemented in isolation, are there any&#xA;unresolved questions that people have - “what about this”?&#xA;  - Doesn’t POW get you around the cold start problem in reputation where&#xA;if you want to put money in to quickly bootstrap you can?&#xA;     - Since POW can be rented, it’s essentially a monetary solution - just&#xA;extra steps.&#xA;     - We run into the same pricing issues.&#xA;- Why these combinations?&#xA;  - Since scarce resources are essentially monetary, we think that&#xA;unconditional fees are the simplest possible monetary solution.&#xA;- Unconditional Fees:&#xA;  - As a sender, you’re building a route and losing money if it doesn’t go&#xA;through?&#xA;     - Yes, but they only need to be trivially small compared to success&#xA;case fee budgets.&#xA;     - You can also eventually succeed so long as you retry enough, even if&#xA;failure rates are very high.&#xA;  - How do you know that these fees will be small? The market could decide&#xA;otherwise.&#xA;     - Routing nodes still need to be competitive. If you put an&#xA;unconditional fee of 100x the success case, senders will choose to not send&#xA;through you because you have no incentive to forward.&#xA;     - We could also add an in-protocol limit or sender-side advisory.&#xA;  - With unconditional fees, a fast jamming attack is very clearly paid for.&#xA;- Reputation:&#xA;  - The easiest way to jam somebody today is to send a bunch of HTLCs&#xA;through them and hold them for two weeks. 
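To make the pricing intuition above concrete, here is a toy back-of-envelope comparison. All numbers are invented for illustration (the actual magnitude of unconditional fees is an open design question in the discussion): an honest sender who succeeds within a few attempts pays a trivial surcharge, while a jammer streaming failures pays on every HTLC.

```python
# Toy model of unconditional fees (assumed numbers, not from any spec):
# the fee is paid whether or not the HTLC settles.

success_fee = 1000          # sats: ordinary success-case routing fee
unconditional_fee = 10      # sats: paid on every attempt, settled or not

def sender_cost(attempts):
    """Total cost for an honest payment that succeeds on the last attempt."""
    return attempts * unconditional_fee + success_fee

def jammer_cost(failed_htlcs):
    """A jammer streaming failed HTLCs pays the unconditional fee each time."""
    return failed_htlcs * unconditional_fee

print(sender_cost(3))        # 1030 sats: ~3% over the success-case fee
print(jammer_cost(100_000))  # 1,000,000 sats to stream 100k failures
```

The asymmetry is the point: failure is nearly free for honest senders who retry a handful of times, but a fast jamming attack is very clearly paid for.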
We’re focusing on reputation to&#xA;begin with, because in this case we can quite easily identify that people&#xA;are doing something wrong (at the extremes).&#xA;  - If you have a reputation score that blows up on failed attempts,&#xA;doesn’t that fix it without upfront fees?&#xA;     - We have to allow some natural rate of failure in the network.&#xA;     - An attacker can still aim to fall just below that failure threshold&#xA;and go through multiple channels to attack an individual channel.&#xA;     - There isn’t any way to set a bar that an attacker can’t fall just&#xA;beneath.&#xA;     - Isn’t this the same for reputation? We have a suggestion for&#xA;reputation but all of them fail because they can be gamed below the bar.&#xA;  - If reputation matches the regular operation of nodes on the network,&#xA;you will naturally build reputation up over time.&#xA;     - If we do not match reputation accumulation to what normal nodes do,&#xA;then an attacker can take some other action to get more reputation than the&#xA;rest of the network. We don’t want attackers to be able to get ahead of&#xA;regular nodes.&#xA;     - Let’s say you get one point for success and one for failure, a&#xA;normal node will always have bad reputation. An attacker could then send 1&#xA;sat payments all day long, pay a fee for it and gain reputation.&#xA;- Can you define jamming? 
Is it stuck HTLCs or a lot of 1 sat HTLCs&#xA;spamming up your DB?&#xA;  - Jamming is holding HTLCs, or streaming constant failed HTLCs, to&#xA;prevent a channel from operating.&#xA;  - This can be achieved with slots or liquidity.&#xA;- Does the system still work if users are playing with reputation?&#xA;  - In the steady state, it doesn’t really matter whether a node has a good&#xA;reputation or not.&#xA;  - If users start to set reputation in a way that doesn’t reflect normal&#xA;operation of the network, it will only affect their ability to route when&#xA;under attack.&#xA;- Isn’t reputation monetary as well, as you can buy a whole node?&#xA;  - There is a connection, and yes in the extreme case you can buy an&#xA;entire identity.&#xA;  - Even if you do this, the resource bucketing doesn’t give you a “golden&#xA;ticket” to consume all slots/liquidity with good reputation, so you’re&#xA;still limited in what you can do.&#xA;- Can we learn anything from research elsewhere / the way things are done&#xA;on the internet?&#xA;  - A lot of our confidence that these solutions don’t work in isolation is&#xA;based on previous work looking at spam on the internet.&#xA;  - Lightning is also unique because it is a monetary network - we have&#xA;money built in, so we have different tools to use.&#xA;- To me, it seems like if the scarce resource that we’re trying to allocate&#xA;is HTLC slots and upfront fees you can pay me upfront fees for the worst&#xA;case (say two weeks) and then if it settles in 5 seconds you give it back?&#xA;  - The dream solution is to only pay for the amount of time that a HTLC is&#xA;held in flight.&#xA;  - The problem here is that there’s no way to prove time when things go&#xA;wrong, and any solution without a universal clock will fall back on&#xA;cooperation which breaks down in the case of an attack.&#xA;  - No honest user will be willing to pay the price for the worst case,&#xA;which gets us back to the pricing issue.&#xA;  - There’s also an 
incentives issue when the “rent” we pay for this two-week&#xA;worst case is more than the forwarding fee, so a router may be&#xA;incentivized to just hang on to that amount and bank it.&#xA;  - We’ve talked about forwards and backwards fees extensively on the&#xA;mailing list:&#xA;     - They’re not large enough to be enforceable, so somebody always has&#xA;to give the money back off chain.&#xA;     - This means that we rely on cooperation for this refund.&#xA;     - The complexity of this type of system is very high, and we start to&#xA;open up new “non-cooperation” concerns - can we be attacked using this&#xA;mechanism itself?&#xA;     - Doesn’t an attacker need to be directly connected to you to steal in&#xA;the non-cooperative case?&#xA;     - At the end of the day, somebody ends up getting robbed when we can’t&#xA;pull the money from the source (the attacker).&#xA;- Does everybody feel resolved on the statement that we need to take this&#xA;hybrid approach to clamp down on jamming? Are there any “what about&#xA;solution X” questions left for anyone? Nothing came up.&#xA;&#xA;### Reputation for Channel Jamming&#xA;- Resource bucketing allows us to limit the number of slots and amount of&#xA;liquidity that are available to nodes that do not have good reputation.&#xA;  - No reputation system is perfect, and we will always have nodes with&#xA;low-to-no activity, or that are new to the network, that we can’t form&#xA;reputation scores for.&#xA;  - It would be a terrible outcome for lightning to just drop these HTLCs,&#xA;so we reserve some portion of resources for them.&#xA;- We have two buckets: protected and general (split 50/50 for the purposes&#xA;of explanation, but we’ll find more intelligent numbers with further&#xA;research):&#xA;  - In the normal operation of the network, it doesn’t matter if you get&#xA;into the protected slots. 
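As a rough illustration, the two-bucket idea might be sketched as follows. This is not from any spec: the slot counts, the 50/50 split, and the reputation flag are all illustrative assumptions.

```python
# Hypothetical sketch of protected/general resource bucketing.
# Slot counts and the 50/50 split are illustrative only.
from dataclasses import dataclass

@dataclass
class Buckets:
    protected_slots: int = 241  # roughly half of the 483 HTLC slots
    general_slots: int = 242
    protected_used: int = 0
    general_used: int = 0

def admit_htlc(b: Buckets, good_reputation: bool) -> str:
    """Place an incoming HTLC in a bucket, or drop it."""
    # Peers with good reputation may use the protected bucket.
    if good_reputation and b.protected_slots - b.protected_used > 0:
        b.protected_used += 1
        return "protected"
    # Everyone, including new or low-activity peers, shares the general bucket.
    if b.general_slots - b.general_used > 0:
        b.general_used += 1
        return "general"
    # Under attack the general bucket fills up, so low-reputation
    # HTLCs see degraded service and are dropped.
    return "drop"
```

In the steady state the general bucket rarely fills, so this rule only bites during an attack.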
When everyone is using the network as usual,&#xA;things clear out quickly, so the general bucket won’t fill up.&#xA;  - When the network comes under attack, an attacker will fill up slots and&#xA;liquidity in the general bucket. When this happens, only nodes with good&#xA;reputation will be able to use the protected slots; other HTLCs will be&#xA;dropped.&#xA;  - During an attack, nodes that don’t have a good reputation will&#xA;experience lower quality of service - we’ll gradually degrade.&#xA;- What do you mean by the steady state?&#xA;  - Nobody is doing anything malicious, payments are clearing out as usual&#xA;- not sitting on the channel using all 483 slots.&#xA;- We decide which bucket the HTLC goes into using two signals:&#xA;  - Reputation: whether the upstream node has good reputation with our&#xA;local node.&#xA;  - Endorsement: whether the upstream node has indicated that the HTLC is&#xA;expected to be honest (0 if uncertain, 1 if expected to be honest).&#xA;  - If reputation &amp;&amp; endorsement, then we’ll allow the HTLC into protected&#xA;slots and forward the HTLC on with endorsed=1.&#xA;  - We need reputation to add a local viewpoint to this endorsement signal&#xA;- otherwise we can trivially be jammed if we just copy what the&#xA;incoming peer said.&#xA;  - We need endorsement to be able to propagate this signal over multiple&#xA;hops - once it drops, it’s dropped for good.&#xA;  - There’s a privacy question for when senders set endorsed:&#xA;     - You can flip a coin, or set the endorsed field for your payments at&#xA;the same proportion as you endorse forwards.&#xA;- We think about reputation in terms of the maximum amount of damage that&#xA;can be done by abusing it:&#xA;  - Longest CLTV that we allow in the future from current height is 2016&#xA;blocks (~2 weeks): this is the longest that we can be slow jammed.&#xA;  - Total route length ~27 hops: this is the largest amplifying factor an&#xA;attacker can have.&#xA;- We use the two-week 
period to calculate the node’s total routing revenue;&#xA;this is what we have to lose if we are jammed.&#xA;- We then look at a longer period, 10x the two-week period, to see what the&#xA;peer has forwarded us over that longer period.&#xA;- If they have forwarded us more over that longer period than what we have&#xA;to lose in the shorter period, then they have good reputation.&#xA;- This is the damage that is observable to us - there are values outside of&#xA;the protocol that are also affected by jamming:&#xA;  - Business reliability, joy of running a node, etc.&#xA;  - We contend that these values are inherently unmeasurable to protocol&#xA;devs:&#xA;     - End users can’t easily put a value on them.&#xA;     - If we try to approximate them, users will likely just run the&#xA;defaults.&#xA;- One of the simplest attacks we can expect is an “about turn”, where an&#xA;attacker behaves perfectly and then attacks:&#xA;  - So, once you have good reputation we can’t just give you full access to&#xA;protected slots.&#xA;- We want to reward behavior that we consider to be honest, so we consider&#xA;“effective” HTLC fees - the fee value that an HTLC has given us relative to&#xA;how long it took to resolve:&#xA;  - Resolution period: the amount of time that an HTLC can reasonably take&#xA;to resolve - based on MPP timeout / 1 minute.&#xA;  - We calculate opportunity cost for every minute after the first&#xA;“allowed” minute as the fees that we could have earned with that&#xA;liquidity/slot.&#xA;  - Reputation is only negatively affected if you endorsed the HTLC.&#xA;  - If you did not endorse, then you only gain reputation for fast success&#xA;(allowing bootstrapping).&#xA;- When do I get access to protected slots?&#xA;  - When you get good reputation, you can use protected slots for your&#xA;endorsed HTLCs, but there is a cap on the number of in-flight HTLCs that are&#xA;allowed.&#xA;  - We treat every HTLC as if it will resolve with the worst possible&#xA;outcome, and 
temporarily dock reputation until it resolves:&#xA;     - In the good case, it resolves quickly and you get your next HTLC&#xA;endorsed.&#xA;     - In the bad case, you don’t get any more HTLCs endorsed and your&#xA;reputation remains docked once it resolves (slowly).&#xA;- Wouldn’t a decaying average be easier to implement, rather than a sliding&#xA;window?&#xA;  - If you’re going to use large windows, then a day here or there doesn’t&#xA;matter so much.&#xA;- Have you thought about how to make this more visible to node operators?&#xA;  - In the steady state we don’t expect this to have any impact on routing&#xA;operations, so they’ll only need a high-level view.&#xA;- Can you elaborate on slots vs liquidity for these buckets?&#xA;  - Since we have a proportional fee for HTLCs, this indirectly represents&#xA;liquidity: larger HTLCs will have larger fees, so will be more “expensive”&#xA;to get endorsed.&#xA;- Where do we go from here?&#xA;  - We would like to dry run with an experimental endorsement field, and&#xA;ask volunteers to gather data for us.&#xA;  - Are there any objections to an experimental TLV?&#xA;     - No.&#xA;     - We could also test multiple endorsement signals / algorithms in&#xA;parallel.&#xA;- In your simulations, have you looked at the ability to segment off&#xA;attacks as they happen? To see how quickly an attacker&#39;s reputation drops&#xA;off, and that you have a protected path?&#xA;  - Not yet, but we plan to.&#xA;- Do any of these assumptions change with trampoline? 
Don’t think it’s&#xA;related.&#xA;- Your reputation is at stake when you endorse, so when do you decide to&#xA;endorse your own payments?&#xA;  - You do the things you were already doing to figure out a good route:&#xA;paths that you think have good liquidity, and have had success with in the&#xA;past.&#xA;- What about redundant overpayment, where some of your HTLCs are bound to fail?&#xA;  - Provided that they fail fast, it shouldn’t be a problem.&#xA;- Is it possible that the case where general slots are perpetually filled&#xA;by attackers becomes the steady state? And we can’t tell the difference&#xA;between a regular user and an attacker.&#xA;  - This is where unconditional fees come in: if somebody wants to&#xA;perpetually fill up the general bucket, they have to pay for it.&#xA;- Is there anything we’d like to see that will help us have more confidence&#xA;here?&#xA;  - What do you think is missing from the information presented?&#xA;     - We can simulate the steady state / create synthetic data, but can’t&#xA;simulate every attack. Would like to spend more time thinking through the&#xA;ways this could possibly be abused.&#xA;  - Would it help to run this on signet? 
Or scaling lightning?&#xA;     - It’s a little easier to produce various profiles of activity on&#xA;regtest.&#xA;&#xA;### Simplified Commitments&#xA;- Simplified commitments make our state machine easier to think about.&#xA;  - Advertise option_simplified_commitment: once both peers upgrade, we can&#xA;just use it.&#xA;  - Simplify our state machine before we make any more changes to it.&#xA;- Right now Alice and Bob can have changes in flight at the same time:&#xA;  - Impossible to debug, though technically optimal.&#xA;  - Everyone is afraid of touching the state machine.&#xA;- We can simplify this by introducing turn taking:&#xA;  - First turn is taken by the lower pubkey.&#xA;  - Alice: update / commit.&#xA;  - Bob: revoke and ack.&#xA;  - If Alice wants to go when it’s Bob’s turn, she can just send a message.&#xA;  - Bob can ignore it, or yield and accept it.&#xA;  - This has been implemented in CLN for LNSymmetry.&#xA;- It’s less code to not have the ignore message, but for real performance&#xA;we’ll want it. 
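A toy model of the turn-taking rule described above might look like this. The class, method names, and the yield flag are illustrative assumptions, not proposed wire messages.

```python
# Toy model of turn taking: the lower pubkey goes first, and a peer who
# receives an out-of-turn update may either yield (accepting it) or ignore it.
class Channel:
    def __init__(self, pubkey_a: bytes, pubkey_b: bytes):
        # First turn is taken by the lower pubkey.
        self.turn = min(pubkey_a, pubkey_b)

    def on_update(self, sender: bytes, peer_yields: bool) -> str:
        if sender == self.turn:
            return "accepted"   # it is the sender's turn
        if peer_yields:
            self.turn = sender  # yield: the turn passes to the sender
            return "accepted"
        return "ignored"        # out of turn, and the peer ignores it
```

Here `peer_yields` stands in for whatever policy a node would use to choose between yielding and ignoring an out-of-turn update.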
Don’t want to suck up all of that latency.&#xA;- The easiest way to upgrade is on re-establish.&#xA;  - If it wasn’t somebody’s turn, you can just do lowest pubkey.&#xA;  - If it was somebody’s turn, you can just resume it.&#xA;- We could also add REVOKE and NACK:&#xA;  - Right now we have no way to refuse updates.&#xA;  - Why do we want a NACK?&#xA;     - You currently have to express to the other side what they can put in&#xA;your channel, because you can’t handle it if they give you something you&#xA;don’t like (eg, an HTLC below min_htlc).&#xA;     - You can likely force a break by trying to send things that aren’t&#xA;allowed, which is a robustness issue.&#xA;     - We just force close when we get things we don’t allow.&#xA;     - Could possibly trigger force closes.&#xA;- There’s an older proposal called fastball where you send an HTLC and&#xA;advise that you’re going to fail it.&#xA;  - If Alice gets it, she can reply with UNADD.&#xA;  - If you don’t get it in time, you just go through the regular cycle.&#xA;- When you get commitment signed, you could NACK it. 
This could mean you’re&#xA;failing the whole commitment, or just a few HTLCs.&#xA;  - You can’t fail an HTLC when you’ve sent commitment signed, so you need a&#xA;new cycle to clear it out.&#xA;  - What NACK says is: I’ve ignored all of your updates and I’m progressing&#xA;to the next commitment.&#xA;- Revoke and NACK is followed by commitment signed, where you clear out all&#xA;the bad HTLCs, ending that set of updates.&#xA;- You have to NACK and then wait for another commitment signature, signing&#xA;for the same revocation number.&#xA;- Bob never has to hold an HTLC that he doesn’t want from Alice on his&#xA;commitment.&#xA;- This is bad for latency, good for robustness.&#xA;  - Alice can send whatever she wants, and Bob has a way to reject it.&#xA;  - There are a whole lot of protocol violations that Alice can force a&#xA;force close with; now they can be NACKed.&#xA;- This is good news for remote signers, because we have cases where policy&#xA;has been violated and our only option right now is to close the channel.&#xA;- You still want Alice to know Bob’s limits so that you can avoid endless&#xA;invalid HTLCs.&#xA;- Simplified commitment allows us to do things more easily in the protocol.&#xA;  - When we specced this all out, we didn’t foresee that update fee would&#xA;be so complicated; with this, we know update fee will be correct.&#xA;  - If we don’t do this, we have to change update fee?&#xA;     - Sender of the HTLC adds fee.&#xA;     - Or fixed fee.&#xA;- Even if we have zero fees, don’t we still have HTLC dust problems?&#xA;  - You can have a bit on update add that says the HTLC is dust.&#xA;  - You can’t be totally fee agnostic because you have to be able to&#xA;understand when to trim HTLCs.&#xA;- Even update fee aside, shouldn’t things just be simpler?&#xA;- Would a turn-based protocol have implications for musig nonces?&#xA;  - If you’re taking a turn, it’s a session.&#xA;  - You’d need to have different nonces for 
different sessions.&#xA;- We should probably do this before we make another major change; it&#xA;simplifies things.&#xA;- Upgrade on re-establish is pretty neat, because you can just tell them&#xA;what type you’d like.&#xA;  - This worked very well for CLN getting rid of static remote.&#xA;- What about parameter exchange?&#xA;  - There’s a version of splice that allows you to add new inputs and&#xA;outputs.&#xA;  - “Splice no splice” means that you can only make a new commitment&#xA;transaction, no on-chain work.&#xA;  - Seems like you can get something like dynamic commitments with this,&#xA;and it’s a subset of splicing.&#xA;&#xA;## Day Three&#xA;&#xA;### Meta Spec Process&#xA;- Do we want to re-evaluate the concept of a “living document”?&#xA;  - It’s only going to get longer.&#xA;  - As we continue to update, we have two options:&#xA;     - Remove old text and replace entirely.&#xA;     - Do an extension and then one day replace.&#xA;- If implementing from scratch, what would you want to use?&#xA;  - Nobody is currently doing this.&#xA;  - By the time they finished, everything would have moved.&#xA;- The protocol isn’t actually that modular:&#xA;  - Except for untouched BOLT-08, which can be read in isolation.&#xA;  - Other things have tentacles.&#xA;     - We should endeavor for things to be as modular as possible.&#xA;- Back in Adelaide we had a version with a set of features.&#xA;  - We have not re-visited that discussion.&#xA;  - Is it possible for us to come up with versions and hold ourselves&#xA;accountable to them?&#xA;  - If we’re going to start having different ideas of what lightning looks&#xA;like, versioning helps.&#xA;  - Versioning is not tied one-to-one to the protocol:&#xA;     - Features make it less clean, because you have a grab bag of features&#xA;on top of any “base” version we decide on.&#xA;     - Does a version imply that we’re implementing in lock step?&#xA;  - If we do extraction, removal and cleanup, we could say that we’re 
on&#xA;version X with features A/B/C.&#xA;- How confident are we that we can pull things out? Only things that are&#xA;brand new will be easy to do this with.&#xA;- Keeping context in your head is hard, and jumping between documents&#xA;breaks up thought.&#xA;  - Should we fully copy the current document and edit the copy?&#xA;  - Not everything is a rewrite; some things are optional.&#xA;- What’s our design goal?&#xA;  - To be able to more easily speak about compatibility.&#xA;  - To have a readable document for implementation.&#xA;- A single living document worked when we were all making a unified push;&#xA;now it makes less sense:&#xA;  - You can’t be compliant “by commit”, because things are implemented in&#xA;different orders.&#xA;  - We can’t fix that with extensions, they’re hard to keep up to date?&#xA;     - RFCs work like this; they have replacement ones.&#xA;- BOLT 9 can act as a control bolt because it defines features.&#xA;- Extensions seem helpful:&#xA;  - Can contain more rationale.&#xA;  - You can have spaghetti and ravioli code; these could be raviolo&#xA;extensions.&#xA;  - If everything is an extension BOLT with minimal references, we avoid&#xA;the if-else-ey structure we have right now.&#xA;- For small things, we can just throw out the old stuff.&#xA;- If it were possible to modularize and have working groups, that would be&#xA;great, but it seems like we’d tread on each other’s toes.&#xA;- We must avoid scenarios like the vfprintf man page:&#xA;  - The return value says “vfprintf returns -1 on error”.&#xA;  - The next sentence says “this was true until version 2”.&#xA;  - But nobody reads the next sentence.&#xA;- Cleanup PRs won’t be looked at, and need to be maintained as new stuff&#xA;gets in.&#xA;- Eg: legacy onion - we just looked at network use and removed it when it was&#xA;unused:&#xA;  - Rip out how you generate them.&#xA;  - Rip out how you handle them.&#xA;- One of the issues with deleting old text is that it becomes annoying when&#xA;you run into interop issues with existing software on the old spec version.&#xA;  - If there are real things to deal with on the network, we must keep them.&#xA;  - We must reference old commits so that people at least know what was&#xA;there and can do git archeology to find out what it used to be.&#xA;- We can remove some things today!&#xA;  - Static remote&#xA;  - Non-zero fee anchors&#xA;  - ANYSEGWIT is default (/compulsory)&#xA;  - Payment secrets / basic MPP.&#xA;- Should we have regular cleanups?&#xA;  - Even if we do, they need review.&#xA;- For now, let’s do wholesale replacement to avoid cleanup.&#xA;- The proposals folder is nice for knowing what’s touched by what changes.&#xA;- Rationale sections need improvement: sometimes they’re detailed,&#xA;sometimes vague.&#xA;- Once a feature becomes compulsory on the network, we can possibly ignore&#xA;it.&#xA;- What about things that are neither bolts nor blips - like inbound fees?&#xA;  - Why does it need to be merged anywhere?&#xA;  - If it’s an implementation experiment, we can merge it once we’re&#xA;convinced it works.&#xA;  - If we reach a stage where we all agree it should be universally done,&#xA;then it should be a bolt.&#xA;     - This is a BLIP-to-BOLT path.&#xA;- Communication:&#xA;  - We’re not really using IRC anymore - bring it back!&#xA;  - We need a canonical medium; recommit to lightning-dev.&#xA;&#xA;### Async Payments / Trampoline&#xA;- Blinded payments are a nice improvement for trampoline, because you don’t&#xA;know where the recipient is.&#xA;- The high-level idea is:&#xA;  - Light nodes only see a small part of the network that they are close&#xA;to.&#xA;  - Recipients only give a few trampolines in the network that they can be&#xA;reached via.&#xA;  - In the onion for the first trampoline, there will be an onion for the&#xA;second trampoline.&#xA;  - You just need to give a trampoline a blinded path and they can do the&#xA;rest.&#xA;- If you only have one trampoline, they can probably make a good 
guess&#xA;where the payment came from (it’s in the reachable neighborhood).&#xA;- Is there a new sync mode for trampoline gossip?&#xA;  - We’d now need radius-based gossip rather than block-based.&#xA;  - The trust version is just getting this from an LSP.&#xA;  - In a cold bootstrap, you’re probably going to open a channel, so you ask&#xA;them for gossip.&#xA;- Can you split MPP over trampoline? Yes.&#xA;- Routing nodes can learn more about the network because they make their&#xA;own attempts.