<oembed><type>rich</type><version>1.0</version><title>Anthony Towns [ARCHIVE] wrote</title><author_name>Anthony Towns [ARCHIVE] (npub17r…x9l2h)</author_name><author_url>https://yabu.me/npub17rld56k4365lfphyd8u8kwuejey5xcazdxptserx03wc4jc9g24stx9l2h</author_url><provider_name>njump</provider_name><provider_url>https://yabu.me</provider_url><html>📅 Original date posted: 2022-02-17
📝 Original message:

On Thu, Feb 10, 2022 at 07:12:16PM -0500, Matt Corallo via bitcoin-dev wrote:
> This is where *all* the complexity comes from. If our goal is to "ensure a
> bump increases a miner's overall revenue" (thus not wasting relay for
> everyone else), then we precisely *do* need
> > Special consideration for "what should be in the next
> > block" and/or the caching of block templates seems like an imposing
> > dependency
> Whether a transaction increases a miner's revenue depends precisely on
> whether the transaction (package) being replaced is in the next block - if
> it is, you care about the absolute fee of the package and its replacement.

On Thu, Feb 10, 2022 at 11:44:38PM +0000, darosior via bitcoin-dev wrote:
> It's not that simple. As a miner, if I have less than 1MvB of transactions
> in my mempool, I don't want a 10sats/vb transaction paying 100000sats
> replaced by a 100sats/vb transaction paying only 10000sats.

Is it really true that miners do/should care about that?

In this particular example, the miner would be losing 90k sats in fees,
which would be at most 1.44 *hundredths* of a percent of the block reward
with the subsidy at 6.25BTC per block, even if there were no other
transactions in the mempool.
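As a quick sanity check on the figures in this paragraph, here's a
back-of-the-envelope Python sketch (the sat amounts and sizes are the ones
from the examples being discussed; variable names are just for illustration):

```python
# Back-of-the-envelope check of the revenue-loss figures in this paragraph.
SUBSIDY_SATS = 625_000_000  # 6.25 BTC block subsidy, in sats

# Single replacement: a tx paying 100,000 sats replaced by one paying 10,000 sats.
loss_single = 100_000 - 10_000
pct_single = 100 * loss_single / SUBSIDY_SATS
print(f"single replacement: {pct_single:.4f}% of the subsidy")  # 0.0144%

# Cumulative worst case: 1MvB of txs at 10 sat/vB replaced by 10kvB at 100 sat/vB.
loss_bulk = 10 * 1_000_000 - 100 * 10_000
pct_bulk = 100 * loss_bulk / SUBSIDY_SATS
print(f"cumulative case: {pct_bulk:.2f}% of the subsidy")  # 1.44%
```

Strictly these are fractions of the subsidy alone; any other fees collected in
the block only make the percentages smaller, hence the "at most" above.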
Even cumulatively, 10sats/vb over
1MvB versus 100sats/vb over 10kB is only a 1.44% loss of block revenue.

I suspect the "economically rational" choice would be to happily trade
off that immediate loss against even a small chance of a simpler policy
encouraging higher adoption of bitcoin, _or_ a small chance of more
on-chain activity due to higher adoption of bitcoin protocols like
lightning, and thus a lower chance of an empty mempool in future.

If the network has an "empty mempool" (say less than 2MvB-10MvB of
backlog, even if you have access to every valid 1+ sat/vB tx on any node
connected to the network), then I don't think you'll generally have txs
with fee rates greater than ~20 sat/vB (ie 20x the minimum fee rate),
which means your maximum loss is about 3% of block revenue, at least
while the block subsidy remains at 6.25BTC/block.

Certainly those percentages can be expected to double every four years as
the block reward halves (assuming we don't also reduce the min relay fee
and block min tx fee), but I think for both miners and network stability
it'd be better to have the mempool backlog increase over time, which
would both mean there's no/less need to worry about the special case of
the mempool being empty, and give a better incentive for people to pay
higher fees for quicker confirmations.

If we accept that logic (and assuming we had some additional policy
to prevent p2p relay spam due to replacement txs), we could make
the mempool accept policy for replacements just be (something like)
"[package] feerate is greater than max(descendant fee rate)", which
seems like it'd be pretty straightforward to deal with in general?

Thinking about it a little more, I think the decision as to whether
you want to have a "100kvB at 10sat/vb" tx or a conflicting "1kvB
at 100sat/vb" tx in your mempool, if you're going to take into account
unrelated, lower fee rate txs that are also in the mempool, makes block
building "more" of an NP-hard problem and makes the greedy solution
we've currently got much more suboptimal -- if you really want to do that
optimally, I think you have to have a mempool that retains conflicting
txs and runs a dynamic programming solution to pick the best set, rather
than today's simple greedy algorithms for both building the block and
populating the mempool.

For example, if you had two such replacements come through the network,
a miner could want to flip from initially accepting the first replacement
to later rejecting it.

Initial mempool: two big txs at 100kvB each, plus many small transactions
at 15s/vB and 1s/vB:

 [100kvB at 20s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.148 BTC for 1MvB (100*20 + 850*15 + 50*1)

Replacement for the 20s/vB tx paying a higher fee rate but lower total
fee; that's worth including:

 [10kvB at 100s/vB] [850kvB at 15s/vB] [100kvB at 12s/vB] [1000kvB at 1s/vB]
   -> 0.1499 BTC for 1MvB (10*100 + 850*15 + 100*12 + 40*1)

Later, a replacement for the 12s/vB tx comes in, also paying a higher
fee rate but lower total fee.
Worth including, but only if you revert the original replacement:

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1575 BTC for 1MvB (150*20 + 850*15)

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [1000kvB at 1s/vB]
   -> 0.1484 BTC for 1MvB (10*100 + 50*20 + 850*15 + 90*1)

Algorithms/mempool policies you might have, and their results with
this example:

 * current RBF rules: reject both replacements because they don't
   increase the absolute fee, thus get the minimum block fees of
   0.148 BTC

 * reject RBF unless it increases the fee rate, and get 0.1484 BTC in
   fees

 * reject RBF if it's lower fee rate or immediately decreases the block
   reward: so, accept the first replacement, but reject the second,
   getting 0.1499 BTC

 * only discard a conflicting tx when it pays both a lower fee rate and
   lower absolute fees, and choose amongst conflicting txs optimally
   via some complicated tx allocation algorithm when generating a block,
   and get 0.1575 BTC

In this example, those techniques give 93.97%, 94.22%, 95.17% and 100% of
the total possible fees you could collect; and 99.85%, 99.86%, 99.88% and
100% of the total possible block reward at 6.25BTC/block.

Is there a plausible example where the difference isn't that marginal?
Seems like the simplest solution of just checking that the
(package/descendant) fee rate increases works well enough here at least.

If 90kvB of unrelated txs at 14s/vB were then added to the mempool, then
replacing both txs becomes (just barely) optimal, meaning the smartest
possible algorithm and the dumbest one of just considering the fee rate
produce the same result, while the others are worse:

 [10kvB at 100s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
   -> 0.1601 BTC for 1MvB
   (accepting both)

 [100kvB at 20s/vB] [50kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB]
   -> 0.1575 BTC for 1MvB
   (accepting only the second replacement)

 [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1551 BTC for 1MvB
   (first replacement only, optimal tx selection: 10*100, 850*15, 40*14, 100*12)

 [100kvB at 20s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1545 BTC for 1MvB
   (accepting neither replacement)

 [10kvB at 100s/vB] [850kvB at 15s/vB] [90kvB at 14s/vB] [100kvB at 12s/vB]
   -> 0.1506 BTC for 1MvB
   (first replacement only, greedy tx selection: 10*100, 850*15, 90*14, 50*1)

Always accepting (package/descendant) fee rate increases removes the
possibility of pinning entirely, I think -- you still have the problem
that someone else might get a conflicting transaction confirmed first,
but they can't get a conflicting tx stuck in the mempool without it
confirming, provided you're willing to pay enough to get yours confirmed.

Note that if we did have this policy, you could abuse it to cheaply drain
people's mempools: if there was a 300MB backlog, you could publish 2980
100kB txs paying a fee rate just below the next-block fee rate, meaning
you'd kick out the previous backlog and your transactions would take up
all but the top 2MB of the mempool; if you then replace them all with
perhaps 2980 100B txs paying a slightly higher fee rate, the default
mempool will be left with only 2.3MB of transactions, at an ultimate cost
to you of only about 30% of a block in fees, and you could then fill the
mempool back up by spamming 300MB of ultra-low fee rate txs.

I think spam prevention at the outbound relay level isn't enough to
prevent that: an attacker could contact every public node and relay the
txs directly, clearing out the mempools of most
public nodes. So
you'd want some sort of spam prevention on inbound txs too?

So I think you'd need to think carefully about relay spam before making
this sort of change. Also, if we had tx rebroadcast implemented, then
having just a few nodes with large mempools might allow the network to
recover from this situation automatically.

Cheers,
aj</html></oembed>