<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2023-06-09T12:13:59Z</updated>
  <generator>https://yabu.me</generator>

  <title>Nostr notes by Pieter Wuille [ARCHIVE]</title>
  <author>
    <name>Pieter Wuille [ARCHIVE]</name>
  </author>
  <link rel="self" type="application/atom+xml" href="https://yabu.me/npub1tjephawh7fdf6358jufuh5eyxwauzrjqa7qn50pglee4tayc2ntqcjtl6r.rss" />
  <link href="https://yabu.me/npub1tjephawh7fdf6358jufuh5eyxwauzrjqa7qn50pglee4tayc2ntqcjtl6r" />
  <id>https://yabu.me/npub1tjephawh7fdf6358jufuh5eyxwauzrjqa7qn50pglee4tayc2ntqcjtl6r</id>

  <entry>
    <id>https://yabu.me/nevent1qqsg8y6zjzuxaela5lvkx5vddtdvpmhzx0ympc5t3eh4ru9a23pf9uczypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dvnhk9jy</id>
    
      <title type="html">📅 Original date posted:2018-06-27 📝 Original message:On ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsg8y6zjzuxaela5lvkx5vddtdvpmhzx0ympc5t3eh4ru9a23pf9uczypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dvnhk9jy" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsy4dr6e8qaxjran0c3kwf9zqzn06naqqflkyep8pnnnhm990uy5aqhzak3j&#39;&gt;nevent1q…ak3j&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;📅 Original date posted:2018-06-27&lt;br/&gt;📝 Original message:On Wed, Jun 27, 2018, 07:04 matejcik &amp;lt;jan.matejek at satoshilabs.com&amp;gt; wrote:&lt;br/&gt;&lt;br/&gt;&amp;gt; hello,&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; On 26.6.2018 22:30, Pieter Wuille wrote:&lt;br/&gt;&amp;gt; &amp;gt;&amp;gt; (Moreover, as I wrote previously, the Combiner seems like a weirdly&lt;br/&gt;&amp;gt; &amp;gt;&amp;gt; placed role. I still don&amp;#39;t see its significance and why is it important&lt;br/&gt;&amp;gt; &amp;gt;&amp;gt; to correctly combine PSBTs by agents that don&amp;#39;t understand them. If you&lt;br/&gt;&amp;gt; &amp;gt;&amp;gt; have a usecase in mind, please explain.&lt;br/&gt;&amp;gt; &amp;gt;&lt;br/&gt;&amp;gt; &amp;gt; Forward compatibility with new script types. A transaction may spend&lt;br/&gt;&amp;gt; &amp;gt; inputs from different outputs, with different script types. Perhaps&lt;br/&gt;&amp;gt; &amp;gt; some of these are highly specialized things only implemented by some&lt;br/&gt;&amp;gt; &amp;gt; software (say HTLCs of a particular structure), in non-overlapping&lt;br/&gt;&amp;gt; &amp;gt; ways where no piece of software can handle all scripts involved in a&lt;br/&gt;&amp;gt; &amp;gt; single transaction. If Combiners cannot deal with unknown fields, they&lt;br/&gt;&amp;gt; &amp;gt; won&amp;#39;t be able to deal with unknown scripts.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; Record-based Combiners *can* deal with unknown fields. Either by&lt;br/&gt;&amp;gt; including both versions, or by including one selected at random. This is&lt;br/&gt;&amp;gt; the same in k-v model.&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;Yes, I wasn&amp;#39;t claiming otherwise. This was just a response to your question&lt;br/&gt;why it is important that Combiners can process unknown fields. 
It is not an&lt;br/&gt;argument in favor of one model or the other.&lt;br/&gt;&lt;br/&gt;&amp;gt; &amp;gt; combining must be done independently by Combiner implementations for&lt;br/&gt;&amp;gt; &amp;gt; each script type involved. As this is easily avoided by adding a&lt;br/&gt;&amp;gt; &amp;gt; slight bit of structure (parts of the fields that need to be unique -&lt;br/&gt;&amp;gt; &amp;gt; &amp;#34;keys&amp;#34;), this seems the preferable option.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; IIUC, you&amp;#39;re proposing a &amp;#34;semi-smart Combiner&amp;#34; that understands and&lt;br/&gt;&amp;gt; processes some fields but not others? That doesn&amp;#39;t seem to change&lt;br/&gt;&amp;gt; things. Either the &amp;#34;dumb&amp;#34; combiner throws data away before the &amp;#34;smart&amp;#34;&lt;br/&gt;&amp;gt; one sees it, or it needs to include all of it anyway.&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;No, I&amp;#39;m exactly arguing against smartness in the Combiner. It should always&lt;br/&gt;be possible to implement a Combiner without any script specific logic.&lt;br/&gt;&lt;br/&gt;&amp;gt; &amp;gt; No, a Combiner can pick any of the values in case different PSBTs have&lt;br/&gt;&amp;gt; &amp;gt; different values for the same key. That&amp;#39;s the point: by having a&lt;br/&gt;&amp;gt; &amp;gt; key-value structure the choice of fields can be made such that&lt;br/&gt;&amp;gt; &amp;gt; Combiners don&amp;#39;t need to care about the contents. Finalizers do need to&lt;br/&gt;&amp;gt; &amp;gt; understand the contents, but they only operate once at the end.&lt;br/&gt;&amp;gt; &amp;gt; Combiners may be involved in any PSBT passing from one entity to&lt;br/&gt;&amp;gt; &amp;gt; another.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; Yes. Combiners don&amp;#39;t need to care about the contents.&lt;br/&gt;&amp;gt; So why is it important that a Combiner properly de-duplicates the case&lt;br/&gt;&amp;gt; where keys are the same but values are different? 
This is a job that,&lt;br/&gt;&amp;gt; AFAICT so far, can be safely left to someone along the chain who&lt;br/&gt;&amp;gt; understands that particular record.&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;That&amp;#39;s because PSBTs can be copied, signed, and combined back together. A&lt;br/&gt;Combiner which does not deduplicate (at all) would end up having every&lt;br/&gt;original record present N times, one for each copy, a possibly large blowup.&lt;br/&gt;&lt;br/&gt;For all fields I can think of right now, that type of deduplication can be&lt;br/&gt;done through whole-record uniqueness.&lt;br/&gt;&lt;br/&gt;The question whether you need whole-record uniqueness or specified-length&lt;br/&gt;uniqueness (=what is offered by a key-value model) is a philosophical one&lt;br/&gt;(as I mentioned before). I have a preference for stronger invariants on the&lt;br/&gt;file format, so that it becomes illegal for a PSBT to contain multiple&lt;br/&gt;signatures for the same key for example, and implementations do not need to&lt;br/&gt;deal with the case where multiple are present.&lt;br/&gt;&lt;br/&gt;&amp;gt; It seems that you consider the latter PSBT &amp;#34;invalid&amp;#34;. But it is well&lt;br/&gt;&amp;gt; formed and doesn&amp;#39;t contain duplicate records. A Finalizer, or a&lt;br/&gt;&amp;gt; different Combiner that understands field F, can as well have the rule&lt;br/&gt;&amp;gt; &amp;#34;throw away all but one&amp;#34; for this case.&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;It&amp;#39;s not about considering. We&amp;#39;re writing a specification. Either it is&lt;br/&gt;made invalid, or not.&lt;br/&gt;&lt;br/&gt;In a key-value model you can have dumb combiners that must pick one of the&lt;br/&gt;keys in case of duplication, and remove the necessity of dealing with&lt;br/&gt;duplication from all other implementations (which I consider to be a good&lt;br/&gt;thing). 
In a record-based model you cannot guarantee deduplication of&lt;br/&gt;records that permit repetition per type, because a dumb combiner cannot&lt;br/&gt;understand what part is supposed to be unique. As a result, a record-based&lt;br/&gt;model forces you to let all implementations deal with e.g. multiple partial&lt;br/&gt;signatures for a single key. This is a minor issue, but in my view shows&lt;br/&gt;how records are a less than perfect match for the problem at hand.&lt;br/&gt;&lt;br/&gt;&amp;gt; To repeat and restate my central question:&lt;br/&gt;&amp;gt; Why is it important, that an agent which doesn&amp;#39;t understand a particular&lt;br/&gt;&amp;gt; field structure, can nevertheless make decisions about its inclusion or&lt;br/&gt;&amp;gt; omission from the result (based on a repeated prefix)?&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;Again, because otherwise you may need a separate Combiner for each type of&lt;br/&gt;script involved. That would be unfortunate, and is very easily avoided.&lt;br/&gt;&lt;br/&gt;&amp;gt; Actually, I can imagine the opposite: having fields with same &amp;#34;key&amp;#34;&lt;br/&gt;&amp;gt; (identifying data), and wanting to combine their &amp;#34;values&amp;#34; intelligently&lt;br/&gt;&amp;gt; without losing any of the data. Say, two Signers producing separate&lt;br/&gt;&amp;gt; parts of a combined-signature under the same common public key?&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;That can always be avoided by using different identifying information as&lt;br/&gt;key for these fields. In your example, assuming you&amp;#39;re talking about some&lt;br/&gt;form of threshold signature scheme, every party has their own &amp;#34;shard&amp;#34; of&lt;br/&gt;the key, which still uniquely identifies the participant. 
If they have no&lt;br/&gt;data that is unique to the participant, they are clones, and don&amp;#39;t need to&lt;br/&gt;interact regardless.&lt;br/&gt;&lt;br/&gt;&amp;gt; &amp;gt; In case of BIP32 derivation, computing the pubkeys is possibly&lt;br/&gt;&amp;gt; &amp;gt; expensive. A simple signer can choose to just sign with whatever keys&lt;br/&gt;&amp;gt; &amp;gt; are present, but they&amp;#39;re not the only way to implement a signer, and&lt;br/&gt;&amp;gt; &amp;gt; even less the only software interacting with this format. Others may&lt;br/&gt;&amp;gt; &amp;gt; want to use a matching approach to find keys that are relevant;&lt;br/&gt;&amp;gt; &amp;gt; without pubkeys in the format, they&amp;#39;re forced to perform derivations&lt;br/&gt;&amp;gt; &amp;gt; for all keys present.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; I&amp;#39;m going to search for relevant keys by comparing master fingerprint; I&lt;br/&gt;&amp;gt; would expect HWWs generally don&amp;#39;t have index based on leaf pubkeys.&lt;br/&gt;&amp;gt; OTOH, Signers with lots of keys probably aren&amp;#39;t resource-constrained and&lt;br/&gt;&amp;gt; can do the derivations in case of collisions.&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;Perhaps you want to avoid signing with keys that are already signed with?&lt;br/&gt;If you need to derive all the keys before even knowing what was already&lt;br/&gt;signed with, you&amp;#39;ve already performed 80% of the work.&lt;br/&gt;&lt;br/&gt;&amp;gt; &amp;gt; If you take the records model, and then additionally drop the&lt;br/&gt;&amp;gt; &amp;gt; whole-record uniqueness constraint, yes, though that seems pushing it&lt;br/&gt;&amp;gt; &amp;gt; a bit by moving even more guarantees from the file format to&lt;br/&gt;&amp;gt; &amp;gt; application level code.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; The &amp;#34;file format&amp;#34; makes no guarantees, because the parsing code and&lt;br/&gt;&amp;gt; application code is the same anyway. 
You could say I&amp;#39;m proposing to&lt;br/&gt;&amp;gt; separate these concerns ;)&lt;br/&gt;&amp;gt;&lt;br/&gt;&lt;br/&gt;Of course a file format can make guarantees. If certain combinations of&lt;br/&gt;data in it do not satisfy the specification, the file is illegal, and&lt;br/&gt;implementations do not need to deal with it. Stricter file formats are&lt;br/&gt;easier to deal with, because there are fewer edge cases to consider.&lt;br/&gt;&lt;br/&gt;To your point: proto v2 afaik has no way to declare &amp;#34;whole record&lt;br/&gt;uniqueness&amp;#34;, so either you drop that (which I think is unacceptable - see&lt;br/&gt;the copy/sign/combine argument above), or you deal with it in your&lt;br/&gt;application code.&lt;br/&gt;&lt;br/&gt;Cheers,&lt;br/&gt;&lt;br/&gt;-- &lt;br/&gt;Pieter&lt;br/&gt;-------------- next part --------------&lt;br/&gt;An HTML attachment was scrubbed...&lt;br/&gt;URL: &amp;lt;&lt;a href=&#34;http://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20180627/fecd0346/attachment-0001.html&#34;&gt;http://lists.linuxfoundation.org/pipermail/bitcoin-dev/attachments/20180627/fecd0346/attachment-0001.html&lt;/a&gt;&amp;gt;
    </content>
    <updated>2023-06-07T18:13:14Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqsfcg9t4m6pdm8duldtj909yzuhz6waqeksfjc8865lw5xthqt7yqszypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dvtlqr2z</id>
    
      <title type="html">📅 Original date posted:2018-06-26 📝 Original message:On ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqsfcg9t4m6pdm8duldtj909yzuhz6waqeksfjc8865lw5xthqt7yqszypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dvtlqr2z" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqsffwwcmtzylmzgtvqww9lujqx0s5rxw5fd88ae09z4l8x0lwj633glz4xlf&#39;&gt;nevent1q…4xlf&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;📅 Original date posted:2018-06-26&lt;br/&gt;📝 Original message:On Tue, Jun 26, 2018 at 8:33 AM, matejcik via bitcoin-dev&lt;br/&gt;&amp;lt;bitcoin-dev at lists.linuxfoundation.org&amp;gt; wrote:&lt;br/&gt;&amp;gt; I&amp;#39;m still going to argue against the key-value model though.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; It&amp;#39;s true that this is not significant in terms of space. But I&amp;#39;m more&lt;br/&gt;&amp;gt; concerned about human readability, i.e., confusing future implementers.&lt;br/&gt;&amp;gt; At this point, the key-value model is there &amp;#34;for historical reasons&amp;#34;,&lt;br/&gt;&amp;gt; except these aren&amp;#39;t valid even before finalizing the format. The&lt;br/&gt;&amp;gt; original rationale for using key-values seems to be gone (no key-based&lt;br/&gt;&amp;gt; lookups are necessary). As for combining and deduplication, whether key&lt;br/&gt;&amp;gt; data is present or not is now purely a stand-in for a &amp;#34;repeatable&amp;#34; flag.&lt;br/&gt;&amp;gt; We could just as easily say, e.g., that the high bit of &amp;#34;type&amp;#34; specifies&lt;br/&gt;&amp;gt; whether this record can be repeated.&lt;br/&gt;&lt;br/&gt;I understand this is a philosophical point, but to me it&amp;#39;s the&lt;br/&gt;opposite. The file conveys &amp;#34;the script is X&amp;#34;, &amp;#34;the signature for key X&lt;br/&gt;is Y&amp;#34;, &amp;#34;the derivation for key X is Y&amp;#34; - all extra metadata added to&lt;br/&gt;inputs of the form &amp;#34;the X is Y&amp;#34;. In a typed record model, you still&lt;br/&gt;have Xes, but they are restricted to a single number (the record&lt;br/&gt;type). 
In cases where that is insufficient, your solution is adding a&lt;br/&gt;repeatable flag to switch from &amp;#34;the first byte needs to be unique&amp;#34; to&lt;br/&gt;&amp;#34;the entire record needs to be unique&amp;#34;. Why just those two? It seems&lt;br/&gt;much more natural to have a length that directly tells you how many of&lt;br/&gt;the first bytes need to be unique (which brings you back to the&lt;br/&gt;key-value model).&lt;br/&gt;&lt;br/&gt;Since the redundant script hashes were removed by making the scripts&lt;br/&gt;per-input, I think the most compelling reason (size advantages) for a&lt;br/&gt;record based model is gone.&lt;br/&gt;&lt;br/&gt;&amp;gt; (Moreover, as I wrote previously, the Combiner seems like a weirdly&lt;br/&gt;&amp;gt; placed role. I still don&amp;#39;t see its significance and why is it important&lt;br/&gt;&amp;gt; to correctly combine PSBTs by agents that don&amp;#39;t understand them. If you&lt;br/&gt;&amp;gt; have a usecase in mind, please explain.&lt;br/&gt;&lt;br/&gt;Forward compatibility with new script types. A transaction may spend&lt;br/&gt;inputs from different outputs, with different script types. Perhaps&lt;br/&gt;some of these are highly specialized things only implemented by some&lt;br/&gt;software (say HTLCs of a particular structure), in non-overlapping&lt;br/&gt;ways where no piece of software can handle all scripts involved in a&lt;br/&gt;single transaction. If Combiners cannot deal with unknown fields, they&lt;br/&gt;won&amp;#39;t be able to deal with unknown scripts. That would mean that&lt;br/&gt;combining must be done independently by Combiner implementations for&lt;br/&gt;each script type involved. 
As this is easily avoided by adding a&lt;br/&gt;slight bit of structure (parts of the fields that need to be unique -&lt;br/&gt;&amp;#34;keys&amp;#34;), this seems the preferable option.&lt;br/&gt;&lt;br/&gt;&amp;gt; ISTM a Combiner could just as well combine based on whole-record&lt;br/&gt;&amp;gt; uniqueness, and leave the duplicate detection to the Finalizer. In case&lt;br/&gt;&amp;gt; the incoming PSBTs have incompatible unique fields, the Combiner would&lt;br/&gt;&amp;gt; have to fail anyway, so the Finalizer might as well do it. Perhaps it&lt;br/&gt;&amp;gt; would be good to leave out the Combiner role entirely?)&lt;br/&gt;&lt;br/&gt;No, a Combiner can pick any of the values in case different PSBTs have&lt;br/&gt;different values for the same key. That&amp;#39;s the point: by having a&lt;br/&gt;key-value structure the choice of fields can be made such that&lt;br/&gt;Combiners don&amp;#39;t need to care about the contents. Finalizers do need to&lt;br/&gt;understand the contents, but they only operate once at the end.&lt;br/&gt;Combiners may be involved in any PSBT passing from one entity to&lt;br/&gt;another.&lt;br/&gt;&lt;br/&gt;&amp;gt; There&amp;#39;s two remaining types where key data is used: BIP32 derivations&lt;br/&gt;&amp;gt; and partial signatures. In case of BIP32 derivation, the key data is&lt;br/&gt;&amp;gt; redundant ( pubkey = derive(value) ), so I&amp;#39;d argue we should leave that&lt;br/&gt;&amp;gt; out and save space. In case of partial signatures, it&amp;#39;s simple enough to&lt;br/&gt;&amp;gt; make the pubkey part of the value.&lt;br/&gt;&lt;br/&gt;In case of BIP32 derivation, computing the pubkeys is possibly&lt;br/&gt;expensive. A simple signer can choose to just sign with whatever keys&lt;br/&gt;are present, but they&amp;#39;re not the only way to implement a signer, and&lt;br/&gt;even less the only software interacting with this format. 
Others may&lt;br/&gt;want to use a matching approach to find keys that are relevant;&lt;br/&gt;without pubkeys in the format, they&amp;#39;re forced to perform derivations&lt;br/&gt;for all keys present.&lt;br/&gt;&lt;br/&gt;And yes, it&amp;#39;s simple enough to make the key part of the value&lt;br/&gt;everywhere, but in that case it becomes legal for a PSBT to contain&lt;br/&gt;multiple signatures for a key, for example, and all software needs to&lt;br/&gt;deal with that possibility. With a stronger uniqueness constraint,&lt;br/&gt;only Combiners need to deal with repetitions.&lt;br/&gt;&lt;br/&gt;&amp;gt; Thing is: BIP174 *is basically protobuf* (v2) as it stands. If I&amp;#39;m&lt;br/&gt;&amp;gt; successful in convincing you to switch to a record set model, it&amp;#39;s going&lt;br/&gt;&amp;gt; to be &amp;#34;protobuf with different varint&amp;#34;.&lt;br/&gt;&lt;br/&gt;If you take the records model, and then additionally drop the&lt;br/&gt;whole-record uniqueness constraint, yes, though that seems pushing it&lt;br/&gt;a bit by moving even more guarantees from the file format to&lt;br/&gt;application level code. I&amp;#39;d like to hear opinions of other people who&lt;br/&gt;have worked on implementations about changing this.&lt;br/&gt;&lt;br/&gt;Cheers,&lt;br/&gt;&lt;br/&gt;-- &lt;br/&gt;Pieter
    </content>
    <updated>2023-06-07T18:13:13Z</updated>
  </entry>

  <entry>
    <id>https://yabu.me/nevent1qqs9k8lyy8lfl0enz3tphfr3u3j5xeqkh3ykmpvqhma8rd4vfwruphczypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dv94r4z6</id>
    
      <title type="html">📅 Original date posted:2018-06-22 📝 Original message:On ...</title>
    
    <link rel="alternate" href="https://yabu.me/nevent1qqs9k8lyy8lfl0enz3tphfr3u3j5xeqkh3ykmpvqhma8rd4vfwruphczypwtyxl46le9482xs7t38j7nysemhsgwgrhczw3u9rl8x405np2dv94r4z6" />
    <content type="html">
      In reply to &lt;a href=&#39;/nevent1qqs0gc63gfz3t7rq34n4jx8k30q9xwt6fkwwcwc4qa7dpvsm0tevsngpxgrdk&#39;&gt;nevent1q…grdk&lt;/a&gt;&lt;br/&gt;_________________________&lt;br/&gt;&lt;br/&gt;📅 Original date posted:2018-06-22&lt;br/&gt;📝 Original message:On Thu, Jun 21, 2018 at 12:56 PM, Peter D. Gray via bitcoin-dev&lt;br/&gt;&amp;lt;bitcoin-dev at lists.linuxfoundation.org&amp;gt; wrote:&lt;br/&gt;&amp;gt; I have personally implemented this spec on an embedded micro, as&lt;br/&gt;&amp;gt; the signer and finalizer roles, and written multiple parsers for&lt;br/&gt;&amp;gt; it as well. There is nothing wrong with it, and it perfectly meets&lt;br/&gt;&amp;gt; my needs as a hardware wallet.&lt;br/&gt;&lt;br/&gt;This is awesome to hear. We need to hear from people who have comments&lt;br/&gt;or issues they encounter while implementing, but also cases where&lt;br/&gt;things are fine as is.&lt;br/&gt;&lt;br/&gt;&amp;gt; So, there is a good proposal already spec&amp;#39;ed and implemented by&lt;br/&gt;&amp;gt; multiple parties. Andrew has been very patiently shepherding the PR&lt;br/&gt;&amp;gt; for over six months already.&lt;br/&gt;&amp;gt;&lt;br/&gt;&amp;gt; PSBT is something we need, and has been missing from the ecosystem&lt;br/&gt;&amp;gt; for a long time. Let&amp;#39;s push this out and start talking about future&lt;br/&gt;&amp;gt; versions after we learn from this one.&lt;br/&gt;&lt;br/&gt;I understand you find the suggestions being brought up in this thread&lt;br/&gt;to be bikeshedding over details, and I certainly agree that &amp;#34;changing&lt;br/&gt;X will gratuitously cause us more work&amp;#34; is a good reason not to make&lt;br/&gt;breaking changes to minutiae. 
However, at least abstractly speaking,&lt;br/&gt;it would be highly unfortunate if the fact that someone implemented a&lt;br/&gt;draft specification results in a vested interest against changes which&lt;br/&gt;may materially improve the standard.&lt;br/&gt;&lt;br/&gt;In practice, the process surrounding BIPs&amp;#39; production readiness is not&lt;br/&gt;nearly as clear as it could be, and there are plenty of BIPs actually&lt;br/&gt;deployed in production which are still marked as draft. So in reality, the&lt;br/&gt;truth is that this thread is &amp;#34;late&amp;#34;, which is also why I started the&lt;br/&gt;discussion by asking what the state of implementations was. As a&lt;br/&gt;result, the discussion should be &amp;#34;which changes are worth the hassle&amp;#34;,&lt;br/&gt;and not &amp;#34;what other ideas can we throw in&amp;#34; - and some of the things&lt;br/&gt;brought up are certainly the latter.&lt;br/&gt;&lt;br/&gt;So to get back to the question of what changes are worth the hassle - I&lt;br/&gt;believe the per-input derivation paths suggested by matejcik may be&lt;br/&gt;one. As it is written right now, I believe BIP174 requires Signers to&lt;br/&gt;pretty much always parse or template match the scripts involved. This&lt;br/&gt;means it is relatively hard to implement a Signer which is compatible&lt;br/&gt;with many types of scripts - including ones that haven&amp;#39;t been&lt;br/&gt;considered yet. However, if derivation paths are per-input, a signer&lt;br/&gt;can just produce partial signatures for all keys it has the master&lt;br/&gt;key for. As long as the Finalizer understands the script type, this would&lt;br/&gt;mean that Signers will work with any script. My guess is that this&lt;br/&gt;would be especially relevant to devices where the Signer&lt;br/&gt;implementation is hard to change, like when it is implemented in a&lt;br/&gt;hardware signer directly.&lt;br/&gt;&lt;br/&gt;What do you think?&lt;br/&gt;&lt;br/&gt;Cheers,&lt;br/&gt;&lt;br/&gt;-- &lt;br/&gt;Pieter
    </content>
    <updated>2023-06-07T18:13:09Z</updated>
  </entry>

</feed>