VitorPamplona (npub1gc…fnj5z) wrote (via njump, https://yabu.me/npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z):

"Everytime I ask an AI to make a statement 'better', without further instructions, the result is often a weaker, less precise, more ambiguous, fuzzier version.

It begs the question of why. What is making the model think fuzzier is 'better'? Is it because most texts it was trained on were imprecise and fuzzy? Or is it because it is trying to 'average' words to the most common denominator?

GM."