<oembed><type>rich</type><version>1.0</version><title>OceanSlim wrote</title><author_name>OceanSlim (npub1zm…v7f60)</author_name><author_url>https://yabu.me/npub1zmc6qyqdfnllhnzzxr5wpepfpnzcf8q6m3jdveflmgruqvd3qa9sjv7f60</author_url><provider_name>njump</provider_name><provider_url>https://yabu.me</provider_url><html>Well, I can help you if you have questions. But running large local LLMs still won&#39;t achieve what large language models at data centers can deliver. nostr:npub1utx00neqgqln72j22kej3ux7803c2k986henvvha4thuwfkper4s7r50e8 has more experience building a rig specifically for this, with 3 2070s if I remember right. He may have something to say about how well that can realistically perform.</html></oembed>