{"type":"rich","version":"1.0","title":"OceanSlim wrote","author_name":"OceanSlim (npub1zm…v7f60)","author_url":"https://yabu.me/npub1zmc6qyqdfnllhnzzxr5wpepfpnzcf8q6m3jdveflmgruqvd3qa9sjv7f60","provider_name":"njump","provider_url":"https://yabu.me","html":"Well I can help you if you have questions. But running large local LLMs still won't achieve what the large language models at data centers can deliver. nostr:npub1utx00neqgqln72j22kej3ux7803c2k986henvvha4thuwfkper4s7r50e8 has more experience building a rig specifically for this, with 3 2070s if I remember right. He may have something to say about how well that can realistically perform."}
