{"type":"rich","version":"1.0","title":"TheGuySwann wrote","author_name":"TheGuySwann (npub1h8…6rpev)","author_url":"https://yabu.me/npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev","provider_name":"njump","provider_url":"https://yabu.me","html":"Use Venice.ai and select the llama3.1 model. It's a great option for a big model that you can't run locally.\n\nOtherwise, a local llama3.1 20B is solid if you have the RAM."}
