> **TheGuySwann** ([nostr post](https://yabu.me/npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev)) wrote:
>
> Venice.ai and select the llama3.1 model. Great option for a big model that you can't run locally.
>
> Otherwise a local llama3.1 20B is solid if you have the RAM
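For the local option, here is a minimal sketch of how you might query a Llama 3.1 model running on your own machine. It assumes you serve the model with Ollama; the post above doesn't name a specific runner, and the endpoint, model tag, and helper name are illustrative:

```python
import json
import urllib.request

# Minimal sketch: query a locally running Llama 3.1 model via Ollama's
# default local HTTP API. Assumes Ollama is installed, running, and the
# model has already been pulled (e.g. `ollama pull llama3.1`).
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def ask_local_llama(prompt: str, model: str = "llama3.1") -> str:
    """Send a single prompt to the local model and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_local_llama("In one sentence, what is Nostr?"))
```

The same request shape works for any model tag Ollama hosts, so swapping in a smaller or larger Llama variant is just a matter of changing the `model` argument to whatever fits in your RAM.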