64 GB will get you the best of today's models. 128 GB is hard to justify, but it might give you a bit more runway if model sizes grow; I'm not sure how to predict that.
The latest Llama 3.2 models are reasonably sized (1B to 11B for consumers): https://ollama.com/library/llama3.2
LLM + llm-ollama is a pretty nice combo, plus the many other projects Simon writes about. Ollama can run Hugging Face models too.
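Roughly what that workflow looks like, assuming Ollama is installed and running and you have Simon Willison's LLM CLI (the model name and prompt are just examples):

```shell
# Pull a Llama 3.2 model into Ollama's local library
ollama pull llama3.2

# Install the plugin that lets LLM talk to Ollama
llm install llm-ollama

# Query the local model through the LLM CLI
llm -m llama3.2 "Summarize what mmap does in one sentence"
```

Ollama can also pull GGUF models straight from Hugging Face repos (e.g. `ollama run hf.co/<user>/<repo>`), which is handy for models not yet in its own library.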