2025-03-13 23:46:48 UTC

Mike Stone on Nostr:

So, I did it. I hooked up the #HomeAssistant Voice to my #Ollama instance. As suggested, it's much better at recognizing the intent of my requests. As [@chris_hayes](https://fosstodon.org/@chris_hayes) suggested, I'm using the new #Gemma3 model. It now knows "How's the weather" and "What's the weather" are the same thing, and I get an answer for both. Responses are a little slower than without the LLM, but honestly the difference is negligible. It's slightly slower still if I use local #Piper instead of HA's cloud speech service.
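For the curious, the reason the two phrasings work is that an LLM maps free-form speech to an intent instead of relying on exact sentence matching. A minimal sketch of what a chat request to a local Ollama instance looks like; the port is Ollama's default, but the model name, system prompt, and intent label here are illustrative assumptions, not Home Assistant's actual wiring:

```python
import json

# Default Ollama chat endpoint on a local instance (assumed setup).
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(utterance: str) -> dict:
    """Build an Ollama /api/chat request asking the model to name the user's intent."""
    return {
        "model": "gemma3",   # the model suggested in the post
        "stream": False,
        "messages": [
            # Hypothetical system prompt; HA's real prompt differs.
            {"role": "system",
             "content": "Reply with a single intent name for the user's request, "
                        "e.g. get_weather."},
            {"role": "user", "content": utterance},
        ],
    }

# Both phrasings from the post produce structurally identical requests;
# it's the model, not keyword matching, that maps them to the same intent.
for utterance in ("How's the weather", "What's the weather"):
    print(json.dumps(build_chat_payload(utterance), indent=2))
```

Sending either payload (e.g. with `requests.post(OLLAMA_URL, json=payload)`) should yield the same intent name back, which is the behavior described above.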