so it turns out Higgsfield was too expensive for this, i usually use it for 5-9 sec videos
train a LoRA for character consistency & specialized shots
& use Stable Video Diffusion models or Wan 2.2 with vid2vid
this would be experimental; getting the desired output of resembling Waking Life is harder than imagined
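the LoRA step above can be sketched in plain numpy; this is a minimal illustration of the low-rank update idea (hypothetical shapes, not the actual Wan 2.2 or SVD training code): the base weight stays frozen, and only two small factors are trained, which is why a character adapter ends up tiny compared to the full model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 4, 8  # illustrative dimensions

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def lora_forward(x):
    # y = x W^T + (alpha / rank) * x A^T B^T
    return x @ W.T + (alpha / rank) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d_in))
# with B zero-initialized, the adapter starts as an exact no-op,
# so training begins from the pretrained model's behavior:
assert np.allclose(lora_forward(x), x @ W.T)
```

in practice you'd train A and B on a handful of character reference frames and keep W untouched, which is what makes swapping adapters per character cheap.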
