Workflow : V2V - Just Talk - Prompt lip-synced voice and sounds to any silent video
Add voice and sounds to your silent videos with lip-sync.
It has a few settings to play around with, such as facemask vs. no facemask (how strictly to adhere to the input video), as well as how strongly the end of the video should influence the result. These settings determine how much freedom the model has to change things; too strict can look a bit unnatural.
Plus an extra feature: the workflow can also extend your silent video, since most such clips (from Wan etc.) are probably short.
Uses KJNodes' LTXVAudioVideoMask and is made for the split models in the repo here.
A little bit experimental, so updates to the workflow may come... but something to play around with for now ;-)
With extended video (optional part of the workflow)
That is not used in the workflow. It should just be empty, unless you want to use a lora.
Just select it and press CTRL + B to bypass the lora selector (or remove the node entirely).
It's there for user-made nodes that can distort audio, and not important to the workflow at all ;-) it's more there for convenience. I'll see if I can find a way for the loader to accept a blank selection so it doesn't error when unused, and update the workflow if so.

