The workflow section contains workflows that I have personally used to create my AI-generated videos, or at least toyed around with. You need some basic knowledge of ComfyUI and/or stable-diffusion-webui by Automatic1111: how they work, and where to put the weights, LoRAs etc.
LTX Video Image to Video
The zip file contains the workflow in JSON and PNG format, plus the source image that was used to create the video. 1.5 MB
Howto
Drag an initial image into the image node, adjust the prompt, and press Queue. See the red-marked nodes. The rest should be fine as it is.
Description
An image-to-video ComfyUI workflow with LTX Video. It is basically the same workflow that you can find in the LTX custom node, just a bit rearranged and with two little changes. So if you have found the original workflow and are happy with it, there is no real need to download this one.
Note that I was not able to add upscaling to this workflow; I constantly ran into OOM errors. So upscaling is better done in an extra step. You can find example upscaling workflows in my article about video upscaling in ComfyUI: https://www.tomgoodnoise.de/index.php/video-upscaling-in-comfyui/
The creation size can be freely chosen, as long as each dimension is a multiple of 32. I think there is also a minimum size. But you want to render as big as possible anyway.
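If you want to sanity-check a size before queueing, a little helper like the following works. This is just a sketch on the side, not part of the workflow, and the minimum of 256 pixels is only a placeholder assumption, not a confirmed LTX limit.

```python
# Minimal helper sketch: snap a requested size to the nearest multiple of 32.
# The minimum of 256 px is a placeholder assumption, not a confirmed LTX limit.
def snap_to_multiple_of_32(width: int, height: int, minimum: int = 256) -> tuple[int, int]:
    def snap(value: int) -> int:
        return max(minimum, round(value / 32) * 32)
    return snap(width), snap(height)

print(snap_to_multiple_of_32(1000, 600))  # -> (992, 608)
```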
There are some collapsed Note nodes beside the important nodes. Click on them to expand them.
The Note nodes contain further information. In the case of the models, they also contain links to the models and where to put them.
Time
The example video rendered in an amazing 1:30 minutes, plus some overhead for preparation. LTX Video is fast.
Requirements
This workflow was created with 16 GB VRAM. The minimum requirement is 12 GB VRAM. And you should not have less than 32 GB of system RAM.
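If you are not sure what your machine has, you can check it quickly from Python. This is only a convenience sketch and assumes torch (with CUDA) and psutil are installed.

```python
# Quick sketch to check available GPU VRAM and system RAM.
# Assumes torch (with CUDA) and psutil are installed.
import torch
import psutil

if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)}, VRAM: {vram_gb:.1f} GB")
else:
    print("No CUDA GPU detected.")

ram_gb = psutil.virtual_memory().total / 1024**3
print(f"System RAM: {ram_gb:.1f} GB")
```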
CogVideoX Image to Video
The zip file contains the workflow in JSON and PNG format, plus the source image that was used to create the video. 959 KB
Howto
Drag an initial image into the image node, adjust the prompt, and press Queue. See the red-marked nodes. The rest should be fine as it is.
Description
An image-to-video ComfyUI workflow with CogVideoX. Tested with CogVideoX Fun 1.1 and 1.5. Note that the motion LoRA does not work with the Fun 1.5 model, just with the 1.1 one.
This workflow also contains a CogVideoX motion LoRA for the camera movement. And you can add further instructions in the prompt; CogVideoX relies on motion information in text form.
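To give an idea of what motion information in text form can look like, here is a hypothetical prompt. The wording is purely my illustration, not the prompt from the example video.

```python
# Hypothetical example of a CogVideoX prompt that spells out the motion in text.
# The wording is illustrative only; adapt it to your own scene.
prompt = (
    "A sailboat drifting across a calm lake at sunset, "
    "the camera slowly pans from left to right, "
    "gentle waves, soft golden light"
)
```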
It also has a very simple upscaling method implemented. I am still on my journey to figure out a dedicated upscaling workflow, but for some it might still be useful. It is super fast compared to upsampling with another KSampler.
The CogVideoX creation size is limited. The old version 1.1 is fixed to a 3:2 format at 720×480 resolution. The new version 1.5 goes up to double that size, but the motion LoRA that I use here does not work with it.
There are some collapsed Note nodes beside the important nodes. Click on them to expand them.
The Note nodes contain further information. In the case of the models, they also contain links to the models and where to put them.
Time
The example video rendered in 8 minutes with CogVideoX Fun version 1.1 on a 4060 Ti, plus some overhead for preparation and without the upscaling. Version 1.5 renders twice as fast, but the motion LoRA does not work with it. Upscaling adds another 4 minutes.
Requirements
This workflow was created with 16 GB VRAM. The minimum requirement is 12 GB VRAM. You might get it to work with low-VRAM settings, but I could not get CogVideoX to work on my old 3060 Ti with just 8 GB VRAM.
Deforum Settings
The zip file contains four example settings files for the Deforum extension in SD WebUI from Automatic1111. 12 KB
Howto
Load one of the example settings files, adjust the prompt, maybe add an initial image, and generate.
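The settings files are plain JSON, so you can also tweak them in a script before loading them in the WebUI. The key names used below ("prompts", "max_frames") and the filename are assumptions on my part; check them against your own settings file.

```python
# Sketch: tweak a Deforum settings file (plain JSON) before loading it in the WebUI.
# Key names like "prompts" and "max_frames" are assumptions; verify against your file.
import json

with open("example_settings_1.txt", "r", encoding="utf-8") as f:  # hypothetical filename
    settings = json.load(f)

settings["prompts"]["0"] = "a misty forest at dawn, cinematic lighting"  # prompt at frame 0
settings["max_frames"] = 240  # total number of frames to render

with open("my_settings.txt", "w", encoding="utf-8") as f:
    json.dump(settings, f, indent=4)
```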
Description
Deforum is an add-on for Stable Diffusion. It allows seed travelling and was one of the first ways to generate animated content with Stable Diffusion. I used it for quite a while before I switched to Automatic1111.
In the zip file you will find four example settings files that I have used to generate my videos.
Time
This heavily depends on the settings and the size. Multiply the time to generate a single frame by the number of frames.
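As a quick back-of-the-envelope sketch (the numbers are made up for illustration, not measurements):

```python
# Rough render-time estimate: time per frame multiplied by the number of frames.
# The example numbers are placeholders, not measurements.
seconds_per_frame = 4.0   # measured from a single test frame
fps = 15                  # output frame rate
duration_seconds = 20     # desired clip length

total_frames = fps * duration_seconds
total_minutes = seconds_per_frame * total_frames / 60
print(f"{total_frames} frames -> roughly {total_minutes:.0f} minutes")  # 300 frames -> roughly 20 minutes
```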
Requirements
These settings were created with 8 GB VRAM back in the day. I heard rumours that it even worked with 6 GB. It is an old technique from the early days.
You need the SD WebUI from Automatic1111 or ForgeUI. I have used the Automatic1111 solution; ForgeUI did not exist back then.
https://github.com/lllyasviel/stable-diffusion-webui-forge
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases
I do NOT recommend even trying out the add-on for ComfyUI. I did, and it killed my installation by messing around with the torch version. I had to start from scratch.
You need FFmpeg installed: https://www.ffmpeg.org/
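A quick way to check that FFmpeg is actually reachable on your system is a few lines of Python; this is just a convenience check.

```python
# Check whether ffmpeg is on the PATH, which Deforum needs for video output.
import shutil

ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path:
    print(f"ffmpeg found at {ffmpeg_path}")
else:
    print("ffmpeg not found - install it and make sure it is on your PATH")
```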
And of course you need the Deforum add-on itself: https://github.com/deforum-art/sd-webui-deforum. I am not sure whether this version of the add-on also works in ForgeUI, but it should, since ForgeUI is a fork of the Automatic1111 version.