
{"id":7295,"date":"2026-01-10T23:23:50","date_gmt":"2026-01-10T15:23:50","guid":{"rendered":"https:\/\/infernews.com\/?page_id=7295"},"modified":"2026-01-17T23:13:40","modified_gmt":"2026-01-17T15:13:40","slug":"comfyui-ltx-2-video","status":"publish","type":"page","link":"https:\/\/infernews.com\/blog\/comfyui-ltx-2-video\/","title":{"rendered":"ComfyUI  LTX-2 video"},"content":{"rendered":"<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"ComfyUI LTX2 + Custom Audio = Perfect Lip Sync &amp;amp; Motion! You Can Make Your Own MV!\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_AqyyLY_ajTQ\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAqyyLY_ajTQ%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/AqyyLY_ajTQ\" \/><meta itemprop=\"duration\" content=\"PT9M12S\" \/><meta itemprop=\"uploadDate\" content=\"2026-01-16T13:00:31Z\" \/><\/div><div id=\"lyte_AqyyLY_ajTQ\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAqyyLY_ajTQ%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">ComfyUI LTX2 + Custom Audio = Perfect Lip Sync &amp; Motion! 
You Can Make Your Own MV!<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/AqyyLY_ajTQ\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAqyyLY_ajTQ%2F0.jpg\" alt=\"ComfyUI LTX2 + Custom Audio = Perfect Lip Sync &amp;amp; Motion! You Can Make Your Own MV!\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"In this video, we dive deep into the latest capabilities of LTX2 with ComfyUI, showing you how to fully control AI-generated video using your own custom audio input. Unlike basic image-to-video setups, we\u2019re using video-to-video generation with LTX2 LoRA models and ControlNet\u2014paired with native audio conditioning via the LTX Audio VAE Encode node. This allows for synchronized facial expressions, natural lip movements, and dynamic motion that matches your voice or music track, all generated locally without extra plugins. This tutorial is perfect for AI creators, digital artists, indie filmmakers, and YouTubers who want to produce high-quality, expressive AI avatars or stylized music videos using open-source tools. Whether you&#039;re experimenting with talking head avatars, animated narrations, or full-blown AI music videos, this workflow gives you precise control over timing, motion, and emotional expression\u2014something static image-to-video simply can\u2019t deliver reliably. Why does this matter? Because LTX2 in ComfyUI now supports true audio-aware video generation, letting you go beyond generic prompts and create content that reacts to your audio. 
With features like audio latent encoding, frame-count syncing, Stage 1\/Stage 2 sampling, and distilled LoRA detailers, you get studio-level polish without cloud costs or proprietary software. If you\u2019ve been waiting for a local, open-source alternative to commercial AI video tools\u2014this is it. LTX-2 Model Card https:\/\/huggingface.co\/Lightricks\/LTX-2 ComfyUI-LTXVideo Custom Nodes https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo LTXV2 Models include GGUF https:\/\/huggingface.co\/Kijai\/LTXV2_comfy \ud83c\udd5b\ud83c\udd63\ud83c\udd67 Gemma 3 Model Loader (follow the Text Encoder from LTX Team) https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo?tab=readme-ov-file#required-models Gemma 3 (For high VRAM) https:\/\/huggingface.co\/google\/gemma-3-12b-it-qat-q4_0-unquantized Gemma-3-12b-it-bnb-4bit (For low VRAM) https:\/\/huggingface.co\/unsloth\/gemma-3-12b-it-bnb-4bit Comfy-Org\/ltx-2 - gemma_3_12B_it.safetensors https:\/\/huggingface.co\/Comfy-Org\/ltx-2\/tree\/main\/split_files\/text_encoders ComfyUI-VoxCPMTTS https:\/\/github.com\/1038lab\/ComfyUI-VoxCPMTTS LTX2 V2V Controlnet + Custom Audio Workflow: https:\/\/www.patreon.com\/posts\/148361592?utm_source=youtube&amp;utm_medium=video&amp;utm_campaign=20260116 -------------------------------------------------------------------------------------------------------------------------------- Local Workstation GPU: https:\/\/amzn.to\/3XfXsAO -------------------------------------------------------------------------------------------------------------------------------- If you like tutorials like this, you can support our work on Patreon: https:\/\/www.patreon.com\/c\/aifuturetech\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"New #1 open-source AI video 
generator is here! Fast + 4K + audio + low vram\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_I_b2QN-B1W0\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FI_b2QN-B1W0%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/I_b2QN-B1W0\" \/><meta itemprop=\"duration\" content=\"PT38M57S\" \/><meta itemprop=\"uploadDate\" content=\"2026-01-08T03:14:39Z\" \/><\/div><div id=\"lyte_I_b2QN-B1W0\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FI_b2QN-B1W0%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">New #1 open-source AI video generator is here! Fast + 4K + audio + low vram<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/I_b2QN-B1W0\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FI_b2QN-B1W0%2F0.jpg\" alt=\"New #1 open-source AI video generator is here! Fast + 4K + audio + low vram\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"LTX-2 full installation tutorial. How to use LTX-2 in ComfyUI with low vram. Best open source video generator. 
#ai #aitools #aivideo #veo #sora LTX-2 official repo: https:\/\/github.com\/Lightricks\/LTX-2 ComfyUI LTX-2 - download nodes here: https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo\/ Download LTX-2 models here: https:\/\/huggingface.co\/Lightricks\/LTX-2\/tree\/main Use this Gemma version instead - it\u2019s WAY smaller https:\/\/huggingface.co\/unsloth\/gemma-3-12b-it-bnb-4bit\/tree\/main LTX-2 loras: https:\/\/huggingface.co\/collections\/Lightricks\/ltx-2 If you don\u2019t have ComfyUI, see this tutorial first https:\/\/youtu.be\/g74Cq9Ip2ik 0:00 LTX-2 intro 0:44 LTX-2 open source specs 3:14 How to install LTX-2 in ComfyUI 6:44 Full vs distilled workflows 8:06 Distilled lora 9:15 Downloading models 13:31 Text to video workflow 15:15 How to use on low or no VRAM 18:45 How to use loras 21:05 Image to video 25:41 How to use controlnet 34:55 V2V detailer Newsletter: https:\/\/aisearch.substack.com\/ Find AI tools &amp; jobs: https:\/\/ai-search.io\/ Support: https:\/\/ko-fi.com\/aisearch Here&#039;s my equipment, in case you&#039;re wondering: Lenovo Thinkbook: https:\/\/amzn.to\/4jWeKwH Dell Precision 5690: https:\/\/www.dell.com\/en-us\/dt\/ai-technologies\/index.htm?utm_source=AISearchTools&amp;utm_medium=youtube&amp;utm_campaign=precisionai#tab0=0 GPU: Nvidia RTX 5000 Ada https:\/\/nvda.ws\/3zfqGqS Mic: Shure SM7B https:\/\/amzn.to\/3DErjt1 Audio interface: Scarlett Solo https:\/\/amzn.to\/3qELMeu\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_3FlHlTEFzd8\"><div id=\"lyte_3FlHlTEFzd8\" 
data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=%2F%2Fi.ytimg.com%2Fvi%2F3FlHlTEFzd8%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\"><\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/3FlHlTEFzd8\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F3FlHlTEFzd8%2F0.jpg\" alt=\"YouTube video thumbnail\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"Master LTX Video Upscaling: 2 Workflows for High-Res AI Video Enhancement\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_VVnnHWIb3EA\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVVnnHWIb3EA%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/VVnnHWIb3EA\" \/><meta itemprop=\"duration\" content=\"PT9M39S\" \/><meta itemprop=\"uploadDate\" content=\"2026-01-13T02:16:17Z\" \/><\/div><meta itemprop=\"accessibilityFeature\" content=\"captions\" \/><div id=\"lyte_VVnnHWIb3EA\" 
data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVVnnHWIb3EA%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">Master LTX Video Upscaling: 2 Workflows for High-Res AI Video Enhancement<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/VVnnHWIb3EA\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FVVnnHWIb3EA%2F0.jpg\" alt=\"Master LTX Video Upscaling: 2 Workflows for High-Res AI Video Enhancement\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"In this video, we explore a powerful use case for LTX Video (LTX Two): Video Upscaling and Enhancement. We know LTX generates video, but can it fix your existing low-res clips? I have created two ComfyUI workflows to test this: a Simplified Version for quick results, and an Advanced Version that uses a &quot;fixed&quot; first frame for superior quality. We also tackle the common issue of identity drift in generative upscaling and how to solve it using Qwen Image Edit. Key Topics Covered: \ud83d\udd25 Two Workflows: Comparing the &quot;Simplified&quot; approach vs. the &quot;Advanced First-Frame Guided&quot; approach. \ud83d\udee0\ufe0f Node Breakdown: Using Kijai\u2019s optimized nodes, loading Transformers, VAEs, and the Embedding Connector properly. \ud83c\udfb5 Audio &amp; FPS Secrets: Why 25fps is crucial and how to use an &quot;Empty Latent&quot; trick to prevent audio errors in ComfyUI. \ud83e\udd16 Advanced Technique: Improving consistency by pre-upscaling the reference frame with Qwen Image Edit Plus. 
\u26a0\ufe0f Real Testing: Analyzing pros and cons, including text hallucination and slight facial changes during the upscale. #runninghub #LTXVideo #ComfyUI #AIUpscaling #VideoEnhancement #GenerativeAI #StableDiffusion #Kijai #WorkflowTutorial #Qwen #HighResVideo #AIVideo ------------------------------------------------------------------------------------------------------------------------ ltx2-kijai model download https:\/\/huggingface.co\/Kijai\/LTXV2_comfy\/tree\/main Qwen-Image-Edit-2511-Upscale2K https:\/\/huggingface.co\/valiantcat\/Qwen-Image-Edit-2511-Upscale2K online workflow 1\u3001video_ltx2_i2v_HD_upscale_basic Experience link: https:\/\/www.runninghub.ai\/post\/2010357744250920961\/?inviteCode=kol02-rh021 2\u3001video_ltx2_i2v_upscale_advanced Experience link: https:\/\/www.runninghub.ai\/post\/2010380854765297665\/?inviteCode=kol02-rh021 3\u3001qwen_image_edit_plus_upscale4k Experience link: https:\/\/www.runninghub.ai\/post\/1977064282565300225\/?inviteCode=kol02-rh021 Runninghub Fan Benefit: Register to get 1,000 RHB https:\/\/www.runninghub.ai\/?utm_source=kol02-RH021 workflow and prompt engineering download\uff1a https:\/\/github.com\/amao2001\/ganloss-latent-space\/tree\/main\/workflow\/2026-01-13ltx2upsacle ------------------------------------------------------------------------------------------------------------------------\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"LTX-2 Simplified Workflow \ud83d\udd25 Distilled Checkpoints or Separated VAE &amp;amp; Transformer?\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_-js3Lnq3Ip4\" itemprop=\"video\" itemscope 
itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F-js3Lnq3Ip4%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/-js3Lnq3Ip4\" \/><meta itemprop=\"duration\" content=\"PT11M34S\" \/><meta itemprop=\"uploadDate\" content=\"2026-01-12T21:33:34Z\" \/><\/div><meta itemprop=\"accessibilityFeature\" content=\"captions\" \/><div id=\"lyte_-js3Lnq3Ip4\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F-js3Lnq3Ip4%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">LTX-2 Simplified Workflow \ud83d\udd25 Distilled Checkpoints or Separated VAE &amp; Transformer?<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/-js3Lnq3Ip4\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F-js3Lnq3Ip4%2F0.jpg\" alt=\"LTX-2 Simplified Workflow \ud83d\udd25 Distilled Checkpoints or Separated VAE &amp;amp; Transformer?\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"\ud83c\udf1f LTX-2 for Beginners with Simple Workflow \ud83c\udf1f In this video, I\u2019ve simplified the entire LTX-2 workflow to make it cleaner, easier to read, and easier to use \u2014 especially for local setups and experimentation \ud83d\ude80 \ud83d\udd39 What\u2019s covered in this video: \u2705 A clean and simplified LTX-2 workflow \u2705 Explanation of all released LTX-2 checkpoints \u2705 Difference between normal vs distilled checkpoints \u2705 How 
distilled models generate results in just 8 sampling steps instead of the usual 20 \u2705 How to correctly use these separated components in a workflow \u26a0\ufe0f A critical bug that can silently increase video generation time by up to 10\u00d7 \ud83d\udcca Test results, sample prompts, and the exact workflow used to generate them This video is aimed at anyone who wants faster video generation, better understanding of LTX internals, and a more maintainable workflow without unnecessary complexity. If you\u2019re experimenting with LTX locally or optimizing inference speed, this will save you a lot of time. \ud83d\udcbb Hardware Used: i5 14400F, 64GB RAM, RTX 4060 TI 16GB #comfyui #aivideo #aiimage #aitools \ud83d\udd0d Resources: Links to files, prompts and workflow: https:\/\/drive.google.com\/drive\/folders\/14TCWLkma9tOeePBrCop0UrZX8f3Gf6C8?usp=sharing Image, Prompts and Result can be downloaded from Discord. https:\/\/discord.com\/channels\/1414348074477949022\/1460375927916728473 Previous Video - check if you are beginner: https:\/\/youtu.be\/W-M81QUfr9E Generate Long Videos Using Wan 2.2 https:\/\/youtu.be\/jJ6Gk1x_rT8 https:\/\/youtu.be\/ZctT0jxMk7o \ud83d\udce5 Get Started with Comfy UI: https:\/\/youtu.be\/grzK5mBitzs \ud83d\udcac Drop a comment below! \ud83d\udd14 Stay Updated: Stay connected to find the latest AI applications! Happy creating! 
\ud83c\udfa8\u2728 Playlist For ComfyUI - Stable Diffusion https:\/\/www.youtube.com\/playlist?list=PLPFN04WspxqvpVeMmJ1vHqG-uf4C4dIb0 Playlist For Krita AI Diffusion https:\/\/www.youtube.com\/playlist?list=PLPFN04WspxqvFhJDIXvIDZ3yveShMvgss Playlist For Fooocus AI - Stable Diffusion https:\/\/www.youtube.com\/playlist?list=PLPFN04WspxqsslRSpiLmwGR8QTpDYNv7z Playlist For Forge UI https:\/\/www.youtube.com\/playlist?list=PLPFN04WspxqvpZmxPPjkxV-BkuOy2FF2i\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"LTX-2 - 20 seconds Video + Audio | Advanced ComfyUI Workflow | T2V+I2V | GGUF &amp;amp; Safetensors\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_9ZWAuAX6I0A\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F9ZWAuAX6I0A%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/9ZWAuAX6I0A\" \/><meta itemprop=\"duration\" content=\"PT11M27S\" \/><meta itemprop=\"uploadDate\" content=\"2026-01-10T14:08:45Z\" \/><\/div><div id=\"lyte_9ZWAuAX6I0A\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F9ZWAuAX6I0A%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">LTX-2 - 20 seconds Video + Audio | Advanced ComfyUI Workflow | T2V+I2V | GGUF &amp; Safetensors<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div 
class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/9ZWAuAX6I0A\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F9ZWAuAX6I0A%2F0.jpg\" alt=\"LTX-2 - 20 seconds Video + Audio | Advanced ComfyUI Workflow | T2V+I2V | GGUF &amp;amp; Safetensors\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"#LTX-2 video with audio | Advanced ComfyUI Workflow | T2V + I2V | GGUF &amp; Safetensors | 20s @ 50FPS | Part 1 In this video, I introduce *LTX-2*, a powerful open-source DiT-based audio-video generation model capable of generating synchronized video and audio* in a single pipeline \u2014 all running locally. \ud835\udde6\ud835\uddee\ud835\uddfa\ud835\uddfd\ud835\uddf9\ud835\uddf2 \ud835\uddfc\ud835\ude02\ud835\ude01\ud835\uddfd\ud835\ude02\ud835\ude01: https:\/\/www.youtube.com\/watch?v=dBm85QrzphI This is *Part 1* of a multi-part series where I showcase an *advanced ComfyUI workflow* that goes far beyond the default setup. 
### \ud83d\udd25 What LTX-2 Can Do - Generate *up to 20 seconds of video at 50 FPS* - Produce *audio and video together* in a single model - Scale output up to *4K resolution* using LTX upscaling models - Designed for *local execution* with open weights ### \ud83d\ude80 What This Video Covers - An *advanced modular ComfyUI workflow* for LTX-2 - Support for *both safetensors and GGUF* - *Text-to-Video (T2V)* and *Image-to-Video (I2V)* in one unified workflow - Model split into: - Diffusion Model - Text Encoders - Video VAE - Audio VAE - Support for: - Full Dev model - Distilled model - Distilled LoRA on Dev model - Two-stage sampling: - Base generation at half resolution - 2\u00d7 latent upscaling for final output ### \u2699\ufe0f Model Variants Explained - *Dev Model* - CFG: 4 - Steps: 20 - *Distilled Model \/ Distilled LoRA* - CFG: 1 - Steps: 8 ### \ud83e\udde0 Hardware Test Setup - GPU: RTX 3060 (12GB VRAM) - Resolution: 720p - Video Length: 20 seconds - FPS: 25 - System RAM: 48GB ### \u26a0\ufe0f Important Note About I2V (Low VRAM GPUs) Text-to-Video runs fine on low-VRAM GPUs, but *Image-to-Video (I2V) is extremely VRAM intensive*. If you are using: - *I2V* - *GPU with \u2264 12GB VRAM* You *must* start ComfyUI with reserved VRAM to avoid out-of-memory errors. \u25b6\ufe0f *ComfyUI Command Line for I2V on Low VRAM* python main.py --lowvram --reserve-vram 10 - `10` means *10 GB of VRAM reserved specifically for latents* - Recommended for *I2V on GPUs with \u2264 12GB VRAM* - If using *shorter video length* or *lower resolution*, this value can be reduced accordingly You would need to use this custom node for GGUF support. https:\/\/github.com\/vantagewithai\/Vantage-Nodes \ud83d\udcc2 *Installation* Clone into your ComfyUI `custom_nodes` directory: cd ComfyUI\/custom_nodes git clone https:\/\/github.com\/vantagewithai\/Vantage-Nodes.git pip install -r requirements.txt Restart ComfyUI after installation. 
*Model links* *Models (dev)* - BF16: https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/model\/ltx-2-19b-dev-model.safetensors?download=true - FP8: https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/model\/ltx-2-19b-dev-model-fp8.safetensors?download=true - GGUF: https:\/\/huggingface.co\/vantagewithai\/LTX-2-GGUF\/tree\/main\/dev *Models (distilled)* - BF16: https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/model\/ltx-2-19b-distilled-model.safetensors?download=true - FP8: https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/model\/ltx-2-19b-distilled-model-fp8.safetensors?download=true - GGUF: https:\/\/huggingface.co\/vantagewithai\/LTX-2-GGUF\/tree\/main\/distilled *text_encoders* - https:\/\/huggingface.co\/Comfy-Org\/ltx-2\/resolve\/main\/split_files\/text_encoders\/gemma_3_12B_it.safetensors - https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/text_encoder\/ltx-2-19b-text_encoder.safetensors?download=true *vae* - https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/vae\/ltx-2-19b-VAE.safetensors?download=true - https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/audio_vae\/ltx-2-19b-audio_vae.safetensors?download=true *loras* - https:\/\/huggingface.co\/Lightricks\/LTX-2-19b-LoRA-Camera-Control-Dolly-Left\/resolve\/main\/ltx-2-19b-lora-camera-control-dolly-left.safetensors - https:\/\/huggingface.co\/Lightricks\/LTX-2\/resolve\/main\/ltx-2-19b-distilled-lora-384.safetensors *latent_upscale_models* - https:\/\/huggingface.co\/Lightricks\/LTX-2\/resolve\/main\/ltx-2-spatial-upscaler-x2-1.0.safetensors *Workflow download* - https:\/\/huggingface.co\/vantagewithai\/LTX-2-Split\/resolve\/main\/Vantage-LTX2-Advanced-Workflow-GGUF-Support.json?download=true *Model Storage Location* \ud83d\udcc2 ComfyUI\/ \u251c\u2500\u2500 \ud83d\udcc2 models\/ \u2502 \u251c\u2500\u2500 \ud83d\udcc2 checkpoints\/ \u2502 \u2502 \u2514\u2500\u2500 
ltx-2-19b-audio_vae.safetensors \u2502 \u251c\u2500\u2500 \ud83d\udcc2 diffusion_models\/ \u2502 \u2502 \u2514\u2500\u2500 ltx-2-19b-dev-model.safetensors \u2502 \u251c\u2500\u2500 \ud83d\udcc2 text_encoders\/ \u2502 \u2502 \u251c\u2500\u2500 gemma_3_12B_it.safetensors \u2502 \u2502 \u2514\u2500\u2500 ltx-2-19b-text_encoder.safetensors \u2502 \u251c\u2500\u2500 \ud83d\udcc2 vae\/ \u2502 \u2502 \u251c\u2500\u2500 ltx-2-19b-VAE.safetensors \u2502 \u251c\u2500\u2500 \ud83d\udcc2 loras\/ \u2502 \u2502 \u251c\u2500\u2500 ltx-2-19b-lora-camera-control-dolly-left.safetensors \u2502 \u2502 \u2514\u2500\u2500 ltx-2-19b-distilled-lora-384.safetensors \u2502 \u2514\u2500\u2500 \ud83d\udcc2 latent_upscale_models\/ \u2502 \u2514\u2500\u2500 ltx-2-spatial-upscaler-x2-1.0.safetensors \ud83d\udd1c *Coming in Part 2* - *Depth map support* - *ControlNet integration* - Advanced motion and structure control If this helped you: - \ud83d\udc4d Like the video - \ud83d\udd14 Subscribe for Part 2 - \ud83d\udcac Leave a comment with your setup or questions Thanks for watching!\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px 
auto;\"><\/div><figcaption><\/figcaption><\/figure>","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"googlesitekit_rrm_CAowvqSiDA:productID":"","footnotes":""},"class_list":["post-7295","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages\/7295","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/comments?post=7295"}],"version-history":[{"count":0,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages\/7295\/revisions"}],"wp:attachment":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/media?parent=7295"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}