
{"id":4937,"date":"2025-03-13T22:55:56","date_gmt":"2025-03-13T14:55:56","guid":{"rendered":"https:\/\/infernews.com\/?page_id=4937"},"modified":"2025-03-13T23:04:17","modified_gmt":"2025-03-13T15:04:17","slug":"windows-wsl-%e6%b7%b7%e5%85%83%e8%a6%96%e9%a0%bb-lora-%e8%a8%93%e7%b7%b4","status":"publish","type":"page","link":"https:\/\/infernews.com\/blog\/windows-wsl-%e6%b7%b7%e5%85%83%e8%a6%96%e9%a0%bb-lora-%e8%a8%93%e7%b7%b4\/","title":{"rendered":"Windows wsl \u6df7\u5143\u8996\u983b LoRA \u8a13\u7df4"},"content":{"rendered":"\n<p>Hunyuan \u5f71\u7247\u7684 LoRA \u8a13\u7df4\u6d41\u7a0b\u4e3b\u8981\u5305\u542b\u4ee5\u4e0b\u5e7e\u500b\u968e\u6bb5\uff1a<\/p>\n\n\n\n<p>\u9996\u5148\u662f <strong>\u521d\u59cb\u8a2d\u5b9a (Initial Setup)<\/strong>\u3002\u9019\u500b\u968e\u6bb5\u9700\u8981\u5728 Windows \u74b0\u5883\u4e2d\u8a2d\u5b9a <strong>WSL (Windows Subsystem for Linux)<\/strong>\u3002\u60a8\u9700\u8981\u4ee5\u7ba1\u7406\u54e1\u6b0a\u9650\u958b\u555f PowerShell \u4e26\u57f7\u884c <code>wsl --install<\/code> \u6307\u4ee4\u3002\u5b89\u88dd\u5b8c\u6210\u5f8c\uff0c<strong>\u8acb\u91cd\u65b0\u555f\u52d5\u60a8\u7684\u96fb\u8166<\/strong>\u3002<\/p>\n\n\n\n<p>\u91cd\u65b0\u555f\u52d5\u5f8c\uff0cUbuntu \u6703\u81ea\u52d5\u555f\u52d5\u4e26\u8981\u6c42\u60a8\u5efa\u7acb\u4f7f\u7528\u8005\u540d\u7a31\u548c\u8a2d\u5b9a\u5bc6\u78bc\u3002\u8acb\u52d9\u5fc5\u8a18\u4f4f\u9019\u4e9b\u6191\u8b49\uff0c\u56e0\u70ba\u5f8c\u7e8c\u6703\u7528\u5230 <code>sudo<\/code> \u6307\u4ee4\u3002\u63a5\u8457\uff0c\u60a8\u9700\u8981\u5728 Ubuntu \u74b0\u5883\u4e2d\u9032\u884c\u4e00\u4e9b<strong>\u521d\u59cb\u8a2d\u5b9a (Ubuntu Initial Setup)<\/strong>\u3002\u9019\u5305\u62ec\u66f4\u65b0\u5957\u4ef6\u5217\u8868 (<code>sudo apt update<\/code>) \u548c\u5b8c\u6574\u5347\u7d1a\u5957\u4ef6 (<code>sudo apt full-upgrade -y<\/code>)\uff0c\u4ee5\u53ca\u5b89\u88dd<strong>\u57fa\u672c\u76f8\u4f9d\u6027\u5957\u4ef6<\/strong>\uff0c\u4f8b\u5982 
To give WSL more shared memory, especially for large training runs or runs that include video, create a file named **`.wslconfig`** in your `C:\Users\*your username*` directory on Windows. In this file you can set how much memory and swap space is allocated to WSL. The source gives an example for a system with 64 GB of RAM: set memory to 48 GB and swap to 64 GB.

Next, **verify the NVIDIA setup**. Running `nvidia-smi` in the Ubuntu terminal should show your GPU(s) and driver version. If it fails, you may need to install the NVIDIA CUDA driver for WSL, available from https://developer.nvidia.com/cuda/wsl.

After that comes the **Miniconda installation**: download the Miniconda installer script with `wget`, run it with `bash`, and set up the environment variables (`source ~/.bashrc`).
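The `.wslconfig` mentioned above is not reproduced in the source; based on the values it cites (48 GB memory and 64 GB swap on a 64 GB system), a minimal sketch using Microsoft's documented `[wsl2]` keys might look like this:

```ini
# C:\Users\<your username>\.wslconfig
[wsl2]
# Cap WSL2 RAM usage (example values for a 64 GB system)
memory=48GB
# Swap file size
swap=64GB
```

Run `wsl --shutdown` from PowerShell afterwards so the new limits take effect the next time WSL starts.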
Then, **clone the training repository**: run `git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe` to clone the repository into your WSL environment, then `cd diffusion-pipe`.

Next, **set up the Python environment**. First create a conda environment named `diffusion-pipe` with `conda create -n diffusion-pipe python=3.12` and activate it with `conda activate diffusion-pipe`. **Install PyTorch and torchaudio before installing `requirements.txt`**; the versions the source tested were `torch==2.4.1`, `torchvision==0.19.1`, and `torchaudio==2.4.1+cu121`, installed from the cu121 PyTorch wheel index. Finally, install the remaining dependencies with `pip install -r requirements.txt`. The source also notes that CUDA compilation errors can appear while installing DeepSpeed or other packages, and that installing `nvidia-cuda-toolkit` via `apt` may be needed to resolve them.

With the environment ready, **download and organize the models**. Create the `models/{hunyuan,clip,llm}` directories under `diffusion-pipe`, then use `wget` to download the HunyuanVideo files and `git clone` to fetch the CLIP model and the LLM.
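Path mistakes are easiest to catch before the slow training launch. A small sanity check along these lines can verify the layout created above; the file names match the downloads in the guide, and everything else (function name, report format) is illustrative:

```python
from pathlib import Path

# Expected layout under diffusion-pipe/ after the downloads above.
EXPECTED = [
    "models/hunyuan/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors",
    "models/hunyuan/hunyuan_video_vae_bf16.safetensors",
    "models/clip",  # git clone of clip-vit-large-patch14
    "models/llm",   # git clone of llava-llama-3-8b-text-encoder-tokenizer
]

def missing_models(root: str) -> list[str]:
    """Return the expected model paths that do not exist under root."""
    base = Path(root).expanduser()
    return [rel for rel in EXPECTED if not (base / rel).exists()]

if __name__ == "__main__":
    problems = missing_models("~/diffusion-pipe")
    for rel in problems:
        print(f"missing: {rel}")
    if not problems:
        print("all model files found")
```

Running it after the downloads should print `all model files found`; anything it lists as missing points at a typo in a path or an interrupted download.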
The next step is the **configuration files**. Create two files in the `diffusion-pipe` root directory: `config.toml` and `dataset.toml`. The source provides example contents for both (supplied as `.txt` attachments that need to be renamed to `.toml`). `config.toml` holds the training settings: output directory, number of epochs, batch size, learning rate, and so on. `dataset.toml` holds the dataset settings: resolution, aspect-ratio bucketing, frame buckets, and the dataset path and repeat count. **Be sure to adjust these settings to your needs.** In particular, if you use two GPUs, set `pipeline_stages` in `config.toml` to 2.

For **preparing training data**, place your training images in the `~/training_data/images` directory. For LoRA training, 20 to 50 diverse images are recommended. Optionally, create a `.txt` file with the same name as each image to provide its prompt.
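A stray or misnamed `.txt` caption is easy to miss in a folder of 20-50 files. A quick report like the following can confirm that every caption pairs with an image before training starts; the set of image extensions checked is an assumption:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}  # assumed image types

def dataset_report(folder: str) -> dict:
    """Count images, images with matching captions, and orphan .txt files."""
    root = Path(folder).expanduser()
    images = [p for p in root.iterdir() if p.suffix.lower() in IMAGE_EXTS]
    captioned = [p for p in images if p.with_suffix(".txt").exists()]
    stems = {p.stem for p in images}
    orphans = sorted(p.name for p in root.glob("*.txt") if p.stem not in stems)
    return {"images": len(images), "captioned": len(captioned), "orphan_txt": orphans}
```

For example, `dataset_report("~/training_data/images")` returning a non-empty `orphan_txt` list means some caption files will be silently ignored.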
To train on video instead, place the videos in the same directory. Each video must have an exact frame count at a frame rate of 24 frames per second, and this frame count must match the settings in your `dataset.toml`.

With the data and configuration files ready, you can start **training**. For single-GPU training, use the following command:

```bash
NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config config.toml
```

For dual-GPU training, you may also need to add `NCCL_SHM_DISABLE=1`:
```bash
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 NCCL_SHM_DISABLE=1 deepspeed --num_gpus=2 train.py --deepspeed --config config.toml
```

The dual-GPU setup may vary from environment to environment.

While training runs, you can **monitor** it. Use `nvidia-smi` in a Windows terminal to watch GPU usage. You can also track the loss with **TensorBoard**: install it in WSL (`pip install tensorboard`), point it at your training output directory with `tensorboard --logdir /root/training_output/` (replace `/root/training_output/` with your actual path), and open `http://localhost:6006` in a browser.
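When watching the loss curve, it helps to know roughly how many optimizer steps to expect per epoch. Using values from the example configs (`num_repeats = 5`, `micro_batch_size_per_gpu = 1`, `gradient_accumulation_steps = 4`) and a hypothetical 30-image dataset, a back-of-the-envelope estimate is:

```python
def steps_per_epoch(num_images, num_repeats, micro_batch, grad_accum, num_gpus=1):
    """Rough optimizer steps per epoch: samples seen / samples per optimizer step."""
    samples = num_images * num_repeats
    return samples // (micro_batch * grad_accum * num_gpus)

print(steps_per_epoch(30, 5, 1, 4))  # 150 samples -> 37 optimizer steps
```

This is only an estimate; the exact count also depends on how aspect-ratio bucketing groups your images and how incomplete batches are handled.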
If training is interrupted, you can **resume from a checkpoint** with the `--resume_from_checkpoint` flag. For example:

```bash
NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config config.toml --resume_from_checkpoint
```

On a slower GPU, consider saving checkpoints more frequently.

Once training completes, the outputs are in the `output_dir` directory set in `config.toml`. In the latest epoch folder you will find the **`adapter.safetensors`** file: this is your trained LoRA. Rename it to something more descriptive, for example including the trigger word and the epoch number.
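The copy-and-rename step can be scripted. This sketch simply takes the most recently written `adapter.safetensors` under the output directory, so it does not depend on the exact epoch folder naming; the function name and destination paths are illustrative:

```python
import shutil
from pathlib import Path

def export_latest_lora(output_dir: str, dest_dir: str, name: str) -> Path:
    """Copy the newest adapter.safetensors out of the training output under a descriptive name."""
    root = Path(output_dir).expanduser()
    # Sort all adapter files by modification time; the last one is the newest epoch.
    adapters = sorted(root.rglob("adapter.safetensors"), key=lambda p: p.stat().st_mtime)
    if not adapters:
        raise FileNotFoundError(f"no adapter.safetensors under {root}")
    dest = Path(dest_dir).expanduser() / f"{name}.safetensors"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(adapters[-1], dest)
    return dest
```

For example, `export_latest_lora("~/training_output", "/mnt/c/ComfyUI/models/loras", "mystyle_epoch50")` would copy the newest adapter straight into a ComfyUI `loras` folder mounted from Windows (adjust both paths to your setup).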
Finally, to use the trained LoRA in ComfyUI, copy it into ComfyUI's `loras` folder. You also need the **HunyuanVideoWrapper** node installed; load the LoRA with the "HunyuanVideo Lora Select" node. Try the outputs from different epochs to find the one that suits your dataset best.

In summary, training a Hunyuan video LoRA is a relatively involved process: carefully set up the WSL environment, install the dependencies, prepare the data, write the configuration files, and run the training script. Monitoring progress and knowing how to resume an interrupted run matter too. Finally, you need to know how to use the trained LoRA in your target application, such as ComfyUI.

**dataset.toml**
```toml
# Resolution settings.
# Can adjust this to 1024 for image training, especially on 24gb cards.
resolutions = [512]

# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7

# Frame buckets (1 is for images)
frame_buckets = [1]

[[directory]]
# Set this to where your dataset is
path = '~/training_data/images'
# Reduce as necessary
num_repeats = 5
```

**config.toml** (the attachment for this file actually contains the full original English guide, version 0.2, including the example `config.toml`; it is reproduced below):

```text
Guide: Training a LoRA for Hunyuan video on Windows

Version 0.2

Feedback welcome. I already had a wsl environment set up so some steps may be incorrect!

It's fairly complex, so wait for a native windows tool if this seems too difficult.
I haven't experimented a lot with the training settings, but these worked for me for a style LoRA.

This is aimed at using images to train the LoRA, small modifications will be needed to train with video

## Initial Setup

### 1. WSL Setup

# In Windows PowerShell (Admin)

wsl --install

After installation completes:

1. Restart your computer

2. Ubuntu will automatically start and ask you to:

   - Create a username
   - Set a password

   Remember these credentials as you'll need them for sudo commands.

### 2. Ubuntu Initial Setup

# Update package lists

sudo apt update
sudo apt full-upgrade -y

# Install basic dependencies

sudo apt-get install -y git-lfs wget python3-dev build-essential

### 3. Verify NVIDIA Setup

# Check NVIDIA drivers are working

nvidia-smi

# Should show your GPU(s) and driver version

# If this fails, you may need to install the NVIDIA CUDA driver for WSL:

# Download from: https://developer.nvidia.com/cuda/wsl

### 4. Miniconda Installation

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b
source ~/.bashrc

### 5. Clone Training Repository

git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe
cd diffusion-pipe

### 6. Setup Python Environment

conda create -n diffusion-pipe python=3.12
conda activate diffusion-pipe

# Install PyTorch. Make sure to do this before installing requirements.txt. These two steps have the potential for the most issues.
# These are the versions that worked for me but YMMV

pip install torch==2.4.1 torchvision==0.19.1 --index-url https://download.pytorch.org/whl/cu121
pip install torchaudio==2.4.1+cu121 --index-url https://download.pytorch.org/whl/cu121

# Install requirements

pip install -r requirements.txt

# Issues:

- If you encounter CUDA compilation errors during pip install of DeepSpeed or other packages, you may need to install nvidia-cuda-toolkit via apt
- Solve other pip/torch errors with your favorite LLM

## Accessing Files in Windows

You can access your WSL files in Windows File Explorer by navigating to this directory (Ubuntu folder may differ in name):

\\wsl$\Ubuntu\home\yourusername\diffusion-pipe\

Replace 'yourusername' with the username you created during WSL setup.

This allows you to easily transfer images to your training folder and copy the finished LoRA to ComfyUI.

## Download and Organize Models

If you have the existing files, copy them from windows to these folders.

Otherwise:

cd ~/diffusion-pipe
mkdir -p models/{hunyuan,clip,llm}

# Download HunyuanVideo files

wget https://huggingface.co/Kijai/HunyuanVideo_comfy/resolve/main/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors -P ~/diffusion-pipe/models/hunyuan/
wget https://huggingface.co/Kijai/HunyuanVideo_comfy/resolve/main/hunyuan_video_vae_bf16.safetensors -P ~/diffusion-pipe/models/hunyuan/

# Download CLIP model

git clone https://huggingface.co/openai/clip-vit-large-patch14 models/clip

# Download LLM

git clone https://huggingface.co/Kijai/llava-llama-3-8b-text-encoder-tokenizer models/llm

## Configuration Files

Create two configuration files in the main directory (diffusion-pipe): config.toml and dataset.toml. These are in the attachments (rename as .toml)
### 1. Training Configuration (config.toml)

Example config.toml (adjust as necessary):

# Dataset config file.
output_dir = '~/training_output'
dataset = 'dataset.toml'

# Training settings
epochs = 50
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 4
gradient_clipping = 1.0
warmup_steps = 100

# eval settings
eval_every_n_epochs = 5
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1

# misc settings
save_every_n_epochs = 5
checkpoint_every_n_minutes = 30
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
steps_per_print = 1
video_clip_mode = 'single_middle'

[model]
type = 'hunyuan-video'
transformer_path = '~/diffusion-pipe/models/hunyuan/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors'
vae_path = '~/diffusion-pipe/models/hunyuan/hunyuan_video_vae_bf16.safetensors'
llm_path = '~/diffusion-pipe/models/llm'
clip_path = '~/diffusion-pipe/models/clip'
dtype = 'bfloat16'
transformer_dtype = 'float8'
timestep_sample_method = 'logit_normal'

[adapter]
type = 'lora'
rank = 64
dtype = 'bfloat16'

[optimizer]
type = 'adamw_optimi'
lr = 5e-5
betas = [0.9, 0.99]
weight_decay = 0.02
eps = 1e-8

### 2. Dataset Configuration (dataset.toml)

# Resolution settings.
# Can adjust this to 1024 for image training, especially on 24gb cards.
resolutions = [512]

# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7

# Frame buckets (1 is for images)
frame_buckets = [1]

[[directory]]
# Set this to where your dataset is
path = '~/training_data/images'
# Reduce as necessary
num_repeats = 5

## Preparing Training Data

1. Create dataset directory:

mkdir -p ~/training_data/images
2. Place training images in the directory:

- LoRA: 20-50 diverse images
- Optional: Create matching .txt files with prompts (same name as image file)

Example structure:

~/training_data/images
├── image1.png
├── image1.txt  # Optional prompt file
├── image2.png
├── image2.txt

## Training

Launch training with this command:

NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config config.toml

## Monitoring Training

- Monitoring GPU usage in a windows terminal:

nvidia-smi --query-gpu=timestamp,name,temperature.gpu,utilization.gpu,memory.used,memory.total --format=csv -l 5

- Training outputs will be saved in the directory specified by output_dir in your config

## Using the Trained LoRA

1. After training completes, find your LoRA file:

- Navigate to training output directory in Windows:

\\wsl$\Ubuntu\home\yourusername\training_output

- Look for the latest epoch folder
- Find the adapter.safetensors file
2. Using with ComfyUI:

- Copy and rename the adapter.safetensors (to something descriptive) to your ComfyUI loras folder
- Make sure you have the HunyuanVideoWrapper node installed https://github.com/kijai/ComfyUI-HunyuanVideoWrapper
- Use the "HunyuanVideo Lora Select" node to load it
- Experiment with different epochs to find the ideal number for your dataset
```