Mastering SD3 with ComfyUI: From Theory to Practice, Detailed Comparison of New Nodes and Parameters

黎黎原上咩
In this video, I'll guide you through a detailed analysis of the SD3 ComfyUI workflow, walking through each node and comparing parameters so you can optimize and fully utilize this model's potential!

Shift: Higher values focus more on structure, lower values emphasize detail. Range 1 to 15 is usable, but I typically choose 2 to 6.
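The effect of Shift can be made concrete with a small sketch. In SD3's flow-matching sampling, the shift remaps each noise level sigma (ComfyUI's ModelSamplingSD3 node does the equivalent internally; this standalone function is a minimal re-implementation for illustration, not the node's actual code).

```python
# Minimal sketch of the sigma "time shift" used in SD3 sampling
# (illustrative re-implementation; the real logic lives inside
# ComfyUI's ModelSamplingSD3 node).

def shift_sigma(shift: float, sigma: float) -> float:
    """Remap a noise level sigma in [0, 1] by the shift factor.

    shift = 1 leaves the schedule unchanged; larger values keep sigma
    high for longer, so more of the step budget is spent on global
    structure and less on fine detail.
    """
    return shift * sigma / (1 + (shift - 1) * sigma)

print(shift_sigma(1.0, 0.5))  # 0.5  (identity)
print(shift_sigma(3.0, 0.5))  # 0.75 (mid-range noise pushed higher)
```

This is why higher shift favors structure: the sampler sees high-noise (structure-defining) levels for a larger fraction of its steps.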

CFG: Should be lower than SD1.5, generally between 3 to 4.5.
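The CFG slider controls classifier-free guidance: the final prediction moves away from the unconditional output toward the prompt-conditioned one by a factor of cfg. A minimal sketch of that rule (scalar values here stand in for the model's latent tensors):

```python
# Sketch of classifier-free guidance, the rule the CFG slider scales.
# Real pipelines apply this to latent tensors; scalars keep it readable.

def apply_cfg(uncond: float, cond: float, cfg: float) -> float:
    return uncond + cfg * (cond - uncond)

print(apply_cfg(0.0, 1.0, 4.0))  # 4.0
print(apply_cfg(0.0, 1.0, 7.5))  # 7.5
```

Because the guided prediction is extrapolated this aggressively, an SD1.5-style CFG of 7+ tends to oversaturate SD3 outputs, hence the lower 3 to 4.5 range.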

Sampling Method: Use sgm_uniform for the scheduler. For the sampler, options include dpmpp_2m, euler, uni_pc, dpm_adaptive, and uni_pc_bh2.

Iteration Steps: Depends on the sampling method. For dpmpp_2m, stay within 30 steps.
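The sampler, scheduler, and step settings above all live on a single KSampler node. A sketch of how they map onto ComfyUI's API-format workflow JSON (the seed and denoise values are placeholders; only the three fields discussed above matter here):

```python
# Sketch of the KSampler node inputs in ComfyUI's API-format workflow
# JSON, filled with the values recommended above (seed/denoise are
# illustrative placeholders).

ksampler_inputs = {
    "seed": 0,
    "steps": 28,               # stay within 30 for dpmpp_2m
    "cfg": 4.0,                # lower than SD1.5, roughly 3 to 4.5
    "sampler_name": "dpmpp_2m",
    "scheduler": "sgm_uniform",
    "denoise": 1.0,
}

assert 3.0 <= ksampler_inputs["cfg"] <= 4.5
assert ksampler_inputs["steps"] <= 30
```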

Size: 1024 x 1024 for standard images. For horizontal images, use 1344 x 768 or 1728 x 1024. For vertical images, use 768 x 1344.
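One property all the recommended sizes share is divisibility by 64, which keeps the latent dimensions whole. A quick check (the helper name and the 64-pixel granularity convention are illustrative):

```python
# Check that the recommended SD3 resolutions are all multiples of 64,
# so they map cleanly onto the latent grid (helper is illustrative).

RECOMMENDED_SIZES = [
    (1024, 1024),  # square
    (1344, 768),   # horizontal
    (1728, 1024),  # wide horizontal
    (768, 1344),   # vertical
]

def is_valid_size(width: int, height: int) -> bool:
    return width % 64 == 0 and height % 64 == 0

for w, h in RECOMMENDED_SIZES:
    print(w, h, is_valid_size(w, h))  # all True
```

When picking a custom aspect ratio, rounding both sides to the nearest multiple of 64 is a safe starting point.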

Prompts: For simplicity, use the CLIP Text Encode (Prompt) node without distinguishing CLIP inputs. For more precise control, use the CLIPTextEncodeSD3 node. Inputs for clip_g and clip_l should be the same, describing the background's color and atmosphere, excluding the main subject. In t5xxl, define the subject's hair, facial features, clothing, and overall style.
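The split-prompt scheme above can be sketched as the inputs you would type into a CLIPTextEncodeSD3 node: both CLIP fields share one background/atmosphere prompt, and t5xxl carries the subject description (the prompt strings themselves are made-up examples).

```python
# Sketch of the split-prompt scheme for the CLIPTextEncodeSD3 node.
# The prompt text is an invented example; the split itself follows
# the advice above.

background = "warm sunset tones, hazy golden atmosphere, soft bokeh"
subject = ("a young woman with long silver hair, delicate features, "
           "wearing a flowing blue dress, painterly style")

clip_text_encode_sd3_inputs = {
    "clip_g": background,  # same text for both CLIP branches
    "clip_l": background,
    "t5xxl": subject,      # subject details go to the T5 encoder
}

assert clip_text_encode_sd3_inputs["clip_g"] == clip_text_encode_sd3_inputs["clip_l"]
```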
Published on 1403/03/31.