Generate ComfyUI Workflow

Generate a ComfyUI Workflow using AI

6 Runs

Burak-5707b (sormagec) · 5 mo. ago
{ "3": { "inputs": { "images": [ "25", 0 ] }, "class_type": "VHS_VideoCombine", "_meta": { "title": "VHS_VideoCombine" } }, "4": { "inputs": { "ckpt_name": "albedobaseXL_v12.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } }, "5": { "inputs": { "width": 1024, "height": 1024, "batch_size": 16 }, "class_type": "EmptyLatentImage", "_meta": { "title": "Empty Latent Image" } }, "6": { "inputs": { "text": "desert with pyramids", "clip": [ "4", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Start Frame)" } }, "7": { "inputs": { "text": "bad hands, text, watermark", "clip": [ "4", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Negative)" } }, "8": { "inputs": { "text": "futuristic cyberpunk city", "clip": [ "4", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (End Frame)" } }, "10": { "inputs": { "model_name": "mm_sd_v15_v2.ckpt", "beta_schedule": "sqrt_linear (AnimateDiff)", "motion_scale": 1, "apply_v2": true }, "class_type": "ADE_AnimateDiffLoaderWithContext", "_meta": { "title": "Load AnimateDiff Model" } }, "22": { "inputs": { "seed": 849617566922766, "steps": 50, "cfg": 8, "sampler_name": "dpmpp_sde", "scheduler": "karras", "denoise": 1, "model": [ "10", 0 ], "positive": [ "23", 0 ], "negative": [ "7", 0 ], "latent_image": [ "5", 0 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } }, "23": { "inputs": { "start_clip": [ "6", 0 ], "end_clip": [ "8", 0 ], "num_frames": 16 }, "class_type": "LinearPromptInterpolation", "_meta": { "title": "Linear Prompt Interpolation" } }, "24": { "inputs": { "samples": [ "22", 0 ], "vae": [ "4", 2 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } }, "25": { "inputs": { "frame_rate": 8, "loop_count": 0, "filename_prefix": "animation", "format": "image/gif", "pingpong": false, "save_image": true, "images": [ "24", 0 ] }, "class_type": "VHS_SaveVideo", "_meta": { "title": "Save Video" } } }
Give a start frame and an end frame for a video, transform between the 2 images, and give the output as a video of the transformation.
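
The JSON above is ComfyUI's API (prompt) format, so it can be queued directly against a running ComfyUI instance over HTTP. Note that the VHS_ and ADE_ node prefixes indicate custom node packs (Video Helper Suite and AnimateDiff-Evolved), which the server would need to have installed. A minimal sketch in Python, assuming a default local server at 127.0.0.1:8188 and the workflow saved as animatediff_workflow.json (both assumptions):

import json
import urllib.request

# Load the API-format workflow shown above (the filename is an assumption).
with open("animatediff_workflow.json") as f:
    workflow = json.load(f)

# ComfyUI's /prompt endpoint expects the node graph under the "prompt" key.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address (assumption)
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response contains a prompt_id that identifies this run in /history.
    print(json.load(resp))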
Fraser-c8cc410dc2 (Fuzzyheed) · 7 mo. ago
{ "1": { "inputs": { "ckpt_name": "albedobaseXL_v12.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } }, "2": { "inputs": { "width": 1024, "height": 1024, "batch_size": 1 }, "class_type": "EmptyLatentImage", "_meta": { "title": "Empty Latent Image" } }, "3": { "inputs": { "text": "a detailed photograph of a cat, high quality, realistic", "clip": [ "1", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Prompt)" } }, "4": { "inputs": { "text": "low quality, blurry, distorted", "clip": [ "1", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Negative Prompt)" } }, "5": { "inputs": { "seed": 721897303308196, "steps": 50, "cfg": 8, "sampler_name": "dpmpp_sde", "scheduler": "karras", "denoise": 1, "model": [ "1", 0 ], "positive": [ "3", 0 ], "negative": [ "4", 0 ], "latent_image": [ "2", 0 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } }, "6": { "inputs": { "samples": [ "5", 0 ], "vae": [ "1", 2 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } }, "7": { "inputs": { "model_name": "RealESRGAN_x2.pth" }, "class_type": "UpscaleModelLoader", "_meta": { "title": "Load Upscale Model" } }, "8": { "inputs": { "upscale_model": [ "7", 0 ], "image": [ "6", 0 ] }, "class_type": "ImageUpscaleWithModel", "_meta": { "title": "Upscale Image (using Model)" } }, "9": { "inputs": { "filename_prefix": "ComfyUI", "images": [ "8", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } } }
create an image of a cat and then upscale it 2x
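
In this format, an input value like ["1", 1] is a link: it wires the input to output slot 1 of node "1" (here, the CLIP output of the checkpoint loader), while plain values are literals. That convention makes it easy to sanity-check a generated workflow before queueing it. A minimal sketch that verifies every link resolves to an existing node, assuming the JSON above is saved as cat_upscale_workflow.json (an assumed filename):

import json

with open("cat_upscale_workflow.json") as f:  # assumed filename
    workflow = json.load(f)

for node_id, node in workflow.items():
    for name, value in node["inputs"].items():
        # A link is a two-element list of [source_node_id, output_slot_index].
        if isinstance(value, list) and len(value) == 2 and isinstance(value[0], str):
            source, slot = value
            if source not in workflow:
                raise ValueError(f"node {node_id} input {name!r} references missing node {source}")

print("all node links resolve")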
Caglar-481471c927 (coeptis) · 7 mo. ago
Here's a workflow that allows you to ask a question, generate an answer, and then create an image based on that answer using the SDXL model:

{
  "1": { "inputs": { "text": "What is a unique animal hybrid that doesn't exist in real life?", "max_tokens": 50, "preset": "None" }, "class_type": "CLIPTextEncode", "_meta": { "title": "Question Input" } },
  "2": { "inputs": { "text": ["1", 0] }, "class_type": "moondream_text_model", "_meta": { "title": "Moondream Text Model" } },
  "3": { "inputs": { "ckpt_name": "albedobaseXL_v12.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load SDXL Checkpoint" } },
  "4": { "inputs": { "width": 1024, "height": 1024, "batch_size": 1 }, "class_type": "EmptyLatentImage", "_meta": { "title": "Empty Latent Image" } },
  "5": { "inputs": { "text": ["2", 0], "clip": ["3", 1] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Prompt)" } },
  "6": { "inputs": { "text": "low quality, blurry, distorted", "clip": ["3", 1] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Negative Prompt)" } },
  "7": { "inputs": { "seed": 721897303308196, "steps": 50, "cfg": 8, "sampler_name": "dpmpp_sde", "scheduler": "karras", "denoise": 1, "model": ["3", 0], "positive": ["5", 0], "negative": ["6", 0], "latent_image": ["4", 0] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } },
  "8": { "inputs": { "samples": ["7", 0], "vae": ["3", 2] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } },
  "9": { "inputs": { "filename_prefix": "question_answer_image", "images": ["8", 0] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } }
}

This workflow does the following:
1. It starts with a "Question Input" node where you can input your question.
2. The question is then passed to a "Moondream Text Model", which generates an answer based on the question.
3. The generated answer is used as the prompt for image generation.
4. The workflow uses the SDXL model (specifically albedobaseXL_v12) to generate the image.
5. The generated image is then saved.

To use this workflow:
1. Input your question in the "Question Input" node.
2. The Moondream model will generate an answer.
3. This answer will be used as the prompt for the SDXL model to generate an image.
4. The resulting image will be based on the answer to your question.

This workflow allows for a creative process where you can ask a question and get a visual representation of the answer.
A workflow where I can ask a question and generate an image from the answer
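
The question is hardcoded in node "1" above, so reusing the graph means editing that field each run. Since the API format is plain JSON, it can also be patched programmatically before queueing. A minimal sketch, assuming the workflow is saved as question_answer_workflow.json (an assumed filename; the replacement question is just an example):

import json
import random

with open("question_answer_workflow.json") as f:  # assumed filename
    workflow = json.load(f)

# Swap in a new question and randomize the KSampler seed so repeat runs differ.
workflow["1"]["inputs"]["text"] = "What would a city built by octopuses look like?"
workflow["7"]["inputs"]["seed"] = random.randint(0, 2**48)

with open("question_answer_patched.json", "w") as f:
    json.dump(workflow, f, indent=2)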
Pauluz (PauluzRTX) · 11 mo. ago
{ "1": { "inputs": { "ckpt_name": "sd_xl_base_1.0.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } }, "2": { "inputs": { "width": 1024, "height": 1024, "batch_size": 1 }, "class_type": "EmptyLatentImage", "_meta": { "title": "Empty Latent Image" } }, "3": { "inputs": { "text": "A cute cat sitting on a windowsill", "clip": [ "1", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Prompt)" } }, "4": { "inputs": { "text": "text, watermark, low quality", "clip": [ "1", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Negative Prompt)" } }, "5": { "inputs": { "seed": 123456789, "steps": 50, "cfg": 8, "sampler_name": "dpmpp_sde", "scheduler": "karras", "denoise": 1, "model": [ "1", 0 ], "positive": [ "3", 0 ], "negative": [ "4", 0 ], "latent_image": [ "2", 0 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } }, "6": { "inputs": { "samples": [ "5", 0 ], "vae": [ "1", 2 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } }, "7": { "inputs": { "model_name": "RealESRGAN_x2.pth" }, "class_type": "UpscaleModelLoader", "_meta": { "title": "Load Upscale Model" } }, "8": { "inputs": { "upscale_model": [ "7", 0 ], "image": [ "6", 0 ] }, "class_type": "ImageUpscaleWithModel", "_meta": { "title": "Upscale Image (using Model)" } }, "9": { "inputs": { "filename_prefix": "ComfyUI_Cat", "images": [ "8", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } } }
create an image of a cat and then upscale it 2x
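
The 2x factor comes from the upscale model itself: ImageUpscaleWithModel applies RealESRGAN_x2.pth at its native scale, so the 1024x1024 decoded image comes out at 2048x2048. After a run is queued, the saved file can be located through ComfyUI's /history endpoint. A minimal sketch, assuming a default local server and a prompt_id returned by an earlier /prompt call (the placeholder value is an assumption):

import json
import urllib.request

prompt_id = "REPLACE-WITH-PROMPT-ID"  # returned by the /prompt call (placeholder)

with urllib.request.urlopen(f"http://127.0.0.1:8188/history/{prompt_id}") as resp:
    history = json.load(resp)

# Output nodes (here node "9", SaveImage) list the files they wrote to the
# ComfyUI output directory, e.g. ComfyUI_Cat_00001_.png.
for node_id, node_output in history[prompt_id]["outputs"].items():
    for image in node_output.get("images", []):
        print(node_id, image["filename"])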
saqib (saqib) · 11 mo. ago
create an image of a cat and then upscale it 2x
glif - Generate ComfyUI Workflow by saqib