
Comfyui save image node reddit

Customize what information to save with each generated job. Look for the ImageCompositeMasked node. However, the Save Image node should save a separate PNG for each image in the batch, regardless of how it's displayed in the GUI.

- 832 x 1216

It's possible to auto-generate keyword tags for any Load Image using "MS Kosmos-2 Interrogator", "WD14 Tagger", or "CXH_JoyTag", but no available node can actually save these keywords into the default PNG metadata or XMP info (they can only be saved as extra text files). So I use a batch picker, but I can't use that with Efficiency Nodes.

If you can code, yes. I haven't had conflicts yet, but I imagine you have to disable the custom node / pack you're currently not using to get around this if you have issues.

Right-click the output (from a PreviewImage node) and select "Copy clipspace". Other than that, you can restart ComfyUI as mentioned.

ComfyUI only saves data available during queuing of the prompt; while useful, these data are not absolute and in many cases won't be able to generate the same image again.

I'm trying to create a node that pulls the EXIF data from an image.

Layout: all images are assembled together. Oh wait, yes, I know ComfyUI saves the whole workflow and values as JSON in the image.

So I created another one to train a LoRA model directly from ComfyUI! Welcome to the unofficial ComfyUI subreddit. It also allows you to save the data as a text file.

ckpt_name_1, ckpt_name_2, etc.

- 1152 x 896

Save that basic setup to your workflows and you can use it anytime you just need to upscale an existing image. This causes my steps to take up a lot of RAM, leading to killed RAM. For example, if your workflow is generally Base > Refiner > Face Fix > Upscale, you can send each stage to its own preview node.
Locate the IMAGE output of the VAE Decode node and connect it to the images input of the Preview Image node you just added. Please keep posted images SFW.

Trying to enable lowvram mode because your GPU seems to have 4GB or less.

Need urgent help generating an image of a face. Then right-click in a LoadImage node and "Paste clipspace".

- A node that extracts AI generation data (prompt, seed, model, etc.) from ComfyUI images, and EXIF data (camera settings from JPG photographs, AI generation data from Auto1111 JPGs).

Not ideal.

from PIL import Image, ImageOps

Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to use the same resolution in Photoshop as in ComfyUI.

Copy-paste all that code into your blank file.

Double-click on an empty part of the canvas, type in "preview", then click the PreviewImage option.

Basically, save configurations for different checkpoints, which already gets confusing every time I try one or the other. I have to remember which steps, CFG, sampler, and scheduler settings work for which model, and they are different across 1.5, SDXL, and SDXL Turbo models.

I'm an ultra newbie at using nodes and ComfyUI (and I'm a 58-year-old sound engineer / composer, which doesn't fix things). I've tried IPAdapter Plus Face, InstantID, ReActor, and PuLID, but the result is not the same as the real face images.

Export and edit your image. The same text node as in module 8 is used here.

text% and whatever you entered in the 'folder' prompt text will be pasted in.

I first get the prompt working as a list of the basic contents of your image. Here is why it would be useful: many of us generate a huge number of images, and the PNG format eats a lot of space.

You will need to launch ComfyUI with this option each time, so modify your .bat file or launch script.
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Added 'job_custom_text' string input.

Congratulations, you made your very first custom node! But for now, it's not really interesting.

The story text output from module 1 is converted into an image for display on the right side of the final image.

Draw in Photoshop, then paste the result into one of the benches of the workflow, OR.

But the yyyy will need to be fairly technically accurate, and expect a few/many hours iterating with it.

Turn off metadata with this launch option: wherever you launch ComfyUI from, the launch command will now need to include it.

Features:

If you don't have a Save Image node in your workflow, add one.

Wire the original filename (or create one from the seed or whatever) into ASTERR input A, and the tag list into input B.

I'm doing some tests with ComfyUI. Please share your tips, tricks, and workflows for using this software to create your AI art.

This node leverages the Python Imaging Library (PIL) and PyTorch to dynamically render text on images, supporting a wide range of customization options including font size, alignment, color, and padding.

Compatible with Civitai & Prompthero geninfo auto-detection.

I have a video and I want to run SD on each frame of that video.

You can also use the Image Filter node from the same set and set saturation to 0.

It will load images in two ways: 1) direct load from HDD; 2) load from a folder (picks the next image when one is generated).

Prediffusion -

When you have both WAS and ymc installed, the Image Save node may appear as an extension node, and it's uncertain which extension node will be displayed.

Highly recommend keeping it on your radar even if you don't end up using it.
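One detail raised in this thread is the A1111-style img2img behavior: setting 20 steps with 0.8 denoise does not run 20 steps, it runs roughly steps times denoise of them, i.e. 16. A one-liner makes the arithmetic concrete (the function name is mine):

```python
def a1111_effective_steps(steps: int, denoise: float) -> int:
    """A1111-style img2img: only about steps * denoise sampling steps
    actually run; at least one step always executes."""
    return max(1, round(steps * denoise))
```

ComfyUI's KSampler, by contrast, runs all the steps you request at the denoise level you set, which is one reason identical settings produce different results between the two UIs.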
Edit: I just saw your terminal image, and I'm not sure the failure in my case generated any errors.

It has a draggable interface that you can rearrange at your whim, custom nodes that expose the node inputs as input fields, and you can open a graph mode which lets you edit nodes as you would normally in ComfyUI.

If you are using ComfyUI-Manager, you can right-click on the group node and ...

Using a 'CLIP Text Encode (Prompt)' node, you can specify a subfolder name in the text box.

But hear me out, that's not enough at all.

If you click this, it will expand a selection menu that contains every generation parameter and node setup that you have used during that session of ComfyUI.

I'd like to append certain details to my filenames for quick reference. There's a custom Save Image node for it in the WAS node suite.

JPEG would be better, but one cannot save workflows to JPEGs.

I tried the load methods from was-node-suite-comfyui and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once.

- The data extractor doesn't require an OpenAI ...

I know I could save the preprocessed image (bypassing the preprocessors) and then bring it back in with a Load Image node, but that's extremely cumbersome.

import struct

Would any of you knowledgeable souls be able to guide me on how to achieve this? With my just-released node ...

Yeah, you can save them as a 'Template' from the context menu: select a bunch of nodes, then right-click outside of the selection on the background and find "Save as Template".

Assuming you're using a fixed seed, you could link the output to a preview and a save node, then press Ctrl+M on the save node to disable it.

The WAS "Image Save" node was recently updated, which broke existing workflows using it.

I created a background image, a Chinese-style scroll.

Any help would be appreciated; I've been trying to figure this out for a couple of days now.
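The ASTERR wiring described in this thread (original filename into input A, the tag list into input B) amounts to a small predicate over the tags. A stdlib-only sketch of what such a script might compute; the function name and the banned-tag set are hypothetical, not part of the ASTERR node itself:

```python
def asterr_filter(filename: str, tags: list, banned: set = frozenset({"2girl"})) -> str:
    """Input A is the filename, input B the tag list; return 'discard'
    when any banned tag is present, otherwise pass the filename through."""
    return "discard" if banned & set(tags) else filename
```

Downstream nodes can then route anything returning "discard" away from the Save Image node, or simply overwrite rejected images with one another as suggested elsewhere in the thread.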
OAI Dall_e 3: takes your prompt and parameters and produces a DALL·E 3 image in ComfyUI.

The goal is to build a node-based Automated Text Generation AGI.

Make sure the images are all in PNG.

Save the prompt, basic data, sampler data, and loaded models.

The example given in the plugin is:

import pillow_jxl
from PIL import Image
# Lossless encode a png image
with Image.open("example.png") as img:
    img.save("example.jxl", lossless=True)

Hello, I am running some batch processing and I have set up a save image node for my ControlNet outputs.

Then your ASTERR function would be something like: if '2girl' in b: asterr_result = 'discard'.

Paste in the code to the closest node you can find and tell it to change it from doing xxx to doing yyyy.

If you don't want this, use: --normalvram.

This creates a very basic image from a simple prompt and sends it as a source.

TripoSR was just released and I just felt like I had to create a node for ComfyUI so I can experiment with it.

Using the 'Save Image Extended' node with the 'Get Date Time String' node, outputs are organized into date-named subfolders under 'Output' as I would like them, but the folder names are a day ahead.

Then go into the properties (right-click) and change the 'Node name for S&R' to something simple like 'folder'.

Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

I am taking in the image from another node using the "IMAGE" type. Convert to input to save any node text in the job data.

The mixlab nodes have a class that saves the image to a specific location, but it automatically creates a numbered sequence of images.

Truncated decimals for 'cfg' and 'denoise' values.
So I was wondering if there's any custom node that can give me a similar feature in ComfyUI, maybe an image node with both an input and an output. In Krea, you can see the useful ROTATE/DIMENSION tool on the doggo image I pasted.

I have no idea how to code in Python, but I guess it would be easy to start from that mixlab class and modify a string or so to let it overwrite the result? Welcome to the unofficial ComfyUI subreddit.

To get the kind of button functionality you want, you would need a different UI mod of some kind that sits above ComfyUI.

Plug the image output of the Load node into the Tagger, and the other two outputs into the inputs of the Save node.

You can selectively mute the Preview node or Save node (Ctrl+M); then Comfy will not do anything to generate it.

Hi, it doesn't copy any image; it just saves the file path string of the image in your output folder, so if you deleted or moved the image from the output folder, it will show a broken image link in the gallery.

I've installed the WAS node suite because it can automatically generate a date when you save an image, using a "text add tokens" node.

I'm using a 10GB card, but I find that to run a text2img2vid pipeline like you are, I need to launch ComfyUI with the --novram --disable-smart-memory parameters to force it to unload models as it moves through the pipeline.

You save it through the SaveImage node. Instead of fiddling around with flow control on the save node, I'd just save all the rejected images over one another.

By default it only shows the first image; you have to either hit the left/right cursor keys to scroll through, or click the tiny X icon at the top left to move from single-image to grid view.

But captions are just half of the process for LoRA training. A lot of people are just discovering this technology, and want to show off what they created.
In trying to convert this to a Pillow Image, it appears this type is a tensor.

All the tools you need to save images with their generation metadata in ComfyUI.

Adding other Loader Nodes.

I would like to save them to a new folder for each generation so I can manage the data better.

There should be a Save Image node in the default workflow, which will save the generated image to the output directory in the ComfyUI directory.

This extension should ultimately combine the powers of, for example, AutoGPT, BabyAGI, and Jarvis.

Then modify the code to save the images.

Device: cuda:0 Quadro T2000 with Max-Q Design : cudaMallocAsync

If they don't rerun, it means you didn't change their settings.

That gives ckpt_name_1, ckpt_name_2, etc., which isn't useful for a one-name-fits-all save name.

So a good compromise would be to save as JPEG and save the metadata as a separate file.

Added ability to save a job data file for each image.

Hello, fellows.

import comfy

And another general difference is that A1111, when you set 20 steps and 0.8 denoise, won't actually run 20 steps but rather decreases that amount to 16.

And above all, BE NICE.

Comfy won't do anything with nodes in a pathway if there is no output.

Copy that folder's path and write it down in the widget of the Load node.

Works with png, jpeg, and webp.

While it may not be very intuitive, the simplest method is to use the ImageCompositeMasked node that comes as a default. See if that works :) There are also a few custom nodes for image resizing that may help, but I don't have their names in front of me.

To customize file names you need to add a Primitive node with the desired filename format connected to the Save Image node, as explained here.

Right-click on the Save Image node, then select Remove.

At the top and bottom of the image, I placed the game logo: BG.
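The tensor-to-Pillow trouble mentioned here comes from the layout: ComfyUI IMAGE outputs are 4-D [batch, height, width, channel] float tensors in 0-1, while torchvision's ToPILImage expects a 3-D [channel, height, width] tensor, so the 4-D array is rejected. A hedged sketch using numpy (for a torch tensor, call `tensor.cpu().numpy()` first); the function name is mine:

```python
import numpy as np
from PIL import Image

def comfy_image_to_pil(batch: np.ndarray, index: int = 0) -> Image.Image:
    """Convert one entry of a ComfyUI-style [B, H, W, C] float array
    (values 0-1) to a PIL image by indexing the batch axis first."""
    frame = np.clip(batch[index], 0.0, 1.0)
    return Image.fromarray((frame * 255).astype(np.uint8))
```

Indexing the batch axis first (or looping over it) is the key step; after that, scaling to uint8 is all Pillow needs.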
It only saves anything added to your output/ folder, like the Save Image node or Video Combine node.

Finally, add a save image node and connect it to the image output of the ReActor node.

Installation: also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto a Load Image node to load them quicker.

c) a generational pass for integrating the pasted object (we can do

New Features: Anthropic API Support: harness the power of Anthropic's advanced language models like Claude 3 (Opus, Sonnet, and Haiku) to generate high-quality prompts and image descriptions.

Bit of a panic, so I decided to try the ComfyUI and Python Dependencies batch files again; ComfyUI opened properly after that and I've got my upload button back in the Load Image node :) Somehow my Load Image node no longer showed a preview of the image or a button to upload a new image.

As for the generation time, you can check the terminal, and the same information should be written in the comfyui.log located in the ComfyUI_windows_portable folder.

But if you attach it to the top reroute node, then the image will go to the Save Image and get saved.

Out of the box this works with any image generated by Comfy, and gives you access to all widget settings.

Looking at Efficiency Nodes' simpleEval, it's just a matter of time before someone starts writing Turing-complete programs in ComfyUI :-) The WAS suite is really amazing and indispensable IMO, especially the text concatenation stuff for starters, and the wiki has other examples of Photoshop-like stuff.

Just unmute it when your image is ready for the next step.

(if possible in ComfyUI) Suggestion list:

- 1024 x 1024

#binary images on the websocket with a 8 byte header indicating the type

- giriss/comfy-image

Then add the ReActor Fast Face Swap node.
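The "#binary images on the websocket with a 8 byte header" comment fragments scattered through this thread come from a save-image-over-websocket example. In ComfyUI's example the header is two big-endian 32-bit integers (an event type, then an image format code) followed by the encoded image bytes; treat the specific values below as assumptions, not a spec:

```python
import struct

def pack_preview(image_bytes: bytes, event_type: int = 1, image_format: int = 2) -> bytes:
    """Prepend the 8-byte header: two big-endian uint32 values
    (event type, image format), then the raw encoded image."""
    return struct.pack(">II", event_type, image_format) + image_bytes

def unpack_preview(message: bytes):
    """Split a binary websocket message back into header fields and payload."""
    event_type, image_format = struct.unpack(">II", message[:8])
    return event_type, image_format, message[8:]
```

A client receiving such messages can strip the first 8 bytes and hand the remainder straight to an image decoder.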
Now you can just load that node cluster into any workflow you're working on.

Load an image and it shows a list of nodes there's information about; pick a node and it shows you what information it's got; pick the thing you want and use it (as string, float, or int).

Added 'model_name' to file or folder keys (upscale model).

You can't use that model for generations/KSampler; it's still only useful for swapping.

Upscale model now saved in job data.

where sources are selected using a switch; it also contains the empty latent node and resizes images

Keyboard shortcuts:
- Ctrl + S: Save workflow
- Ctrl + O: Load workflow
- Ctrl + A: Select all nodes
- Alt + C: Collapse/uncollapse selected nodes
- Ctrl + M: Mute/unmute selected nodes
- Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
- Delete/Backspace: Delete selected nodes
- Ctrl + Delete/Backspace: Delete the current graph

Now with timestamp in local format, it defaults to saving without asking where on OperaGX, which is what I run Comfy through.

If, for example, you want to save just the refined image and not the base one, then you attach the image wire on the right to the top reroute node, and you attach the image wire on the left to the bottom reroute node (where it currently is).

Now in your 'Save Image' nodes include %folder.

- 896 x 1152

Belittling their efforts will get you banned.

Set vram state to: LOW_VRAM.

#images will be sent in exactly the same format as the image previews

You can edit the metadata in Photopea by pressing File - File Info.

Now you can try it out too! TripoSR is a state-of-the-art open-source model for fast 3D reconstruction from a single image, collaboratively developed by Tripo AI and Stability AI.

The code might be a little bit stupid. Instead, restart ComfyUI, start the workflow, and check if it works; if it doesn't, copy the console log and maybe I can figure out what's going on. Try this.
If you are using ComfyUI-Manager, you can right-click on the group node and ...

Here is the pre-model-load excerpt from the startup log: Total VRAM 4096 MB, total RAM 32513 MB.

import numpy as np

Go to comfyui. When you export a JPG or a PNG in Photopea, you should enable the checkmark "Attach metadata".

You should see myNode in the list! Select it.

Just delete the node and replace it with the same node. "DPI" was added, which broke existing ones.

There is a folder called TEMP in the root directory of ComfyUI that saves all images that were previewed during generation.

I present the first update for this node! A couple of new features: added a delimiter with a few options; "Save prompt" is now "Save job data", with some options.

Restarted it a couple of times but it was the same.

One thing I miss from Automatic1111 is how easy it is to just preprocess the image before generating and have this image available to be used with a single toggle, without having it get preprocessed every time. I've looked into vid2vid, ComfyWarp, and WAS NODES, and all of them don't seem to work since the last ComfyUI update.

I don't have ComfyUI in front of me, but if the KSampler does say .model, there wouldn't be a name to retrieve because that information would be in the XY Input or a checkpoint.

Add a pixel-based upscale node, and then save it with a save image node. Specifically like a PNG sequence for a video, similar to how you would do batch sequences in Automatic1111.

Then save it, and open ComfyUI.

Input sources -

The ComfyUI Colab just dumps all outputs into the 'Output' folder without any structure.

What I want to do is: I have an image of one real person and want to make full-body images with the same face as the original image.

ImageTextOverlay is a customizable node for ComfyUI that allows users to easily add text overlays to images within their ComfyUI projects.
For now, only text generation inside ComfyUI with LLaMA models like vicuna-13b-4bit-128g. In the image is a workflow (untested) to enhance prompts using text generation.

Fixed counter issue when position was 'first'.

On the right-hand panel, there is a button that says "History."

Generate from Comfy and paste the result in Photoshop for manual adjustments, OR.

If you have created a 4-image batch and later you drop the 3rd one into Comfy to generate with that image, you don't get the third image, you get the first.

The "FACE_MODEL" output from the ReActor node can be used with the Save Face Model node to create an InsightFace model, which can then be used as a ReActor input instead of an image.

My custom nodes felt a little lonely without the other half.

I am aware that I can save images into whatever folder I wish; that is not (!) my problem. I am not talking about the save node or about manually saving the images into whatever folder I like. I am talking about Comfy writing the outputs of the samplers and preview nodes into a folder named "temp".

Also, if this is new and exciting to you, feel free to post. Yes, that's correct. Probably harder than I think, but I think it would be very popular ;) And perhaps a button that can switch from landscape to portrait and back would be very nice. Currently they all save into a single folder.

Plush-for-ComfyUI: Plush contains two OpenAI-enabled nodes. Style Prompt takes your prompt and the art style you specify and generates a prompt from ChatGPT-3 or 4 that Stable Diffusion can use to generate an image in that style. Thank you!
What I do is actually very simple: I just use a basic interpolation algorithm to determine the strength of ControlNet Tile & IPAdapter Plus throughout a batch of latents based on user inputs; it then applies the CN and masks the IPA in alignment with these settings to achieve a smooth effect.

Double-click on an empty space in your workflow, then type "Node".

Add a load image node, select a picture you want to swap faces with, and connect it to the input face of the ReActor node.

Initial Input block -

Is there a way to load each image in a video (or a batch) to save memory?

If you have such a node but your images aren't being saved, make sure the node is connected to the rest of the workflow and not disabled.

If I use ToPILImage from torchvision.transforms, it tells me it is a 4D array, which isn't valid.

You could try renaming the XY input, but the attribute name there isn't .model.

But if you put the Add Info node into your workflow ...

There are tons of nodes that do that; however, there's also a custom set that does more of the convert-to-black-and-white, where you can selectively choose how the conversion goes. I forget the name, but it also has glitch, so you can search for "glitch" in the manager.

Here is how it works: gather the images for your LoRA database in a single folder.

Custom Node Image Input Type.

You should use lossless=True for your use case.

The other day on the ComfyUI subreddit, I published my LoRA Captioning custom nodes, very useful for creating captions directly from ComfyUI.

A node that takes a text prompt and produces a .png from DALL·E 3.
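The "basic interpolation algorithm" described at the top of this section, ramping a ControlNet Tile or IPAdapter strength across a batch of latents, can be sketched with plain linear interpolation. This is my own minimal sketch, not the author's code, and the function name is hypothetical:

```python
def interp_strengths(start: float, end: float, count: int) -> list:
    """Linearly interpolate a per-frame strength across a batch of
    latents, e.g. for ControlNet Tile or IPAdapter weights."""
    if count < 1:
        raise ValueError("count must be at least 1")
    if count == 1:
        return [start]
    step = (end - start) / (count - 1)
    return [start + i * step for i in range(count)]
```

Each latent in the batch then gets its own strength value, which is what produces the smooth transition effect rather than one constant weight for the whole batch.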
If you want to upscale in latent space, you'll need a VAE Encode to turn it into a latent, the latent upscale node you want, then a VAE Decode into a Save Image node.

That actually does create a JSON, but the JSON cannot be loaded back into ComfyUI.

Next, link the input image from this node to the image from the VAE Decode.

As you can see, we can understand a number of things Krea is doing here: a) likely LCM to do near-real-time generation (we can do this in Comfy); b) allowing the use of MULTIPLE images as part of ONE output.

With these exciting updates, the ComfyUI IF AI Tools repository offers a comprehensive suite of tools to streamline your image generation workflow.