ComfyUI upscale examples (Reddit roundup)

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

ComfyUI is a powerful and modular GUI for diffusion models with a graph interface; explore its features, templates and examples on GitHub. This repo contains examples of what is achievable with ComfyUI. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Just download one, drag it inside ComfyUI, and you'll have the same workflow you see above.

Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop the sample image into ComfyUI.
Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

Currently the extension still needs some improvement; for example, you can only use resolutions that can be divided by 256, like 1024, 1280, 1536, or 2048.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. You can do the ControlNet/Ultimate SD Upscale combo. The equivalent to Ultimate SD Upscale for A1111 is Ultimate SD Upscale for ComfyUI; both of these are of similar speed.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. For some context, I am trying to upscale images of an anime village, something like Ghibli style. I wonder if there are any workflows for ComfyUI that combine Ultimate SD Upscale + controlnet_tile + IP-Adapter; all in one workflow would be awesome.

For now I got this: "A gorgeous woman with long light-blonde hair wearing a low cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by artgerm and alphonse mucha, trending on Behance, very detailed, by the best painters".

If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process. That's it for upscaling.

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware. The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or control nets.

If you want more detail, latent upscale is better, and of course noise injection will let more detail in (you need noise in order to diffuse into details). There are also other ways to upscale latents with less distortion; the standard ones are bicubic, bilinear, and bislerp. Latent quality is better, but the final image deviates significantly from the initial generation. If you don't want the distortion, decode the latent, upscale the image, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.
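To make the latent-vs-pixel distinction concrete, here is a minimal sketch (mine, not from any of the posts above) of what a latent upscale plus noise injection amounts to, assuming a standard SD latent tensor of shape [batch, 4, H/8, W/8]; the 0.3 strength is an illustrative value, not a recommendation:

```python
import torch
import torch.nn.functional as F

# A stand-in SD latent: 1 image, 4 channels, 64x64 (i.e. a 512x512 image / 8).
latent = torch.randn(1, 4, 64, 64)

# "Latent upscale by 1.5" is essentially interpolation in latent space
# (bicubic/bilinear shown here; ComfyUI also offers bislerp).
up = F.interpolate(latent, scale_factor=1.5, mode="bicubic", align_corners=False)

# Noise injection: blend fresh noise back in so the next sampling pass
# has something to diffuse into detail. strength is an illustrative knob.
strength = 0.3
up_noised = up + strength * torch.randn_like(up)

print(up.shape)  # torch.Size([1, 4, 96, 96])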
The cape is an img2img upscale after the first 2x upscale: I cropped out that portion as a square, just hires-fixed that portion, and comped it back in. You should be able to see where the comp ends, and the quality of the cape drops down to the original upscale.

Then plug the output from this into a 'latent upscale by' node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value be added in the following ksampler), then a second ksampler at 20+ steps set to probably over 0.5 denoise:
- run your prompt; this will get to the low-resolution stage and stop
- now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler
- queue the prompt again; this will now run the upscaler and second pass

An example might be using a latent upscale; it works fine, but it adds a ton of noise that can lead your image to change after going through the refining step. It's why you need at least 0.5 denoise.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom, weird setup). The workflow is kept very simple for this test: Load image, Upscale, Save image.

Depending on the noise and strength, it ends up treating each square as an individual image. Hope someone can advise.

AP Workflow 9.0 for ComfyUI: I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. I also combined ELLA in the workflow to make it easier to get what I want.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

There's "latent upscale by", but I don't want to upscale the latent image.

By applying both a prompt to improve detail and an increase in resolution (indicated as a percentage, for example 200% or 300%).

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement image2image in a pipeline that includes multi-ControlNet, and to make it so that all generations automatically get passed through something like SD upscale without me having to run the upscaling as a separate step.

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. If the workflow is not loaded, drag and drop the image you downloaded earlier.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. would be appreciated. Thanks!

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful), and even bought a license from Topaz). I haven't been able to replicate this in Comfy.

I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc.
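That "small steps, small denoise" idea can be sketched as a loop. This is my illustration, not the poster's actual graph: img2img here is a hypothetical stand-in for a low-denoise sampling pass, and the 1.25 factor, 0.1 denoise, and file names are assumptions:

```python
from PIL import Image

def img2img(image: Image.Image, denoise: float) -> Image.Image:
    """Hypothetical stand-in for a low-denoise sampling pass
    (decode -> KSampler at `denoise` -> encode in a real graph)."""
    return image  # placeholder so the sketch runs end to end

img = Image.open("face_swapped.png")  # assumed input file

# Four gentle 1.25x steps (~2.4x total) instead of one big jump.
for _ in range(4):
    w, h = img.size
    img = img.resize((int(w * 1.25), int(h * 1.25)), Image.LANCZOS)
    img = img2img(img, denoise=0.1)

img.save("upscaled.png")
```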
I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever. This is just a simple node build off what's given and some of the newer nodes that have come out.

I try to use ComfyUI to upscale (using SDXL 1.0 + Refiner). I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge, e.g. use a x2 upscaler model.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. But I probably wouldn't upscale by 4x at all if fidelity is important.

Sample a 3072 x 1280 image, sample again for more detail, then upscale 4x, and the result is a 12288 x 5120 px image.

Upscaler roundup and comparison: I upscaled it to a resolution of 10240x6144 px for us to examine the results. No attempts to fix jpg artifacts, etc. Images reduced from 12288 to 3840 px width. PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

Feature/Version: Flux.1 Dev, Flux.1 Pro, Flux.1 Schnell. Overview: Cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity. For the easy-to-use single-file versions that you can easily use in ComfyUI, see below: FP8 Checkpoint Version.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). And when purely upscaling, the best upscaler is called LDSR; the downside is that it takes a very long time.

Pixel upscale into a low denoise 2nd sampler is not as clean as the latent route. This is done after the refined image is upscaled and encoded into a latent.

There isn't a "mode" for img2img. Latent upscale is different from pixel upscale.

You guys have been very supportive, so I'm posting here first.

Here is an example: you can load this image in ComfyUI to get the workflow.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes, no, needs license", and a workflow using a non-commercial node should show some warning in red. This could lead users to increase pressure on developers.

You end up with images anyway after ksampling, so you can use those upscale nodes.

From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.
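A rough sketch of those two 2-pass routes, with hypothetical stand-ins for the VAE and sampler nodes; the denoise values are illustrative, echoing the low-denoise-for-pixel / high-denoise-for-latent rule quoted elsewhere in this roundup:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for ComfyUI nodes; bodies elided on purpose.
def vae_decode(latent: torch.Tensor) -> torch.Tensor: ...
def vae_encode(image: torch.Tensor) -> torch.Tensor: ...
def ksample(latent: torch.Tensor, denoise: float) -> torch.Tensor: ...

def hires_fix(latent: torch.Tensor, factor: float, latent_route: bool) -> torch.Tensor:
    if latent_route:
        # latent scaling: cheap, but noisy -> needs a high denoise 2nd pass
        up = F.interpolate(latent, scale_factor=factor, mode="bicubic")
        return ksample(up, denoise=0.55)
    # non-latent scaling: decode, upscale pixels, re-encode -> low denoise is enough
    img = vae_decode(latent)
    img = F.interpolate(img, scale_factor=factor, mode="bicubic")
    return ksample(vae_encode(img), denoise=0.25)
```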
Larger images also look better after refining, but on 4GB we aren't going to get away with anything bigger than maybe 1536 x 1536.

On my 4090 with no optimizations kicking in, a 512x512 16-frame animation takes around 8GB of VRAM. You can run AnimateDiff at pretty reasonable resolutions with 8GB or less; with less VRAM, some ComfyUI optimizations kick in that decrease the VRAM required.

There are also "face detailer" workflows for faces specifically.

The armor is upscaled from the original image without modification.

Flux Examples: Flux is a family of diffusion models by Black Forest Labs.

It does not work with SDXL for me at the moment. Hands are still bad though.

Yes, I searched Google before asking. When I search with quotes it didn't give any results (now it's only giving this reddit post), and without quotes it gave me a bunch of stuff mainly related to SDXL but not Cascade, and the first result is this: Examples of ComfyUI workflows.

The workflow used is the Default Turbo Postprocessing from this Gdrive folder.

But I hardly ever use ControlNet for upscaling.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

Upscale x1.5 ~ x2: no need for a model, this can be a cheap latent upscale. Sample again at denoise=0.5; you don't need that many steps. From there you can use a 4x upscale model and run sample again at low denoise if you want higher resolution.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.

I used 4x-AnimeSharp as the upscale_model and rescaled the video to 2x.

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). I might do an issue in ComfyUI about that.

If I understand correctly how Ultimate SD Upscale + controlnet_tile works: they make an upscale, divide the upscaled image into tiles, and then img2img through all the tiles. After that, they generate seams and combine everything together.

Where can one get such things? It would be nice to use ready-made, elaborate workflows! For example, ones that might do tile upscale like we're used to in AUTOMATIC1111, to produce huge images.

I just uploaded a simpler example workflow that does a 2x latent upscale in two ways; one of them uses the Iterative Mixing KSampler to noise up the 2x latent before passing it to a few steps of refinement in a regular KSampler. Still working on the whole thing, but I got the idea down.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060); workflow included, see the second pic. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users.

I usually take my first sample result to pixel space, upscale by 4x, downscale by 2x, and sample from step 42 to step 48, then pass it to my third sampler for steps 52 to 58, before going to post with it.

Is there a workflow to upscale an entire folder of images, as is easily done in A1111 in the img2img module? Basically I want to choose a folder and process all the images inside it.
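Outside of ComfyUI, the folder question has a trivial pixel-space answer. A minimal sketch with Pillow (folder names are assumptions), where the plain resize would be swapped for an upscale-model pass, or for queueing each file against a ComfyUI workflow:

```python
from pathlib import Path
from PIL import Image

src = Path("input_images")   # assumed input folder
dst = Path("upscaled")
dst.mkdir(exist_ok=True)

# Plain 2x Lanczos resize per file; replace the resize with a
# model pass for real detail recovery.
for path in sorted(src.glob("*.png")):
    img = Image.open(path)
    img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
    img.save(dst / path.name)
```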
Start ComfyUI. In the Load Video node, click on "choose video to upload" and select the video you want. In the CR Upscale Image node, select the upscale_model and set the rescale_factor. The final node is where ComfyUI takes those images and turns them into a video.

Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. If you are looking for upscale models to use, you can find some on OpenModelDB.

Try immediately VAEDecode after latent upscale to see what I mean. That's because of the model upscale.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

- Upscale to 2x and 4x in multi-steps, both with and without sampler (all images are saved)
- Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily add more)
- Details and bad-hands LoRAs loaded

I use it with DreamShaperXL mostly and it works like a charm.

Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

It's so wonderful what the ComfyUI Kohya Deep Shrink node can do on a video card with just 8GB. Making a bit of progress this week in ComfyUI.

I want to upscale my image with a model, and then select the final size of it.

I have been generally pleased with the results I get from simply using additional samplers. My workflow runs about like this: [ksampler] [VAE decode] [Resize] [VAE encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional ksamplers, same seed and most other settings, with the only differences among my (for example) four ksamplers in the #2-#n positions. It's nothing spectacular, but it gives good, consistent results. Repeat until you have an image you like, that you want to upscale.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

If your image changes drastically on the second sample after upscaling, it's because you are denoising too much.

The 16GB usage you saw was for your second, latent upscale pass.

The workflow has a different upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image.

Thanks! I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into an empty ComfyUI.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.
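A sketch of that slicing step, assuming 512px tiles with a 64px overlap; Ultimate SD Upscale's real padding and seam-blending logic is more involved than this:

```python
from PIL import Image

def tile_boxes(width: int, height: int, tile: int = 512, overlap: int = 64):
    """Yield overlapping (left, top, right, bottom) boxes covering the image."""
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            yield (left, top, min(left + tile, width), min(top + tile, height))

img = Image.open("upscaled_4x.png")  # assumed input file
for box in tile_boxes(img.width, img.height):
    crop = img.crop(box)
    # each crop would get an img2img pass here, then be pasted
    # back with the overlapping seams blended
```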
This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

I go back and forth between OG SD Upscale and Ultimate. I just find I'm going to inpaint on my images anyway, so that whole process is just an extra step and time.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

Two options here: you either upscale in pixel space first and then do a low denoise 2nd pass, or you upscale in latent space and do a high denoise 2nd pass. Usually I use two of my workflows: "Latent upscale" and then denoising at 0.5, or "Upscaling with model" and then denoising at 0.2, resampling faces at around 0.1.

We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px. So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscale image.

SDXL most definitely doesn't work with the old ControlNet.

These comparisons are done using ComfyUI with default node settings and fixed seeds.

The good thing is no upscale needed.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.
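The 4x-for-2x trick, sketched out; run_4x_model is a hypothetical stand-in for an UpscaleModelLoader + ImageUpscaleWithModel pair, with the bicubic resize inside it standing in for the real model, and the file names are assumptions:

```python
from PIL import Image

def run_4x_model(img: Image.Image) -> Image.Image:
    """Hypothetical stand-in for a 4x upscale model pass."""
    return img.resize((img.width * 4, img.height * 4), Image.BICUBIC)

img = Image.open("input.png")        # assumed input file
big = run_4x_model(img)              # 4x via the model
# Downscale the 4x result by half: this tends to give a cleaner 2x
# image than running a native 2x model pass.
out = big.resize((big.width // 2, big.height // 2), Image.LANCZOS)
out.save("clean_2x.png")
```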