ComfyUI ControlNet preprocessors: examples and tips (a Reddit digest).
For example, in the image below, we used ComfyUI's Canny preprocessor, which extracts the contour edge features of the image.

For those who have problems with the ControlNet preprocessor and have been living with results like the image for some time (like me), check that the ComfyUI/custom_nodes directory doesn't contain two similar "comfyui_controlnet_aux" folders. If so, rename the first one (adding a letter, for example) and restart ComfyUI.

I am a fairly recent ComfyUI user. Does ComfyUI support preprocessing of an image? In Automatic1111 you could put in an image and it would preprocess it to a depth/canny/etc. image to be used. The answer: you just run the preprocessor and then use that image in a "Load Image" node and use that in your generation process. There is one node for a preprocessor and one for loading an image.

The workflow: Pose ControlNet. I have used:
- Checkpoint: ReV Animated v1.2
- Lora: Thicker Lines Anime Style Lora Mix
- ControlNet LineArt
- ControlNet OpenPose
- ControlNet TemporalNet (diffuser)
Custom nodes in ComfyUI: ComfyUI Manager

Not as simple as dropping a preprocessor into a folder. Here is the ControlNet write-up and here is the update discussion: ControlNet 1.1 Lineart, ControlNet 1.1 Anime Lineart, ControlNet 1.1 Shuffle, ControlNet 1.1 Instruct Pix2Pix, ControlNet 1.1 Inpaint (not very sure what exactly this one does), ControlNet 1.1 Tile (unfinished, which seems very interesting).

When loading the graph, the following node types were not found: CR Batch Process Switch.

I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (as I thought it was only needed for posing, and I was having trouble loading the example workflows).

There's a preprocessor for DWPose in comfyui_controlnet_aux which makes batch processing via DWPose pretty easy.

Is this possible? In A1111 I can set the preprocessor to none, but the ComfyUI ControlNet node does not have any preprocessor input, so I assume it is always preprocessing the image (i.e. trying to extract the pose).

I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111. Hey there, I'm trying to switch from A1111 to ComfyUI as I am intrigued by the node-based approach.

I did try it, and it did work quite well with ComfyUI's Canny node; however, it's nearly maxing out my 10 GB VRAM, and speed also took a noticeable hit (went from 2.9 it/s to 1.8 it/s). I hope the official one from Stability AI will be more optimised, especially on lower-end hardware. In my Canny edge preprocessor, I seem to not be able to go into decimals like you or other people I have seen do; in other words, I can do 1 or 0 and nothing in between.
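As a rough illustration of what a Canny preprocessor actually computes, here is a minimal sketch using OpenCV (not ComfyUI's actual node code; the file names and the 100/200 thresholds are placeholder assumptions):

```python
# Minimal sketch of producing a Canny edge "detectmap" with OpenCV.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder input path
edges = cv2.Canny(img, 100, 200)                     # low/high hysteresis thresholds
cv2.imwrite("canny_detectmap.png", edges)            # white edges on black background
```

The two thresholds control how aggressively weak edges are kept, which is the main thing you would tune before feeding the result to a canny ControlNet model.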
You can load this image in ComfyUI to get the full workflow; workflows are tough to include in Reddit posts. Just drop any image into it. So if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you. Make sure that you save your workflow by pressing Save in the main menu if you want to use it again; you can also specifically save the workflow from the floating ComfyUI menu. I don't think the generation info in ComfyUI gets saved with video files, but if you saved one of the stills/frames using a Save Image node, or even if you saved a generated ControlNet image using Save Image, it would transport it over.

Then, in other ControlNet-related articles on ComfyUI-Wiki, we will specifically explain how to use individual ControlNet models with relevant examples. For example, we can use a simple sketch to guide the image generation process, producing images that closely align with our sketch.

Hi all! I recently made the shift to ComfyUI and have been testing a few things. I'm struggling to find a workflow that allows image input into ComfyUI and uses SDXL; I found one that doesn't use SDXL but can't find any others.

Hey everyone! Like many, I like to use ControlNet to condition my inpainting, using different preprocessors and mixing them. However, since a recent ControlNet update, 2 inpaint preprocessors have appeared, and I don't really understand how to use them.

Mixing ControlNets. Using multiple ControlNets to emphasize colors: in the WebUI settings, open the ControlNet options and set 'Multi ControlNet: Max models amount' to 2 or more.

For the Krita AI plugin, the paths file is c:\Users\your-username-goes-here\AppData\Roaming\krita\pykrita\ai_diffusion\.server\ComfyUI\extra_model_paths.yaml. I renamed extra_model_paths.yaml.example by removing the .example at the end of the filename, and placed my models path like so: d:/sd/models, replacing the one in the file.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt…

ControlNet can be used with other generation models. Also, if you're using Comfy, add an ImageBlur node between your image and the Apply ControlNet node, and set both blur radius and sigma to 1.
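If you want to see what that blur does to the control image outside of Comfy, here is a rough stand-in using OpenCV (an approximation, not the ImageBlur node's actual code; mapping a radius of 1 to a 3x3 kernel is an assumption):

```python
# Approximate ComfyUI's ImageBlur (blur_radius=1, sigma=1) with a Gaussian blur.
import cv2

img = cv2.imread("control_image.png")              # placeholder input path
blurred = cv2.GaussianBlur(img, (3, 3), sigmaX=1)  # radius 1 -> 3x3 kernel (assumed)
cv2.imwrite("control_image_blurred.png", blurred)
```

Slightly blurring the control image softens hard edges in the detectmap, which is why it can help when a ControlNet latches onto pixel-level noise.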
There are quite a few different preprocessors in ComfyUI, which can be further used with the same ControlNet. Sometimes you want to compare how some of them work, and sometimes something new appears. I tried to collect all the ones I know in one place.

Depth ControlNet preprocessor. Example depth map detectmap with the default settings. It is used with "depth" models (e.g. control_depth-fp16). In a depth map (which is the actual name of the kind of detectmap image this preprocessor creates), lighter areas are "closer" and darker areas are "further away". It is good for positioning things, especially positioning things "near" and "far away".

Depth_leres preprocessor. Depth_leres is almost identical to regular "Depth", but with more ability to fine-tune the options. It does lose fine, intricate detail, though. It is also fairly good for positioning things, especially positioning things "near" and "far away".

Normal map ControlNet preprocessor. Example normal map detectmap with the default settings. It is used with "normal" models (e.g. control_normal-fp16). Normal maps are good for intricate details and outlines.

MLSD ControlNet preprocessor. Example MLSD detectmap with the default settings. It is used with "mlsd" models (e.g. control_mlsd-fp16). MLSD is good for finding straight lines and edges, which makes it particularly useful for architecture like room interiors and isometric buildings. It is not very useful for organic shapes or soft, smooth curves.

While Depth Anything does provide a new ControlNet model that's supposedly better trained for it, the project itself is for a depth estimation model. It's a preprocessor for a ControlNet model, like leres, midas, zoe or marigold; I think new code may be needed to support it. I have the "Zoe Depth Map" preprocessor, but not the "Zoe Depth Anything" shown in the screenshot. Does anyone have a clue why I still can't see that preprocessor in the dropdown? I updated it (and ControlNet too). edit: nevermind, I think my installation of comfyui_controlnet_aux was somehow botched; I didn't have big parts of the source that I can see in the repo. I do see it in the other 2 repos, though. I don't know why it didn't grab those on the update.

Thank you so much! Is there a way to create depth maps from an image inside ComfyUI by using ControlNet, like in AUTO1111? I mean, in AUTO I can use the depth preprocessor, but I can't find anything like that in Comfy. All the workflows for Comfy I've found start with a depth map that has already been generated, and its creation is not included in the workflow. The answer: there are ControlNet preprocessor depth map nodes (MiDaS, Zoe, etc.); hook one up to VAE decode and preview image nodes and you can see/save the depth map as a PNG or whatever.
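For comparison, here is a hedged sketch of generating such a depth map outside ComfyUI with MiDaS via torch.hub (the same family of model behind the MiDaS preprocessor node; the small model choice, file names, and normalization are assumptions):

```python
# Estimate a depth map with MiDaS and save it in detectmap convention.
import torch
import cv2

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2RGB)  # placeholder path
with torch.no_grad():
    depth = midas(transform(img)).squeeze().cpu().numpy()

# Normalize to 0-255: lighter = "closer", as in a ControlNet depth detectmap.
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype("uint8")
cv2.imwrite("depth_detectmap.png", depth)
```

MiDaS predicts inverse depth, so after normalization the lighter pixels end up closer, matching the depth detectmap convention described above.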
Pidinet ControlNet preprocessor. Example Pidinet detectmap with the default settings. Pidinet is similar to hed, but it generates outlines that are more solid and less "fuzzy"; the current implementation has far less noise than hed, but far fewer fine details. As of 2023-02-26, the Pidinet preprocessor does not have an "official" model that goes with it.

Fake scribble ControlNet preprocessor. Example fake scribble detectmap with the default settings. Fake scribble is just like regular scribble, but ControlNet is used to automatically create the scribble sketch from an uploaded image.

Testing ControlNet with a simple input sketch and prompt. This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet, with AOM3A3 (Abyss Orange Mix 3) and using their VAE.

For texture synthesis, load the noise image into ControlNet, select the size you want to resize it to, and make sure you set the resolution to match the ratio of the texture you want to synthesize.

Segmentation ControlNet preprocessor. Segmentation is used to split the image into "chunks" of more or less related elements ("semantic segmentation"). All fine detail and depth from the original image is lost, but the shapes of each chunk will remain more or less consistent for every image generation. Load your segmentation map as an input for ControlNet and leave the preprocessor set to None: since we already created our own segmentation map, there is no need for one. At this point, you can use this file as an input to ControlNet using the steps described in How to Use ControlNet with ComfyUI – Part 1.
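If you want to create such a segmentation map yourself in code, here is a hedged sketch using torchvision's DeepLabV3 (an illustration only; ComfyUI's segmentation preprocessors are built on other models, so the class palette will not match theirs):

```python
# Build a crude semantic segmentation map: one gray level per detected class.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("input.png").convert("RGB")       # placeholder input path
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]
classes = out.argmax(1).squeeze().byte()           # one class id per pixel
Image.fromarray((classes * 12).numpy()).save("segmentation_map.png")
```

Every pixel gets a class id, so each "chunk" keeps the same shade from generation to generation, which is exactly the consistency the segmentation model exploits.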
Since there are currently many versions of ControlNet models for ComfyUI, the exact flow may differ, but here we use the current ControlNet V1.1 models as the example; concrete workflows will be covered in subsequent related tutorials. In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example. A brief introduction to ControlNet: ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang and Maneesh Agrawala. It involves supplying a reference image, using a preprocessor to convert that reference image into a usable "guide image", and then using the matching ControlNet model to guide the image generation alongside your prompt and generation model. This is the purpose of a preprocessor: it converts our reference image (such as a photo, line art, doodle, etc.) into a structured feature map so that the ControlNet model can understand and guide the generated result.

I selected the "OpenPose" control type, with the "openpose" preprocessor and the "t2i-adapter_xl_openpose" model, set "ControlNet is more important", and used this image; I received a good OpenPose preprocessing, but this blurry mess for a result. Unfortunately, your examples didn't work for me. Maybe it's your settings; you might have to use different settings for his ControlNet. Only select combinations work moderately alright: I found that one of the better combinations is to pick the "canny" preprocessor and use Adapter XL Sketch, or the "t2ia_sketch_pidi" preprocessor and a ControlLite model by kohya-ss in its "sdxl fake scribble anime" edition.

The first time, I used it like an img2img process with the lineart ControlNet model, where I used it as an image template, but it's a lot more fun and flexible using it by itself, without other ControlNet models, as well as less time consuming.

For IP-Adapter FaceID: run the WebUI, start Stable Diffusion and enable the ControlNet extension. Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model. Upload your desired face image in this ControlNet tab; you can also right-click, open in the mask editor, and apply a mask on the uploaded original image if it contains multiple people, or elements in the background you do not want. Set the ControlNet parameters: Weight 0.5, Starting 0.1, Ending 0.5; choose a weight between 0.4 and 0.6. Here are the ControlNet settings, as an example. Step 3: modify your prompt or use a whole new one, and the face will be applied to the new prompt.

Hi guys, do you know where I can find the tile_resample preprocessor for ComfyUI? I've been using it without any problem on A1111, but since I just moved the whole workflow to ComfyUI, I'm having a hard time making ControlNet tile work the same way as ControlNet tile on A1111. This is what I have so far (using the custom nodes to reduce the visual clutter). I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully no matter what tile size or image resolution I throw at it, but in ComfyUI I get an error. When you generate the image you'd like to upscale, first send it to img2img; in ControlNet, select tile_resample as the preprocessor and control_v11f1e_sd15_tile as the model. You don't need to downsample the picture; this is only useful if you want to get more detail at the same size. I get a bit better results with xinsir's tile compared to TTPlanet's.
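As I understand it, the tile_resample preprocessor amounts to downscaling the control image by a chosen rate before the tile model sees it; here is a rough sketch of that idea (my assumption of the behaviour, not the node's actual source):

```python
# Rough idea behind tile_resample: downscale the reference by a sampling rate.
import cv2

img = cv2.imread("input.png")          # placeholder input path
rate = 2                               # 1 would mean "don't downsample the picture"
h, w = img.shape[:2]
small = cv2.resize(img, (w // rate, h // rate), interpolation=cv2.INTER_AREA)
cv2.imwrite("tile_control.png", small)
```

A lower-resolution control image gives the tile model room to invent detail, which matches the advice above about only downsampling when you want more detail at the same size.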
I also automated the split of the diffusion steps between the Base and the Refiner models.

For outpainting: specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

The preprocessor for OpenPose makes images like the one you loaded in your example, but from any image, not just OpenPose lines and dots. I'm trying to use an OpenPose ControlNet with an OpenPose skeleton image, without preprocessing. Here is an example of the final image using the OpenPose ControlNet model. Not sure why the OpenPose ControlNet model seems to be slightly less temporally consistent than the DensePose one here; can I ask how you guys get around this? I am also looking for a way to input an image of a character and then make it have different poses without having to train a Lora, using ComfyUI.

DWPreprocessor: Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. Rather than remembering all the preprocessor names within ComfyUI ControlNet Aux, this single node contains a long list of preprocessors that you can choose from for your ControlNet.

First I thought tile would allow me to add some iterative details to my upscale jobs; for example, if I started with a picture of empty ocean and added a 'sailboat' prompt, tile would give me an armada of little sailboats floating out there. Try and experiment by also using the tile model without the upscaler. I have great luck with generating small (512x640, for instance), then putting it into img2img with the tile model on and its downsampler set high, and then prompting for more detail of the sort you want to add, while setting the image size incrementally higher.

Speaking of ControlNet, how do you guys get your line drawings? Use Photoshop's Find Edges filter and then clean up by hand with a brush? It seems like you could use ComfyUI with ControlNet to make the line art, then use ControlNet again to generate the final image. With ControlNet I can input an image and begin working on it; certainly easier to achieve this than with the prompt alone.

I made a composition workflow, mostly to avoid prompt bleed: the subject and background are rendered separately, blended, and then upscaled together.

You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others. My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.
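For anyone curious what "ComfyUI as an API" looks like in practice, here is a minimal hedged sketch: ComfyUI's built-in HTTP server accepts workflows on POST /prompt (port 8188 by default); the workflow_api.json file name is just an assumption for a graph exported via "Save (API Format)":

```python
# Queue a workflow on a locally running ComfyUI instance over its HTTP API.
import json
import requests

with open("workflow_api.json") as f:   # exported via "Save (API Format)"
    workflow = json.load(f)

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(resp.json())  # contains a prompt_id; results can be fetched from /history
```

This is the same mechanism an app like chaiNNer would use to drive ComfyUI as a backend.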
Once I applied the Face Keypoints preprocessor and ControlNet after the InstantID node, the results were really good. When the ControlNet was turned OFF, the prompt generates the image shown in the bottom corner; when the ControlNet was turned ON, the image used for the ControlNet is shown in the top corner. The row label shows which of the 3 types of reference ControlNets was used to generate the image shown in the grid. In this case, I changed the beginning of the prompt to include "standing in flower fields by the ocean, stunning sunset".

It's about colorizing an old picture, and it shows an example of using ControlNet and img2img in one process. The ControlNet part is lineart of the old photo, which tells SD the contours it shall draw: enable ControlNet, set the preprocessor to "None" and the model to "lineart_anime". The img2img source is the same photo, but colorized manually and simply, which shows SD the colors it should approximately paint. (Results in the following images.)

The problem with a hands adetailer is that, if you use a masked-only inpaint, the model lacks context for the rest of the body, so you'll end up with stuff like backwards hands, too big/small hands, and other kinds of bad positioning.

EDIT: I must warn people that some of my settings in several nodes are probably incorrect; only the layout and connections are, to the best of my knowledge, correct.

Hi, I hope I am not bugging you too much by asking you this on here, but would you have even the beginning of a clue why that is? It's hard to find other people asking this question on here, and I am about to lose my mind :< Additional question: where can they be loaded? Ty, I will try this; appreciate you looking into it. But it gave better results than I thought; I'm just struggling to get ControlNet to work. ComfyUI is hard.

I saw a tutorial, a long time ago, about the ControlNet preprocessor «reference only», but I don't see it with the current version of ControlNet for SDXL. Is there something like this for ComfyUI, including SDXL? Is there something similar I could use? Thank you. Reference Only is a ControlNet preprocessor that does not need any ControlNet model: you input that picture, use the "reference_only" preprocessor on ControlNet, choose "Prompt/ControlNet is more important", and then change the prompt text to describe anything else except the clothes, using maybe a 0.5 denoising value.

I'm trying to implement the reference-only "ControlNet preprocessor". For those who don't know, it is a technique that works by patching the UNet function so it can make two passes during an inference loop: one to write data from the reference image, and another to read it during the normal input-image inference, so that the output emulates the reference. After an entire weekend reviewing the material, I think (I hope!) I got the implementation right. As the title says, I included ControlNet XL OpenPose and FaceDefiner models.
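To make the write/read idea concrete, here is a toy sketch of that patching pattern (my own illustration, not ComfyUI's or A1111's actual implementation; real reference-only shares the reference's keys/values inside self-attention, whereas the blend below is just a shape-safe stand-in):

```python
# Toy "reference only" patch: cache features on a write pass, reuse on read.
import torch

class RefOnlyBlock(torch.nn.Module):
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.mode = "read"
        self.bank = []          # features stored during the reference pass

    def forward(self, x):
        if self.mode == "write":            # pass 1: remember the reference
            self.bank.append(x.detach())
            return self.block(x)
        if self.bank:                       # pass 2: mix the reference back in
            ref = self.bank.pop(0)          # assumes matching shapes/resolution
            x = 0.5 * x + 0.5 * ref         # stand-in for attention sharing
        return self.block(x)

blk = RefOnlyBlock(torch.nn.Linear(64, 64))
blk.mode = "write"; blk(torch.randn(1, 64))          # reference image pass
blk.mode = "read";  out = blk(torch.randn(1, 64))    # normal inference pass
```

The real implementations wrap the UNet's attention blocks this way, which is why it works without any separate ControlNet model.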
If a preprocessor node doesn't have a version option, it is unchanged in ControlNet 1.1. It is recommended to use the v1.1 preprocessors if they have a version option, since results from the v1.1 preprocessors are better than the v1 ones, and they are compatible with both ControlNet 1 and ControlNet 1.1.

This is a rework of comfyui_controlnet_preprocessors, based on the ControlNet auxiliary models by 🤗. I think the old repo isn't good enough to maintain. YOU NEED TO REMOVE comfyui_controlnet_preprocessors BEFORE USING THIS REPO; THESE TWO CONFLICT WITH EACH OTHER. All old workflows can still be used, but back up your workflows and pictures. There is now an install.bat you can run to install to the portable build, if detected. If you're running on Linux, or a non-admin account on Windows, you'll want to ensure that /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions. When trying to install the ControlNet Auxiliary Preprocessors in the latest version of ComfyUI, I get a note telling me to refrain from using it alongside this installation. Install a python package manager, for example micromamba (follow the installation instructions on the website), and type the commands in your console.

I was frustrated by the lack of some ControlNet preprocessors that I wanted to use, so I decided to write my own Python script that adds support for more preprocessors. But as it turned out, there are quite a lot of them. At the moment, the assembly includes…

F:\##_ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly. EDIT: Nevermind, the update of the extension didn't actually work at first, but now it did.

Download and install the latest CUDA (12.x, at this time) from the NVIDIA CUDA Toolkit Archive. The reason we're reinstalling the latest version (12.x) again is that when we installed 11.8, among other things, the installer updated our global CUDA_PATH environment variable to point to 11.8.

LATER EDIT: I noticed this myself when I wanted to use ControlNet for scribbling. When you click on the radio button for a model type, "inverted" will only appear in the preprocessor popup list for the line-type models, i.e. Canny, Lineart, MLSD and Scribble; if you click the "all" radio button and then manually select your model from the model popup list, "inverted" will be at the very top of the list. It kinda seems like the best option is to have a white background and NOT invert the input, using the scribble preprocessor, OR to invert the input in the UI but use no preprocessor. If the input is manually inverted, though, for some reason the no-preprocessor inverted input seems to be better.
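For the manual-inversion route, here is a hedged sketch of the preprocessing step itself (scribble models expect white lines on black, so a black-on-white sketch has to be inverted when you bypass the preprocessor; file names are placeholders):

```python
# Invert a black-on-white sketch so it can be fed to a scribble model directly.
from PIL import Image, ImageOps

sketch = Image.open("scribble.png").convert("RGB")  # placeholder input path
inverted = ImageOps.invert(sketch)                  # black-on-white -> white-on-black
inverted.save("scribble_inverted.png")
```

This reproduces what the "invert input" toggle does in the UI, which is why combining it with the scribble preprocessor (which inverts again) gives worse results.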