Internal Stable Diffusion #1602
base: main
adodge commented Feb 26, 2023 (edited)
- Proof of concept and request for comment
- Using a fork of ComfyUI that's packaged as a library (https://github.com/adodge/ComfyLib)
- Currently you need to manually install from git, but I'll put it on PyPI if this looks viable
- Need to work on the Navi input/output types and the renderer components
- Would be nice to cache checkpoints between runs; loading the model can take 5-10 seconds
- Needs a higher-level node that does something like the existing external "Text to Image"
- There are lots of other nodes in ComfyUI that could be ported over
Force-pushed 64d7a04 to 38d5124
Force-pushed 38d5124 to 88b8744
Did some work on the library: better VRAM memory management options, and rearranged the node inputs so the image and model are the first two inputs. Stuff to do before I mark this ready to review:
54d27be
to
bd74586
Compare
A few initial comments. In general this looks pretty good and clean, and I like that comfylib did most of the heavy lifting so we didn't need much code here.
name="Stable Diffusion",
description="Nodes for using Stable Diffusion",
icon="PyTorch",
color="#DD6B20",
IMO these should probably have a different icon & color (like the webui ones). While they are still PyTorch, I think it's separate enough that it makes sense to make it unique. The webui nodes I made purple, which coincidentally also happens to be the same color as pytorch lightning. It might at least be good to use the lightning logo for the load checkpoint node.
Yeah, I just used the color and icon of whatever node I copied to get started with. I changed the color to match the external SD one, and changed the icons to something suitably vague. It's hard to find good icons to represent what these nodes do, at least without designing our own. I tried to use the SiPytorchLightning icon from https://react-icons.github.io/react-icons/search?q=lightning, but it doesn't seem to be part of the icon library we're using.
We probably need to update the package
Thank you!
def run(self, clip: CLIPModel, prompt: Optional[str]) -> Conditioning:
    prompt = prompt or ""
    try:
        clip.cuda()
Instead of doing .cuda(), could you do .to(torch.device(exec_options.full_device)) like we do elsewhere? This not only ensures it goes on the right GPU but also allows things like DML to work (if someone has that manually set up).
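The suggestion above, sketched out: resolve the target device from the execution options once, falling back to CPU when the requested backend is unavailable. full_device mirrors the exec_options field named in the comment; the fallback logic here is an assumption for illustration, not chaiNNer's actual implementation.

```python
def resolve_device(full_device: str, cuda_available: bool) -> str:
    # Fall back to CPU if a CUDA device was requested but none is available.
    if full_device.startswith("cuda") and not cuda_available:
        return "cpu"
    # Otherwise trust the configured device string (cuda:0, dml, mps, cpu, ...).
    return full_device

# Hypothetical usage in a node:
#   clip.to(torch.device(resolve_device(exec_options.full_device,
#                                       torch.cuda.is_available())))
```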
try:
    vae.cuda()
    img = RGBImage.from_array(image, device="cuda")
Same thing with the device param here. It might be more complex than just changing the .cuda calls, but you can probably just pass in the execution options device.
So, I went and replaced the .cuda()s with .to(exec_options.full_device). For some reason, that seems to be set to "cpu" for me. I don't have the CPU Mode setting set in the options. I'm not really sure how it's getting set.
The normal PyTorch nodes seem to be running on GPU. Odd...
My guess is that since this isn't pytorch directly (I'm not even sure what this is), it probably doesn't support that syntax. I'd say just check if the device is CUDA and if so do this, otherwise do CPU.
The .to calls are getting passed transparently into pytorch. When I stick a breakpoint in the sampler node, exec_options.full_device is set to cpu. When I put the breakpoint in the PyTorch load model node, it's set to cuda. I don't really know why that would be, and I haven't had a chance to dig into it. It looks like it's supposed to be a global.
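One way the behavior described above can happen, sketched with a hypothetical module-level options singleton: if a node captures a reference to the options object once (say, at import time) instead of reading the global each run, it keeps seeing the stale cpu default even after the setting is updated. This illustrates the failure mode only; it is not chaiNNer's actual code.

```python
# Hypothetical module-level execution-options singleton.
class ExecutionOptions:
    def __init__(self, full_device: str = "cpu"):
        self.full_device = full_device

_options = ExecutionOptions()  # default is "cpu" until configured

def set_execution_options(opts: ExecutionOptions) -> None:
    global _options
    _options = opts

def get_execution_options() -> ExecutionOptions:
    # Nodes should call this at run time, not cache the result.
    return _options
```

A node that stores the result of get_execution_options() once will keep the old object; a node that calls it on every run sees the update, which would explain two nodes observing different devices.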
{
    packageName: 'comfylib',
    version: '0.0.5',
    sizeEstimate: 3 * GB,
What exactly makes this 3GB? We aren't downloading torch again through this are we?
There is a bunch of other junk that's pulled in, directly or indirectly. I should do a pass on the comfy code and figure out if there are any dependencies we can drop. I believe it should not re-download pytorch if it's already installed.
With a fresh venv, after installing comfylib the lib directory is 4.6GB, and contains these libraries: https://gist.github.com/adodge/b6722ab4d2229562ef5e2663a85e1daf
What's that nvidia package?
Hrm, I'm not sure. That doesn't seem to be installed anymore; it's possible I didn't generate that list correctly.
Here's the list of dependencies for comfylib, and all their dependencies, courtesy of pipdeptree: https://gist.github.com/adodge/de2e1686f0ec19727c44bd905b3c119e
When I get a chance, I'll go through and do a knockout experiment to see which of these are actually being used by the code integrated in chaiNNer.
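A cheap first pass before a full knockout experiment is to statically collect the top-level modules the integrated code actually imports, then diff that set against the pipdeptree output. A minimal sketch using only the standard library:

```python
import ast

def top_level_imports(source: str) -> set:
    # Collect the top-level module names a Python source file imports.
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # Skip relative imports (level > 0); they are project-internal.
            found.add(node.module.split(".")[0])
    return found
```

Caveat: distribution names often differ from import names (e.g. the opencv-python package is imported as cv2), so the comparison against pipdeptree needs a manual mapping step, and dynamic or optional imports won't show up in a static scan.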
When I'm done, I'll add a specific version of all the packages it installs to dependencies.ts.
Force-pushed 8536961 to 5cb55c3
Note: this will now need to be updated to the new package node importing & function-based node format. I can do this myself if you'd prefer.
That would be very nice, if it's not too much trouble. Sorry this is taking so long for me to get back to. Hoping for this week.
I'm working on this today, so I'll take a crack at rebasing it.
Sorry, I've been working on other things and haven't gotten a chance to make that change. If you still want me to do it, I can do it today.
I think it shouldn't take me too long. It will be a good chance to familiarize myself with the new stuff.
Force-pushed 03be695 to 8b7bc3b
Rebase done. I haven't addressed any of the other feedback, but it works again now. I like the new system!
Force-pushed 8b7bc3b to c7ec78c
),
TextAreaInput("Prompt").make_optional(),
TextAreaInput("Negative Prompt").make_optional(),
group("seed")(NumberInput("Seed", minimum=0, default=42, maximum=4294967296)),
We now have a seed input that gives you a seed object. See this example:

group("seed")(SeedInput()),

You can then use seed.to_u32() to get a number within limits. See chaiNNer/backend/src/nodes/utils/seed.py, line 28 in 04ba6fe:

def to_u32(self) -> int:
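For illustration, folding an arbitrary integer seed into the unsigned 32-bit range a sampler accepts could look like the following. This is a guess at the idea behind to_u32, not the actual code in seed.py:

```python
class Seed:
    # Hypothetical stand-in for chaiNNer's Seed object.
    def __init__(self, value: int):
        self.value = value

    def to_u32(self) -> int:
        # Wrap into [0, 2**32), so any Python int becomes a valid u32 seed.
        return self.value % (2**32)
```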
@joeyballentine, I came across your tool and discovered this PR while looking for similar functionality. Is the PR still a work in progress, or is it stale?
@MElhagaly at this point, this PR is stale; adodge never finished it, and I don't have the time to redo the work that went into it. If you want node-based SD, just use comfyui.
@joeyballentine I was considering picking up this PR and finishing it. At least, if the functionality is deemed interesting and relevant to the community. Thoughts?
To be honest, I don't think having SD functionality in chaiNNer is even worth it. I don't have the time to keep up with the constantly changing landscape and constant new features popping up in that space. Even if this PR were to be finished, it would be massively out of date and wouldn't support most of the things people have come to expect from an SD application in the time since this PR was first made. It would take a significant amount of effort to implement all that new stuff on top of this (not to mention you'd need to update ComfyLib first).

Maybe some day, once chaiNNer supports extensions/third-party nodes, someone can make and maintain their own SD extension independently. That would probably be the most ideal scenario.

So, if you were to finish this out, that would be great, but it would be sort of a maintenance hell for us unless you're willing to stick with it and keep it up to date. And that would include maintaining some version of ComfyLib (if you were to finish out that project as well). If you're willing to do that and can commit to at least keeping it up to date every once in a while, then I say go for it (and I would very much appreciate it).

I don't remember exactly where this left off, but I think one of the biggest blockers was all of the unnecessary dependencies ComfyLib was pulling in. In order for this to be considered, it would need to significantly reduce those.
Thank you for your detailed reply. I will take a look at this PR and try to get it working. I see what you mean about ComfyLib having a huge number of dependencies. My workflow will be to get everything up and running using @adodge's work, then see what can be eliminated. I already noticed that dependency management has been moved to the backend since the start of this PR, so I will try to evaluate what other changes have been pushed as well. In the meantime, I will get started and keep you updated.
I think dependency definitions being moved to the backend is the only major thing that changed since then, but I could be forgetting things. After a quick glance at the code changes for this, that seems like the only thing that's super out of date (in terms of chaiNNer code, that is).
@joeyballentine quick update: I have managed to get this PR to work on the latest version of chaiNNer. It works as expected. I haven't implemented the dependencies part yet, mainly to figure out how many dependency installs are required. As you stated, it is a lot of (unnecessary) dependencies. Next steps for me are:
I am willing to dedicate some time to get the base up and running, as well as maintain it in the long run. You already mentioned a future feature where chaiNNer would allow third-party packages/nodes. Any thoughts?
@MElhagaly that sounds good to me. Thanks for the update, and I appreciate you being willing to work on this 🙏