
Releases: chaiNNer-org/chaiNNer

Alpha v0.13.0

29 Sep 00:31

Before we begin, you might notice chaiNNer has a new home! This repository now lives in an organization (chaiNNer-org) rather than my personal GitHub account, which means the repository URL is now https://github.com/chaiNNer-org/chaiNNer. The old URL should still redirect here, but you should update any links you have (in descriptions, tutorials, etc.) just in case.

This update brings a long-awaited addition: GFPGAN support! Luckily, we were able to add this without much hassle. However, it does not support the first v1 GFPGAN model (it does support v1.2+), because GFPGAN v1 requires compiled CUDA extensions, which are not simple to support at this time. The good news is that the later models are much better anyway, so you are better off using those to begin with. GFPGAN support does require installing a new dependency, facexlib, from the dependency manager (as part of the PyTorch package collection), so make sure you do that!

To use GFPGAN with chaiNNer, there is now a new node: Face Upscale. You pass the loaded model into this node, and can optionally pass in an upscaled version of the background as well. This allows you to fully customize the background upscale, unlike the official GFPGAN code which only allows you to upscale with RealESRGAN at a fixed scale.

Speaking of scale, I did add an additional scale option for the GFPGAN output. While GFPGAN internally always does an 8x upscale, the official code as well as existing GUIs include an output scale option (which simply downscales the result). To avoid confusing people who expect this option, I decided to implement it as well. Unlike those other implementations though, the scale can be any number between 1 and 8, instead of just powers of 2. The important thing to keep in mind is that adjusting the scale DOES NOT make processing use less VRAM or anything like that.
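Since the scale option only post-processes the fixed 8x result, the output dimensions are pure arithmetic. A minimal sketch of that relationship (the function name is illustrative, not chaiNNer's actual API):

```python
def gfpgan_output_size(width: int, height: int, scale: float):
    """Compute the final output size for a requested output scale (1-8).

    GFPGAN always upscales 8x internally; a smaller requested scale
    only downscales that fixed 8x result, so VRAM use is unchanged.
    """
    if not 1 <= scale <= 8:
        raise ValueError("scale must be between 1 and 8")
    internal_w, internal_h = width * 8, height * 8  # fixed internal 8x upscale
    factor = scale / 8  # the requested scale is just a resize of that result
    return round(internal_w * factor), round(internal_h * factor)
```

This is also why any scale between 1 and 8 works here: the downscale factor does not need to be a power of 2.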

Another important note: the first time you use GFPGAN, facexlib will automatically download some necessary models to /chaiNNer/python/gfpgan. This only happens on first use and, depending on your internet speed, might take a while. Just let it download; once finished, everything should work fine.

For everything else in this release, see the full changelog below:

New Features

  • GFPGAN (Face Upscaling/Restoration) support (#999) (#1018)
    • GFPGAN (as well as RestoreFormer) has been added to chaiNNer. Once loaded just like regular models, these models can be used with the Face Upscale node.
  • Dependency Manager improvements (#990)
    • The dependency manager now shows exactly which packages are installed, missing, and out of date.
    • Each dependency now has a description associated with it.
  • TGA saving support (#1033) (thanks @emarron)

New Nodes

  • Pass Through Node (#968)
    • This node simply passes the input value through to the output.
    • This is useful if you want one output connection to go to multiple inputs, while still being able to easily swap which node provides that input.
  • Create Edges Node (#1009)
    • Like a cross between Create Border and Crop (Edges), this node creates a border around the image, with an independently adjustable amount for each side.
  • Face Upscale (#999)
    • Used for processing with GFPGAN and RestoreFormer.
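As a rough illustration of the Create Edges idea above (a border with an independently adjustable amount per side), here is a sketch over a plain 2D grid; it is not chaiNNer's implementation:

```python
def add_edges(image, top, bottom, left, right, fill=0):
    """Pad a 2D grid (list of rows) with `fill`, using an independent
    amount for each side, like the Create Edges node described above."""
    width = len(image[0]) if image else 0
    new_width = width + left + right
    # pad each existing row on the left and right
    rows = [[fill] * left + list(row) + [fill] * right for row in image]
    # then add whole blank rows on the top and bottom
    return ([[fill] * new_width for _ in range(top)]
            + rows
            + [[fill] * new_width for _ in range(bottom)])
```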

Other Changes

  • Removed Discord Rich-Presence (#1038)
    • This feature has been temporarily removed. It has come to our attention that it sometimes causes chaiNNer to not start up, and when it does work it leaks the names of chain files. It will eventually be added back once these issues are resolved.
  • Blend Image optimizations (#989)
    • You should see significantly better performance with this node under certain circumstances.
  • Added timer support for iterators (#1021)
  • Various crop node improvements (#1013)
  • Changed input name for converted ONNX models to allow them to be compatible with VSGAN-tensorrt-docker (#1006)
  • Rename "relative path" to "subdirectory path" to make its use-case more obvious (#1022)

Bug Fixes

  • Drastically improved NCNN performance in iterators by loading the model only once (#1023)
  • Fixed FPS values for videos (via the Video Frame Iterator) getting rounded in the output (#987)
  • Fixed Fill Alpha node (#992)
  • Fixed Canny Edge Detection node (#1010)
  • Fixed image iterator sorting (#1004)
  • Fixed output type of Text Pattern node (#1012)
  • Fixed output type of Crop (Border) node (#1013)
  • Fixed Linux bug due to missing qt environment variable (#1016)
  • Fixed pixelunshuffle ESRGAN models (RealESRGANx2 mainly) erroring due to uneven image sizes (#1017)
  • Show correct install size when only some packages are installed in the dependency manager (#1030)
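On the pixelunshuffle fix above: models like 2x RealESRGAN rearrange the image into an even grid of sub-pixels, so the input dimensions must divide evenly. A hedged sketch of the padding idea (an assumed approach, not chaiNNer's actual code; the padding would be cropped off again after upscaling):

```python
import math

def padded_size(width, height, multiple=2):
    """Round dimensions up to the nearest multiple so pixel-unshuffle
    models (e.g. 2x RealESRGAN variants) can split the image evenly."""
    return (math.ceil(width / multiple) * multiple,
            math.ceil(height / multiple) * multiple)
```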

Thanks to @RunDevelopment and @theflyingzamboni for their various contributions as always.

Alpha v0.12.7

18 Sep 02:54

Another minor release that fixes some bugs and makes a couple of changes.

Bug Fixes

  • Fix ONNX node issues (#970) (#980)
    • Mainly, fixed Convert to ONNX being broken.
  • Lower max PyTorch VRAM amount again (#981)
    • There have been some reports that the increased VRAM is making it actually slower overall, so this brings it back down closer to what it was originally.
  • Clear chaiNNer's internal cache when a node gets cleared (#978)
  • Fixed starting nodes not running after clearing (#974)

Other

  • Improve the update check on startup (#971)
    • From now on, the update checker will give a list of the main changes as well as tell you your current version, rather than only telling you that an update is available.
  • Pre-optimize ONNX models via constant folding on convert from PyTorch (#969)
  • Shorter type error messages (#975)

Alpha v0.12.6

15 Sep 20:45

This hotfix update attempts to fix a few issues reported in the last update. Apologies to anyone affected by these things.

Bug Fixes

  • Fix pasteboard install on M1 macOS causing crash on startup (#963)
    • When we added a new required dependency for the Copy to Clipboard node, we did not realize that the pasteboard package is only built for x64 macOS. I'm working out how to add support for pre-built arm64 wheels, but for now pasteboard just won't try to install on M1. This means Copy to Clipboard won't work on that platform while we work this out.
  • Fix PyTorch's Convert to ONNX node (#962)
    • This node was accidentally not updated after a separate ONNX fix, and therefore connecting the resulting model to any of the ONNX inputs caused an error. This has been resolved.
    • This was discovered to still have an issue. Working on it now.
  • Fixed Shift node output typing (#966) (thanks @RunDevelopment)

Other

  • Better generated error reports (#957) (thanks @RunDevelopment)
    • Generates better error reports when a crash happens in the main process. This is more of a helpful feature for us devs rather than for the end user.
  • Lower VRAM cap slightly (#967)
    • Got a report saying PyTorch was hogging more VRAM than it should have been, so I've decreased the value I increased last update. It's still more than it was in 0.12.4, but hopefully now it's at just the right place for a balance of optimal performance and stability.

Alpha v0.12.5

15 Sep 03:11

This update adds a few improvements as well as some extra features.

One thing I'm not sure about is whether I fixed a common NCNN issue. Please let me know if this release solves any NCNN problems you were having.

Dependency Updates

  • NCNN
    • NCNN now auto-updates if installed, so you no longer have to update it manually. This ensures that any changes I make to the NCNN bindings will not cause chaiNNer to suddenly stop working due to an outdated NCNN.

New Features

  • Add ONNX execution options to settings (#931)
    • This allows you to select a GPU to use for ONNX processing, as well as pick an execution engine. If you have TensorRT set up properly on your system, this also means you can select TensorRT, which should theoretically give you much faster speeds when doing batch processing (just make sure to put the Load Model node outside the iterator, as TensorRT takes a long time to convert the model to an engine).
  • Reporting all type mismatches (#939) (thanks @RunDevelopment)
    • We will now warn you if nodes that were previously compatible have suddenly become incompatible due to an upstream change, even if no custom error message has been set.
  • "Soft light" blend mode (#941) (thanks @JustNoon)
    • This is a new blend mode in the Blend Images node
  • Show proper error message on integrated python download failure (#949) (thanks @RunDevelopment)

New Nodes

  • Copy To Clipboard (#920) (thanks @Sryvkver)
    • This node allows copying an image, text, or number to the clipboard. You can find it in the utilities section.

Other Changes

  • Instead of attempting to update the required dependencies on every startup, chaiNNer will now do so only when needed. (#934)
  • Increased the max amount of VRAM PyTorch will use before tiling further in auto mode. Should improve performance a little bit more (#940)
  • PyTorch's Convert To NCNN node will no longer hide its outputs when ONNX is not installed; instead, it will warn the user when they attempt to run it. (#952)

Bug Fixes

  • Fix ONNX nodes reloading on every upscale. Now it loads once in Load Model as it should. (#933)
  • Fixed the "FATAL ERROR!" message some users would get in their logs with NCNN during upscaling. (#947)
  • Potentially fixed other NCNN upscale issues as well, but I need users to confirm this.
  • Fixed NCNN GPU selector order problem (#948)
  • Improved modulo typing in Math node (#938) (thanks @RunDevelopment)
  • Added pow typing in Math node (#936) (thanks @RunDevelopment)

Alpha v0.12.4

10 Sep 18:40

This is another smaller update that adds a couple of new things and fixes a few bugs.

New Features

  • GPU Selector for PyTorch & NCNN (#919)
    • Long overdue, now you can select what GPU you want to use for PyTorch or NCNN. This is great for NCNN users as now you can have chaiNNer use your dedicated GPU instead of defaulting to your integrated GPU.
    • This is in the "Python" tab in settings, in the PyTorch and NCNN sub-tabs
    • If you have any issues that seem to stem from this change, please let me know.

Bug Fixes

  • Fixed loading RealESRGAN models at scales other than 4x (tested with 2x and 8x) (#921)

New Nodes

  • Resize to Side (#910) (thanks @BigBoyBarney)
    • Lets you resize an image conditionally based on its properties

Changes

  • Moved CPU & FP16 settings to the PyTorch sub-tab of the Python tab in settings
  • Added icons to settings tabs (#922)
  • Added modulo operator to Math node (#908)

Alpha v0.12.3

07 Sep 00:46

I accidentally broke PyTorch model loading in the last release, so this is simply a hotfix to address that.

Bug Fixes

  • Fix PyTorch model loading (#915)

Alpha v0.12.2

06 Sep 21:26

This update fixes a few important bugs (and some other things, of course). One major bug I found was that PyTorch's FP16 processing mode was not working correctly. With this update, you should notice a significant performance improvement.

Bug Fixes

  • Fixed FP16 casting not working as expected with PyTorch (#912)
  • Hopefully prevents VRAM out-of-memory errors for PyTorch in "auto" mode (#912)
  • Hopefully fixes the "Failed to fetch" error some users were getting on first launch (#911)

New Features

  • Allow iterators to use the "drag a connection out to the pane" context menu, in the iterator editor zone (#905)
    • This does not include the right-click version of this menu at this time

Changes

  • Error after iteration is finished instead of during, to avoid interrupting batch processing (#901)
  • Clear starting node cache on node deletion to prevent memory leak (#909)
  • Include name of model in load model error (#902)
  • Add note about fp16 models to NCNN's Save Model node's description (#907)
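The first change above (erroring after iteration instead of during) can be sketched as collecting failures while the batch keeps running, then surfacing them at the end. This is an illustrative sketch, not chaiNNer's actual executor:

```python
def run_iteration(items, process):
    """Process every item, deferring errors until the whole batch is
    done, so one bad item doesn't interrupt batch processing."""
    errors = []
    for item in items:
        try:
            process(item)
        except Exception as exc:  # record the failure and keep going
            errors.append((item, exc))
    if errors:
        # surface everything at once, after the batch has completed
        raise RuntimeError(f"{len(errors)} item(s) failed: {errors!r}")
```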

Alpha v0.12.1

05 Sep 16:03

This is a small release that fixes a few issues noticed in v0.12.0 as well as adds a few things from contributors.

Bug Fixes

  • Fix ONNX Interpolate Models description (#894) (thanks @theflyingzamboni)
  • Fix Convert To ONNX and Convert To NCNN requiring PyTorch to have CUDA support if the CPU option was off (#896)
  • Fix ONNX in_nc detection to theoretically allow more unofficially supported ONNX models to work properly (#890) (thanks @theflyingzamboni)
  • Fix caption overflow if the width is too small for text (#899)

Alpha v0.12.0

04 Sep 15:06

AMD users rejoice: a long-awaited, highly requested feature is finally here: PyTorch to NCNN conversion! @theflyingzamboni did a wonderful job converting the original onnx2ncnn C++ code into Python code to make this happen. Under the hood, we're converting from PyTorch to ONNX to NCNN (which you can still do manually if you want to), but we also have direct PyTorch to NCNN conversion for convenience. This means no more needing to convert to ONNX and use convertmodel.com! It can all be done in chaiNNer. Note: make sure you have up-to-date NCNN and ONNX packages from the dependency manager in order for this to work properly.

The other big thing in this release is SwinIR support (for PyTorch). I've wanted to add this architecture for a while and I finally got around to it. It is a really good super-resolution architecture with lots of great pretrained models to use. Ideally, we support every variation of SwinIR model (with all the params auto-detected), but if any of the SwinIR models give you an error, please let me know and I'll try to resolve it. One weird thing about SwinIR is that it doesn't support fp16, so that will be automatically disabled when using this architecture (which means no fp16 speed-up). It also doesn't support conversion to NCNN, as NCNN does not support all the operations required for the architecture. Next-up architecture additions: GFPGAN and HAT (no ETA on these yet though).

I've also reworked how the PyTorch auto-tiling works. After some experimenting, I was able to use the VRAM estimations I added a while back in the logging (with a bit of added safety) instead of the old try/catch splitting method. This yielded a significant performance boost for PyTorch upscaling in general, though only when upscaling on the GPU; CPU upscaling is unaffected by this change.
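A minimal sketch of that estimate-first tiling approach, assuming a single made-up per-pixel VRAM cost (chaiNNer's real estimator is model-dependent):

```python
import math

def estimate_tiles(width, height, bytes_per_pixel, free_vram, safety=0.8):
    """Pick a number of tiles up front from a VRAM estimate, instead of
    catching out-of-memory errors and retrying with smaller tiles.
    `bytes_per_pixel` stands in for a per-model cost estimate (a
    made-up knob for this sketch)."""
    required = width * height * bytes_per_pixel
    budget = free_vram * safety  # leave some headroom for safety
    if required <= budget:
        return 1  # the whole image fits; no tiling needed
    # split into the smallest square-ish grid whose tiles fit the budget
    per_axis = math.ceil(math.sqrt(required / budget))
    return per_axis * per_axis
```

The old method would attempt the full image, catch the out-of-memory error, split, and retry; estimating up front skips those wasted attempts, which is where the performance boost comes from.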

I've also done a bit to reduce some of the RAM overhead we previously had. Before, we weren't clearing out our internal output cache until after the entire chain was finished processing. This caused RAM usage to build up significantly for large chains, especially with large images. Now, we clear out things we no longer need so that RAM usage stays as low as possible at all times.
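The eviction idea boils down to tracking how many downstream consumers still need each cached output and freeing it once that count reaches zero. A minimal sketch under that assumption (class and method names are illustrative, not chaiNNer's code):

```python
class OutputCache:
    """Cache node outputs, evicting each entry once every remaining
    consumer has read it, so RAM stays as low as possible."""

    def __init__(self):
        self._values = {}
        self._remaining = {}

    def put(self, node_id, value, consumer_count):
        self._values[node_id] = value
        self._remaining[node_id] = consumer_count

    def take(self, node_id):
        value = self._values[node_id]
        self._remaining[node_id] -= 1
        if self._remaining[node_id] == 0:
            # no consumer left: free the memory right away
            del self._values[node_id]
            del self._remaining[node_id]
        return value
```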

There's also a lot more added in this release, so here is the full changelog:

Dependency Updates

  • NCNN
    • There isn't actually an NCNN update this version; however, this is a reminder that if you haven't updated NCNN since before v0.11, you should do so for this release, as otherwise NCNN upscaling will not work.
  • ONNX
    • Updated to a version that finally supports Python 3.10 (if using system Python 3.10)
    • Added another required package (needed for NCNN conversion) that will have to be installed

New Features

  • PyTorch -> NCNN conversion & ONNX -> NCNN conversion (#721) (#845) (thanks @theflyingzamboni)
    • PyTorch models can now be converted to NCNN models.
  • ONNX interpolation (#762) (thanks @theflyingzamboni)
    • Interpolate two ONNX models of the same kind, the same way you would with PyTorch or NCNN
  • SwinIR support (#812) (#850) (#878)
  • RAM usage optimizations (#834) (#860)
  • PyTorch optimizations (#863) (#871)
  • Added a warning when modifying the chain during an execution (#843)
  • PyTorch model file iterator (#877)
    • Iterate through a directory and load every .pth model in it, just like the image file iterator.
  • Add toggle for checking for an update on startup (#875) (thanks @jumpyjacko)
  • Node execution times now display on the bottom right corner of nodes (#882)
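On model interpolation (mentioned for ONNX above): regardless of format, interpolation generally comes down to a per-weight linear blend of two models with identical architectures. A format-agnostic sketch over plain Python lists (illustrative only; the real implementations blend tensors the same way, key by key):

```python
def interpolate_weights(model_a, model_b, alpha):
    """Blend two same-shaped weight dicts: alpha=0 gives model_a,
    alpha=1 gives model_b. Raises if the architectures differ."""
    if model_a.keys() != model_b.keys():
        raise ValueError("models must share the same architecture/keys")
    return {
        key: [(1 - alpha) * a + alpha * b
              for a, b in zip(model_a[key], model_b[key])]
        for key in model_a
    }
```

This is also why only "two of the same kind" of models can be interpolated: the weights must line up one-to-one.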

Changes to Existing Features

  • "Tile Size Target" input changed to "Number of Tiles" (#720)
    • I found that the previous "Tile Size Target" was not clear about what it does or that there was auto tiling involved. Now it has been changed to a dropdown with a default "Auto" setting.
  • All string inputs now accept number inputs (#853)
  • Iterator indexes are now number outputs (#854)
  • Changed the NCNN icon to reflect their actual logo (#868)
  • Added favorites to the node selection context menu (#881)

New Nodes

  • Canny Edge Detection node (#869) (thanks @jumpyjacko)
    • Run edge detection on an image
  • (ONNX) Convert to NCNN
  • (PyTorch) Convert to NCNN
  • (PyTorch) Model File Iterator

Bug Fixes

  • Fixed backend not closing on exit on macOS (#879)
  • Fixed bugs with grayscale image upscaling (#829) (#842) (thanks @theflyingzamboni)
  • Fixed viewport position & zoom not loading properly in certain scenarios (#870)
  • Fixed iterator resuming after pausing (#884)
  • Fixed Blend node bugs (#887) (thanks @theflyingzamboni)

Alpha v0.11.6

30 Aug 00:17

Back with another bugfix update. This might be the most patch releases I've done for a single version yet! v0.12.0 is still in progress, but I hope to release it soon. Anyway, on to the changelog:

Dependency Updates

  • NCNN (#864)
    • Semi-fixed the black-output issue AMD users were experiencing when using the automatic tiling mode. For the most part this seems to be fixed; however, I have been told it still breaks occasionally. If this happens, just restart chaiNNer and it should work again.

Bug Fixes

  • Fixed error messages not displaying (#866)
  • Fixed directory outputs for Load Model nodes (#862)

Other

  • Improved the labels of items in the Help menu (#865)