
[CHAINNER][CUSTOM MODELS] Models converted with chaiNNer 0.21.2 and above do not work #690

Open · 2 tasks done
jamedom opened this issue Feb 1, 2024 · 14 comments
Labels: documentation (Improvements or additions to documentation), help-wanted (Extra attention is needed)

jamedom commented Feb 1, 2024

Checklist

  • I have checked that this issue isn't addressed in the FAQ.
  • I have checked that this issue isn't addressed in any other issue or pull request.

Describe the Issue

Hello, could you please update the Model Conversion Guide at https://github.com/upscayl/upscayl/wiki/Model-Conversion-Guide? It doesn't seem to work with the new version of chaiNNer. The problem is at the final step, where I need to change some text in the .param file, but that no longer works.

🎯 GET_MODELS_LIST:  F:\models
🐞 BACKEND REPORTED:  💾 Updating Save Output Folder:  false
🐞 BACKEND REPORTED:  📐 Updating Compression:  NaN
🐞 BACKEND REPORTED:  🖼️ Updating No Image Processing:  false
🐞 BACKEND REPORTED:  🔕 Updating Turn Off Notifications:  true
⚙️ Getting saveImageAs from localStorage:  jpg
⚙️ Getting model from localStorage:  {"label":"MangaScaleV3","value":"MangaScaleV3"}
⚙️ Getting gpuId from localStorage:  0
🔀 Setting model to 
🐞 BACKEND REPORTED:  📁 Updating Custom Models Folder Path:  F:\models
🐞 BACKEND REPORTED:  📁 Custom Models Folder Path:  F:\models
🐞 BACKEND REPORTED:  🔎 Detected Custom Models:  4xHFA2k,4xLSDIR,4xLSDIRCompactC3,4xLSDIRplusC,4xNomos8kSC,4x_NMKD-Siax_200k,4x_NMKD-Superscale-SP_178000_G,MangaScaleV3,realesr-animevideov3-x2,realesr-animevideov3-x3,realesr-animevideov3-x4,RealESRGAN_General_WDN_x4_v3,RealESRGAN_General_x4_v3,uniscale_restore,unknown-2.0.1
📜 CUSTOM_MODEL_FILES_LIST:  4xHFA2k,4xLSDIR,4xLSDIRCompactC3,4xLSDIRplusC,4xNomos8kSC,4x_NMKD-Siax_200k,4x_NMKD-Superscale-SP_178000_G,MangaScaleV3,realesr-animevideov3-x2,realesr-animevideov3-x3,realesr-animevideov3-x4,RealESRGAN_General_WDN_x4_v3,RealESRGAN_General_x4_v3,uniscale_restore,unknown-2.0.1
🔀 Setting model to MangaScaleV3
🔀 Model changed:  MangaScaleV3
🔀 Setting model to MangaScaleV3
🔄 Resetting image paths
⤵️ Dropped file:  {"type":"image/jpeg","filePath":"C:\\Users\\D\\Desktop\\final.jpg","extension":"jpg"}
🖼 Setting image path:  C:\Users\D\Desktop\final.jpg
🗂 Setting output path:  C:\Users\D\Desktop
🖼 imagePath:  C:\Users\D\Desktop\final.jpg
🔤 Extension:  jpg
🔄 Resetting Upscaled Image Path
🏁 UPSCAYL
🐞 BACKEND REPORTED:  🖼️ Updating No Image Processing:  false
🐞 BACKEND REPORTED:  📐 Updating Compression:  50
🐞 BACKEND REPORTED:  Is Default Model? :  false
🐞 BACKEND REPORTED:  ✅ Upscayl Variables:  {"model":"MangaScaleV3","gpuId":"0","saveImageAs":"jpg","inputDir":"C:\\Users\\D\\Desktop","outputDir":"C:\\Users\\D\\Desktop","fullfileName":"final.jpg","fileName":"final","initialScale":"4","desiredScale":"2","outFile":"C:\\Users\\D\\Desktop\\final_upscayl_2x_MangaScaleV3.jpg","compression":50}
🐞 BACKEND REPORTED:  📢 Upscayl Command:  -i,C:\Users\D\Desktop\final.jpg,-o,C:\Users\D\Desktop\final_upscayl_2x_MangaScaleV3.jpg,-s,4,-m,F:\models,-n,MangaScaleV3,-g,0,-f,jpg
🐞 BACKEND REPORTED:  👶 Updating Child Processes:  {"binary":"C:\\Program Files\\Upscayl\\resources\\bin\\upscayl-bin","args":["C:\\Program Files\\Upscayl\\resources\\bin\\upscayl-bin","-i","C:\\Users\\D\\Desktop\\final.jpg","-o","C:\\Users\\D\\Desktop\\final_upscayl_2x_MangaScaleV3.jpg","-s","4","-m","F:\\models","-n","MangaScaleV3","-g","0","-f","jpg"]}
🐞 BACKEND REPORTED:  🛑 Updating Stopped:  false
🐞 BACKEND REPORTED:  image upscayl:  [0 NVIDIA GeForce GTX 1060 6GB]  queueC=2[8]  queueG=0[16]  queueT=1[2]
[0 NVIDIA GeForce GTX 1060 6GB]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[0 NVIDIA GeForce GTX 1060 6GB]  fp16-p/s/a=1/1/0  int8-p/s/a=1/1/1
[0 NVIDIA GeForce GTX 1060 6GB]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1

🚧 UPSCAYL_PROGRESS:  [0 NVIDIA GeForce GTX 1060 6GB]  queueC=2[8]  queueG=0[16]  queueT=1[2]
[0 NVIDIA GeForce GTX 1060 6GB]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[0 NVIDIA GeForce GTX 1060 6GB]  fp16-p/s/a=1/1/0  int8-p/s/a=1/1/1
[0 NVIDIA GeForce GTX 1060 6GB]  subgroup=32  basic=1  vote=1  ballot=1  shuffle=1

🐞 BACKEND REPORTED:  💯 Done upscaling
🐞 BACKEND REPORTED:  ♻ Scaling and converting now...
🐞 BACKEND REPORTED:  📐 Processing Image:  {"originalWidth":800,"originalHeight":600,"scale":"2","saveImageAs":"jpg","compressionPercentage":50,"compressionLevel":5}
🐞 BACKEND REPORTED:  🖼️ Checking if original image exists:  C:\Users\D\Desktop\final.jpg
🐞 BACKEND REPORTED:  ❌ Error processing (scaling and converting) the image. Please report this error on GitHub. Error: Input file is missing: C:\Users\D\Desktop\final_upscayl_2x_MangaScaleV3.jpg
🔄 Resetting image paths
⚙️ Getting saveImageAs from localStorage:  jpg
⚙️ Getting model from localStorage:  {"label":"MangaScaleV3","value":"MangaScaleV3"}
⚙️ Getting gpuId from localStorage:  0
⚙️ Getting rememberOutputFolder from localStorage:  false

Screenshots

(screenshot attached)

@jamedom jamedom added the documentation Improvements or additions to documentation label Feb 1, 2024
@aaronliu0130 (Member) commented

Could you send the .param file?

@jamedom (Author) commented Feb 2, 2024

> Could you send the .param file?
MangaScaleV3.zip

@jamedom (Author) commented Feb 2, 2024

I used chaiNNer's latest version, 0.21.2:
https://github.com/chaiNNer-org/chaiNNer/releases/tag/v0.21.2

@sean1138 commented Feb 2, 2024

I think I have a similar issue here: Upscayl doesn't show an image for the after/upscaled view. A modified 4x_NMKD file is in the screenshot too.
(screenshot attached)

@Stereodude79 commented Feb 16, 2024

I have the same issues here on Windows 10, converting with chaiNNer 0.21.2 and then trying the models in Upscayl 2.9.9. I've seen both errors reported: no output image, and the "Input file is missing" error. In the no-image case, processing is almost instant, whereas it normally takes a few seconds.

The .param file from chaiNNer 0.21.2 looks quite a bit different from the ones in the guide and from the .param files for working models.

Here's a comparison of LSDIRplusC.
Version provided in https://github.com/upscayl/custom-models:

7767517
999 1782
Input            input.1                    0 1 data
Convolution      Conv_0                   1 1 data 703 0=64 1=3 4=1 5=1 6=1728
Split            splitncnn_0              1 8 703 703_splitncnn_0 703_splitncnn_1 703_splitncnn_2 703_splitncnn_3 703_splitncnn_4 703_splitncnn_5 703_splitncnn_6 703_splitncnn_7
Convolution      Conv_1                   1 1 703_splitncnn_7 705 0=32 1=3 4=1 5=1 6=18432 9=2 -23310=1,2.000000e-01
Split            splitncnn_1              1 4 705 705_splitncnn_0 705_splitncnn_1 705_splitncnn_2 705_splitncnn_3
Concat           Concat_3                 2 1 703_splitncnn_6 705_splitncnn_3 706
Convolution      Conv_4                   1 1 706 708 0=32 1=3 4=1 5=1 6=27648 9=2 -23310=1,2.000000e-01
Split            splitncnn_2              1 3 708 708_splitncnn_0 708_splitncnn_1 708_splitncnn_2
...

output from chaiNNer v0.21.2:

7767517
999 1782
Input            data                     0 1 data
Convolution      /Conv                    1 1 data /Conv_output_0 0=64 1=3 4=1 5=1 6=1728
Split            splitncnn_0              1 8 /Conv_output_0 /Conv_output_0_splitncnn_0 /Conv_output_0_splitncnn_1 /Conv_output_0_splitncnn_2 /Conv_output_0_splitncnn_3 /Conv_output_0_splitncnn_4 /Conv_output_0_splitncnn_5 /Conv_output_0_splitncnn_6 /Conv_output_0_splitncnn_7
Convolution      /Conv_1                  1 1 /Conv_output_0_splitncnn_7 /LeakyRelu_output_0 0=32 1=3 4=1 5=1 6=18432 9=2 -23310=1,2.000000e-01
Split            splitncnn_1              1 4 /LeakyRelu_output_0 /LeakyRelu_output_0_splitncnn_0 /LeakyRelu_output_0_splitncnn_1 /LeakyRelu_output_0_splitncnn_2 /LeakyRelu_output_0_splitncnn_3
Concat           /Concat                  2 1 /Conv_output_0_splitncnn_6 /LeakyRelu_output_0_splitncnn_3 /Concat_output_0
Convolution      /Conv_2                  1 1 /Concat_output_0 /LeakyRelu_1_output_0 0=32 1=3 4=1 5=1 6=27648 9=2 -23310=1,2.000000e-01
Split            splitncnn_2              1 3 /LeakyRelu_1_output_0 /LeakyRelu_1_output_0_splitncnn_0 /LeakyRelu_1_output_0_splitncnn_1 /LeakyRelu_1_output_0_splitncnn_2
...
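For context, both excerpts above follow the standard NCNN `.param` layout: the magic number `7767517`, then a line giving the layer count and blob count (`999 1782` here), then one layer per line of the form `Type name input_count output_count blobs... params...`. A minimal Python sketch of reading that header, just to illustrate the structure:

```python
# Parse the two header lines of an NCNN .param file, as seen in the
# excerpts above: magic number, then "layer_count blob_count".
def read_param_header(param_text: str):
    lines = param_text.splitlines()
    magic = int(lines[0])  # 7767517 identifies the current .param format
    layer_count, blob_count = map(int, lines[1].split())
    return magic, layer_count, blob_count

# Both excerpts share this header, so only the layer/blob names differ:
print(read_param_header("7767517\n999 1782\nInput data 0 1 data\n"))
# -> (7767517, 999, 1782)
```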

Does someone know which version of chaiNNer works?

@aaronliu0130 (Member) commented

That's weird indeed. Maybe 0.20.2 would work.

@Stereodude79 commented
> Does someone know which version of chaiNNer works?

I tried a bunch of different versions and these all worked:
v0.18.9
v0.19.4
v0.20.2
v0.21.1

The only one I found that doesn't work is v0.21.2. It seems to have a problem with ONNX: no matter how many times I install the ONNX dependency, it still shows as not installed.

@aaronliu0130 (Member) commented

@NayamAmarshe Any idea as to the difference in output noted above?

@NayamAmarshe (Member) commented

> @NayamAmarshe Any idea as to the difference in output noted above?

No, sir. Maybe ONNX introduced breaking changes.

I'll see if there are other ways to convert the models.

@aaronliu0130 (Member) commented

Anyway, I've updated the documentation to mention the incompatibility with 0.21.2. I'm going to use this issue to track better solutions.

@aaronliu0130 aaronliu0130 changed the title Need update Model Conversion Guide Models converted with chaiNNer 0.21.2 and above do not work Feb 17, 2024
@aaronliu0130 aaronliu0130 added the help-wanted Extra attention is needed label Feb 17, 2024
@NayamAmarshe NayamAmarshe changed the title Models converted with chaiNNer 0.21.2 and above do not work [CHAINNER][CUSTOM MODELS] Models converted with chaiNNer 0.21.2 and above do not work May 14, 2024
@joeyballentine commented
Came across this randomly. Just FYI: we didn't knowingly change anything on the chaiNNer side that would have affected NCNN inference (it still works fine in chaiNNer). My best guess is that your underlying realesrgan-ncnn-vulkan code is outdated and either does not support the Clip op that now gets captured by conversions (we switched from doing the clip with NumPy to doing it with PyTorch to improve performance, and now it gets captured by torch's trace), or does not handle the differently named layers, which seems to have changed after an ONNX update.

Either way, this most likely needs to be fixed on the realesrgan-ncnn-vulkan side, and my guess is that updating that is out of scope for that project as it is only meant to run official models.

If you want an alternative, the official NCNN package has a tool called PNNX (which has prebuilt binaries, I think) that you could use for conversion. Those conversions might just work out of the box, and your users would no longer need to rely on chaiNNer for conversion.
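A hypothetical PNNX invocation might look like the following. The model file name and the input shape are placeholders (not from this thread), and the exact options should be checked against the PNNX documentation:

```shell
# Hypothetical sketch: convert a TorchScript model to NCNN with PNNX.
# "4xLSDIR.pt" and the input shape are placeholder values.
pnnx 4xLSDIR.pt inputshape=[1,3,256,256]
# PNNX writes *.ncnn.param / *.ncnn.bin files alongside its own *.pnnx.*
# output; the *.ncnn.* pair is what an NCNN-based runner would load.
```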

@NayamAmarshe (Member) commented
> Came across this randomly. Just FYI, we didn't knowingly change anything on the chaiNNer side that would have affected NCNN inference […]

Thank you for the explanation, Joey!
We could look into updating upscayl-ncnn. We currently maintain our own fork of Real-ESRGAN NCNN-Vulkan, and while we're not NCNN experts, we can try fixing it.

@joeyballentine commented
My assumption is that all you'd need to do is update the NCNN submodule to the latest version and it should just work. But I'm also not an NCNN expert, so I really don't know either 😆

It is unfortunate that this broke, though. If the Clip op is what's causing the issue, I'm pretty sure it can just be removed from the param file, since there are no weights associated with it; you'd also have to modify whatever else needs to change when you remove a layer. Last time this came up, it appeared NCNN tracks the number of layers, for example, which would need to be decremented. If the problem is the names, the fix could be as simple as a find-and-replace to remove the slashes. I know this has been attempted unsuccessfully before, but I still think it's possible to figure out everything that needs to change with some work; I just don't have the time to experiment with it myself.

But if you do figure it out and think we could change something on the chaiNNer side to help compatibility, do let me know and I can see if we can make that happen.
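The find-and-replace part of that suggestion could be sketched like this. This is an untested, hypothetical helper: it assumes the only incompatibility is the ONNX-style `/` in layer and blob names, and it deliberately does not attempt to remove a Clip layer, since that would also require rewiring blobs and decrementing the layer count on line 2.

```python
def strip_onnx_slashes(param_text: str) -> str:
    """Rewrite an NCNN .param file so names like '/Conv_output_0' become
    'Conv_output_0'. The two header lines (magic number, layer/blob counts)
    are left untouched; '/' does not occur elsewhere in these files."""
    lines = param_text.splitlines()
    header, layers = lines[:2], lines[2:]
    # '/' only appears in ONNX-derived identifiers, so a blanket replace
    # on the layer lines is safe for these files.
    fixed = [line.replace("/", "") for line in layers]
    return "\n".join(header + fixed) + "\n"

sample = (
    "7767517\n"
    "2 3\n"
    "Input            data   0 1 data\n"
    "Convolution      /Conv  1 1 data /Conv_output_0 0=64 1=3\n"
)
print(strip_onnx_slashes(sample))  # same graph, old-style names
```

Whether upscayl-bin accepts the result would still need to be verified against an actual converted model.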

@NayamAmarshe (Member) commented
> My assumption is that all you'd need to do is update the NCNN submodule to latest and it should just work. […]

I think it might be more than the Clip op, since several things in the new param files have changed; my assumption is that it's related to some ONNX update.

I'll look into it more and let you know what I discover :D Thanks a lot for all the suggestions and help!

7 participants