
How-to train custom models: Explanations and notes #206

Open
arianaa30 opened this issue Sep 15, 2023 · 10 comments

Comments

@arianaa30
Can we train a variation of the model that is lightweight and can run on a phone?
The train-model notebook doesn't have any explanations. Could you please add some notes to it? What data do I need exactly? (I want to run it on, say, old gameplay videos.) What is /hdd/sdb/SYNLA_Plus_4096.npy?

@Tama47
Contributor

Tama47 commented Sep 20, 2023

> Can we train a variation of the model that is lightweight and can run on a phone?

Does the S variant not work for you? It is lightweight enough to be able to run smoothly on my iPhone.

@4NXIE7Y

4NXIE7Y commented Sep 22, 2023

> Can we train a variation of the model that is lightweight and can run on a phone?
>
> Does the S variant not work for you? It is lightweight enough to be able to run smoothly on my iPhone.

Which S variant? Can you please elaborate? And is it possible to hook the filters on iPhone like in mpv?

@arianaa30
Author

> Can we train a variation of the model that is lightweight and can run on a phone?
>
> Does the S variant not work for you? It is lightweight enough to be able to run smoothly on my iPhone.

Can I also run it easily on Android?
Also, does the latest release have the S model size? It seems the older version has this:

> 5 network sizes (S/M/L/VL/UL).

@Tama47
Contributor

Tama47 commented Sep 22, 2023

> Which S variant? Can you please elaborate? And is it possible to hook the filters on iPhone like in mpv?

I'm referring to Anime4K_Restore_CNN_S.glsl. This is the smallest and most lightweight restore shader, and it should be able to run on any modern smartphone. It can be run individually or chained with other shaders. It is possible to chain and hook the shaders on iPhone like in mpv with Anime4KMetal. I was able to run Anime4K: Mode A (Fast) on an iPhone 13 Pro Max smoothly with minimal frame drops.
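For reference, chaining shaders in mpv is done with the `glsl-shaders` option in mpv.conf. A minimal sketch (the shader list below is illustrative, not the exact Mode A chain, and assumes the shaders sit in mpv's `shaders` config subdirectory):

```shell
# mpv.conf — illustrative Anime4K shader chain.
# "~~/" expands to mpv's config directory; adjust paths to your install.
# On Windows, separate list entries with ';' instead of ':'.
glsl-shaders="~~/shaders/Anime4K_Restore_CNN_S.glsl:~~/shaders/Anime4K_Upscale_CNN_x2_S.glsl"
```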

> Can I also run it easily on Android?

I have not tested on Android, but according to Bloc (the Owner), it is possible #99 (comment).

@arianaa30
Author

@Tama47 Getting back to this: we have an S variant for 2x upscaling, but do we have S and M sizes for the 4x upscalers as well?

@Tama47
Contributor

Tama47 commented Nov 18, 2023

> do we have S and M sizes for 4x upscalers as well?

I don't think so. Why do you need S and M sizes for 4x upscaling?

@arianaa30
Author

Just like with 2x: if it doesn't run on a phone, I go for a more lightweight version.

@arianaa30
Author

Btw, do you know how I can train it on my own dataset? There is a training script in the TensorFlow directory, but it is incomplete. It needs a .npy array file, and it is not clear how to generate one.

@Tama47
Contributor

Tama47 commented Nov 18, 2023

> Just like with 2x: if it doesn't run on a phone, I go for a more lightweight version.

Honestly, even on an iPhone 15 Pro Max screen, I can still barely tell the difference when upscaling to 4K. On the rare occasions that I watch on my phone rather than on my 4K TV, I usually just watch in 1080p directly from the Crunchyroll app. The difference is just too small. Does the 2x upscale not work for you? All you really need on a phone is maybe a small restore shader.

> Btw, do you know how I can train it on my own dataset? There is a training script in the TensorFlow directory, but it is incomplete. It needs a .npy array file, and it is not clear how to generate one.

Yeah sorry, I do not know either, only Bloc would be able to answer that.

@arianaa30
Copy link
Author

I see the training is performed on 256x256 color images from the SYNLA dataset. How can I replace it with my own dataset? And why do the images need to be 256x256?
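For anyone who lands here with the same question: the training input appears to be a NumPy array of 256x256 RGB patches saved with `np.save`. Below is a minimal sketch of how one might pack such a file from a folder of extracted frames. The `(N, 256, 256, 3)` uint8 layout, the `frames/` directory, and the output filename are all assumptions, not the confirmed format the notebook expects; only Bloc could confirm that.

```python
# Sketch: pack 256x256 RGB crops from a folder of images into a .npy file.
# NOTE: the (N, 256, 256, 3) uint8 layout is an assumption, not the
# confirmed format the Anime4K training notebook expects.
import os

import numpy as np
from PIL import Image

PATCH = 256

def load_patches(image_dir):
    patches = []
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(image_dir, name)).convert("RGB")
        w, h = img.size
        # Tile each image into non-overlapping 256x256 crops,
        # discarding the ragged right/bottom edges.
        for top in range(0, h - PATCH + 1, PATCH):
            for left in range(0, w - PATCH + 1, PATCH):
                crop = img.crop((left, top, left + PATCH, top + PATCH))
                patches.append(np.asarray(crop, dtype=np.uint8))
    return np.stack(patches)  # shape (N, 256, 256, 3)

if __name__ == "__main__":
    # "frames/" is a hypothetical directory of frames dumped from video.
    data = load_patches("frames/")
    np.save("my_dataset.npy", data)
```

Patch-based training like this is common for super-resolution networks (fixed-size batches, more samples per image), which is presumably why the images are 256x256, but that reasoning is a guess, not something stated in this repo.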
