so many bugs in your sftgan implementation #65

Open
anguoyang opened this issue Feb 10, 2022 · 6 comments

Comments

@anguoyang

No description provided.

@Sazoji
Contributor

Sazoji commented Feb 10, 2022

Are you willing to elaborate on that, or did you make a descriptionless issue to just say sftgan's got bugs?

@joeyballentine
Contributor

It wouldn't surprise me if SFTGAN is broken, since nobody uses it and it has just been carried over from the original BasicSR without any updates.

@victorca25
Owner

Hello! Yes, the SFTGAN code may not be fully functional. It was abandoned by the original author and I have barely kept it updated with the rest of the changes in the repository. It is waiting for a major rewrite, but, as Joey mentioned, nobody used it and the results weren't much better than those of models trained without the SFT layers.

The SPADE normalization layers in the project of the same name are based on the same theory as SFT; once I finish integrating pix2pixHD, I plan to follow with SPADE and then update SFTGAN.

If you're interested, you could troubleshoot the issues and get it to work, but this original instance of SFTGAN depends on some external code that has been deprecated.

@anguoyang
Author

Hi @victorca25, thank you for the comments. Yes, the original SFTGAN was not fully functional; the author only released version 0.0 and some parts of the code are missing.
I am now debugging mainly your code, and it can already run without the validation part (the dataloader still needs to be modified).
BTW, what are SPADE normalization layers? Anyway, I think the SFT idea is great: low-resolution images tend to look similar to each other even when they are downsized from different HR images, so an image prior is necessary.

@victorca25
Owner

Luckily the validation part is not necessary for training and you can evaluate the model outputs manually offline, so at least it's possible to test it.
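For example, a quick offline check can be as simple as computing PSNR between the saved SR outputs and the HR ground truths. This is just a sketch; the directory paths and the assumption of same-sized PNGs with matching filenames are placeholders:

```python
# Minimal offline PSNR check between SR outputs and HR ground truths.
# Assumes both folders contain same-sized PNGs with matching filenames.
import numpy as np
from PIL import Image
from pathlib import Path

def psnr(a, b, max_val=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

sr_dir, hr_dir = Path("results/sftgan"), Path("datasets/val_HR")  # placeholder paths
scores = []
for sr_path in sorted(sr_dir.glob("*.png")):
    sr = np.array(Image.open(sr_path).convert("RGB"))
    hr = np.array(Image.open(hr_dir / sr_path.name).convert("RGB"))
    scores.append(psnr(sr, hr))
    print(f"{sr_path.name}: {scores[-1]:.2f} dB")

print(f"Average PSNR: {np.mean(scores):.2f} dB over {len(scores)} images")
```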

Regarding SPADE, this is the paper: https://arxiv.org/pdf/1903.07291.pdf and this is the original repo: https://github.com/NVlabs/SPADE. The SPADE code is based on pix2pixHD, which in turn is based on the "pix2pix and CycleGAN" code that this repository was also originally built on, so it should feel familiar to browse around.

Like SFT layers, the SPADE normalization layers use segmentation maps for spatial conditioning: both are spatial feature transforms applied throughout the network in place of regular normalization layers (such as batch or instance normalization). SPADE can be a good reference for updating the SFTGAN codebase, to the point that parts of the two could be combined (which was my original intention when I left the SFTGAN update for later).
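To illustrate the shared idea, here is a minimal sketch of a spatial feature transform layer. This is not the actual code from either repo: the two-conv conditioning branch, the channel sizes, and the 8-channel condition input are only illustrative.

```python
# Sketch of an SFT/SPADE-style layer: a small conditioning branch predicts a
# per-pixel scale (gamma) and shift (beta) that modulate the feature maps.
# (SPADE additionally normalizes the features before modulating them.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class SFTLayer(nn.Module):
    def __init__(self, feat_channels=64, cond_channels=8, hidden=32):
        super().__init__()
        # Shared trunk over the (resized) segmentation/condition maps
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Separate heads for the spatial affine parameters
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, feat, cond):
        # Resize condition maps to the spatial size of the features
        cond = F.interpolate(cond, size=feat.shape[-2:], mode="nearest")
        actv = self.shared(cond)
        # Spatial, feature-wise affine modulation: feat * (1 + gamma) + beta
        return feat * (1 + self.gamma(actv)) + self.beta(actv)

# Usage: modulate a 64-channel feature map with an 8-channel segmentation map
feat = torch.randn(1, 64, 32, 32)
seg = torch.randn(1, 8, 128, 128)
out = SFTLayer()(feat, seg)  # -> torch.Size([1, 64, 32, 32])
```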

The deprecated code I referred to is the original code used to train the model that generates the segmentation maps for SFTGAN: https://github.com/lxx1991/caffe_mpi. There are now better and more accurate models to produce them. Alternatively, it would be possible to use datasets with manually created segmentation maps, as in the pix2pixHD and SPADE cases (taking into account the lower resolution of the input domain), and there is a chance this could yield even better results from SFTGAN than using the outputs of the segmentation model.
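As an example of what a modern replacement could look like, a pretrained torchvision DeepLabV3 can produce per-pixel probability maps. This is only a sketch of the workflow: that model is trained on COCO/VOC categories rather than the OST outdoor classes, the input path is a placeholder, and the `weights` argument needs torchvision >= 0.13.

```python
# One possible substitute for the deprecated caffe_mpi segmentation code:
# a pretrained DeepLabV3 from torchvision producing soft per-pixel maps.
import torch
import torchvision.transforms.functional as TF
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

img = Image.open("input_lr.png").convert("RGB")  # placeholder path
x = TF.normalize(TF.to_tensor(img),
                 mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

with torch.no_grad():
    logits = model(x.unsqueeze(0))["out"]   # [1, num_classes, H, W]
probs = torch.softmax(logits, dim=1)        # soft probability maps
seg = probs.argmax(dim=1)                   # or hard per-pixel labels
```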

@anguoyang
Author

Hi @victorca25, thanks so much for your feedback and kind information. I trained SFTGAN and tested it offline; the results are a bit different from Xintao's default model. My outputs are slightly blurry, which means the details are not reproduced well. I am not sure if it is because of the training hyperparameters, since I used the same settings as the defaults. The only difference is that the default LRHRSeg_BG_Dataset uses both background images (with seg=1) and the OST dataset (seg from a pretrained model). I am not sure why Xintao wanted to use DIV2K as background images, so I just used OST.
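To make sure I understand the mixing, this is roughly how I picture the two sources being paired with segmentation maps. It is only my sketch of the idea, not the actual LRHRSeg_BG_Dataset code, and the 8-class layout is my assumption (OST categories plus a background class):

```python
# Sketch: OST crops come with model-generated probability maps, while generic
# "background" crops (e.g. DIV2K) get a constant one-hot map on the background
# class. Class count and function names are illustrative, not from the repo.
import torch

NUM_CLASSES = 8  # assumed: OST outdoor categories + background

def background_seg(height, width, bg_index=0, num_classes=NUM_CLASSES):
    """Constant one-hot segmentation for images without category annotations."""
    seg = torch.zeros(num_classes, height, width)
    seg[bg_index] = 1.0
    return seg

def make_sample(hr_crop, seg_maps=None):
    """Pair an HR crop with precomputed seg maps, or fall back to background."""
    if seg_maps is None:  # e.g. a DIV2K crop with no segmentation
        seg_maps = background_seg(hr_crop.shape[-2], hr_crop.shape[-1])
    return {"HR": hr_crop, "seg": seg_maps}
```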

The SPADE normalization layer is a good idea; I will read about it, try it, and then come back here.
