[Observation] Running same image through AdaIN #12
Comments
@ArturoDeza Could you post an example of the input image used? I tried to use …
I think the result is similar to fast-neural-style.
This is what I get when I run it. My output:
@ArturoDeza I get similar results too. It seems you are using the default decoder. Try the newer one; it has slightly better decoder weights.
With this you should get the same results as mine. For the image you provided, these are the results I get for the two decoders. As I mentioned earlier, I think the trained decoder is not an exact inverse of the encoder. This can be fixed by training until Lc (refer to Figure 2 of the paper) is very low, i.e. by running more training iterations.
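For context, the content loss Lc from Figure 2 of the paper measures how well the decoder inverts the encoder: re-encoding the decoded output should reproduce the AdaIN target t. A minimal NumPy sketch, where `encoder` and `decoder` are placeholder callables standing in for the real VGG encoder and trained decoder:

```python
import numpy as np

def content_loss(encoder, decoder, t):
    """Lc = || f(g(t)) - t ||_2 (see Figure 2 of the paper):
    the re-encoded decoder output f(g(t)) should match the
    AdaIN target t. `encoder` (f) and `decoder` (g) are
    placeholder callables in this sketch."""
    diff = encoder(decoder(t)) - t
    return float(np.sqrt((diff ** 2).sum()))

# With a perfect decoder (an exact inverse of the encoder),
# Lc is exactly zero; a nonzero Lc is the reconstruction gap
# being discussed in this thread.
identity = lambda x: x
t = np.random.rand(4, 8, 8)
print(content_loss(identity, identity, t))  # 0.0
```

So "train until Lc is very low" amounts to driving this norm toward zero, at which point the decoder reconstructs the encoder's feature space almost exactly.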
Makes sense! Thanks, it gave me better results. Hoping the training code comes out soon.
@ArturoDeza How about the result using my content and style image? Could you show it to me?
There is a TensorFlow port I am currently working on, and we have an issue: the pictures all appear darker and the color is a bit off. I would really appreciate some help; take a look at the code here:
@dovanchan Hmm, I still seem to get the same tile-like artifacts you are getting. I think this can be avoided with the training procedure they use in the Diverse Synthesis paper. Attaching my output. On the flip side, notice that the style image you are inputting also has a heavy brush-like painting style, which I think AdaIN is capturing.
@hristorv
@LalitPradhan I have solved the problem. The color values of the images are represented in the range 0 to 1, but they should be in the range 0 to 255 after postprocessing. P.S. Check https://github.com/jonrei/tf-AdaIN, where there is a discussion about this issue.
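A minimal sketch of the fix being described (not the actual tf-AdaIN code): if the network outputs floats in [0, 1] and you save them without rescaling, most viewers interpret the values as near-black 8-bit intensities, hence the dark, off-color results.

```python
import numpy as np

def postprocess(img_float):
    """Convert a float image in [0, 1] (network output) to uint8 in
    [0, 255] for saving/display. Skipping this rescale is the bug
    discussed above: the saved image comes out almost black."""
    return (np.clip(img_float, 0.0, 1.0) * 255.0).round().astype(np.uint8)

out = np.array([[0.0, 0.5, 1.0]])
print(postprocess(out))  # [[  0 128 255]]
```

The `np.clip` also guards against slight overshoot outside [0, 1], which would otherwise wrap around when cast to uint8 and show up as speckled artifacts.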
@hristorv Could you run a test with my content image and style image? (The cat photo I used before.)
@hristorv
@ArturoDeza Have you fixed the noise problem yet? Would you mind sharing your Lua solution?
@MonaTanggg See this thread: #16. Essentially, I trained a pix2pix super-resolution module that maps back to the original image. It does quite a good job of removing the artifacts.
Hello. I have been implementing this in TensorFlow with reference to this. Can someone tell me how I can correct this? Am I doing something wrong? I am also making sure the output image is converted to the 0-255 range. Thanks!
I'm tweaking the code to do a somewhat trivial experiment: running an arbitrary image (in this case a scene, not a texture) through AdaIN with the style and content set to the same image, with the goal of getting an exact reconstruction of the input (as other style transfer methods can do). However, AdaIN still texturizes the output in this setting, and the result looks like a watercolor version of the same image, with most of the fine detail lost. I will try to post an example of this soon.
What is a good way to work around this? I know the training code is not out yet, but perhaps training the network on non-textures would improve such performance, or just more rounds of training? Thoughts?
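One observation worth making explicit: the AdaIN operation itself is (numerically) the identity when content and style are the same image, since it only matches channel-wise feature statistics that already agree. A NumPy sketch of the operation, as described in the paper (not the repo's Torch code), makes this easy to check, which points the blame for the watercolor effect at the decoder rather than at AdaIN:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization on (C, H, W) feature maps:
    shift/scale the content features so their per-channel mean and
    standard deviation match those of the style features."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean

# When content == style, the statistics already match, so AdaIN
# returns (up to eps) the input features unchanged. Any loss of
# detail in the final image must therefore come from the decoder.
feat = np.random.rand(4, 8, 8)
print(np.allclose(adain(feat, feat), feat, atol=1e-3))  # True
```

Under this reading, the imperfect reconstruction reported above is consistent with the earlier comments in the thread: the decoder is not an exact inverse of the encoder, and longer training (lower Lc) should tighten the reconstruction.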