Light spot in training #46
Comments
@594cp I met the same problem during training, but it does not appear at test time. I have seen these light spots in the same positions in style-transfer tasks as well, but I don't know why they happen.
As addressed by @Quasimondo, this problem can be alleviated if you replace all zero paddings in the global generator with reflection paddings.
I should add that changing the padding does not solve the artifact problem entirely, but it does seem to reduce how many of them appear.

Overall this is a problem that bothers me quite a bit: once a model has "caught" these artifacts (which I believe originate in the residual layers), they seem to hamper the learning process considerably, since they bias the normalization. You might have noticed that frames showing these artifacts are usually quite flat and lack contrast in the other areas. Further training usually just increases the number of artifacts.

One experimental approach to "heal" a model that has caught them is to increase the learning rate a lot, e.g. 0.0008 instead of 0.0002. It is then important to watch the training results and stop the process once the artifacts seem to have disappeared. Of course this is a dangerous path, and you should keep a backup of your model, since if the gradients explode your model will be ruined.
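The normalization-bias point above can be illustrated without PyTorch: zero padding injects constant zeros at the border, which drags normalization statistics toward zero, while reflection padding reuses interior values. A minimal 1-D sketch (the helper names are my own, not from the repository; `reflect_pad` mimics the mirroring behavior of nn.ReflectionPad1d/2d):

```python
def zero_pad(xs, p):
    """Pad a 1-D signal with p zeros on each side (what padding=p in a conv does)."""
    return [0.0] * p + xs + [0.0] * p

def reflect_pad(xs, p):
    """Mirror the signal at each border, excluding the edge sample itself."""
    return xs[p:0:-1] + xs + xs[-2:-2 - p:-1]

signal = [1.0, 2.0, 3.0, 4.0]
z = zero_pad(signal, 2)     # [0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0]
r = reflect_pad(signal, 2)  # [3.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0]

# Zero padding pulls the mean toward 0; reflection padding keeps it at the data mean.
print(sum(z) / len(z))  # 1.25
print(sum(r) / len(r))  # 2.5
```

The same effect holds in 2-D: every border pixel of a zero-padded feature map contributes artificial zeros to instance/batch statistics, which is one plausible mechanism for the bias described above.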
Same issue as described above. @tcwang0509, would you mind detailing how to go about replacing the zero paddings with reflection paddings? I see that the default padding_type argument of the __init__ method of the GlobalGenerator class is already "reflect":

class GlobalGenerator(nn.Module):
    def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
                 padding_type='reflect'):

Is that default setting a result of a change made since the start of this thread? Or are changes needed on the following lines, which call nn.Conv2d with padding=0?

line 192:
model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation]

line 209:
model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Tanh()]

Thank you for your help!

What I did was add an nn.ReflectionPad2d() module before the convolution and set the nn.Conv2d padding to 0.

Best,
Mario
Like this? Starting on line 193...

### downsample
for i in range(n_downsampling):
    mult = 2**i
    model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=0),
              norm_layer(ngf * mult * 2), activation]

Yes, that looks right to me.
Hey @ksteinfe, are you sure that code is right? It gives me an error when I try it. I changed the reflection padding size to 1 and it works for me.
@aviel08 the input is just the padding size: https://pytorch.org/docs/master/generated/torch.nn.ReflectionPad2d.html
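Padding size 1 is what works here because, for a kernel_size=3, stride=2 convolution, a pad of 1 halves the spatial size exactly, whereas the pad of 3 in the snippet above produces mismatched shapes, which is likely the source of the error. This can be checked with the standard convolution output-size formula; a stdlib-only sketch (the function name is mine, for illustration):

```python
def conv_out_size(h, pad, kernel, stride):
    """Spatial output size of a convolution: floor((h + 2*pad - kernel) / stride) + 1."""
    return (h + 2 * pad - kernel) // stride + 1

# A stride-2, kernel-3 downsampling layer should halve the input exactly.
print(conv_out_size(512, 1, 3, 2))  # 256 <- pad of 1 halves the 512 input as intended
print(conv_out_size(512, 3, 3, 2))  # 258 <- pad of 3 gives the wrong size

# By contrast, the kernel-7 stem convolutions pair with ReflectionPad2d(3)
# precisely because pad=3 keeps a kernel-7, stride-1 conv size-preserving.
print(conv_out_size(512, 3, 7, 1))  # 512
```

In short: when moving padding out of nn.Conv2d into nn.ReflectionPad2d, keep the pad value the conv would have used (1 for kernel 3, 3 for kernel 7), rather than copying the 3 from the kernel-7 lines.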
Hi, I have a problem in my training. It works well when I train on 256x256 images from train_A to train_B, but when I train on 512x1024 images from labels to images without an instance map, there is a light spot in every generated image. What's wrong?
The red rectangle shows the light spot position.
Does anyone have the same problem?