
Saving/restoring latent #86

Open · wants to merge 1 commit into main
Conversation

KiudLyrl

It works by dumping/restoring the whole EMA and Adam optimizer objects.

Example of how to restore:

model = Imagine(
    text = TEXT,
    save_every = SAVE_EVERY,
    lr = LEARNING_RATE,
    iterations = ITERATIONS,
    save_progress = SAVE_PROGRESS,
    out_folder = out_folder,
    save_latents = True,
    saved_latents_filepath = r"F:\my\path\magic_man.35.backup",
)

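The dump/restore mechanism described above can be sketched as follows, assuming the saved state bundles the EMA wrapper and the Adam optimizer into one file. The function names and dict layout are illustrative, not the PR's exact code; dill is used because it serializes objects the standard pickle module cannot.

```python
# Illustrative sketch, not the PR's actual implementation.
import dill

def save_state(ema_model, optimizer, path):
    # Serialize both objects so a later run can resume exactly where
    # this one left off, including the optimizer's momentum estimates.
    with open(path, "wb") as f:
        dill.dump({"ema": ema_model, "optimizer": optimizer}, f)

def load_state(path):
    # Restore the pair saved by save_state.
    with open(path, "rb") as f:
        state = dill.load(f)
    return state["ema"], state["optimizer"]
```

Dumping the whole objects (rather than just state_dicts) is what lets a resumed run pick up the exact optimizer trajectory.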
@wolfgangmeyers

Can you also update cli.py so the parameter can be passed through the command prompt?
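A hypothetical sketch of what exposing the new parameters in cli.py could look like, assuming an argparse-style interface (the real cli.py may wire its arguments differently); the flag names mirror the PR's Imagine kwargs.

```python
# Sketch only; flag names mirror the PR's save_latents and
# saved_latents_filepath kwargs, the parser itself is assumed.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="big-sleep CLI (sketch)")
    parser.add_argument("--text", required=True,
                        help="text prompt to imagine")
    parser.add_argument("--save-latents", action="store_true",
                        help="dump the EMA and optimizer state to a .backup file")
    parser.add_argument("--saved-latents-filepath", default=None,
                        help="path of a previously saved .backup file to resume from")
    return parser
```

The parsed values would then be forwarded to Imagine as save_latents and saved_latents_filepath.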

@wolfgangmeyers

Also, dill should be added to requirements.txt.

@wolfgangmeyers

I ran this and hit the following error:

>>> import big_sleep
>>> a = big_sleep.Imagine(num_cutouts=32, iterations=50, epochs=5, save_latents=True, text="a blue glass orb", save_every=5)
>>> a()
Imagining "a_blue_glass_orb" ...
loss: -17.00:   8%| 4/50 [00:01<00:15,  2.91it/s]
      epochs:   0%| 0/5 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\wolfg\Documents\big-sleep\big_sleep\big_sleep.py", line 526, in forward
    out, loss = self.train_step(epoch, i, image_pbar)
  File "C:\Users\wolfg\Documents\big-sleep\big_sleep\big_sleep.py", line 500, in train_step
    dill.dump(current_state_backup, file = open(f'./{self.text_path}.{num}{self.seed_suffix}.backup', "wb"))
UnboundLocalError: local variable 'num' referenced before assignment

@@ -472,6 +495,10 @@ def train_step(self, epoch, i, pbar=None):
    num = total_iterations // self.save_every


I suggest moving num above this if block so it can be used when save_progress hasn't been set to True.
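The suggested fix can be sketched as follows: compute num unconditionally, before the save_progress branch, so the latent-dumping code can reference it even when save_progress is False. The helper below only mirrors the control flow that triggered the UnboundLocalError, not the actual train_step body.

```python
# Illustrative control-flow sketch; filenames and the helper itself
# are assumptions, only the `num` placement reflects the suggestion.
def files_to_save(total_iterations, save_every, save_progress, save_latents):
    # `num` is derived first, so both branches below may reference it
    num = total_iterations // save_every
    names = []
    if save_progress:
        names.append(f"image.{num}.png")        # illustrative filename
    if save_latents:
        names.append(f"latents.{num}.backup")   # illustrative filename
    return names
```

With num computed inside the save_progress branch, the save_latents branch would hit the variable before assignment, which is exactly the traceback above.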

@wolfgangmeyers

So I've tried loading saved latents and changing the text to something completely different, but regardless of the text the image seems to develop in the exact same way. This was "a blue glass orb" after 5 epochs of 50 iterations each:

[image: a_blue_glass_orb 49]

After reloading the latents and running the same epochs and iterations again with text="a red fox" and text_min="a blue glass orb" this is the result:

[image: a_red_fox_wout_a_blue_glass_orb 49]

Seems to change in the same way regardless of the prompt though.

@wolfgangmeyers

It would be nice to be able to resume with the saved latents either with the same optimizer (continue refining the same image) or with a new one (transform the existing image into something different).
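The two resume modes suggested above can be sketched as a choice at load time, assuming the latents are a learnable tensor: reuse the saved Adam optimizer to keep refining the same image, or build a fresh optimizer over the restored latents to steer the image toward a new prompt. The function name and lr default are hypothetical.

```python
# Sketch of the two resume modes; not part of the PR's code.
import torch

def resume_optimizer(latents, saved_optimizer=None, lr=0.07):
    latents.requires_grad_(True)
    if saved_optimizer is not None:
        # same image, refined: keep the saved momentum/variance state
        return saved_optimizer
    # fresh optimizer state: same latents, new trajectory
    return torch.optim.Adam([latents], lr=lr)
```

Resetting the optimizer discards the accumulated momentum, which is what would let a new text prompt actually pull the restored image in a different direction.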

@wolfgangmeyers

I've added the suggested changes and submitted a second PR: #89.

3 participants