Enhanced Colab notebook #72

Open · wants to merge 4 commits into master
Conversation

SMarioMan

Added support for priming the model.
Data now persists in Google Drive.
Added the ability to resume from checkpoints.

```python
hps.name = 'samples'
# Specifies the directory to save the sample in.
# We set this to the Google Drive mount point.
hps.name = '/content/gdrive/My Drive/samples'
chunk_size = 16 if model=="5b_lyrics" else 32
max_batch_size = 3 if model=="5b_lyrics" else 16
hps.levels = 3
hps.hop_fraction = [.5,.5,.125]

vqvae, *priors = MODELS[model]
```
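The Drive path above is only writable once the directory exists; a minimal sketch of ensuring that, assuming Drive is already mounted at `/content/gdrive` (e.g. via `google.colab`'s `drive.mount`). The `sample_dir` helper and its defaults are illustrative assumptions, not notebook code:

```python
import os

# Hypothetical helper: build the sample directory under a mounted Google
# Drive so generated audio survives Colab runtime resets.
def sample_dir(drive_root='/content/gdrive/My Drive', subdir='samples'):
    path = os.path.join(drive_root, subdir)
    os.makedirs(path, exist_ok=True)  # create it once so later saves never fail
    return path
```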


Just a thought: since downloading from GCE (downloading the model, I assume?) takes quite a long time, especially for the 5b models, it might make sense to split those two cells. Then you can edit the hyperparameters without reloading the model each time :)
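A sketch of that suggestion: cache the loaded model so re-running the hyperparameter cell never repeats the slow download. Here `loader` stands in for the notebook's actual model setup (`make_vqvae`/`make_prior`); the caching wrapper is an assumption, not jukebox API:

```python
# Module-level cache: survives re-runs of later cells in the same runtime.
_model_cache = {}

def get_model(name, loader):
    if name not in _model_cache:      # download/load only the first time
        _model_cache[name] = loader(name)
    return _model_cache[name]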

@MysteryPancake

MysteryPancake commented Jul 9, 2020

Sorry to bother, but I was wondering if there is a way to use v3 labels with this Colab?

@xandramax

Which labels are used is tied to which model is used: 5b uses v2 labels and 1b uses v3 labels.
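The model-to-label pairing described above, written out as a lookup table (an illustration of the comment, not a structure from the jukebox codebase):

```python
# Label version is determined by the selected model.
LABEL_VERSION = {
    '1b_lyrics': 'v3',
    '5b': 'v2',
    '5b_lyrics': 'v2',
}
```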

@ApexJohn

Is it possible to upscale audio made with 1b with this while keeping artist data and such intact? I keep getting an error where it doesn't recognize the artist, as it seems to try to use 5b and v2 artists instead of v3.

@bsturk bsturk mentioned this pull request Sep 8, 2020
SMarioMan and others added 2 commits October 27, 2020 18:21
Updated comments to make it more clear '5b' is a valid model. The notebook also sets some sane defaults if '5b_lyrics' is the selected model. Updated the notebook so these defaults will also be used for '5b' which prevents out of memory exceptions from occurring on colab.

Co-authored-by: mtferry <39018371+mtferry@users.noreply.github.com>
Provide a warning about sample length and runtime
@Yonben

Yonben commented Dec 19, 2020

Hi,
In the notebook, while using '5b' and not '5b_lyrics', the level_2 generation returns the following error:

```
jukebox Range is [2646000.0,26460000.0), got tensor([[2204928.],[2204928.],[2204928.]], device='cuda:0')
```

I tried factory resetting etc. and it seems to be consistently reproducible.

@liamwazherealso

liamwazherealso commented Dec 19, 2020

Facing the same issue as @Yonben.

Using primed 5b.
The only modification is:

```python
metas = [dict(artist = "Yello",
            genre = "Electronic",
            total_length = hps.sample_length,
            offset = 0,
            lyrics=" "*40),
          ] * hps.n_samples
```

Full Error Output

```
Sampling level 2
Sampling 8192 tokens for [0,8192]. Conditioning on 3445 tokens
Primed sampling 1 samples with temp=0.98, top_k=0, top_p=0.0

---------------------------------------------------------------------------

AssertionError                            Traceback (most recent call last)

<ipython-input-49-988948e1e679> in <module>()
     16   x = load_prompts(audio_files, duration, hps)
     17   zs = top_prior.encode(x, start_level=0, end_level=len(priors), bs_chunks=x.shape[0])
---> 18   zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
     19 else:
     20   raise ValueError(f'Unknown sample mode {sample_hps.mode}.')

8 frames

/usr/local/lib/python3.6/dist-packages/jukebox/prior/conditioners.py in forward(self, pos_start, pos_end)
     89         # Check if [pos_start,pos_end] in [pos_min, pos_max)
     90         assert len(pos_start.shape) == 2, f"Expected shape with 2 dims, got {pos_start.shape}"
---> 91         assert (self.pos_min <= pos_start).all() and (pos_start < self.pos_max).all(), f"Range is [{self.pos_min},{self.pos_max}), got {pos_start}"
     92         pos_start = pos_start.float()
     93         if pos_end is not None:

AssertionError: Range is [2646000.0,26460000.0), got tensor([[2204928.]], device='cuda:0')
```
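For what it's worth, converting the numbers in that assertion to seconds suggests the configured sample length is simply below this prior's minimum. This is a hedged reading (the 60-second interpretation is arithmetic at jukebox's 44100 Hz sample rate, not documented behaviour); if it holds, raising `sample_length_in_seconds` so the total length reaches at least 60 s should satisfy the check:

```python
# Bounds and offending value copied from the traceback above; the range
# conditioner requires the position to fall in [pos_min, pos_max).
SR = 44100
pos_min, pos_max = 2646000.0, 26460000.0   # from "Range is [...)"
got = 2204928.0                            # from the offending tensor

print(pos_min / SR)  # 60.0 seconds: the smallest total length this prior accepts
print(got / SR)      # ~50.0 seconds: the configured sample length, too short
```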
