
MemoryError: Unable to allocate 29.3 GiB for an array with shape (2211861,) and data type <U3551 #606

Open
rohankarande2023 opened this issue Nov 8, 2023 · 0 comments

Comments

@rohankarande2023

I am getting a MemoryError while creating a character-level tokenizer for the PubMed_200k_RCT_numbers_replaced_with_at_sign NLP project.

```python
# Create character-level tokenizer
char_vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=Num_Char_Tokens,
    output_sequence_length=char_per_sentence,
    name="char_vectorizer",
)

# Adapt character vectorizer to training characters
char_vectorizer.adapt(train_chars)
```

(Screenshots of the MemoryError traceback attached: memoryError, memoryError2.)
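For context, here is a minimal sketch of the same two steps in which `adapt()` is fed a batched `tf.data.Dataset` rather than the full in-memory `train_chars` array; the batch size and the example values for `Num_Char_Tokens` and `char_per_sentence` are assumptions for illustration, not values from my notebook:

```python
import tensorflow as tf

# Example values (assumptions): 26 letters + space + OOV, and a max sentence length.
Num_Char_Tokens = 28
char_per_sentence = 290

char_vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=Num_Char_Tokens,
    output_sequence_length=char_per_sentence,
    name="char_vectorizer",
)

# adapt() also accepts a tf.data.Dataset, so the vocabulary can be built
# batch by batch. train_chars is assumed to be the list/array of
# character-split sentences from the notebook; streaming it from disk
# (e.g. tf.data.TextLineDataset) would avoid holding it all in RAM.
train_chars_ds = tf.data.Dataset.from_tensor_slices(train_chars).batch(1024)
char_vectorizer.adapt(train_chars_ds)
```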
