
[docs] Spanish translation of model_memory_anatomy.md #30885

Merged

Conversation

@aaronjimv (Contributor)

What does this PR do?

Add the Spanish version of model_memory_anatomy.md to transformers/docs/source/es.

#28936

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@stevhliu

@aaronjimv (Contributor Author)

cc: @tadeodonegana @gisturiz

Hello guys, I would appreciate it if you could review the translation; it is very technical documentation, so I am open to any feedback. Thank you for your help.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@tadeodonegana left a comment

Amazing work as always, @aaronjimv! I just left a few minor comments.

Thanks for your work 🥇

```
GPU memory occupied: 0 MB.
```

Parece estar bien: la memoria de la GPU no está ocupada como esperaríamos antes de cargar cualquier modelo. Si no es el caso en tu máquina, asegúrate de detener todos los procesos que estén utilizando la memoria de la GPU. Sin embargo, no toda la memoria libre de la GPU puede ser utilizada por el usuario. Cuando se carga un modelo en la GPU, también se cargan los kernels, lo que puede ocupar 1-2GB de memoria. Para ver cuánta es, carguemos un tensor diminuto en la GPU, lo que también desencadena la carga de los kernels.


I suggest changing:

"Para ver cuánta es"

to

"Para ver cuánta memoria será ocupada por defecto"

to make it clearer that we are referring to the memory that will be taken up by the kernels.

@aaronjimv (Contributor Author)

Thanks for your feedback! I really appreciate it.
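
For context, the measurement described in the excerpt above looks roughly like the following; a minimal sketch, assuming a single visible GPU and the pynvml-based helper used in the original English doc:

```python
import torch
from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo

def print_gpu_utilization():
    # Query the memory currently allocated on GPU 0 via NVML.
    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)
    info = nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU memory occupied: {info.used // 1024**2} MB.")

# Moving even a 1x1 tensor to the GPU forces the CUDA kernels to load,
# which on its own can occupy roughly 1-2 GB of GPU memory.
torch.ones((1, 1)).to("cuda")
print_gpu_utilization()
```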


## Utilización de la memoria en el entrenamiento

Vamos a utilizar el [`Trainer`] y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:


I suggest referencing the Trainer docs page, as done in the original page.

@aaronjimv (Contributor Author)

Ok, that seems fine to me. Thanks.
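
For reference, training with `Trainer` at a batch size of 4 and no GPU performance optimizations, as the excerpt describes, looks roughly like this; a minimal sketch in which the checkpoint and the dummy dataset shapes are placeholders, not necessarily the doc's actual choices:

```python
import numpy as np
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Dummy classification data; sizes here are illustrative.
dummy = {
    "input_ids": np.random.randint(100, 30000, (512, 512)),
    "labels": np.random.randint(0, 2, (512,)),
}
ds = Dataset.from_dict(dummy)
ds.set_format("pt")  # return PyTorch tensors, as in the excerpt further down

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased").to("cuda")

training_args = TrainingArguments(
    output_dir="test-trainer",
    per_device_train_batch_size=4,  # batch size of 4, no other GPU optimizations
    num_train_epochs=1,
    report_to="none",  # keep logging backends out of the measurement
)

trainer = Trainer(model=model, args=training_args, train_dataset=ds)
result = trainer.train()
```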

@tadeodonegana left a comment

LGTM

@aaronjimv (Contributor Author)

> LGTM

Thanks!

Hello @stevhliu, let me know if there is anything else to change. Thanks 🤗

@stevhliu (Member) left a comment

Nice work! 👏

```py
>>> ds.set_format("pt")
```

Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:

Let's use this link so it's not tied to any one version of the docs :)

Suggested change:

```diff
- Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:
+ Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:
```

@aaronjimv (Contributor Author)

Ok.
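
The "dos funciones auxiliares" the excerpt above refers to print GPU utilization and summarize a training run; a minimal sketch of the second one, reusing the pynvml-based `print_gpu_utilization` from the earlier sketch (names follow the original English doc):

```python
def print_summary(result):
    # Summarize a trainer.train() result: wall time, throughput, and GPU memory.
    print(f"Time: {result.metrics['train_runtime']:.2f}")
    print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
    print_gpu_utilization()  # defined in the earlier sketch
```

It would be called as `print_summary(result)` right after `result = trainer.train()`.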


## Utilización de la memoria en el entrenamiento

Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:

Suggested change:

```diff
- Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
+ Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
```

@aaronjimv requested a review from @stevhliu on May 20, 2024 at 18:14.
@stevhliu (Member) left a comment

Thanks!

@stevhliu merged commit 0df888f into huggingface:main on May 20, 2024.
8 checks passed
@aaronjimv deleted the translate_model_memory_anatomy.md branch on May 21, 2024 at 14:57.

itazap pushed a commit that referenced this pull request on May 24, 2024:
* add model_memory_anatomy to es/_toctree.yml

* copy model_memory_anatomy.md to es/

* translate first section

* translate doc

* chage forward activations

* fix sentence and and link to Trainer

* fix Trainer link