Update comprehensive_guide.ipynb #1029

Open · wants to merge 1 commit into master
@@ -387,7 +387,7 @@
"1. Prune a custom Keras layer\n",
"2. Modify parts of a built-in Keras layer to prune.\n",
"\n",
"For an example, the API defaults to only pruning the kernel of the\n",
"For example, the API defaults to only pruning the kernel of the\n",
"`Dense` layer. The example below prunes the bias also.\n"
]
},
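The cell above refers to an example of pruning the bias of a `Dense` layer, but that example sits outside this diff's context. A minimal sketch of the pattern it describes, assuming the standard `tfmot.sparsity.keras` API (the `MyDenseLayer` name is illustrative):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Implementing PrunableLayer on a subclass of a built-in layer lets you
# choose which weights get pruned. Returning the bias as well overrides
# the default of pruning only the kernel (this typically hurts accuracy,
# so it is shown for illustration only).
class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):

  def get_prunable_weights(self):
    return [self.kernel, self.bias]

model = tf.keras.Sequential([
    tfmot.sparsity.keras.prune_low_magnitude(
        MyDenseLayer(20, input_shape=[10])),
])
```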
@@ -587,7 +587,7 @@
"\n",
"* Have a learning rate that's not too high or too low when the model is pruning. Consider the [pruning schedule](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/PruningSchedule) to be a hyperparameter.\n",
"\n",
"* As a quick test, try experimenting with pruning a model to the final sparsity at the begining of training by setting `begin_step` to 0 with a `tfmot.sparsity.keras.ConstantSparsity` schedule. You might get lucky with good results.\n",
"* As a quick test, try experimenting with pruning a model to the final sparsity at the beginning of training by setting `begin_step` to 0 with a `tfmot.sparsity.keras.ConstantSparsity` schedule. You might get lucky with good results.\n",
"\n",
"* Do not prune very frequently to give the model time to recover. The [pruning schedule](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/PruningSchedule) provides a decent default frequency.\n",
"\n",
@@ -723,7 +723,7 @@
"id": "yqk0jI49c1mw"
},
"source": [
"Once different backends [enable pruning to improve latency]((https://github.com/tensorflow/model-optimization/issues/173)), using block sparsity can improve latency for certain hardware.\n",
"Once different backends [enable pruning to improve latency](https://www.tensorflow.org/model_optimization/guide/pruning), using block sparsity can improve latency for certain hardware.\n",
"\n",
"Increasing the block size will decrease the peak sparsity that's achievable for a target model accuracy. Despite this, latency can still improve.\n",
"\n",