
Can a pre-trained model be fine-tuned, or not? #6185

Answered by maziyarpanahi
kgoderis asked this question in Q&A

Hi @kgoderis

When you use Spark NLP pre-trained models inside a pipeline, or a pre-trained pipeline, the weights are frozen: they cannot be fine-tuned or overwritten. If the entire pipeline consists of pre-trained models and rule-based annotators, the `.fit(df)` stage is effectively skipped and the pipeline goes directly to `.transform()`.
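
To illustrate, here is a minimal sketch in Python of such a pipeline, where every stage is either rule-based or pre-trained, so `.fit()` trains nothing. The model names `glove_100d` and `ner_dl` are public Spark NLP pre-trained models; the input sentence is made up:

```python
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel, NerDLModel

spark = sparknlp.start()

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Pre-trained stages: their weights are frozen and will not change.
embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")

ner = NerDLModel.pretrained("ner_dl") \
    .setInputCols(["document", "token", "embeddings"]) \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[document, tokenizer, embeddings, ner])

df = spark.createDataFrame([["John works at Google in London."]]).toDF("text")

# No stage here is trainable, so .fit() is a pass-through;
# the fitted model simply runs .transform() with the frozen weights.
model = pipeline.fit(df)
model.transform(df).select("ner.result").show(truncate=False)
```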

In Spark NLP, the trainable annotators have Approach in their names, while pre-trained models are (99% of the time) only accessible through annotators with Model in their names. This means that if you have a pre-trained model, say a NerDLModel, it won't be fine-tuned or overwritten, because it isn't trainable. The trainable annotator for NER is called NerDLApproach…
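
To make the contrast concrete, a sketch of the trainable side (reusing `spark` and the `embeddings` stage from the snippet above; the CoNLL file path and the epoch count are illustrative assumptions, not fixed values):

```python
from sparknlp.annotator import NerDLApproach
from sparknlp.training import CoNLL

# The CoNLL reader produces a DataFrame with the columns NerDLApproach
# expects: document, sentence, token, pos, and label.
training_data = CoNLL().readDataset(spark, "eng.train")  # path is illustrative

# Add the embeddings column the NER trainer needs; the pre-trained
# embeddings themselves stay frozen.
training_with_embeddings = embeddings.transform(training_data)

ner_trainer = NerDLApproach() \
    .setInputCols(["document", "token", "embeddings"]) \
    .setOutputCol("ner") \
    .setLabelColumn("label") \
    .setMaxEpochs(5)

# .fit() here actually trains: it returns a new NerDLModel, which can be
# saved and reused. A NerDLModel loaded via .pretrained() has no such
# training step, which is why it cannot be fine-tuned.
trained_ner_model = ner_trainer.fit(training_with_embeddings)
```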


Answer selected by kgoderis