Hi, and sorry if this question comes across as naive. I'm aiming for a significantly smaller embedding size for my project, and I'm wondering if we could tweak the architecture to achieve a dimension of 100 or even less for my images, as opposed to the 384, 768, ... offered by the current models.
Is there a quick way to do this, just to measure the impact on my results? Thanks in advance!
benam2 changed the title from "Question: can we have a model with different embedding size if we finetune on our data?" to "Can we have a model with different embedding size if we finetune on our data?" on May 8, 2024
You can plug the model into a linear layer that outputs 100 channels and learn that linear layer on your task (if your goal is to store a set of embeddings).
Otherwise, if you want a smaller transformer model with an embedding dimension of 100, there is no quick way.
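For reference, here is a minimal sketch of the first option (this assumes a DINOv2 ViT-S/14 backbone loaded via torch.hub, whose default embedding is 384-d, kept frozen while only the projection layer is trained on your task; the loss and optimizer are placeholders):

```python
import torch
import torch.nn as nn

# Assumption: DINOv2 ViT-S/14 backbone from torch.hub (384-d output features).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # keep the backbone frozen, train only the projection

# Linear projection down to the target embedding size (100-d here).
projection = nn.Linear(384, 100)

def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, H, W) batch, resized/normalized for DINOv2 (H, W multiples of 14)."""
    with torch.no_grad():
        feats = backbone(images)   # (B, 384) global image features
    return projection(feats)       # (B, 100) compact embeddings

# Train `projection` on your retrieval task, e.g. with a contrastive or triplet loss:
# optimizer = torch.optim.AdamW(projection.parameters(), lr=1e-3)
```

Keeping the backbone frozen makes this cheap to try; if the projected embeddings fall short on your retrieval benchmark, you could also fine-tune the backbone jointly with the projection.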
@qasfb Sorry for opening this issue again. But based on your experience, do you think adding a linear layer with 100 channels and learning it on my task can give similar performance on retrieval tasks? Just want to pick your brain on that.
Also, what would the not-quick approach be for a smaller transformer model with an embedding size of 100? Even a rough idea would help a lot. Thank you.