
XBM code regarding #43

Open
tejavoo opened this issue Jan 2, 2020 · 9 comments

Comments

@tejavoo

tejavoo commented Jan 2, 2020

Hey, when will the XBM code be released? If it's going to take a while, would it be possible to get it before you post it here? Thanks!

@mattzque

I don't know myself, but I'm really interested in this concept too. I've started my own implementation of it; if you want, I can share it here when it's done, if you're still interested.

@bnu-wangxun any timeframe on release?

@ZhangHZ9
Collaborator

XBM is simple and easy to implement following our pseudo-code. We may not release the code very soon (within a month). But as we said in our paper, we don't use any tricks in our experiments, so reproducing our results based on this repo is not difficult.

Note that the learning rate is probably the most important hyper-parameter. Select the best learning rate by scanning on a log scale (1e-5, 3e-5, 1e-4, 3e-4, etc.).
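The log-scale scan suggested above can be generated programmatically. A minimal sketch (the helper name `log_scale_lrs` is mine, not from the repo) that alternates the 1x and 3x mantissas per decade:

```python
def log_scale_lrs(low_exp=-5, high_exp=-3):
    """Build a log-scale learning-rate grid: 1e-5, 3e-5, 1e-4, 3e-4, ...

    Hypothetical helper for scanning learning rates; not part of the XBM repo.
    """
    lrs = []
    for exp in range(low_exp, high_exp + 1):
        for mantissa in (1, 3):
            lrs.append(mantissa * 10.0 ** exp)
    return lrs

# Each candidate would be used for a short training run; keep the best.
print(log_scale_lrs())
```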

Thanks for your interest and feel free to ask any questions.

@bnu-wangxun
Owner

@tejavoo @mattzque

@mattzque

XBM is simple and easy to implement following our pseudo-code.

Yes indeed, it's really not that difficult; thanks for the amazing work! Has the paper been accepted yet? The last status on arXiv was under review. @ZhangHZ9 @bnu-wangxun

@bnu-wangxun
Owner

@mattzque In fact, our XBM was submitted to CVPR and is expected to be accepted due to its high review scores.

@KevinMusgrave

This is a pretty cool idea, so I tried implementing it. If anyone's interested, you can check it out here.

@bnu-wangxun
Owner

@KevinMusgrave your pytorch-metric-learning is really a great repo for DML.

@sky186

sky186 commented Nov 13, 2020

@KevinMusgrave
Hi, your implementation is cool, but I have some questions when using this API.

1. The memory size: what value should it be set to?
2. The dataloader sampling: I use the triplet sampler "RandomIdentitySampler".
3. I have seen others say: "I inserted XBM directly into the triplet-loss model in my existing framework. I'm not sure whether the hyper-parameters were poorly tuned or something else, but the final results were not good."

```python
loss = losses.ContrastiveLoss(pos_margin=0, neg_margin=0.5)  # pos_margin, neg_margin
loss_fn = losses.CrossBatchMemory(loss, embedding_size=2048, memory_size=1024, miner=None)  # memory_size

for epoch in range(epochs):
    for n_iter, (inputs_bt, labels_bt) in enumerate(dataLoader):
        inputs_bt, labels_bt = inputs_bt.cuda(), labels_bt.cuda()
        score, feat = model(inputs_bt)
        loss = loss_fn(feat, labels_bt, indices_tuple=None, enqueue_idx=None)
```
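For anyone trying to build an intuition for what `CrossBatchMemory` does with `memory_size`, here is a framework-free sketch of the cross-batch memory idea: a FIFO bank of past embeddings and labels that the current mini-batch is compared against. The class name `XBMQueue` is hypothetical; this is not the authors' or the library's implementation.

```python
from collections import deque

class XBMQueue:
    """Toy sketch of a cross-batch memory: a FIFO bank of past
    embeddings and labels. A pairwise loss would compare the current
    batch's embeddings against this (detached) bank, greatly enlarging
    the pool of negative pairs. Not the actual XBM code."""

    def __init__(self, memory_size):
        self.feats = deque(maxlen=memory_size)   # oldest entries evicted first
        self.labels = deque(maxlen=memory_size)

    def enqueue(self, batch_feats, batch_labels):
        self.feats.extend(batch_feats)
        self.labels.extend(batch_labels)

    def get(self):
        # Return the whole bank for pair construction against the current batch.
        return list(self.feats), list(self.labels)

bank = XBMQueue(memory_size=4)
bank.enqueue([[0.1], [0.2]], [0, 1])
bank.enqueue([[0.3], [0.4], [0.5]], [2, 3, 4])
feats, labels = bank.get()
print(labels)  # the oldest entry is evicted once capacity (4) is exceeded
```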

@sky186

sky186 commented Nov 14, 2020

@tejavoo
Hello author ^^,
I'd like to use this method in my own training framework, and I need to set the XBM parameters.
Could you explain how to set the parameters below? Is it recommended to leave the weight at the default of 1?
My current setup is triplet loss + classification loss; should XBM be applied on top of the triplet loss? Although I think updating with (classification loss + triplet loss) could also work. Have you run any related experiments?

```yaml
XBM:
  ENABLE: True
  WEIGHT: 1.0
  SIZE: 55000
  START_ITERATION: 1000
```
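My reading of how `WEIGHT` and `START_ITERATION` would combine with a base loss, sketched as plain Python (the function `total_loss` and this gating logic are assumptions on my part, not the repo's actual code): the XBM term is added with the given weight only once training passes the start iteration, since early in training the stored embeddings drift too quickly to be useful.

```python
# Hypothetical sketch of gating/weighting an XBM loss term with the
# config above; names are assumptions, not the repo's actual code.
XBM_CFG = {"ENABLE": True, "WEIGHT": 1.0, "SIZE": 55000, "START_ITERATION": 1000}

def total_loss(base_loss, xbm_loss, iteration, cfg=XBM_CFG):
    """Add the XBM term only after START_ITERATION, scaled by WEIGHT."""
    if cfg["ENABLE"] and iteration >= cfg["START_ITERATION"]:
        return base_loss + cfg["WEIGHT"] * xbm_loss
    return base_loss  # warm-up phase: memory bank not yet trustworthy

print(total_loss(1.0, 0.5, iteration=500))   # warm-up: base loss only
print(total_loss(1.0, 0.5, iteration=2000))  # base + weighted XBM term
```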
