
Request for training logs and detailed settings #31

Closed
BotMan-Sz opened this issue Mar 11, 2024 · 5 comments

Comments
@BotMan-Sz
Dear authors,
Recently I have been studying your paper and code; however, even with your processed dataset, I cannot reproduce the results reported in your paper. Taking the HAR dataset as an example, the following issues arise.

  1. When I choose random_init training, the result (77.9) is much better than the score you reported in Table 2 (57.89±5.13). I don't know which random seed you used in your paper, but all 5 seeds I tested scored higher than your report.
    [screenshot: random_init results]
  2. When I use only 5% of the training data for supervised training, the accuracy and macro F1 (>0.89) are much higher than the scores reported in Fig. 2 (MF1<0.55). Below are the results for training with 5% of the training dataset (with a batch size of 8).
    [screenshot: 5% supervised training results]
  3. All the reported results are hard to reproduce.

I hope you could share your experiment settings and training logs to remove any doubts.

@BotMan-Sz (Author)

[screenshot: code snippet]
Add this to the Load_Dataset() function in dataloader/dataloder.py to obtain a certain ratio of the training samples.
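The code in the screenshot is not reproduced here, but the change it describes can be sketched. Below is a minimal, hypothetical helper, assuming the training data are torch tensors of samples and labels; the function name `subsample_ratio`, the per-class stratification, and the fixed seed are my own choices, and the actual snippet in the screenshot may differ:

```python
import numpy as np
import torch

def subsample_ratio(samples, labels, ratio, seed=42):
    """Keep a fixed ratio of the training set, stratified per class
    so that no class disappears at small ratios (e.g. 5%)."""
    rng = np.random.default_rng(seed)
    labels_np = labels.numpy() if torch.is_tensor(labels) else np.asarray(labels)
    keep = []
    for c in np.unique(labels_np):
        idx = np.where(labels_np == c)[0]
        rng.shuffle(idx)
        # Keep at least one sample per class.
        keep.extend(idx[: max(1, int(len(idx) * ratio))])
    keep = torch.as_tensor(np.sort(np.asarray(keep)), dtype=torch.long)
    return samples[keep], labels[keep]


# Example: reduce a training split to 5% before building the DataLoader.
x_train = torch.randn(100, 9, 128)          # (N, channels, seq_len), HAR-like shape
y_train = torch.tensor([i % 6 for i in range(100)])
x_small, y_small = subsample_ratio(x_train, y_train, ratio=0.05)
```

Whether the repository's actual snippet stratifies per class or simply truncates the array is an assumption; a non-stratified cut can drop entire classes at 5% and change the macro F1 substantially.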

@emadeldeen24 (Owner)

Hi,
Thanks for experimenting with our code. Unfortunately, I don't have the logs anymore.
But I recall using the same parameters as yours; maybe the dropout was 0.5, but everything else is the same. I'm not sure why you cannot reproduce the results.

@BotMan-Sz (Author)

Hi,
Thanks for your response. As the authors, reproducing your own code should not be difficult. I hope you will take a little time to recall your parameter configuration, etc., and run the code again, which is crucial for the reliability of your work.

@BotMan-Sz (Author)

If needed, I can upload my project with logs to GitHub; I didn't make any changes to the core code. It seems that even the random-initialization and supervised results are not reliable, not to mention the other baselines.
