Model is predicting empty string for custom python dataset #124

Open · Tamal-Mondal opened this issue Jun 20, 2022 · 8 comments

Hi @urialon,

As mentioned in a previous issue, I am trying to train and test Code2Seq for the code summarization task on our own Python dataset. I am able to train the model, but the predictions don't seem to be correct. This issue looks similar to #62, which was never fully resolved. Here is what I have tried:

  1. The first time, I trained with the default config; after a couple of epochs, the predicted text for all examples was like "the|the|the|the|the|the".

  2. Following the suggestions in #17 (Code Captioning Task) and #45 (reproducing the code documentation results from the paper), I updated the model config to make it suitable for predicting longer sequences. The predictions were still similar, but their length varied, probably because I changed MAX_TARGET_PARTS in the config.

  3. Next, I followed the suggestions in #62 (empty hypothesis when periods are included in the dataset) and made sure there were no extra delimiters (",", "|", " "), no punctuation or numbers, and no non-alphanumeric characters (using an str.isalpha() check over both docs and paths), and I removed extra pipes (||); a rough sketch of this cleaning step is shown after the list. This time the hypothesis was empty for all validation data points, just like in #62.

  4. To check whether there was an issue in my setup, I trained the model on the python150k dataset; it trains properly there, so I assume it is some kind of dataset issue.

  5. I have observed that during the first 1-2 epochs the predictions contain some text, but with more epochs they degenerate to empty strings for all data points.
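
For reference, the config change in item 2 was along these lines. MAX_TARGET_PARTS is a field in code2seq's config.py; the value below is an illustrative placeholder, not my exact setting:

```python
# In config.py, get_default_config (illustrative value only):
config.MAX_TARGET_PARTS = 30  # the default targets short method names; raise it for docstrings
```

And a rough sketch of the cleaning step in item 3 (a simplified illustration of the constraints above, not my exact script):

```python
def clean_subtokens(raw: str) -> str:
    """Keep only purely alphabetic subtokens; dropping empty tokens
    also collapses accidental double pipes ('||')."""
    kept = [tok for tok in raw.split("|") if tok.isalpha()]
    return "|".join(kept)

# Digits and punctuation are dropped, empty subtokens are collapsed:
assert clean_subtokens("get||default2|session") == "get|session"
```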

Here are some of the training logs during my experiments.
training-logs-1.txt
training-logs-2(config change).txt
training-logs-3(alnum).txt

Thanks & Regards,
Tamal Mondal

urialon (Contributor) commented Jun 22, 2022

Hi @Tamal-Mondal ,
When you wrote:

I updated the model config to make it suitable for predicting longer sequences.

Did you also re-train the model after updating the config?

I see that you get about F1=0.50 in training-logs-2, so where do you see the empty predictions?

Uri

Tamal-Mondal (Author) commented Jun 22, 2022

Thanks @urialon for the quick reply. Yes, I started training from scratch after making the config changes. In the case of training-logs-2, I was still getting output like "the|the|the|the". I started getting empty predictions (see training-logs-3) from step 3, i.e. when I applied the additional data-cleaning steps.

One more thing: after applying so many data-cleaning constraints (no punctuation, no numbers, etc.), my training dataset shrank to 1.6k examples. I am not sure whether such a small amount of training data could be the issue (though I think the results still should not be this bad).

Regards,
Tamal Mondal

Tamal-Mondal (Author) commented

Hi @urialon, sorry to bother you again. I still haven't figured out the problem with my approach and am waiting for your reply. If you could take a look and suggest something, it would be a great help.

Thanks & Regards,
Tamal Mondal

urialon (Contributor) commented Jul 1, 2022

Hey @Tamal-Mondal ,
Sorry, for some reason I replied from my email and it did not appear in this thread.

The small number of examples can definitely be the issue.

You can try training on python150k first and, after convergence, continue training on your additional 1,600 examples.

As an orthogonal idea: in another project, we recently released a multi-lingual model called PolyCoder (paper: https://arxiv.org/pdf/2202.13169.pdf, code: https://github.com/VHellendoorn/Code-LMs).
PolyCoder is already trained on 12 languages, including Java, C, and Python.
In C, we even managed to get better perplexity than OpenAI's Codex.
You can either use PolyCoder as-is, or continue training it ("fine-tune" it) on your dataset.
So you might want to check it out as well.
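
If you want to try PolyCoder quickly: its checkpoints are also published on the Hugging Face Hub, so loading one looks roughly like this (assuming a recent transformers version and the published NinedayWang/PolyCoder-2.7B checkpoint name; see the Code-LMs README for the full list of sizes):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Checkpoint name as published on the Hugging Face Hub; smaller
# variants (160M, 0.4B) are listed in the Code-LMs README.
tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B")

prompt = "def binary_search(arr, target):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```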

Best,

Tamal-Mondal (Author) commented

No problem @urialon, thanks for the suggestions. I will try and let you know.

Tamal-Mondal (Author) commented

Hi @urialon,

Here are some updates on this issue.

  1. I suspected the issue was either dataset size or data pre-processing, so to investigate I applied the same pre-processing steps to the CodeSearchNet (Python) data for the summarization task. Even though it has about 2.2 lakh (~220k) data points in the training set, after adding constraints like no punctuation and no numbers in both the ASTs and docstrings, only 11k training data points remained. This time there were no empty predictions. Here are some samples:

Original: Get|default|session|or|create|one|with|a|given|config , predicted 1st: Get|a
Original: Update|boost|factors|when|local|inhibition|is|used , predicted 1st: Remove|the
Original: Returns|a|description|of|the|dataset , predicted 1st: Returns|a|of|of|of
Original: Returns|the|sdr|for|jth|value|at|column|i , predicted 1st: Returns|the|for|for|for

As you can see, the predictions are way too short, and this is after convergence (in just 17 epochs). I changed the config for summarization as you suggested in previous issues. I think the problem can still be the dataset size, the target summary length, etc. (do let me know if you have any other observations). I am attaching the logs.

logs.txt

  2. I am currently training Code2Seq on the Python150k data and will fine-tune it on my own dataset as you suggested. My understanding is that I need to train Code2Seq on Python150k with the standard config, and then, for fine-tuning, pass the saved model to the "--load" argument, which just needs a file like "model_iter2.dict". Do let me know if I missed something; a sketch of the two-stage run is below.
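
For concreteness, here is the two-stage invocation I have in mind, assuming the flags used in the repository's train.sh; all paths and the iteration number are placeholders:

```bash
# Stage 1: train on Python150k with the standard config.
python3 -u code2seq.py --data data/python150k/python150k \
  --test data/python150k/python150k.val.c2s \
  --save_prefix models/python150k/model

# Stage 2: fine-tune on the small custom dataset, starting from the
# converged checkpoint; --load takes the checkpoint prefix, with the
# matching model_iter2.dict sitting alongside it.
python3 -u code2seq.py --data data/custom/custom \
  --test data/custom/custom.val.c2s \
  --save_prefix models/custom/model \
  --load models/python150k/model_iter2
```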

Thanks & Regards,
Tamal Mondal

urialon (Contributor) commented Jul 14, 2022

Yes, this sounds correct!
Good luck!

urialon (Contributor) commented Oct 11, 2022 via email
