
not showing predictions after training #12

Open
hienvantran opened this issue Jun 2, 2021 · 16 comments

Comments

@hienvantran

Describe the bug
There are several problems:

  1. After training the ML backend model, I cannot find the model predictions in the UI when labelling.
  2. Training often does not complete all 100 epochs; the system crashes midway (at 30-70 epochs) even though the dataset is small (50 items) and a GPU is available.
  3. The error shows: "get latest job results from work dir doesn't exist".
  4. Sometimes, when 3 does not occur, another error appears: "unable to load weight from pytorch checkpoint file".

[error screenshots]

To reproduce
Steps to reproduce the behaviour:

  1. Import pre-annotated data.
  2. Manually label some of the tasks.
  3. Go to the ML UI in Settings, connect the model (BERT classifier) and start training.
  4. After training finishes, go back to the Label UI. In the prediction tab, only the pre-annotated predictions are shown.

Expected behaviour
ML training should complete and the new predictions should be shown in the UI.

@kbv72

kbv72 commented Jun 23, 2021

Facing a similar issue.
Note:
I am a beginner with Django and Label Studio, so please forgive my naivety.

  1. Using BERT NER with sample data from here.
  2. At the end of training, it says "POST /train HTTP/1.1" 201, which I understand is confirmation of successful training.
  3. On the UI, in the filters, I select prediction results and prediction score. [See screenshot]

Questions:

  1. Is this the correct way of viewing the predictions?
  2. If so, how to get prediction scores?

[screenshot]

@china-zyy

Check your code to see whether the variables pred_labels and pred_scores are present in your code.
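
For reference, a sketch of how pred_labels and pred_scores might be assembled into the prediction list before it is returned from predict(). The helper name and the from_name/to_name defaults are illustrative, not taken from any particular backend:

def build_predictions(pred_labels, pred_scores, from_name='label', to_name='text'):
    """Assemble Label Studio predictions from model outputs (sketch)."""
    predictions = []
    for label, score in zip(pred_labels, pred_scores):
        predictions.append({
            'result': [{
                'from_name': from_name,
                'to_name': to_name,
                'type': 'choices',
                'value': {'choices': [label]},
            }],
            # the overall score must be a plain Python float at the prediction root
            'score': float(score),
        })
    return predictions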

@makseq
Member

makseq commented Aug 19, 2021

@kbv72 sorry for the late answer 🙁

Is it correct way of viewing the predictions?

Yes, it looks OK.

If so, how to get prediction scores?

You have to put a float "score" field into the root of the prediction dict.
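
For reference, a minimal sketch of a prediction with the score at the root (field values are placeholders):

{
  "model_version": "bert-v1",
  "score": 0.87,
  "result": [
    {
      "from_name": "label",
      "to_name": "text",
      "type": "choices",
      "value": { "choices": ["Positive"] }
    }
  ]
}

Note that "score" here is a sibling of "result", not nested inside "value".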

@aczy99

aczy99 commented Mar 6, 2022

Hello there, I am facing the same issue with the prediction scores.

[screenshot]

As you can see, I have converted my scores to float, but they are still not showing up in my frontend.

[screenshot]

@makseq
Member

makseq commented Mar 11, 2022

@aczy99 Try setting the model version in Project Settings => Machine Learning. It's also better to include a "model_version" field in the prediction root.
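
Put together, a predict() method in a custom ML backend could return both fields like this. This is a sketch assuming the label_studio_ml SDK; the class name and field values are placeholders:

from label_studio_ml.model import LabelStudioMLBase

class MyModel(LabelStudioMLBase):
    def predict(self, tasks, **kwargs):
        predictions = []
        for task in tasks:
            predictions.append({
                # both fields live at the prediction root
                'model_version': 'bert-v1',
                'score': 0.87,
                'result': [{
                    'from_name': 'label',
                    'to_name': 'text',
                    'type': 'choices',
                    'value': {'choices': ['Positive']},
                }],
            })
        return predictions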

@35grain

35grain commented Dec 29, 2022

Facing the same issue: scores are provided but not shown in the tasks list. Selecting the model version doesn't seem to do anything; it displays "Saved!" but the selection is removed upon refreshing the page. It is also confusing where exactly the model version is retrieved from, as it is not the same as the one I provide together with the predictions.

@makseq
Member

makseq commented Jan 5, 2023

it is not the same as the one I provide together with the predictions.

What do you mean by this?

Please show your tasks with predictions.

@35grain

35grain commented Jan 5, 2023

What do you mean by this?

Please show your tasks with predictions.

I am using the format provided as an example in the docs: https://labelstud.io/guide/predictions.html#Example-JSON
Scores per result (individual label) are shown in the Labeling view; however, the overall score of a task (predictions.score) is not displayed in the Prediction score column of the project's Tasks view (at least that is where I would expect it to be displayed).

As for the model version, I would also expect the string set in predictions.model_version to be listed under Project Settings => Machine Learning; however, there are instead some numeric combinations and 'INITIAL'. A new version (a numeric combination) appears every time I rebuild the backend Docker container.

@makseq
Member

makseq commented Jan 5, 2023

Most likely you have a mistake in the prediction/task format.

@35grain

35grain commented Jan 5, 2023

Most likely you have a mistake in the prediction/task format.

So I pulled out the list of predictions made on my tasks, and this is one example stored in the Label Studio database:

{
    "id": 2053,
    "model_version": "INITIAL",
    "created_ago": "1 day, 19 hours",
    "result": [
      {
        "from_name": "label",
        "image_rotation": 0,
        "original_height": 720,
        "original_width": 1280,
        "score": 0.7231900691986084,
        "to_name": "image",
        "type": "rectanglelabels",
        "value": {
          "height": 9.309395684136284,
          "rectanglelabels": [
            "Visual defects"
          ],
          "rotation": 0,
          "width": 8.777930736541748,
          "x": 35.5985426902771,
          "y": 25.251723395453556
        }
      },
      {
        "from_name": "label",
        "image_rotation": 0,
        "original_height": 720,
        "original_width": 1280,
        "score": 0.6727777123451233,
        "to_name": "image",
        "type": "rectanglelabels",
        "value": {
          "height": 9.74336412217882,
          "rectanglelabels": [
            "Visual defects"
          ],
          "rotation": 0,
          "width": 5.079851150512695,
          "x": 24.418318271636963,
          "y": 33.53565639919705
        }
      },
      ...
    ],
    "score": 0.5981751024723053,
    "cluster": null,
    "neighbors": null,
    "mislabeling": 0,
    "created_at": "2023-01-03T18:40:23.579377Z",
    "updated_at": "2023-01-03T18:40:23.579422Z",
    "task": 600
}

As we can see, the score is in fact stored in the database and is simply not displayed. This leads me to believe that the issue is not caused by a formatting mistake in the backend output. Since the model version string is set right after the score in my backend output, it is odd that it isn't stored in the database, though.

@makseq
Member

makseq commented Jan 11, 2023

@35grain I see you missed "id" in results:

"result": [
      {
        "id": "random123", <====
        "from_name": "label",
        "image_rotation": 0,
        "original_height": 720,
        "original_width": 1280,
        "score": 0.7231900691986084,
        "to_name": "image",
        "type": "rectanglelabels",
        "value": {
          "height": 9.309395684136284,
          "rectanglelabels": [
            "Visual defects"
          ],
          "rotation": 0,
          "width": 8.777930736541748,
          "x": 35.5985426902771,
          "y": 25.251723395453556
        }
      },

@Developer66

Developer66 commented Jan 23, 2023

Fixed it for me using this code:

prediction.append({
    'result': [{
        'value': {
            'start': 1593866042000,
            'end': 1593966345000,
            'instant': False,
            'timeserieslabels': ['test'],
        },
        'id': 'jT4DkFmczt',
        'from_name': 'label',
        'to_name': 'ts',
        'type': 'timeserieslabels',
    }],
})

But the score is not working yet, or I haven't found out where to put it correctly.
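
Based on makseq's earlier comments, the overall score would go at the prediction root, next to 'result'. A sketch extending the snippet above; the 0.95 values are placeholders:

prediction.append({
    # overall task score: a float, sibling of 'result'
    'score': 0.95,
    'result': [{
        'id': 'jT4DkFmczt',
        'from_name': 'label',
        'to_name': 'ts',
        'type': 'timeserieslabels',
        # optional per-region score
        'score': 0.95,
        'value': {
            'start': 1593866042000,
            'end': 1593966345000,
            'instant': False,
            'timeserieslabels': ['test'],
        },
    }],
})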

@35grain

35grain commented Jan 24, 2023

@35grain I see you missed "id" in results: […]

Finally had time to play around with this and unfortunately adding an ID did not solve it. Both score and model version are still missing / invalid.

{
   "id":3353,
   "model_version":"INITIAL",
   "created_ago":"0 minutes",
   "result":[
      {
         "from_name":"label",
         "id":"0",
         "image_rotation":0,
         "original_height":720,
         "original_width":1280,
         "score":0.9651663303375244,
         "to_name":"image",
         "type":"rectanglelabels",
         "value":{
            "height":12.63888888888889,
            "rectanglelabels":[
               "Visual defects"
            ],
            "rotation":0,
            "width":12.65625,
            "x":30.390625,
            "y":24.444444444444443
         }
      },
      {
         "from_name":"label",
         "id":"1",
         "image_rotation":0,
         "original_height":720,
         "original_width":1280,
         "score":0.9236611127853394,
         "to_name":"image",
         "type":"rectanglelabels",
         "value":{
            "height":10.555555555555555,
            "rectanglelabels":[
               "Visual defects"
            ],
            "rotation":0,
            "width":7.8125,
            "x":36.796875,
            "y":74.86111111111111
         }
      },
      ...
      {
         "from_name":"label",
         "id":"23",
         "image_rotation":0,
         "original_height":720,
         "original_width":1280,
         "score":0.3958974778652191,
         "to_name":"image",
         "type":"rectanglelabels",
         "value":{
            "height":7.638888888888889,
            "rectanglelabels":[
               "Visual defects"
            ],
            "rotation":0,
            "width":4.84375,
            "x":52.5,
            "y":83.19444444444444
         }
      }
   ],
   "score":0.8146782057980696,
   "cluster":null,
   "neighbors":null,
   "mislabeling":0.0,
   "created_at":"2023-01-24T15:30:13.252432Z",
   "updated_at":"2023-01-24T15:30:13.252489Z",
   "task":77
}

@croche2574

@35grain did you ever find a solution? I'm having the same issue of no prediction score despite the prediction result structure looking correct.

@35grain

35grain commented Apr 26, 2023

@35grain did you ever find a solution? I'm having the same issue of no prediction score despite the prediction result structure looking correct.

Unfortunately not. Didn't have time to dig any deeper either :/

@hogepodge

I've found that if you attach scores both inside the result items and alongside the result (at the prediction root), the score will appear in the interface. I think that there are some recently introduced bugs that we're working on sorting out, and hopefully they will be cleared up soon.
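
A sketch of that layout, with the score both at the prediction root and inside each result item (placeholder values):

{
  "model_version": "my-model-v1",
  "score": 0.81,
  "result": [
    {
      "id": "abc123",
      "from_name": "label",
      "to_name": "image",
      "type": "rectanglelabels",
      "score": 0.81,
      "value": {
        "x": 30.4,
        "y": 24.4,
        "width": 12.7,
        "height": 12.6,
        "rotation": 0,
        "rectanglelabels": ["Visual defects"]
      }
    }
  ]
}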
