
[suggestion]use jsoncomment instead of json in decode #182

Open
bohea opened this issue Jun 30, 2023 · 6 comments
Comments

@bohea

bohea commented Jun 30, 2023

LLMs sometimes return malformed JSON; the most common problem is an extra trailing comma.

example:

import json
text = '{"a": 1, "b":{"foo": 1, "bar": 2,}}'
data = json.loads(text)  # raises json.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 34 (char 33)

With the jsoncomment package, the same text parses:

from jsoncomment import JsonComment

parser = JsonComment()  # renamed to avoid shadowing the stdlib json module
text = '{"a": 1, "b": {"foo": 1, "bar": 2,}}'
data = parser.loads(text)  # decodes successfully: {'a': 1, 'b': {'foo': 1, 'bar': 2}}
@eyurtsev
Owner

Hi @bohea thanks for the suggestion!

I am open to adding another encoder that uses jsoncomment.

Two questions:

  1. Are you including examples and prompt instructions telling the model not to include comments? If so, are the model outputs actually reasonable even though the model fails to omit comments?

  2. Are you looking for JSON comment functionality only for the decoding path? Or does your use case benefit from adding comments during encoding?


Is jsoncomment still maintained? https://pypi.org/project/jsoncomment/#data

The project page leads to a 404. Let me know if you're able to find a project page or any other well maintained library that offers this functionality.

If not, a jsoncomment-style implementation could be added directly to kor using lark. (Let me know if you're interested in working on this.)

An alternative approach would be to introduce another decoder that first strips away comments using a regexp and then delegates to JSON.
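A minimal sketch of that regexp-then-delegate idea (the `loads_lenient` helper name is hypothetical, not part of kor):

```python
import json
import re

# Hypothetical helper, not part of kor: strip trailing commas that
# appear immediately before a closing brace/bracket, then delegate
# to the standard json decoder.
_TRAILING_COMMA = re.compile(r",\s*([}\]])")

def loads_lenient(text: str):
    # Caveat: this naive regexp also matches inside string literals,
    # e.g. '{"k": "a,]"}' would be corrupted -- a robust version
    # would need a real tokenizer (e.g. built with lark).
    cleaned = _TRAILING_COMMA.sub(r"\1", text)
    return json.loads(cleaned)

print(loads_lenient('{"a": 1, "b": {"foo": 1, "bar": 2,}}'))
# -> {'a': 1, 'b': {'foo': 1, 'bar': 2}}
```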

@bohea
Author

bohea commented Jul 4, 2023

It's not about comments in JSON, but about extra commas in the LLM's response JSON, which make JSON decoding fail.

@bohea
Author

bohea commented Jul 4, 2023

I haven't found a project page yet; it seems the author only uploaded the package to PyPI.

@bohea
Author

bohea commented Jul 4, 2023

By the way, I have found that removing the json tag during schema encoding makes JSON decoding work better, because the LLM's raw response often omits the closing tag ("</json>") at the end (not due to token limits).

e.g.

<json>{"a": 1, "b":{"foo": 1, "bar": 2}}

The JSON decoder first unwraps the json tag using a regexp, but if there is no </json> at the end, the regexp matches nothing, so decoding fails.
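A sketch of a more forgiving unwrap regexp that treats the closing tag as optional (the pattern and function name are illustrative, not kor's actual implementation):

```python
import re

# Illustrative pattern, not kor's actual one: accept either a proper
# </json> close tag or end-of-string, so a response that drops the
# closing tag still matches.
_JSON_TAG = re.compile(r"<json>(.*?)(?:</json>|$)", re.DOTALL)

def unwrap_json_tag(text: str) -> str:
    match = _JSON_TAG.search(text)
    # Fall back to the raw text when no <json> tag is present at all.
    return match.group(1).strip() if match else text

print(unwrap_json_tag('<json>{"a": 1, "b": {"foo": 1, "bar": 2}}'))
# -> {"a": 1, "b": {"foo": 1, "bar": 2}}
```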

@eyurtsev
Owner

eyurtsev commented Jul 6, 2023

@bohea apologies for the delayed responses -- I'm on vacation until the end of July, so I only have limited computer access.

A few questions:

  • do you have benchmarking results?
  • which LLMs are you testing with?
  • are you including examples?

My personal experience:

  • I experimented mostly with OpenAI text-davinci-003, gpt-3.5-turbo, and Claude. My sense was that text-davinci-003 and Claude were significantly better than gpt-3.5-turbo.
  • My experience with the <json> and </json> wrappers was that the OpenAI models often included explanations after the JSON section, making it more difficult to identify the JSON section correctly (or requiring some sort of hacks). The <json> and </json> tags significantly reduced parsing errors as far as I could tell.

I unfortunately don't have any benchmark datasets, so all of my conclusions should be treated as anecdotal, but based on my experience I don't want to change the default behavior of including the tag without quantitative evidence that it improves results.

We should definitely make the presence of the tags controllable by a flag, though -- that will let the user decide how the data should be encoded.

> it's not about comments in json, but extra commas in llm's response json which fails in json decode

This sounds like it could improve extraction in some cases and make it worse in other cases (extracting incorrect information). Is this not the case?

@bohea
Author

bohea commented Jul 7, 2023

@eyurtsev thanks for your response

  • do you have benchmarking results? -- not yet. I did a simple count: ChatGPT has about a 50% chance of not adding the json tag, so I simply set use_tag = False for JSON encoding and use_tag = True for CSV encoding.
  • which LLMs are you testing with? -- mostly gpt-3.5-turbo-16k
  • are you including examples? -- no examples included

You were right to ask the LLM to add the csv/json tag; it's ChatGPT's problem that it doesn't follow the instruction (maybe my text is too long).
I didn't know text-davinci-003 might be better than ChatGPT at extraction. I need ChatGPT's reasoning abilities, but I will give text-davinci-003 a try.
