
Adding instruction to "helper.literal_eval_extended" for comparing custom object #317

Open
atah1991 opened this issue May 20, 2022 · 4 comments


atah1991 commented May 20, 2022

Hi! Thanks for this great lib! I ran into a bug in a custom case when comparing dictionaries whose keys are dataclasses (simplified example below). The output of DeepDiff shows the path to the differing fields as None.

from dataclasses import dataclass
from deepdiff import DeepDiff


@dataclass(frozen=True)
class ClassA:
    a: int = 1


@dataclass(frozen=True)
class ClassB:
    b: int = 2


left = {ClassA(): "ClassA"}
right = {ClassB(): "ClassB"}
DeepDiff(left, right, verbose_level=2)

The script above outputs:

stringify_param was not able to get a proper repr for "ClassB(b=2)". This object will be reported as None. Add instructions for this object to DeepDiff's helper.literal_eval_extended to make it work properly: malformed node or string: <_ast.Call object at 0x7f5f3abfa3a0>
stringify_param was not able to get a proper repr for "ClassA(a=1)". This object will be reported as None. Add instructions for this object to DeepDiff's helper.literal_eval_extended to make it work properly: malformed node or string: <_ast.Call object at 0x7f5f7845fc70>
{'dictionary_item_added': {None: 'ClassB'},
'dictionary_item_removed': {None: 'ClassA'}}

After digging into the code, I understand that the helper currently supports only the following cases:

LITERAL_EVAL_PRE_PROCESS = [
    ('Decimal(', ')', _eval_decimal),
    ('datetime.datetime(', ')', _eval_datetime),
    ('datetime.date(', ')', _eval_date),
]

Is there a way for the user to add a custom instruction to this list and pass it through the DeepDiff interface, so that the resulting diff shows the path (not None) to the fields that differ? Or maybe there is a better way to compare such objects? Note that I would like to compare them as dictionaries rather than iterating over keys and values and comparing them myself.
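(For reference, a hypothetical sketch of what appending to that list could look like. This is not a supported API; the assumption that each entry's function receives the text between the prefix and suffix is inferred from the Decimal/datetime entries above.)

from deepdiff import helper

def _eval_class_a(params):
    # Hypothetical: params would be the text inside the parentheses,
    # e.g. "a=1" for the repr "ClassA(a=1)".
    value = int(params.split('=')[1]) if params else 1
    return ClassA(a=value)

helper.LITERAL_EVAL_PRE_PROCESS.append(('ClassA(', ')', _eval_class_a))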

Thanks!

@atah1991
Author

Any comments or suggestions on the above? That would help enormously with my current project!

@seperman
Owner

Hi @atah1991
The problem here is that deepdiff wants to store a string representation of your paths, but it also wants to make sure that representation can be converted back into the actual path. For security reasons, we don't allow it to eval arbitrary code.

Hence, right now the eval is limited to the few cases in the list you posted (LITERAL_EVAL_PRE_PROCESS), beyond what Python's restricted eval (ast.literal_eval) allows.
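For context, this is what ast.literal_eval accepts and rejects; it is standard-library behavior, not deepdiff-specific:

from ast import literal_eval

literal_eval("{'a': 1}")     # OK: plain literals (numbers, strings, dicts, ...)
literal_eval("ClassA(a=1)")  # ValueError: malformed node or string: <ast.Call ...>

The second call fails in exactly the way the warnings above report.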

The problem is not specific to dataclasses. Even if you remove the dataclass decorators, you will still run into this issue whenever your keys are custom objects.

Maybe in the future we can add a flag to allow full eval instead of the restricted eval. Then it would work better in cases like this, where users are sure the data they are diffing doesn't contain arbitrary Python code.

@emonsler

Hi,

@atah1991 , did you find any workaround for this?

Modifying helper.py is problematic for my use case. I was getting the same warning for classes deriving from enum.Enum.

I seem to be able to work around the issue by defining a __repr__ for the class that returns

'"' + FULLY_QUALIFIED_PATH + "ModelB." + self.name + '"'

So it displays as

"some.org.atah1991.ModelB.a"

@seperman
Owner

We do allow a "safe" list for Delta objects. We can use the same logic to allow eval to be run on paths too.
https://zepworks.com/deepdiff/current/delta.html#delta-safe-to-import-parameter
I don't currently have the bandwidth to look into it. PRs are welcome!
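A rough sketch of that parameter, based on the linked docs (the dump path and the class path are placeholders):

from deepdiff import Delta

# Loading a serialized delta that references a custom type requires
# explicitly whitelisting that type:
delta = Delta(delta_path='delta.dump', safe_to_import={'some.org.atah1991.ClassA'})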
