
Performance-related benchmarks for Serialization and Deserialization #89

Answered by eigenein
matrixbegins asked this question in Q&A

Hi @matrixbegins,

This is barely a fair comparison, to be honest. :) You're comparing pure Python code performance in pure_protobuf to compiled Rust code performance in orjson (which, by the way, is the fastest JSON package for Python AFAIK). A 15x difference is quite reasonable here.
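For illustration, a minimal timing sketch of such a comparison, assuming the pure-protobuf 2.x dataclass API; the Event class and payload here are hypothetical stand-ins, not the asker's actual data:

```python
import timeit
from dataclasses import dataclass

import orjson
from pure_protobuf.dataclasses_ import field, message
from pure_protobuf.types import int32


@message
@dataclass
class Event:
    name: str = field(1, default='')
    count: int32 = field(2, default=0)


event = Event(name='click', count=42)
payload = {'name': 'click', 'count': 42}

# orjson serializes the dict in compiled Rust code, while pure_protobuf walks
# the dataclass fields in pure Python, so a sizeable gap is expected.
print('pure_protobuf:', timeit.timeit(event.dumps, number=100_000))
print('orjson:       ', timeit.timeit(lambda: orjson.dumps(payload), number=100_000))
```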

Could it be because we are compiling the proto schema every time we call obj.dumps()?
Can we somehow cache the schema generation if that's what is making it slow?

There's no such thing as «schema» in pure_protobuf. Basically, the @message decorator is already doing what you want: constructing a message serializer from a class definition. This only happens once as soon as Python executes the module. The dumps/l…
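For reference, a minimal sketch of that pattern, assuming the pure-protobuf 2.x dataclass API; the SearchRequest class is a hypothetical example, not the asker's schema:

```python
from dataclasses import dataclass

from pure_protobuf.dataclasses_ import field, message
from pure_protobuf.types import int32


@message      # the message serializer is constructed here, once, when the module is imported
@dataclass
class SearchRequest:
    query: str = field(1, default='')
    page_number: int32 = field(2, default=0)


request = SearchRequest(query='hello', page_number=1)

# dumps()/loads() reuse the serializers already built by @message; nothing is
# recompiled per call, so there is no schema generation left to cache.
encoded = request.dumps()
decoded = SearchRequest.loads(encoded)
assert decoded.query == 'hello'
```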

Answer selected by matrixbegins
Converted from issue

This discussion was converted from issue #88 on October 17, 2021 10:37.