
Suggest: add a table whether each type can be validated or not #1025

Open
samchon opened this issue Feb 8, 2023 · 3 comments

@samchon (Contributor) commented Feb 8, 2023

I think how reliably and accurately a type validator handles various types matters more than how fast or slow it is.

However, adding ever more benchmark graphs for various types does not seem like a good approach.

Therefore, how about adding a table like the one below?

If you agree, I can provide many more test types to validate and also implement the validation schemas for each library.

| Components | typia | TypeBox | ajv | io-ts | zod | C.V. |
| --- | --- | --- | --- | --- | --- | --- |
| Easy to use | | | | | | |
| Object (simple) | | | | | | |
| Object (hierarchical) | | | | | | |
| Object (recursive) | | | | | | |
| Object (union, implicit) | | | | | | |
| Object (union, explicit) | | | | | | |
| Object (additional tags) | | | | | | |
| Object (template literal types) | | | | | | |
| Object (dynamic properties) | | | | | | |
| Array (rest tuple) | | | | | | |
| Array (hierarchical) | | | | | | |
| Array (recursive) | | | | | | |
| Array (recursive, union) | | | | | | |
| Array (R+U, implicit) | | | | | | |
| Ultimate Union Type | | | | | | |
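
To make one of these rows concrete, here is a minimal sketch of what an "Object (recursive)" test type could look like. The `Category` type and its schema are hypothetical examples of mine, not schematics from the actual suite; zod is shown because it needs an explicit `z.lazy()` to express self-reference, whereas typia derives the check from the TypeScript type itself at compile time.

```ts
import { z } from "zod";

// Hypothetical recursive test type: a category tree.
interface Category {
  name: string;
  children: Category[];
}

// zod expresses self-reference with z.lazy(); the explicit
// z.ZodType<Category> annotation breaks the type-inference cycle.
const CategorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    name: z.string(),
    children: z.array(CategorySchema),
  })
);

const result = CategorySchema.safeParse({
  name: "root",
  children: [{ name: "leaf", children: [] }],
});
console.log(result.success); // true
```
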
@sinclairzx81 (Contributor) commented Feb 8, 2023

@samchon Hi, just going to chime in on this one.

While I certainly think this project could benefit from additional benchmarks and tests, I do not think they should be submitted by library authors: the benefit and value of community projects like this comes primarily from independent external contributors submitting tests without author intervention, specifically to establish an accurate lens on performance and to mitigate potential bias.

On this point, if the typia benchmarks are being put forth (having reviewed them independently, as well as submitted TypeBox schematics to Typia here, here and here for alignment and comparative measurement), I do not feel these would be good candidates for cross-library benchmarking or testing, for the following reasons:

  • The schematics are highly coupled to the performance and assertion criteria as implemented in typia.
  • The schematics are arbitrarily complex and not very helpful when trying to identify where performance disparities exist.
  • The schematics would rule out a significant amount of libraries contributed to this project.
  • This project (afaik) is a benchmarking project, NOT a validation unit test suite.

Also, for the reporting table, I do feel quite strongly about not showing RED marks next to each project listed here, particularly if the testing criteria depend on each library adopting the specific assertion criteria implemented by typia (there is much room for interpreting validation semantics across libraries).
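
To make the point about divergent validation semantics concrete, here is a small sketch of my own (not from this thread): the same input run through zod and Ajv, whose defaults for unknown object properties differ.

```ts
import { z } from "zod";
import Ajv from "ajv";

const input = { id: 1, extra: "surplus" };

// zod: parsing succeeds, but unknown keys are stripped by default.
const User = z.object({ id: z.number() });
console.log(User.parse(input)); // { id: 1 }

// Ajv: the same input is valid and `extra` is kept, unless the
// schema opts in with `additionalProperties: false`.
const ajv = new Ajv();
const validate = ajv.compile({
  type: "object",
  properties: { id: { type: "number" } },
  required: ["id"],
});
console.log(validate(input)); // true
```

Neither behavior is wrong; a table that marks one of them RED encodes a semantic choice, not a correctness failure.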

Again, while I'm certainly for the idea of seeing additional benchmarks (or tests) added here, I do feel these should ideally be defined independently (and openly), with the validation criteria made clear and set low enough that all currently submitted libraries can participate. In addition, if more sophisticated schematics are deemed warranted (of which I have some interest), my preference would be to omit failing projects from the result tables rather than marking them RED, which may be publicly discouraging to project authors who have contributed their free time and effort to this arena.

For establishing a "minimum viable suite" of schematics, I think what will be less divisive is a collaborative effort where interested parties define clearly what the schematics are, what they measure, and what techniques may be applicable to attain better performance (possibly through GH discussions). This would set fair and reasonable performance criteria and hopefully help other developers attain robust, high-performance assertions in their respective projects, mine included.
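
As a sketch of what a baseline schematic in such a suite might look like (hypothetical; the thread does not define one), here is the same flat object expressed in zod and TypeBox, a shape every library in the table above should be able to represent:

```ts
import { z } from "zod";
import { Type } from "@sinclair/typebox";
import { TypeCompiler } from "@sinclair/typebox/compiler";

// zod version of the baseline shape.
const ZUser = z.object({
  id: z.number(),
  name: z.string(),
  tags: z.array(z.string()),
});

// TypeBox version of the same shape, compiled once for fast checks.
const TUser = Type.Object({
  id: Type.Number(),
  name: Type.String(),
  tags: Type.Array(Type.String()),
});
const CheckUser = TypeCompiler.Compile(TUser);

const input = { id: 1, name: "a", tags: ["x"] };
console.log(ZUser.safeParse(input).success); // true
console.log(CheckUser.Check(input));         // true
```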

Just some of my thoughts on this one.
S

@moltar (Owner) commented Feb 13, 2023

@samchon I appreciate your involvement and I know you have put a lot of effort into thinking about this. Thank you for your contribution!

I do largely agree with @sinclairzx81 that the idea is to keep the tests as impartial as possible; that's why they have remained so primitive up to this point.

I think the way to move forward is to discuss each independent test addition we'd like to make as a separate issue, and to evaluate what value each test adds and how it will affect the rest of the suite.

Again, I am not against change, but we need to think about it more holistically. Tbh, my initial test suite was not thought out much at all; I just cobbled together some rough ideas and off it went to be released.

@moltar (Owner) commented Feb 13, 2023

@marcj do you have any input? I remember you had some strong opinions before too. Thanks!
