simplifySloppy overestimating actual triangle count #133

Open
tomrosling opened this issue Apr 19, 2020 · 2 comments
tomrosling commented Apr 19, 2020

Hi, I'm working on a project that automatically simplifies arbitrary meshes to various triangle counts. I've been using simplifySloppy because it seems more reliable on poor-quality input meshes and can reach the very low triangle counts needed for low LODs.

I found that when reducing a highly detailed mesh to a very low triangle count, the grid size chosen sometimes ended up larger than it needed to be. The reason is that countTriangles() doesn't account for triangles that become duplicates after quantization and are then eliminated by filterTriangles(), so it overestimates the final count.

I've made a fix for it here: virtalis@62ab34f. Would you like me to open a pull request, or was it left this way intentionally for speed? The fix builds a hash table of triangles on every iteration of the grid-size search loop at the start of the algorithm, which will obviously slow it down a bit compared to the simple sum it was doing before.
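To illustrate what I mean, here's a minimal sketch of the counting (hypothetical names; not the exact code from the commit, which you can read at the link above):

```cpp
#include <array>
#include <cstddef>
#include <unordered_set>

// FNV-1a over the three cell ids of a triangle
struct CellTriangleHash
{
	size_t operator()(const std::array<unsigned int, 3>& t) const
	{
		size_t h = 14695981039346656037ull;
		for (unsigned int id : t)
		{
			h ^= id;
			h *= 1099511628211ull;
		}
		return h;
	}
};

// Counts the triangles that would survive filterTriangles(): triangles that
// quantization collapses to a line or point are skipped, and triangles that
// land on the same three grid cells are only counted once.
// vertex_cells[] maps each source vertex to its quantized grid cell id.
static size_t countUniqueTriangles(const unsigned int* vertex_cells, const unsigned int* indices, size_t index_count)
{
	std::unordered_set<std::array<unsigned int, 3>, CellTriangleHash> seen;
	seen.reserve(index_count / 3);

	for (size_t i = 0; i < index_count; i += 3)
	{
		std::array<unsigned int, 3> t = {
		    vertex_cells[indices[i + 0]],
		    vertex_cells[indices[i + 1]],
		    vertex_cells[indices[i + 2]]};

		// degenerate after quantization
		if (t[0] == t[1] || t[0] == t[2] || t[1] == t[2])
			continue;

		// rotate so the smallest cell id comes first; duplicate triangles
		// then compare equal without changing the winding order
		if (t[1] < t[0] && t[1] < t[2])
			t = {t[1], t[2], t[0]};
		else if (t[2] < t[0] && t[2] < t[1])
			t = {t[2], t[0], t[1]};

		seen.insert(t);
	}

	return seen.size();
}
```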

--

I've also made some changes to prevent vertices from being collapsed together when their normals differ drastically, by extending the grid with three additional dimensions for normal space: https://github.com/virtalis/meshoptimizer/commits/simplifySloppy-normals. I'm not sure if these are changes you'd be interested in, but I can open another issue to discuss them if you are.
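The core idea is roughly this (a sketch, not the fork's actual code; the names and the bit count are made up for illustration):

```cpp
// Maps a component of a unit normal from [-1, 1] to [0, range], clamping
// slightly out-of-range input.
static unsigned int quantizeUnit(float v, int range)
{
	float t = v * 0.5f + 0.5f;
	t = t < 0.f ? 0.f : (t > 1.f ? 1.f : t);
	return (unsigned int)(t * range + 0.5f);
}

// Packs a coarsely quantized normal into a small integer key; 2 bits per
// axis here is an arbitrary choice.
static unsigned int packNormalKey(float nx, float ny, float nz, int bits)
{
	int range = (1 << bits) - 1;
	return (quantizeUnit(nx, range) << (2 * bits)) | (quantizeUnit(ny, range) << bits) | quantizeUnit(nz, range);
}

// Combined cell id: position cell in the low bits, normal key in the high
// bits. Two vertices only share a cell (and can only be collapsed together)
// when both their positions and their normals quantize to the same values,
// effectively turning the 3D position grid into a 6D position+normal grid.
static unsigned long long cellIdWithNormal(unsigned int position_cell, unsigned int normal_key)
{
	return ((unsigned long long)normal_key << 32) | position_cell;
}
```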

zeux commented Apr 20, 2020

The choice of filtering triangles only after determining the grid size was intentional for performance; my recollection is that the post-filter delta I've commonly observed is a ~5-10% reduction, but maybe that's different for really low triangle count results.

There are some planned changes to the simplifier, so the performance-vs-ratio tradeoff may become insignificant enough that we might as well do full filtering; I'm not sure yet. One other thing I thought of after this issue was opened: maybe it's sufficient to measure the precise triangle count at grid_size+1 after the binary search loop...
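Something along these lines (just a sketch with hypothetical helpers, not committed code):

```cpp
#include <cstddef>

// hypothetical helpers:
// quantizes positions and fills one grid cell id per vertex
void computeVertexCells(unsigned int* cells, const float* vertices, size_t vertex_count, size_t grid_size);
// exact post-filter triangle count, e.g. the hash-based counting from above
size_t countUniqueTriangles(const unsigned int* cells, const unsigned int* indices, size_t index_count);

// After the binary search settles on grid_size using the cheap pre-filter
// count, check the exact count one step up: if it still fits the target,
// the denser grid lands closer to the requested triangle count.
size_t refineGridSize(size_t grid_size, size_t target_triangles,
                      unsigned int* cells,
                      const float* vertices, size_t vertex_count,
                      const unsigned int* indices, size_t index_count)
{
	computeVertexCells(cells, vertices, vertex_count, grid_size + 1);
	size_t exact = countUniqueTriangles(cells, indices, index_count);

	return exact <= target_triangles ? grid_size + 1 : grid_size;
}
```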

As for normals: I've briefly looked at your changes. I'm currently trying to prevent both simplification algorithms from generating new vertices, and without that you can't really include normals as a separate dimension in the grid, because you'd no longer be able to merge the positions of the vertices. I have some plans to make the sloppy simplification more cognizant of discontinuities in the future.

tomrosling commented

Yeah, it still seems to be in the 5-10% ballpark. But once that delta pushes the search over a threshold, making grid_size one smaller can have quite a significant effect on the actual triangle count (at extreme simplification ratios, when grid_size < 10 or so). It's quite important for my use case to get as close as possible to the target triangle count, and the performance difference isn't an issue since it's an offline step (and the algorithm is already very fast). Having said that, I think only checking grid_size+1 would probably be enough to alleviate the issue in most cases, or maybe even a linear search after the first loop to make it more robust; something like the sketch below.
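(Same hypothetical helpers as in the earlier sketches, with 1024 assumed as the maximum grid resolution:)

```cpp
#include <cstddef>

// hypothetical helpers as before
void computeVertexCells(unsigned int* cells, const float* vertices, size_t vertex_count, size_t grid_size);
size_t countUniqueTriangles(const unsigned int* cells, const unsigned int* indices, size_t index_count);

// Walks grid_size upward from the binary search result while the exact
// post-filter count still fits the target. At extreme ratios grid_size is
// small, so this only costs a handful of extra counting passes.
size_t linearRefine(size_t grid_size, size_t target_triangles,
                    unsigned int* cells,
                    const float* vertices, size_t vertex_count,
                    const unsigned int* indices, size_t index_count)
{
	while (grid_size < 1024)
	{
		computeVertexCells(cells, vertices, vertex_count, grid_size + 1);
		if (countUniqueTriangles(cells, indices, index_count) > target_triangles)
			break;
		++grid_size;
	}
	return grid_size;
}
```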

Re normals: yes, having to modify the vertex data is a downside to it. I'm happy to keep my changes in the fork and check back in the future for any changes you might make.
