Derivative quality "High" is always used #254

Open · sdumetz opened this issue Feb 27, 2024 · 3 comments

sdumetz (Contributor) commented Feb 27, 2024

The current derivative quality selection is based on a test of gl.MAX_TEXTURE_SIZE (as reported by WebGLRenderer). The test compares that value against static thresholds to determine the best model quality.
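
For reference, a minimal sketch of what this kind of static check looks like (the type name and cutoffs below are illustrative, not the actual Voyager source):

```typescript
import { WebGLRenderer } from "three";

// Illustrative quality ladder; names and thresholds are assumptions,
// not Voyager's actual implementation.
type TQuality = "Thumb" | "Low" | "Medium" | "High";

function selectQualityFromCaps(renderer: WebGLRenderer): TQuality {
    // three.js exposes gl.MAX_TEXTURE_SIZE here
    const maxTextureSize = renderer.capabilities.maxTextureSize;

    if (maxTextureSize >= 4096) return "High";   // met by virtually every current device
    if (maxTextureSize >= 2048) return "Medium";
    if (maxTextureSize >= 1024) return "Low";
    return "Thumb";
}
```

Because 4096 is met by essentially all current hardware, the first branch always wins, which is the behaviour described below.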

I think the values used (up to 4096 for High quality) made sense in 2019 when they were created, but nowadays 4096 is more or less a baseline and everyone ends up with the High quality model.

I couldn't find a definitive source to back this up, but see for example MDN's recommendations for WebGL or the OpenGL capabilities statistics. To give an idea, 4096 is supported by a Raspberry Pi 4.

Instead of updating those thresholds to larger, more up-to-date values, which would break all existing scenes that may have been optimized to match them (serving low quality content on reasonably capable devices), I'd rather design something more flexible.

Since the scene file is already designed to store texture sizes with assets (under Derivative[x].assets[x].imageSize), it should be possible to select an appropriate quality based on the actual sizes instead of constants. This might not be ideal, because it sets aside concerns of bandwidth and vertex count.
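
For context, an illustrative excerpt of what the relevant part of a scene file might look like (only quality, assets and imageSize are referenced above; the other field names are approximate and may not match the actual schema):

```typescript
// Sketch of a model's derivative list as stored in the scene file.
const derivatives = [
    {
        usage: "Web3D",
        quality: "High",
        assets: [{ uri: "model-high.glb", type: "Model", imageSize: 4096 }],
    },
    {
        usage: "Web3D",
        quality: "Low",
        assets: [{ uri: "model-low.glb", type: "Model", imageSize: 1024 }],
    },
];
```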

Another problem is computing the actual image size: not only do we have to iterate over all models to check individual sizes, we also can't know beforehand whether a model has just one diffuse map or many maps (which might themselves have varying sizes, not matching the imageSize attribute).

Maybe some other implementation has solved this in a better way?

I'm leaving this issue here to gather comments from anyone who has experience on the matter, implementation ideas, or requirements not listed here.

gjcope (Collaborator) commented Feb 27, 2024

There's a significant difference between what is supported and what makes for a good user experience for a model rendered in-browser. 4k textures add up quickly in memory. We've seen significant load issues with as few as 3 fairly low-poly models with 4k textures (3 channels), especially in mobile browsers. I haven't seen anything to suggest that 4k is not still a good choice for 'high' quality.
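
For scale, a rough back-of-the-envelope, assuming uncompressed RGBA8 textures with full mip chains (compressed GPU formats would lower these numbers considerably):

```typescript
// Rough GPU memory estimate for uncompressed RGBA8 textures with mipmaps.
const bytesPerTexel = 4;            // RGBA8
const mipOverhead = 4 / 3;          // full mip chain adds ~33%
const perTexture = 4096 * 4096 * bytesPerTexel * mipOverhead;   // ≈ 85 MiB

// 3 models x 3 channels = 9 maps
const sceneBytes = 3 * 3 * perTexture;
console.log((sceneBytes / 2 ** 20).toFixed(0) + " MiB");        // ≈ 768 MiB
```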

That being said, how the quality selection happens can definitely be improved. We have talked about something bandwidth driven but haven't had the time to pursue it.

sdumetz (Contributor, Author) commented Feb 28, 2024

The thing is, AFAIK GL_MAX_TEXTURE_SIZE reports the max size of all combined textures to be loaded (I'm not 100% clear on this, but my testing seems to support it). In any case it is generally a very large value (16k on any mobile with a reasonable GPU, 32k on most modern laptops).

In your example, 3 × 3 4k maps would amount to a 12k square map in total (a 3 × 3 grid of 4096 px tiles is 12288 px per side), i.e. not loading at all on an 8k device (a low-end smartphone) and near the graphical limit (so with lots of context loss) on mid-range smartphones, which might be what you observed?

Polygon count doesn't seem to be as much of a factor; I had no trouble loading up to a million polys on mobile as long as the maps stay small.

To summarize, I agree with you that 4k textures are a lot for mobile use, especially in multi-model scenes. The problem is that the lower qualities never get selected, so the 4k version gets served even on low/mid-range devices.

The best proposal I have for now (a rough sketch follows the list) is:

  • Compute the total expected map size of the scene for each Quality setting (using the declared imageSize of each model, multiplied by the expected number of textures, probably 3 or 4)
  • Compare this to GL_MAX_TEXTURE_SIZE, with a reasonable safety margin
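
A hedged sketch of what that could look like (the type names, channel count and safety margin below are assumptions for illustration, not existing Voyager code):

```typescript
// Sketch only: type names, channel count and safety margin are illustrative.
type TQuality = "Thumb" | "Low" | "Medium" | "High";

interface IDerivative {
    quality: TQuality;
    imageSize: number;            // declared texture edge size, e.g. 4096
}

const TEXTURE_CHANNELS = 3;       // assume diffuse + normal + occlusion per model
const SAFETY_MARGIN = 0.5;        // stay well below the reported hardware limit

// Pick the highest quality whose combined texture area, expressed as the edge
// of one equivalent square map, fits under GL_MAX_TEXTURE_SIZE with margin.
function pickSceneQuality(models: IDerivative[][], maxTextureSize: number): TQuality {
    const order: TQuality[] = ["High", "Medium", "Low", "Thumb"];

    for (const quality of order) {
        // total pixel area of all expected maps at this quality, across all models
        const totalArea = models.reduce((sum, derivatives) => {
            const match = derivatives.find(d => d.quality === quality);
            return match ? sum + TEXTURE_CHANNELS * match.imageSize ** 2 : sum;
        }, 0);

        const equivalentEdge = Math.sqrt(totalArea);
        if (equivalentEdge <= maxTextureSize * SAFETY_MARGIN) {
            return quality;
        }
    }
    return "Thumb";
}
```

With the example above (3 models × 3 channels of 4k maps), the equivalent edge comes out at sqrt(9 × 4096²) = 12288, matching the 12k figure mentioned earlier.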

I do not consider it a particularly good solution, nor do I think it fits every use case. In particular I would really like to be able to select the quality based on distance-to-camera for larger, "first person interior view" scenes that I'm working on.

gjcope (Collaborator) commented Feb 28, 2024

Yes, we were seeing context loss, but it was due to running out of graphics memory under the texture load. My response was related to the suggestion that what we define as a 'high' derivative should change. I'm totally on board with changing the way the initial derivative quality is chosen.

GL_MAX_TEXTURE_SIZE is the largest addressable texture dimension, so it is not necessarily tied to the total memory available. I did not implement that check, so I don't know the rationale behind it, but my guess is that it is an attempt to tie quality to device capabilities rather than scene stats. We are currently looking into KTX2 compression support, which would widen the gap between those two metrics if used.

Polygon count has definitely been an issue for us in providing acceptable load time and performance (mostly on mobile) but I think that's somewhat independent of this.

I don't see an issue with your proposal in general; it just seems very rough. I am wondering:

  1. In practice, how often would lower quality tiers actually get triggered?
  2. How much would this add to load time?

> In particular I would really like to be able to select the quality based on distance-to-camera for larger, "first person interior view" scenes that I'm working on.

This should be doable now. You can load derivative qualities on-the-fly, similar to how we replace the 'thumb' with the 'high'.
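
For what it's worth, a purely hypothetical sketch of that idea (the loadDerivative callback and the distance cutoffs are stand-ins, not Voyager's actual API):

```typescript
import { Object3D, PerspectiveCamera, Vector3 } from "three";

type TQuality = "Thumb" | "Low" | "Medium" | "High";

// Arbitrary distance cutoffs, for illustration only.
function qualityForDistance(distance: number): TQuality {
    if (distance < 5) return "High";
    if (distance < 15) return "Medium";
    if (distance < 40) return "Low";
    return "Thumb";
}

// Re-evaluate a model's quality from its distance to the camera and request a
// derivative swap when it changes, mirroring the existing thumb -> high switch.
function updateModelQuality(
    camera: PerspectiveCamera,
    model: Object3D,
    current: TQuality,
    loadDerivative: (quality: TQuality) => void,   // assumed loader hook
): TQuality {
    const distance = camera.position.distanceTo(model.getWorldPosition(new Vector3()));
    const wanted = qualityForDistance(distance);

    if (wanted !== current) {
        loadDerivative(wanted);
    }
    return wanted;
}
```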
