Add attribute.divisor field for instanced rendering #10396
base: dev
Conversation
This pull request is automatically built and testable in CodeSandbox. Latest deployment of this branch, based on commit 387ac12.
Wait, 0 is valid? Oh, now I see how it can be used...
Good job
Cannot be done in WebGPU. Any ideas on how to emulate it by changing the stride instead?
Maybe the user can just multiply the stride by their divisor?
I wasn't able to test non-zero divisors; 0 obviously can be emulated by changing `instanced`.
@ivanpopelyshev Could you share the code that is breaking for you? Regarding WebGPU - I don't have much experience with it, but a cursory Google search shows no way to increment a buffer once every N instances.
OK, then divisor=0 is not our problem; it's an edge case where instanced:false can be used. Non-zero divisor: is it right that we can just multiply the stride instead?
I'm not sure I understand. The stride defines "how many bytes the buffer advances per instance", not "how many instances until the buffer advances".
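To make the stride point concrete, here is a minimal sketch (plain JavaScript modelling attribute fetching, not PixiJS or WebGL code; the function names are hypothetical) of why multiplying the stride cannot emulate a divisor: the divisor makes an element *repeat* across instances, while the stride only scales how far a single step advances.

```javascript
// Hypothetical model: with a divisor, instance i reads element floor(i / divisor),
// so its byte offset is floor(i / divisor) * stride.
const byteOffsetWithDivisor = (i, stride, divisor) =>
  Math.floor(i / divisor) * stride;

// Multiplying the stride instead makes instance i read offset i * (stride * divisor),
// skipping data rather than repeating it.
const byteOffsetWithScaledStride = (i, stride, divisor) =>
  i * (stride * divisor);

// divisor = 2, stride = 16 bytes:
console.log([0, 1, 2, 3].map(i => byteOffsetWithDivisor(i, 16, 2)));
// → [0, 0, 16, 16]  (instances 0 and 1 share element 0, as intended)
console.log([0, 1, 2, 3].map(i => byteOffsetWithScaledStride(i, 16, 2)));
// → [0, 32, 64, 96] (every instance gets a different, farther element)
```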
Oh wait, I didn't understand divisor at all. OK. How do we do it on WebGPU?
Like I said, I don't really know WebGPU (so maybe you could tag somebody more knowledgeable here), but it seems like this is not possible. As I see it, there are two options: either disallow setting the divisor entirely and lose interoperability with the underlying WebGL API for some niche edge cases, or make the divisor field WebGL-only, preferably with a note about it in the docs.
I like niche cases; I wonder what's the case for divisor.
You mean for a divisor other than 0 or 1? If you have attributes that advance at a slower rate than others - for example, changing colors every two instances while switching transformation matrices every instance.
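A small sketch of what "colors every two instances, matrices every instance" means (plain JavaScript simulating which buffer element each instance reads; the data and names here are illustrative, not actual PixiJS code):

```javascript
// Instance i reads element floor(i / divisor) of an instanced attribute's buffer.
const elementFor = (instance, divisor) => Math.floor(instance / divisor);

const matrices = ['M0', 'M1', 'M2', 'M3']; // per-instance data, divisor = 1
const colors   = ['red', 'blue'];          // shared in pairs, divisor = 2

const drawn = [0, 1, 2, 3].map(i => ({
  matrix: matrices[elementFor(i, 1)],
  color:  colors[elementFor(i, 2)],
}));
// Instances 0 and 1 are 'red', instances 2 and 3 are 'blue',
// while every instance still gets its own matrix.
console.log(drawn);
```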
OK, I understand it for fancy demos, but I can't imagine a real case where colors advance exactly every two instances.
If you want to render large amounts of elements using a limited color palette, the most efficient way to do so is by using an attribute divisor > 1; the only alternative is wasting large amounts of GPU memory on duplicating the same colors over and over. It's easy to dismiss any use case that you don't understand as a "fancy demo", but I still haven't heard any actual problems with my implementation, which I would be happy to fix. This discussion seems to be getting rather unproductive, and I would like to hear some other maintainer's opinion on the issue @Zyie @GoodBoyDigital
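The memory argument can be quantified with a quick back-of-the-envelope calculation (the instance count and color format below are arbitrary assumptions for illustration):

```javascript
// Assumed numbers: one million instances, RGBA8 colors (4 bytes each).
const instances = 1_000_000;
const bytesPerColor = 4;

// divisor = 1: one color stored per instance.
const perInstanceBytes = instances * bytesPerColor; // 4,000,000 bytes

// divisor = 4: one color stored per group of four instances.
const divisor = 4;
const withDivisorBytes = Math.ceil(instances / divisor) * bytesPerColor; // 1,000,000 bytes

console.log(perInstanceBytes / withDivisorBytes); // → 4 (a 4x reduction)
```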
Ah, yes, that's the real case :) I have that in my current project; I solved it with float/int textures, I didn't even think about divisor like that. Currently float textures / WebGPU storage buffers aren't supported well enough in pixi :( Anyway, if you are going into that kind of optimization, I recommend cloning pixi (v8 is much more "clonable" than v7), using the build from your own git repo (it can be set up for npm), and updating it from upstream sometimes. My stance on the problem: we should not support features that lead people into very limited dead ends; we have to give workarounds through better ways. For example, I would completely remove the use of the big UBOs that our graphics geometry has - storage buffers / float textures are much better and have cleaner code. If @Zyie or @GoodBoyDigital support divisor, I'll approve too, but I will try to remove it as soon as storage is introduced.
Heya! Thanks for the PR @codedpalette. It's interesting: WebGPU does not support divisors in its instancing, which means this would only work on WebGL. Was digging around and it seems that some modern APIs like Vulkan have also decided not to take this feature. I guess storage buffers / float textures would be the alternative in this case! Either way, I see no harm in adding this. A couple of tweaks to make sure it does not catch WebGPU peeps out:
thanks!
heya @codedpalette! Just following up here - let me know if you are cool to modify this PR so we can merge it in :) Cheers man!
hey @GoodBoyDigital, thanks for the constructive feedback! I've updated the docs on the divisor field to specify that it is a WebGL-only feature, and added a warning to the WebGPU pipeline.
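Such a warning could look roughly like this sketch (the function name, attribute shape, and message text here are hypothetical, not the actual PixiJS pipeline code):

```javascript
// Hypothetical validation pass: collect warnings for divisors that WebGPU
// cannot express. WebGPU vertex buffers only have step modes 'vertex'
// (per-vertex) and 'instance' (advance once per instance), so any
// divisor greater than 1 has no equivalent.
function checkAttributeDivisors(attributes) {
  const warnings = [];
  for (const [name, attr] of Object.entries(attributes)) {
    if (attr.divisor !== undefined && attr.divisor > 1) {
      warnings.push(
        `Attribute "${name}" has divisor ${attr.divisor}, ` +
        `which is not supported by WebGPU and will be ignored.`
      );
    }
  }
  return warnings;
}

// Example: one offending attribute, one fine attribute.
const warnings = checkAttributeDivisors({
  aColor: { divisor: 2 },
  aPosition: {},
});
console.log(warnings);
```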
looks good, a couple of minor bits to adjust and we can get this merged! Thanks @codedpalette
@GoodBoyDigital Done!
thanks for your efforts on this one @codedpalette ! 👍
Description of change
This adds a `divisor` field to the `Attribute` interface for instanced rendering.

Pre-Merge Checklist

- Lint passes (`npm run lint`)
- Tests pass (`npm run test`)
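As a rough illustration of the added field, an attribute descriptor might carry the divisor like this (the object shape below is an assumption for illustration, not the exact PixiJS `Attribute` interface):

```javascript
// Hypothetical attribute descriptor using the new divisor field.
const colorAttribute = {
  buffer: new Float32Array([1, 0, 0, 0, 0, 1]), // two RGB colors
  size: 3,        // components per element
  instance: true, // advance per instance rather than per vertex
  divisor: 2,     // WebGL only: advance once every 2 instances
};

console.log(colorAttribute.divisor);
```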