
Multiple Animated Poses Per Model #701

Open
bjornbytes opened this issue Sep 11, 2023 · 4 comments

Comments

@bjornbytes
Owner

Currently, animated models store only a single copy of their vertices, which gets animated by a compute shader. This means a single Model object can only be rendered with a single animation per frame. If you set up a different animated pose and draw the model a second time, it will use the animation from the first draw.

This is really confusing and annoying. Model:clone was recently added to alleviate this a bit, by letting you create lightweight copies of a Model, each with its own set of vertices and animation state. However, it's still not really acceptable to force people to manage multiple objects like this.
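The limitation can be sketched in C. This is an illustrative model, not LÖVR's actual internals (the names `Model`, `DrawCall`, `model_animate`, `pass_draw`, and `pass_submit` are all hypothetical): because the model holds a single set of joint transforms and the skinning work happens at submit time, every draw recorded during the frame resolves to whatever pose was set last.

```c
#include <assert.h>
#include <stddef.h>

// Hypothetical sketch of the current limitation: one shared animation state
// per Model, read once at submit time.

typedef struct {
  float pose; // stand-in for the model's single set of joint transforms
} Model;

typedef struct {
  Model* model;
  float resolvedPose; // the pose the compute shader actually skins with
} DrawCall;

static void model_animate(Model* model, float pose) {
  model->pose = pose; // overwrites the one shared animation state
}

static void pass_draw(DrawCall* draw, Model* model) {
  draw->model = model; // draws only reference the model...
}

static void pass_submit(DrawCall* draws, int count) {
  for (int i = 0; i < count; i++) {
    // ...so the pose is read here, after all draws were recorded.
    draws[i].resolvedPose = draws[i].model->pose;
  }
}
```

Animating to one pose, drawing, then animating to a second pose and drawing again leaves both draws skinned with the second pose, which is exactly the surprise described above.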

LÖVR should probably do the work behind the scenes to support multiple animated poses for a single Model object. It improves usability, and LÖVR can probably manage multiple mesh copies internally better than Lua can.

At a high level, each animated pose rendered during a frame would allocate and use its own dedicated set of vertices. In pathological scenarios this can start to use huge amounts of memory.
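To make "huge amounts of memory" concrete, here is a quick worst-case estimate. The draw and vertex counts are hypothetical; the 20-byte vertex size comes from the f32x3 + un10x3 layout in the notes below.

```c
#include <stdint.h>

// Illustrative worst-case arithmetic: every animated pose drawn in a frame
// gets its own copy of the animated vertex attributes.
static uint64_t animatedVertexMemory(uint64_t draws, uint64_t vertices) {
  // 12 bytes f32x3 position + 4 bytes un10x3 normal + 4 bytes un10x3 tangent
  uint64_t bytesPerVertex = 12 + 4 + 4;
  return draws * vertices * bytesPerVertex;
}
```

For example, 100 animated draws of a 50,000-vertex model would need 100 MB of animated vertex memory for that one frame.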

Random notes about implementation:

  • Try to separate animated data (positions, normals, tangents) from non-animated data (UVs, colors) and compress it as much as possible, so that these extra copies of meshes use as little memory as possible. f32x3 positions (12 bytes) plus un10x3 normals and tangents (4 bytes each) would be 20 bytes per animated vertex. Although if UVs/colors are small enough, the extra complexity may not be worth it.
  • Probably pool these animated vertices globally instead of per-model. This reduces overall memory usage and might make it easier to batch the compute dispatches.
  • Culling information could be used to skip animation (and therefore vertex allocation) for models that aren't visible. However, culling information isn't reliable for animated models because animation changes the bounding box.
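The split in the first bullet could look like the following struct pair. This is a sketch under the layout assumptions above (field names and the un16x2/un8x4 choices for UVs and colors are illustrative, not decided anywhere in this thread):

```c
#include <stdint.h>

// Attributes the skinning compute shader rewrites every frame; this is the
// part that gets duplicated per animated pose.
typedef struct {
  float position[3];  // f32x3: 12 bytes
  uint32_t normal;    // un10x3 packed as 10-10-10-2: 4 bytes
  uint32_t tangent;   // un10x3 packed as 10-10-10-2: 4 bytes
} AnimatedVertex;     // 20 bytes per animated vertex

// Attributes that never change during animation; shared by all poses, so
// they are stored once per model.
typedef struct {
  uint16_t uv[2];     // e.g. un16x2 texture coordinates: 4 bytes
  uint8_t color[4];   // e.g. un8x4 vertex color: 4 bytes
} StaticVertex;       // 8 bytes, never duplicated per pose
```

With this split, each extra pose costs 20 bytes per vertex instead of 28, at the cost of binding two vertex buffers per draw.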
@immortalx74

I've been using Model:clone in my project and it has worked quite well. I can't speak to performance because I only have a couple of the same model "alive" at a time. The API as it stands makes sense: you create instances of the same model and animate each one however you want.
So I guess you mean you're planning just internal changes, for performance reasons?

@bjornbytes
Owner Author

It's helpful to hear that Model:clone is working for you! Yeah, I think the main goal is improved usability. It's pretty easy to accidentally draw an animated model twice and get confused about why the animations break. Model:clone provides a workaround, but I was imagining people getting frustrated juggling all these model clones.

@bjornbytes
Owner Author

This made me think of something else too: multithreading. Multiple threads recording render passes can't all animate the same Model object, because it only has one animation state, so they'd need to use different clones of the model anyway. So Model:clone might be useful for that as well.

@bjornbytes
Owner Author

Rough plan

  • Pass keeps a list of "animated draws".
  • When an animated model is drawn with dirty joint transforms or blend shape weights, push an animated draw onto the list, tracking the model, joint matrices, and blend shape weights. Joints/weights could perhaps be written immediately to the pass's stream buffer. Draw commands track a u32 index of their animated draw.
  • At submit time, if a pass has animated draws, allocate a big chunk of animated vertices from a pool and do the compute dispatches all in a big group, in the pass's compute stream.
  • Track an offset into the animated vertex buffer to figure out the vertex buffer + offset for each animated draw, bumping the offset as you go.

With this, models would always be able to use their current animated pose when they're drawn, and Model:clone would only be necessary for thread safety.
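The record-then-resolve half of the plan above could be sketched like this. Everything here is hypothetical (the names `AnimatedDraw`, `pass_pushAnimatedDraw`, and `pass_resolveAnimatedDraws` don't exist in LÖVR); it shows only the bookkeeping, not the pool allocation or the compute dispatches themselves:

```c
#include <stdint.h>

#define MAX_ANIMATED_DRAWS 64

// One entry per animated pose rendered this frame.
typedef struct {
  uint32_t model;        // which model to skin
  uint32_t vertexCount;  // how many animated vertices this pose needs
  uint32_t vertexOffset; // assigned at submit time by the bump below
} AnimatedDraw;

typedef struct {
  AnimatedDraw draws[MAX_ANIMATED_DRAWS];
  uint32_t count;
} Pass;

// Called when a model with dirty joints/blend weights is drawn; returns the
// u32 index that the draw command stores.
static uint32_t pass_pushAnimatedDraw(Pass* pass, uint32_t model, uint32_t vertexCount) {
  AnimatedDraw* draw = &pass->draws[pass->count];
  draw->model = model;
  draw->vertexCount = vertexCount;
  draw->vertexOffset = 0;
  return pass->count++;
}

// At submit time: bump an offset through the animated vertex pool so each
// draw knows where its skinned vertices live. Returns the total vertex count
// to allocate from the pool (and dispatch compute over) in one go.
static uint32_t pass_resolveAnimatedDraws(Pass* pass) {
  uint32_t cursor = 0;
  for (uint32_t i = 0; i < pass->count; i++) {
    pass->draws[i].vertexOffset = cursor;
    cursor += pass->draws[i].vertexCount;
  }
  return cursor;
}
```

The nice property is that the per-draw cost at record time is constant, and submit does one pool allocation for the whole pass instead of one per draw.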
