
sampler2D array #71

Open
JustLexxy opened this issue Jul 21, 2022 · 14 comments

@JustLexxy

Hi it's me again!
Is it possible to have an array to store many iterations of tTriangleTexture and tAABBTexture?
I need to be able to add objects on the fly without adding new code. I was able to do it with normal objects such as boxes, quads and spheres, but it doesn't seem to work with loaded 3D objects.
The problem seems to come from texture arrays working differently than normal arrays, but in two days of research I haven't been able to find what I need.

The only other solution I know of would be to have the JavaScript generate the GLSL code depending on its needs, but that would add a ton of other complications that I'd rather not have.

@JustLexxy

Also, a quick question: I have an object made from an ExtrudeGeometry. I send it to the path tracer the same way you did with the teapots (GLTF_Loader). It is possible to control the color of the material from the JavaScript, but is it possible to do so with a texture as well?

thanks!

@erichlof

erichlof commented Jul 22, 2022

Hello @JustLexxy !

Yes, there is in fact a built-in OpenGL/WebGL system for storing and reading from multiple data textures in an array. With raw WebGL it is tricky and verbose to set all of this up, but thankfully, Three.js conveniently wraps everything up for us in a function called THREE.DataArrayTexture(dataArray, width, height, numDepthLayers). This is conceptually a stack of normal 2D textures (width, height) placed on top of each other (depthLayers) - I usually imagine a stack of playing cards. By the way, the same thing can be achieved through an actual 3D texture (width, height, depth), but if I'm not mistaken, there are more WebGL size restrictions placed on a true 3D texture - even on the width and height, not just the depth. With a 2D texture array, you can have a larger width and height (which we typically use for storing BVH and model data for path tracing), and then you just specify the number of 'playing cards' you want to have.

Be aware that the dimensions (width and height) must be exactly the same for every texture that is placed on the stack. This restriction is actually helpful, because we can calculate the lookup position of the texture we want by simply specifying the depth layer number, and then WebGL's built-in shader texture function, texelFetch(), does the rest for us. Say you had 8 1024x1024 textures stacked on top of each other: you could just specify in the shader which layer you wanted to fetch (any integer between 0 and 7 - remember, the first layer is index 0, as usual in computing).
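Here is a minimal JavaScript sketch of that setup (the sizes, layer count, and variable names are just for illustration):

// 8 layers of 1024x1024 RGBA float data, stacked like a deck of playing cards
const texWidth = 1024, texHeight = 1024, numLayers = 8;
const dataArray = new Float32Array(4 * texWidth * texHeight * numLayers); // one flat list covering ALL layers

// ...fill dataArray with each model's triangle/BVH data (the offsets are discussed below)...

const triangleDataArrayTexture = new THREE.DataArrayTexture(dataArray, texWidth, texHeight, numLayers);
triangleDataArrayTexture.format = THREE.RGBAFormat;
triangleDataArrayTexture.type = THREE.FloatType;
triangleDataArrayTexture.generateMipmaps = false;
triangleDataArrayTexture.needsUpdate = true;

pathTracingUniforms.tTriangleTextureArray = { value: triangleDataArrayTexture }; // hypothetical uniform name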

The trickiest thing to get your (and my) mind around is that the 'dataArray' argument in the above THREE function call is one huge, flat 1D list of floating point numbers, as far as WebGL is concerned. This huge list of numbers spans all the way from the first channel (R of RGBA) of the first texel (0,0) of the 1st texture (depth index 0), all the way to the very last channel (A of RGBA) of the last texel (1023,1023) of the very last 8th texture (depth index 7)! Therefore, the tricky part is getting the offsets right when placing each unique model's BVH or triangle data into the 8-layer stack of textures. The first one is easy, and the code will resemble what you do now to fill the data texture with data on the JavaScript side. But the 2nd model will have to start where the first one left off. This offset itself is quite large and has to account for RGBA channels (4) * width (1024) * height (1024) * depth number (0 to 7 in our example). As long as you get this setup part right, you should be able to just look up (texelFetch) the data in the shader without much extra code in the shader.
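In code, that offset bookkeeping looks roughly like this (a sketch with made-up names, continuing the 8-layer example above):

const floatsPerLayer = 4 * texWidth * texHeight; // RGBA channels (4) * width * height

// copy one model's flattened triangle/BVH floats onto its own 'playing card' in the stack
function writeModelDataToLayer(modelFloats, depthIndex) // depthIndex is 0 to 7 in our example
{
    const offset = depthIndex * floatsPerLayer; // where this layer begins in the one big flat list
    dataArray.set(modelFloats, offset);
}

writeModelDataToLayer(firstModelFloats, 0);  // 1st model starts at element 0
writeModelDataToLayer(secondModelFloats, 1); // 2nd model starts right where layer 0 ends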

In the case of 2D texture arrays, all of this is easier to see implemented than to hear me talk about it. If you want to see how I did it, please check out my path traced remake of the classic Sentinel game from the 1980s:
The Sentinel - 2nd Look

In TheSentinel_setup.js, do a quick search for the terms '2D' and 'offset' and that will show you everything that is necessary on the JavaScript side. Then, in TheSentinel_Fragment.glsl , do a quick search for the term 'depth' and that should give you an idea of the small amount of code that is necessary to specify the depth_id of the model's data (how deep is it on your stack of 2D textures?) and its corresponding depth layer parameter in the GLSL built-in texelFetch() function.

This was pretty challenging for me personally, because I get nervous whenever I have to calculate offsets into anything, let alone stacks of 2d textures! But the effort was worth it, allowing me to have dozens of path traced models that can independently change their translation, rotation, and scale on each animation frame, if so desired. That is truly powerful, especially coming from a webpage inside a browser! 😉

Let me know after looking at my game project example if you have any more questions or need anything clarified. I will do the best I can, as I sort of understand it all, but just barely enough to have made it work in my game, ha.

Cheers,
-Erich

@erichlof

erichlof commented Jul 22, 2022

@JustLexxy

About changing the model colors with a texture, could you please clarify what you are wanting to do?

If you are simply trying to apply a texture to color the model or give it a color pattern, you can just follow my example in the Bi-Directional_Difficult_Lighting demo. In that demo I have 3 different Utah Teapots, one of which has a neat marble texture applied, giving its surface an alternating white/black color pattern. The metal teapot has a surface normal distortion applied through a normal map, in order to give it a hand-hammered metal look. In order to apply colors via an albedo/diffuse color texture, the model has to contain uv coordinates for all its vertices (which most models out there come with these days). This holds true for the surface normal bump map as well, as it uses the exact same supplied uv coordinates to look up how to bend the surface normal at that location on the model. But instead of 'seeing' the color of the map (as we did with the diffuse color map), we see the normal map's effects as detailed bumps on the model's surface (this effect is actually a 2D optical illusion, but a very compelling one!).

Also, the Billiard_Table Demo is good for seeing how to setup and apply (JavaScript) as well as lookup and render (GLSL fragment shader) various textures for different objects in the scene, like the table cloth and the table rail's/cue stick's wood grain pattern.

If however you are trying to do something else with the model's color and its texture map and I have misunderstood, please clarify and I'll try to answer the best I can, or at least point you in the right direction. 🙂

@JustLexxy

Oh wow, thanks for the detailed comments!
For the textures, I was able to apply a texture from inside the GLSL, but I'd like to know if it's possible to apply it in the JavaScript, before sending triangleDataTexture and AABBDataTexture to the GLSL.
For example, in Bi-Directional_Difficult_Lighting.js, lines 244-246, you apply the RGB channels of the color property to the teapot. That lets me control the object's color from the JavaScript side of the program, so I was wondering if it is possible to do the same thing but with a texture instead of a color, which would make it easier to change an object's texture on the fly without reloading everything.

@erichlof

@JustLexxy
Oh ok, thanks for clarifying - I think I better understand what you are asking about.

A note about lines 244-246 that you mentioned: they do indeed set the color (rgb), but this is mainly for vertex colors, if the model is found to have them, or if we are procedurally generating them with JavaScript. Most modern glTF models do not have vertex colors, but instead have uv coordinates that are used to look up the correct position in a texture. You could actually override these 3 lines of code and put in whatever values you want (a floating point number between 0.0-1.0 for each color .r, .g, and .b channel). The problem with manually doing it this way is that you have to know exactly where the vertex you're currently setting is located on the model. It is theoretically possible on a simple Minecraft-type character model, for instance, but it would be almost impossible for the teapot, bunny, or dragon models, which have thousands of triangles, each with 3 vertices! As the JavaScript is loading the model in, there's no easy way to find which triangle you're on, or what part of the model it corresponds to. Often glTF files contain what's called a triangle 'soup' list, which means there's no rhyme or reason to the incoming vertex data as it's being loaded in.

The reason I mentioned the above situation is because this applies to textures as well. Say you had a simple black and white checkerboard texture that you wanted to manually apply on the JavaScript side as the model is loaded, and before you sent the triangle and BVH data to the GPU as data textures. You would run into the same problem - namely, which vertices should be white, and which vertices should be black? The triangles most often come into our program as a random stream.

For the above reasons, I hardly ever actually use the data that is set on lines 244-246, and instead just apply the texture on the GPU side with the texture() glsl function and the model's supplied uv texture lookup coordinates.

But having said that, all is not lost if you just want to swap textures for the model on the fly. In order for this to work, the model must have uv coordinates and a monolithic texture, meaning the whole model is uv-unwrapped so that one big texture (even if it looks all exploded and in different pieces) is all that is needed to wrap/cover every part of the model.

On the js side, using Three.js' built-in texture loading functions, just create a unique variable name for every possible texture you might want to wrap the model with. Say you had a checkerboard texture and a woodgrain texture. Load both with Three.js (you can follow their examples on texture loading), and start the model out with one of them as its default. The actual texture you are sending to the GPU could be called CurrentTexture (or something similar). Set CurrentTexture equal to the variable name that you chose for either the checkerboard texture or the woodgrain texture. When the model is rendered in glsl, the texture function will sample whatever you passed into that texture uniform slot, so the end user will see either the checkerboard or the woodgrain texture, depending on which one you placed into CurrentTexture and sent to the GPU.

Then, when the user performs an action like clicking an html button ('change texture', for instance), you would, on the JavaScript side, set CurrentTexture equal to the other variable name. Then, most importantly, set the texture's needsUpdate property to true. This will immediately change out the entire texture for the model. The speed at which it does this depends on how fancy the user's graphics card is. On my old laptop there is a noticeable hiccup in the framerate, but it does eventually make the swap; I'm sure on a modern GPU the texture change would be almost instantaneous. And since we preloaded all possible textures, they sit in CPU/RAM memory and just get fed to the GPU quickly. There's no need to reload the app and no need to wait for something to download - it's all saved in RAM, like a 'pool' of texture objects.
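A rough sketch of that texture-pool idea on the js side (the uniform name and file paths below are only placeholders):

const loader = new THREE.TextureLoader();
const checkerboardTexture = loader.load('textures/checkerboard.png'); // preload every possible texture once
const woodgrainTexture = loader.load('textures/woodgrain.png');

// start the model out with its default texture
pathTracingUniforms.tCurrentTexture = { value: checkerboardTexture };

// later, when the user clicks a 'change texture' html button...
function onChangeTextureClicked()
{
    pathTracingUniforms.tCurrentTexture.value = woodgrainTexture;
    woodgrainTexture.needsUpdate = true; // tell Three.js to (re)upload the new texture to the GPU
}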

That would be the fastest, easiest way to swap textures that I can think of. As mentioned earlier, yes you could manually set every single vertex color by hand, but this becomes impractical for models with more polys than old-school low-poly geometry. Actually, referring back to my Sentinel remake setup file, I do indeed manually set each vertex color for the low-poly terrain to either blue or green for a checkerboard pattern. But this is only because I also procedurally generate the terrain triangle geometry when the level is started, so I know exactly what triangle I'm on, what vertex of that triangle I'm on, and where that triangle is in relation to the whole terrain. If you don't have this kind of control over the models you're using however, then the CPU-GPU texture swap and using the supplied model's vertex uv coordinates is the way to go, in my honest opinion.

Hope this helps, and I hope I have understood your use-case more correctly. Let me know if you have any other questions. Good luck with the texture swapping! 🙂

@erichlof

erichlof commented Jul 26, 2022

Update:
Just tried this out on my Billiard_Table demo and it's even easier than what I proposed! :D
While the demo is running, open up the browser console and type the following:
pathTracingUniforms.tClothTexture.value = lightWoodTexture;

Hit Enter on the keyboard, then click anywhere on the demo webpage to capture the mouse again and... voilà, the table's cloth texture magically turns to wood! Although it's tinted green, which looks unnatural, ha - simply because I had a white cloth texture and I wanted a dark blue/green felt color for the billiard table, so in the glsl shader I scaled the blue channel up while sampling the pure white cloth texture image.

But anyway, this was even easier than I thought. As long as you can live with one big texture per model, you should be able to preload dozens of textures, then just do something like the above line of code on the js side, and you should be good to go!

@JustLexxy

JustLexxy commented Jul 26, 2022

That is indeed a great solution, thanks!
Of all the solutions I found, I think this one would be the easiest, though maybe not the best.
Basically, I am working on a project where I need to create objects made from ExtrudeGeometries. I can also modify each object's properties, color, and texture as I want, with the possibility of every object being different from the others. I already have everything running perfectly with only three.js, and I was tasked with adding some path tracing to it. I got everything working except the ExtrudeGeometries, which I pass to the GLSL with your BVH algorithm (instead of sending it a glTF model, I send it an Object3D containing my mesh).

Note: the textures are different wood textures, and I already have a JS algorithm that takes care of the UVs. Everything looks good, and the textures are applied to a lot of different and irregular shapes. I can have the same texture on different objects of variable shape.

The only problem comes when I need multiple objects using different textures, since you can't access a sampler2D array using a variable index. I thought of using my UV algorithm to set the color of each pixel to match the texture, then bundle everything up into one big object to send to the path tracing algorithm - that's why I wanted to know if it was possible to process a texture before sending everything to the GLSL.

Another solution would be to access a texture uniform using a variable index. I already have an array of tTriangleTextures and one of tAABBTextures, so I would have another matching array storing each object's texture, recreate my UV algorithm in GLSL, and have a loop iterate through each of the objects to recreate them. I did this for other shapes like quads, spheres, and boxes, but I can't with the ExtrudeGeometries, since you need a constant index to access a texture array...

I'm going to keep searching and try a few things with your solutions, but I sadly seem to be at an impasse here.

@erichlof

@JustLexxy
Ah, I think I see what you're up against now. Just to clarify for my understanding, what is the difference when you try texturing a sphere or a box (which is successful), vs. trying to texture an extrude geometry (which you said was unsuccessful)? As far as I can tell, by the time Three.js sends these basic geometries to the GPU (or when we extract the triangle vertex data before sending to the GPU, like for path tracing purposes), everything gets boiled down to a list of triangles, so far as WebGL is concerned. Let's use the woodgrain texture example again - you most likely can see a woodgrain texture on all the cubes, spheres, and other primitive geometry in your scene. But at this point in your app's development, can you even see the woodgrain texture correctly applied to an ExtrudeGeometry object? Or is it displaying incorrectly because there is either incorrect uv information about each vertex, or maybe there is basic uv info provided by Three.js, but when the extrude geometry changes or gets extruded more or less, the uv's somehow get messed up/stretched out/shrunk?

About the variable indexing of textures for different objects in the scene, I don't think you can easily pre-process each texture for each vertex uv coordinate on the JavaScript side before sending everything to the GPU, as outlined in your original idea. The GPU and glsl's texture() sampling do all this incredibly fast and efficiently for us at the hardware level, in parallel, and I wouldn't want to attempt that same process manually, one vertex at a time, through js-side software algorithms (for reasons discussed in my previous post). Out of curiosity, how many different textures could there be at any given time while your app is running? If it is 20 or fewer, you could just use a bunch of if-else statements in glsl, like:
vec3 modelColor = vec3(0,0,0);
if (uExtrudedGeometry1_textureID == 1.0)
modelColor = texture(woodGrainTexture, uv).rgb;
else if (uExtrudedGeometry1_textureID == 2.0)
modelColor = texture(marbleTexture, uv).rgb;
else if (uExtrudedGeometry1_textureID == 3.0)
modelColor = texture(checkerboardTexture, uv).rgb;

Of course, the number of possible models you could have also plays a big part in this solution working or not. If you have hundreds of separate models and possibly hundreds of textures to choose from for each model, the above algo simply will not be feasible. This is partly why I have not ventured into this multi-texturing problem for my own path tracer here on this repo. I'm sure there's a clever way to make it happen, where you could avoid any 'if' statements in the shader (which shaders do not like, ha), but honestly I can't think of a tangible solution.

Having said this, I did 'skirt' around this problem in my Sentinel game, because the BVH needs to know which model texture data it needs to look up and deal with. I only have 7 possibilities in this simple game (which I'm sure is far fewer than what your app needs), but the way I did it is that I encoded each object's transform with a uniform 4x4 matrix, which is very common. But then I physically change the very last element of that matrix, [3][3], to be a float number between 0 and 6 (7 possibilities, remember). Then, when the BVH extracts the transform data for the model that it is intersecting, it looks at that manually coded last element in the transform matrix (that I changed by hand on the js side), and that is used as the 'depth' parameter in the 2D texture array. The whole reason I used a texture array rather than just 7 single textures by themselves is because of the same problem you're facing: I couldn't easily tell glsl which texture to fetch without a bunch of annoying 'if' statements, due to the fact that glsl doesn't let you put a variable name in the texture() function - it has to be a constant. But with texture2DArrays, you can have a depth number that is a variable of your choosing on the fly, as long as it is a valid depth number (how deep down in your stack of textures it is) and your variable doesn't exceed the total range/depth of the texture2DArray stack.
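On the js side, that trick is basically a one-line write into the matrix. In Three.js, Matrix4.elements is column-major, so element [3][3] is elements[15]; the variable names here are only illustrative:

const modelMatrix = new THREE.Matrix4();
modelMatrix.compose(modelPosition, modelRotationQuaternion, modelScale); // the object's normal transform

// smuggle the texture's depth-layer number (0-6 in my 7-texture example) into the unused [3][3] slot
modelMatrix.elements[15] = textureDepthId;

pathTracingUniforms.uModelMatrix.value.copy(modelMatrix); // the shader reads elements[15] back out as the layer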

Every GPU has limits on how deep you can go with texture arrays - I used to think 16 was the max, but who knows, nowadays it could be 256 textures deep. If you could find a way to encode/decode the simple lookup floating-point number textureID into a uniform, or an array of uniforms that could be fed to the GPU, then you should be able to grab the right texture in the shader for each and every model in the scene.
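By the way, you don't have to guess at that layer limit - you can ask the GPU directly (the WebGL2 spec guarantees at least 256 layers, if I remember correctly):

const gl = renderer.getContext(); // the WebGL2 context underneath the THREE.WebGLRenderer
const maxLayers = gl.getParameter(gl.MAX_ARRAY_TEXTURE_LAYERS);
console.log('texture arrays can be up to ' + maxLayers + ' layers deep on this GPU');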

@JustLexxy

I can't really tell how many models or textures are going to be loaded at the same time. I think the absolute maximum number of textures is around 30, but the number is highly variable, so we can't rely on that.

As for the number of models, the user must be able to add, modify, and remove models. It could vary between 10 and maybe 100-200, which is why I needed to either pass everything to the GPU as an array, or bundle everything together in one big object/group, with every model having its own texture.

Finally, for the UVs, there's no way of knowing the UV "pattern" in advance, since the shapes of the models are variable, but I already have an algorithm that takes care of that. Right now, in the path tracer, the texture shown (when applied to the model) respects the base texture properties (wrapS, wrapT, magFilter, minFilter, etc.), but there's no UV algorithm to match it to the model like the one I have in JS. Hence my two solutions: either reproduce it in GLSL, or have it map the texture in JS before sending the models to the GPU.

@erichlof

erichlof commented Jul 26, 2022

Ok, if you have low-level control over the uv coordinates as the model is being created or edited/manipulated, it might be possible to load the textures into an html Canvas element. Anything that is an html Canvas element - whether you loaded an image into it, someone is doodling into it in real time with a mouse or touch gesture, or you procedurally generated the image mathematically, like a checkerboard pattern - lets you look at its individual pixels and get at the raw data.

Take a look at MDN's Canvas tutorial to see how you get at the raw pixel data. If you know which pixel in the texture that you want (maybe with your UV algo you mentioned), you can get each texel's R,G,B, and A value. Note that unlike the glsl convention of floating point values for each of the pixel's 4 channels (0.0-1.0), Canvas both stores and displays pixels as values between 0 and 255. So a simple calculation - the Canvas channel value divided by 255 will get it back into the WebGL/glsl range of 0.0 to 1.0. Or vice-versa, multiplying a glsl 0.0-1.0 range value by 255, will get it into correct Canvas element display range of 0-255.
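For example, here is a small sketch of pulling one texel out of a Canvas at a given (u, v); 'image' stands in for whatever texture image you loaded:

const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0);
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

function sampleTexel(u, v) // u and v in the 0.0-1.0 range, straight from your UV algorithm
{
    const x = Math.min(Math.floor(u * canvas.width), canvas.width - 1);
    const y = Math.min(Math.floor(v * canvas.height), canvas.height - 1); // note: you may need to flip v, since Canvas y goes downward
    const i = (y * canvas.width + x) * 4; // 4 channels (R,G,B,A) per pixel
    return { // convert the 0-255 Canvas range back into the 0.0-1.0 glsl range
        r: imageData.data[i + 0] / 255,
        g: imageData.data[i + 1] / 255,
        b: imageData.data[i + 2] / 255,
        a: imageData.data[i + 3] / 255
    };
}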

If you end up doing things this way, then you would actually use what is on lines 244-246. All of that triangle vertex data gets stored on the GPU as tTriangleTexture. Then, when it's time to render it, the BVH will identify the triangle number, which is then used as the ID to look up the corresponding triangle color/material data in tTriangleTexture. The material color will be set to whatever 0.0-1.0 values you stored for all 3 vertices of the matching triangle for that ID. You can even have different colors for all 3 vertices of a single triangle: my shader code computes the barycentric coordinates of the intersection point, which gives a linear blend of all 3 vertices of that intersected triangle. The triangles' surface normals are interpolated in exactly the same manner.

Also, by using this manual route, you wouldn't even have to fool with extruded geometry uvs in glsl, and wouldn't have to use the glsl texture() function to get the correct texel color for the currently intersected triangle - it would already be stored in its corresponding slot inside tTriangleTexture. But since this is a lot of extra work up front, maybe it would be a good idea to only resort to this method for the extruded geometry that you're having issues with. All other normal primitives in three.js should always have valid uv coordinates. All that's left to do with those easier models is store the dozen or so various textures in a texture2D array, assign a uniform texture ID to each model (other than extruded geometry models, of course), and then change that single ID number on the JavaScript side when the user changes the material texture to something different.

@JustLexxy

Thanks for the idea, I'm gonna try that and see if it works!

@erichlof

erichlof commented Jul 27, 2022

@JustLexxy
Great! Hope it works out for your project.

In my previous post, I mentioned following a Canvas getImageData() tutorial, so I just wanted to follow up with that further and give you an actual web resource. Normally MDN is the widely accepted place to get info/tutorials on all web-related programming topics. For some reason, everyone says "don't use the W3 schools website", and "use MDN instead". Although W3 schools gets a bad rap (in some cases I don't see why), in my honest opinion you can't beat their Canvas getImageData() info/tutorial page. MDN's has only one example, while W3's has several different examples, showing the different facets of getImageData(). Anyway, here's the link

Also what I like about W3 schools is that their small, focused examples are clickable and the demos run in a sandbox environment that runs right in the browser (our target platform after all! ha). You can edit their example code to experiment and increase your knowledge of the focused topic.

Just a heads up (that tripped me up when I was first learning about it) - if you loop through the pixel dataArray returned by getImageData(), keep in mind that every pixel has 4 pieces of data: R,G,B, and A. These are then spread out to a long, flat 1D array, which turns out to be just a big, long list of numbers. So, when looping, it might help mentally to do something like:

for (let k = 0; k < dataArray.data.length; k += 4)
{
    modelMaterialColor.r = dataArray.data[k + 0] / 255; // R channel in 0.0-1.0 range
    modelMaterialColor.g = dataArray.data[k + 1] / 255; // G channel in 0.0-1.0 range
    modelMaterialColor.b = dataArray.data[k + 2] / 255; // B channel in 0.0-1.0 range
    modelMaterialColor.a = dataArray.data[k + 3] / 255; // A channel - or alternatively, just always set this to 1.0 (opaque)
}

If you just keep in mind that each pixel in the source image expands out to 4 elements back-to-back in the big pixel dataArray that gets returned from getImageData(), you should be all set!

Good luck! Let me know how it goes :)

@JustLexxy

Turns out I was able to map the texture like this:

if (abs(nl.x) > 0.5) sampleUV = vec2(x.z, x.y);
else if (abs(nl.y) > 0.5) sampleUV = vec2(x.x, x.z);
else sampleUV = vec2(x.x, x.y);
texColor = texture(tOakTexture, sampleUV * 0.01);
hitColor *= pow(texColor.rgb, vec3(2.2));

It's a sample of code that I found in one of your examples (I don't remember which one). It works almost perfectly, which is good enough for now.

There's another problem, unrelated to the texture, that I still wasn't able to find a solution for. In your Bi-Directional_Difficult_Lighting example, you load a glTF model into meshGroup and transform it into a DataTexture to then send it as a uniform to the GLSL. It's possible to do this with a THREE.Group()/THREE.Object3D() as well, but every object in it ends up with the same position. No matter which position I give my objects, they all end up at the position of the last object added to the Object3D.
Do you know if it's possible for each child of the Object3D to have a different position?

@erichlof

erichlof commented Aug 3, 2022

Ah yes, that sample code is from the Bi-Directional Difficult Lighting demo's fragment shader.

I used that for the light wood texturing of the small coffee table that the 3 teapots are resting on. If I'm not mistaken, I believe this technique has a more formal name in CG: it's called 'Tri-Planar mapping'. The name reflects the fact that we use the 3 different cardinal axis-aligned planes to texture a model in 3-dimensional space that may be arbitrarily rotated or positioned. It works best on boxes, ellipsoids, and spheres. I used it here successfully on the rectangular flat box shape of the coffee table wood surface, and that's where the technique shines the most. It works pretty well for spheres as well, although there might be some slight stretching at the diagonal corners where one pair of planes (i.e. XY pair of planes) switches to another neighbor plane set (i.e. XZ pair of planes). It is most noticeable where there would be 'corners' or 'edges' on the sphere (even though there are no physical corners to spheres, ha), if you mentally picture the box shape and sphere shape overlapping. But anyway, glad it worked favorably for your use case!

Mmmm... about THREE.Group() and positioning the final models to be path traced, I think I may know part of the answer, but I have never actually implemented this style of model loading (in groups with 'parents' and their 'children'). Speaking in general, every THREE.Group() contains 1 overall 'parent' or 'root' node transform (which is itself just an Object3D with a 4x4 matrix). When you attach another model to this group, it multiplies all the children's personal matrices by their parent's matrix. For example, using the Solar system model: if I have a sphere as the Sun, this would be the root or parent node of the THREE.Group(). Then if I added a smaller sphere to the group and called it Earth, it would become a child of the Sun parent object. So if I pick up the Sun sphere and move it around the scene, the Earth sphere will follow, since its personal matrix is affected by the overall parent or root's transform matrix. But if I select the Earth and move it, the Sun will not follow. The chain of command is a one-way street and goes downward from parents to children.

This is a long lead-up to say that in order to position everything correctly in the data texture of the final scene to be path traced, you have to extract the models' matrices and use this data to manually manipulate the models' vertices before packing everything inside the data texture to be sent to the GPU. So first you would extract the THREE.Group's parent or root-node matrix (wherever the group as a whole is positioned and rotated in the scene). Then, as you load each child object of that group, you would multiply its personal child matrix by the parent's matrix (that you had initially saved). Through the magic of matrix multiplication, the resulting matrix will contain the correct info for positioning, rotating, and scaling each child object. Finally, as you're saving each vertex's data into the data texture (such as position, color, uv, etc.), you multiply each vertex position (a unique point in 3D space) by the resulting matrix from the earlier THREE.Group parent-child multiplication.
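In Three.js terms, that manual baking step might look roughly like this (a sketch, not code taken from the repo):

const parentMatrix = theGroup.matrix.clone(); // the THREE.Group's own (root) transform

theGroup.children.forEach(function(child)
{
    // combined = parent * child, so each child lands where it belongs relative to its parent
    const combinedMatrix = new THREE.Matrix4().multiplyMatrices(parentMatrix, child.matrix);

    const positionAttribute = child.geometry.attributes.position;
    const vertex = new THREE.Vector3();
    for (let i = 0; i < positionAttribute.count; i++)
    {
        vertex.fromBufferAttribute(positionAttribute, i);
        vertex.applyMatrix4(combinedMatrix); // bake the full transform into the vertex itself
        // ...write vertex.x, vertex.y, vertex.z into the triangle data texture here...
    }
});

(Alternatively, calling theGroup.updateMatrixWorld(true) first and then reading each child's matrixWorld gives you the same combined matrix.)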

In theory, this should position each and every vertex on all the children objects in relation to how their group parent was transformed. As I mentioned, I haven't attempted this myself, but the closest actual source code implementation I can show you is here, in the glTF Model Viewer demo's js setup file, which was contributed to this repo by someone else: n2k3. I believe what he is doing in those lines is close to what you need in order to get everything positioned correctly. Since I didn't write any of that code myself, I can't quite follow it exactly. If I had to implement such a loading system, I might have written it differently - or who knows, maybe I would have inevitably arrived at the same solution?

Let me know if this is close to what you are attempting to do. Good luck with it!
-Erich
