
[Code Review] Possible improvement of webgl-util.js, why not use the data type from shader when setting attributes #444

Open
GuichiZhao opened this issue May 5, 2024 · 1 comment


@GuichiZhao

While reading the chapter "WebGL - Less Code, More Fun", I went through the code of webgl-util.js to figure out how it is possible to "write less code" by simply specifying uniforms/attributes as something like:

{
  u_color: [0.2, 1, 0.2, 1],
  u_textureSize: [width, height],
  u_reverseLightDirection: [1, 1, -1],
}

The key takeaway from the source code is that the type of each uniform can be parsed out of the compiled shader via gl.getActiveUniform, so the uniform can be set with the correct call (gl.uniformMatrix4fv, gl.uniformMatrix2fv, ...). I highly appreciate the idea because it exhibits a "single source of truth": the only source of the data type is the shader code!
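The dispatch the library performs can be sketched as a lookup from the type enum that gl.getActiveUniform reports to the matching gl.uniform* setter. This is a minimal sketch, not webgl-util.js's actual code; the enum values are the ones fixed by the WebGL specification, and the helper name is illustrative:

```javascript
// Map WebGL uniform-type enums (numeric values fixed by the spec)
// to the name of the matching setter on the rendering context.
const UNIFORM_SETTERS = {
  0x1406: 'uniform1fv',        // FLOAT
  0x8B50: 'uniform2fv',        // FLOAT_VEC2
  0x8B51: 'uniform3fv',        // FLOAT_VEC3
  0x8B52: 'uniform4fv',        // FLOAT_VEC4
  0x1404: 'uniform1iv',        // INT
  0x8B5A: 'uniformMatrix2fv',  // FLOAT_MAT2
  0x8B5B: 'uniformMatrix3fv',  // FLOAT_MAT3
  0x8B5C: 'uniformMatrix4fv',  // FLOAT_MAT4
  0x8B5E: 'uniform1i',         // SAMPLER_2D
};

// Given the info object returned by gl.getActiveUniform(program, i),
// pick the setter name — the shader is the single source of truth.
function setterNameFor(uniformInfo) {
  const name = UNIFORM_SETTERS[uniformInfo.type];
  if (!name) throw new Error('unknown uniform type: ' + uniformInfo.type);
  return name;
}
```

With a real context you would then call `gl[name](location, value)` (or `gl[name](location, false, value)` for the matrix variants).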

However, when writing less code to set attributes, we have to write something like:

      a_position: {
        buffer,
        numComponents: 3,
        type: gl.FLOAT,
        stride,
      },
      a_color: {
        buffer,
        numComponents: 3,
        type: gl.UNSIGNED_BYTE,
        normalize: true,
        stride,
        offset: 3 * 4,
      },
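For context, the offset: 3 * 4 in the snippet above suggests an interleaved vertex layout: three 32-bit floats of position followed by three color bytes in the same buffer. The arithmetic can be sketched as follows (the 4-byte stride padding is my assumption for alignment; the snippet above does not show the stride value):

```javascript
// Hypothetical interleaved layout matching the snippet above:
// [ x, y, z (3 × float32) | r, g, b (3 × uint8) | padding ]
const POSITION_COMPONENTS = 3;
const COLOR_COMPONENTS = 3;

const positionBytes = POSITION_COMPONENTS * Float32Array.BYTES_PER_ELEMENT; // 12
const colorOffset = positionBytes;                                          // 12 === 3 * 4
const colorBytes = COLOR_COMPONENTS * Uint8Array.BYTES_PER_ELEMENT;         // 3
// Pad the per-vertex stride up to a 4-byte boundary, a common choice.
const stride = Math.ceil((positionBytes + colorBytes) / 4) * 4;             // 16
```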

We have to specify the data type explicitly! The source code of webgl-util.js does not make use of the data type from the shader at all; rather, it does a lot of guesswork on the data type, like:

        attribs[attribName] = {
          buffer:        createBufferFromTypedArray(gl, array),
          numComponents: origArray.numComponents || array.numComponents || guessNumComponentsFromName(bufferName),
          type:          getGLTypeForTypedArray(gl, array),
          normalize:     getNormalizationForTypedArray(array),
        };

  function getGLTypeForTypedArray(gl, typedArray) {
    if (typedArray instanceof Int8Array)    { return gl.BYTE; }            // eslint-disable-line
    if (typedArray instanceof Uint8Array)   { return gl.UNSIGNED_BYTE; }   // eslint-disable-line
    if (typedArray instanceof Int16Array)   { return gl.SHORT; }           // eslint-disable-line
    if (typedArray instanceof Uint16Array)  { return gl.UNSIGNED_SHORT; }  // eslint-disable-line
    if (typedArray instanceof Int32Array)   { return gl.INT; }             // eslint-disable-line
    if (typedArray instanceof Uint32Array)  { return gl.UNSIGNED_INT; }    // eslint-disable-line
    if (typedArray instanceof Float32Array) { return gl.FLOAT; }           // eslint-disable-line
    throw 'unsupported typed array type';
  }

The type is detected from the JavaScript typed-array class rather than taken from the shader, and numComponents may even be guessed from the attribute's name.
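The name-based guess looks roughly like this — reconstructed from the library's observable behavior (names containing "coord" get 2 components, "color" gets 4, everything else 3), so treat the exact rules as an assumption:

```javascript
// Guess how many components an attribute has from its buffer name alone.
// This is exactly the kind of heuristic the issue is questioning.
function guessNumComponentsFromName(name) {
  if (name.indexOf('coord') >= 0) return 2;  // e.g. "texcoord" (u, v)
  if (name.indexOf('color') >= 0) return 4;  // e.g. "a_color" (r, g, b, a)
  return 3;                                  // positions, normals, ...
}
```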

My question is: why not get the attributes' data types directly from the shader code, as we do for uniforms? Is there a good reason that I fail to understand? If not, I am glad to send a PR to change the code to what I see as appropriate.


GuichiZhao commented May 5, 2024

For those who have a similar concern, here is the conclusion I came to:
inferring the data type of an attribute from the shader code is simply NOT possible.

I tried:

attribute ivec4 a_color;
varying ivec4 v_color;

and the shader fails to compile:

Error compiling shader '[object WebGLShader]':ERROR: 0:5: 'attribute' : cannot be bool or int
ERROR: 0:6: 'varying' : cannot be bool or int

Indeed, the GLSL ES specification says:

The attribute qualifier can be used only with the data types float, vec2, vec3, vec4, mat2, mat3, and mat4. Attribute variables cannot be declared as arrays or structures.

However, when we specify the layout of the buffer, an integer or byte type is a legal option:

gl.vertexAttribPointer(a_positionLoc, positionNumComponents, gl.UNSIGNED_BYTE, false, 0, 0);

It seems that the GL driver converts all data types in the buffer to float somehow, which did not make much sense to me at first — perhaps it is done for the sake of hardware performance; I do not know.
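What the conversion does, at least for the normalized case, can be sketched with plain arithmetic: with normalize = true, an UNSIGNED_BYTE value b reaches the shader as b / 255, so color bytes land in the 0..1 range a float attribute expects. This is a sketch of the spec-defined mapping, not actual driver code:

```javascript
// Normalized fixed-point conversion for gl.UNSIGNED_BYTE attribute data:
// the driver hands the shader b / 255, a float in [0, 1].
function normalizedUbyteToFloat(b) {
  return b / 255;
}

// With normalize = false the integer value is converted to float as-is.
function unnormalizedUbyteToFloat(b) {
  return b;  // 255 stays 255.0
}
```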

So there is NO WAY to infer the individual data types of the buffer's memory layout from the shader code. However, there is still something we could improve: we could infer numComponents from the shader, since the complete list of possible attribute types is
float, vec2, vec3, vec4, mat2, mat3, and mat4,
each of which implies a fixed number of components.

However, as the article itself mentions:

Why don't we look at the attributes on the shader program to figure out the number of components? That's because it's common to supply 3 components (x, y, z) from a buffer but use a vec4 in the shader. For attributes WebGL will set w = 1 automatically. But that means we can't easily know the user's intent since what they declared in their shader might not match the number of components they provide.

So inferring numComponents is not reliably possible either!
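For completeness, the component count the shader declares can be read via the type enum that gl.getActiveAttrib reports; the catch, as the quote above explains, is that it need not match what the buffer supplies. The mapping itself is straightforward (enum values fixed by the spec; the mismatch at the end is a hypothetical illustration):

```javascript
// Component count implied by the GLSL attribute type
// (type enum as reported by gl.getActiveAttrib).
const ATTRIB_COMPONENTS = {
  0x1406: 1,  // FLOAT
  0x8B50: 2,  // FLOAT_VEC2
  0x8B51: 3,  // FLOAT_VEC3
  0x8B52: 4,  // FLOAT_VEC4
  0x8B5A: 4,  // FLOAT_MAT2 (2 columns × 2 rows)
  0x8B5B: 9,  // FLOAT_MAT3
  0x8B5C: 16, // FLOAT_MAT4
};

// A shader declaring `attribute vec4 a_position;` reports 4 components,
// but the buffer commonly supplies only 3 (x, y, z), with w defaulting
// to 1 — which is why the declaration cannot reveal the user's intent.
const declared = ATTRIB_COMPONENTS[0x8B52]; // 4
const supplied = 3;                         // unknowable from the shader alone
```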
