No checks for NULL values in Matrix.c and Vector.c #19

Open
Waqar144 opened this issue Jul 9, 2015 · 7 comments

Comments

@Waqar144
Contributor

Waqar144 commented Jul 9, 2015

Not very important... or

@MrNex
Owner

MrNex commented Jul 9, 2015

Good thinking. That will definitely save somebody from a silly mistake. Absolutely should be checked. Thank you.

Should we also check for allocated, but uninitialized vectors and matrices?
These would be non-null but they would have a float* components field which is null.

@Waqar144
Contributor Author

Waqar144 commented Jul 9, 2015

Should we also check for allocated, but uninitialized vectors and matrices?

Yes, and they would be non-null. Checking whether they are initialized would require an extra field, something like int initialized = 1;, because when a Matrix is allocated, the components field gets whatever garbage happens to be in memory, i.e. a random, most likely non-null address, even though nothing has actually been allocated for it.

Matrix* mat = (Matrix*)malloc(sizeof(Matrix));
/* components is uninitialized garbage here -- almost certainly non-NULL */
if(mat->components != NULL) printf("non-NULL garbage: %p\n", (void*)mat->components);

@MrNex
Owner

MrNex commented Jul 9, 2015

You are correct.. Allocate does not guarantee fields being null. As a matter of fact, as you point out, most likely they are non-null.

We could go through each Allocate function for the different structures in the NGen and have them pre-set fields to 0. Even then, it would not guarantee something like:

Matrix* mat = (Matrix*)malloc(sizeof(Matrix));
if(mat->components == NULL) printf("Null");

to print Null.
However, it could potentially guarantee

Matrix* mat = Matrix_Allocate();
if(mat->components == NULL) printf("Null");

to print Null if the changes are made.

Do you think these changes would be worth it? It can be easily done by changing the Allocate functions to use calloc as opposed to malloc.
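
For illustration, the change might look something like this (the body of Matrix_Allocate here is a guess, not the actual NGen code):

#include <stdlib.h>

typedef struct Matrix
{
    int numRows;
    int numColumns;
    float* components;
} Matrix;

/* Sketch: calloc zeroes the whole struct, so components starts
 * out NULL instead of holding garbage. */
Matrix* Matrix_Allocate(void)
{
    return (Matrix*)calloc(1, sizeof(Matrix));
}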

@Waqar144
Contributor Author

Waqar144 commented Jul 9, 2015

Since the checks are going to be made at the beginning of each function that takes pointers, I think changing Allocate, or Initialize for that matter, won't be necessary. Although if you want it, it wouldn't hurt.

We could place the check in a separate function, e.g. check_IsNull(void* ptr, char* func_name), and call it wherever needed. Note that func_name would be the name of the function where the error occurred.
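
A minimal sketch of what that helper might look like (only the name and signature of check_IsNull come from this discussion; the body is hypothetical):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: reports which function hit the NULL
 * pointer, then bails out. */
void check_IsNull(void* ptr, char* func_name)
{
    if(ptr == NULL)
    {
        fprintf(stderr, "NULL pointer in %s\n", func_name);
        exit(EXIT_FAILURE);
    }
}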

Similar checks could be added for whether a Matrix is square, or for index-out-of-bounds errors, etc., to further separate functionality. This may not be necessary, however.

The last thing I want to ask you: we have Matrix defined as such:

typedef struct Matrix
{
    int numRows;
    int numColumns;

    float* components;
}Matrix;

Here Matrix is 12 bytes (assuming 4-byte pointers). We could use int16_t / uint16_t in place of int to reduce this to 8 bytes. What do you think?

@MrNex
Owner

MrNex commented Jul 9, 2015

I'll keep the changes to Allocate in mind-- Not something which is too necessary though, so let us not commit to it for now.

However, I don't think the check_IsNull function is a good idea. While we would clearly benefit from the encapsulation, these Matrix and Vector functions are called a lot. I think the extra stack frames and function calls which must be made 2 or more times per function would give us a bigger performance hit (yes, I understand we are talking about a few hundred nanoseconds) than the encapsulation of the error checking would benefit us. The primary fact behind this decision is that the error checks often amount to an if statement or two each-- I don't think the behavior is worth encapsulating.
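
For instance, an inlined check at the top of an accessor might amount to no more than this (Matrix_GetIndex is an illustrative name, not necessarily a real NGen function):

#include <stddef.h>

typedef struct Matrix { int numRows; int numColumns; float* components; } Matrix;

/* Two quick if statements, no extra stack frame. */
float Matrix_GetIndex(const Matrix* mat, int row, int col)
{
    if(mat == NULL || mat->components == NULL) return 0.0f;
    if(row < 0 || row >= mat->numRows || col < 0 || col >= mat->numColumns) return 0.0f;
    return mat->components[row * mat->numColumns + col];
}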

That being said, I invite you to argue this point, because one of the reasons I left the array based function set accessible instead of creating purely structure based function sets was so there would be a safe version and a fast/flexible version of each function.

As for your other point, we should definitely switch over to uint16_t. I'm shocked I didn't make them unsigned int's to begin with... Shame on me. But, correct me if I'm wrong, I can't see anybody initializing a Matrix or a Vector which would need more than 65535 components/rows/columns.

However.. Could you think of any case where this would place a restriction on us? Perhaps it would be worth it to have different sized versions of the structure. One uint32_t for very large computations (This could potentially allow large groups of vectors to have their components stored contiguously by some manager-- and enable you to add two of these large groups together with ease), one uint16_t for moderate usage, and uint8_t for a platform dependent lightweight usage.

Would this be worth the trouble, or do you think it would just add clutter? If it would add clutter, I think it's reasonable to assume uint16_t is large enough to fit most purposes...

@Waqar144
Contributor Author

Waqar144 commented Jul 9, 2015

Yes, performance will be affected if we check for NULL in every single function. I guess we should leave them as they are.
On a side note, maybe we can add a debug mode where all the checks are present. :) Just a thought. So that if the developer defines DEBUG_MODE, all the checks are performed (see the sketch below).
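
One hypothetical way to wire that up (the macro names are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Compiles to a real check only when DEBUG_MODE is defined;
 * otherwise it disappears entirely. */
#ifdef DEBUG_MODE
#define CHECK_NULL(ptr, func) \
    do { \
        if((ptr) == NULL) { \
            fprintf(stderr, "NULL pointer in %s\n", (func)); \
            exit(EXIT_FAILURE); \
        } \
    } while(0)
#else
#define CHECK_NULL(ptr, func) ((void)0)
#endif
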
As for the other issue...
uint16_t allows up to a 65535 x 65535 matrix.
I checked glm and some other game engines. NOBODY has matrices > 4x4 or vectors > 4.

So, does that mean we should follow in their footsteps? I think not. Who knows what possibilities lie behind the usage of such large matrices. We won't know if we don't try.
We won't get any benefit from using uint8_t since it won't affect the structure size; it would still be 8 bytes. And the platforms on which the size would decrease to 6 bytes are quite hard to find (I am not sure they would even support GL3).
That being said, we can introduce a uint32_t based matrix, although I am not sure who would need such a mammoth matrix. But no, it won't amount to clutter if we implement it neatly.
👍 If you make the decision, I am with you.

@MrNex
Owner

MrNex commented Jul 9, 2015

Ah, sorry I forgot about word alignment.. You are correct, uint8_t would have no benefits. I agree uint16_t and uint32_t types will be the way to go.
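
A quick standalone way to see the padding in action (a throwaway check, not NGen code):

#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t  numRows, numColumns; float* components; } Mat8;
typedef struct { uint16_t numRows, numColumns; float* components; } Mat16;

int main(void)
{
    /* On a typical 32-bit build both print 8: the pointer's
     * alignment pads the smaller integer fields out to the
     * same overall size. */
    printf("%zu %zu\n", sizeof(Mat8), sizeof(Mat16));
    return 0;
}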

As for following in other engines' footsteps, you are correct, we absolutely should not. The beauty of the C programming language is the memory oriented aspects of it. I will admit, matrices larger than 4x4 (unless you are dealing with stress/strain matrices in order to facilitate finite element methods of soft body physics simulation) are very rare-- but the real beauty of allowing this is that you don't necessarily need one Vector (or Matrix) per object in order to utilize the functionality. Err.. Let me provide an example:

Second order Newton-Euler integration states that:

X1 = X0 + V0 * dt + 0.5 * A0 * dt * dt

This must be performed every timestep for every GObject which contains a RigidBody component. There is no overlap of vectors here.. Say we have 30000 GObjects which we must perform this algorithm on. If we had a RigidBodyManager component which stored ALL RigidBodies' velocity and acceleration components in one DynamicArray, allowing for contiguous storage of the data, you could create 2 Vectors with a dimension equal to the number of RigidBodies * 3, and point their components to the data field of the DynamicArray. This would let you effectively integrate every RigidBody in the engine in one calculation!
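
A rough sketch of the idea (the struct and function here are hypothetical; note the uint32_t dimension is deliberate, since 30000 bodies already means 90000 components, past the uint16_t limit):

#include <stdint.h>

/* Hypothetical large-dimension Vector, aliasing a manager's
 * contiguous component storage. */
typedef struct BigVector
{
    uint32_t dimension;
    float* components;
} BigVector;

/* X1 = X0 + V0 * dt + 0.5 * A0 * dt * dt for every body in one pass. */
void IntegrateAll(BigVector* x, const BigVector* v, const BigVector* a, float dt)
{
    for(uint32_t i = 0; i < x->dimension; i++)
        x->components[i] += v->components[i] * dt
                          + 0.5f * a->components[i] * dt * dt;
}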

Furthermore, in the future when I develop the AcceleratedVector and AcceleratedMatrix libraries again, as were partly available in the (now deprecated) Visual Studio version of the NGen, this could facilitate faster transport of the information to the GPU for massively accelerated calculations.

I see much potential in this sort of thing. I agree uint32_t might be a little bit large... but with each day memory limits become less of a concern-- soon somebody might find practical reasons for such massive matrices. Perhaps if I dig up some old linear algebra notes I can look into Matrix Partitioning again, and come up with a use myself involving integration of angular mechanics.

However, for now it would be best to stick with uint16_t as the primary implementation and plan for uint32_t support in the future!
