During development, my colleagues and I always provide a seed via a generic, and all sources of randomisation (RandomPTypes, Coverage) have their seeds initialised via calls to InitSeed using some derivative of that generic. But we are all guilty, at some point, of forgetting an InitSeed somewhere, which means the randomised tests all become the same test and the seed has no effect on the test stimulus. This can give the false impression that a test is being randomised on each run with a different seed generic when it is not.
It would be useful if RandomPType (which I assume is also the source of randomness for Coverage) knew whether it has been seeded, and threw a WARNING (by default) if it has not. There should also be a global option to promote this WARNING to an ERROR so we can easily catch this problem during CI (we currently allow warnings to occur during tests).
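For context, a minimal sketch of the seeding pattern described above. All names here (g_TestSeed, the testbench entity) are illustrative, not part of OSVVM; only RandomPType and InitSeed are real OSVVM identifiers:

```vhdl
library osvvm ;
use osvvm.RandomPkg.all ;

entity tb_seed_example is
  -- Assumed top-level seed generic, typically set from the simulator command line
  generic ( g_TestSeed : integer := 5 ) ;
end entity ;

architecture sim of tb_seed_example is
begin
  StimProc : process
    variable RV   : RandomPType ;
    variable Data : integer ;
  begin
    -- Deriving a per-process seed from the generic keeps processes
    -- decorrelated while the whole run stays reproducible from one value.
    -- Omitting this call is the failure mode the issue describes: the
    -- process silently falls back to the default seed on every run.
    RV.InitSeed( g_TestSeed + 1 ) ;
    Data := RV.RandInt(0, 255) ;
    wait ;
  end process ;
end architecture ;
```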
For Coverage using the singleton, when NewID is called, the seed is automatically initialized to:
InitSeed( NewCoverageID, Name & string'(GetAlertLogName(ParentID))) ;
So the only issue is with RandomPType.
When would a WARNING/ERROR be thrown by randomization with RandomPType? Would the check be done during randomization itself? That would be ugly, as it implies additional overhead on every randomization. Not sure I would want to add that sort of overhead just to make sure everyone called InitSeed.
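To make the overhead concrete, here is one possible shape for such a check, written as a standalone protected type rather than the actual OSVVM implementation (CheckedRandomPType, SeedInitialized, and the seed arithmetic are all hypothetical; a real version would use OSVVM's Alert mechanism so the severity could be promoted globally):

```vhdl
library ieee ;
use ieee.math_real.all ;

package SeedCheckPkg is
  type CheckedRandomPType is protected
    procedure InitSeed ( I : integer ) ;
    impure function RandInt ( Min, Max : integer ) return integer ;
  end protected ;
end package ;

package body SeedCheckPkg is
  type CheckedRandomPType is protected body
    -- Default seeds: every object starts identical unless InitSeed is called
    variable Seed1, Seed2       : positive := 7 ;
    variable SeedInitialized    : boolean  := FALSE ;

    procedure InitSeed ( I : integer ) is
    begin
      -- Keep seeds inside the ranges required by math_real.uniform
      Seed1 := I mod 2147483398 + 1 ;
      Seed2 := I mod 2147483562 + 1 ;
      SeedInitialized := TRUE ;
    end procedure ;

    impure function RandInt ( Min, Max : integer ) return integer is
      variable R : real ;
    begin
      -- The per-call overhead discussed above is this single boolean test.
      -- Setting the flag after reporting means one message per object,
      -- not a flood on every randomization.
      if not SeedInitialized then
        report "randomization used before InitSeed was called"
          severity warning ;
        SeedInitialized := TRUE ;
      end if ;
      uniform(Seed1, Seed2, R) ;
      return Min + integer(trunc(R * real(Max - Min + 1))) ;
    end function ;
  end protected body ;
end package body ;
```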
We could have a central randomization object that is used exclusively to randomize seeds at startup. That way, rather than a static default, every randomization object would end up with a unique seed value. It would not be deterministic, though: if the processes ran in a different order, each process would get a different seed. So that is not too good.
We could have EndOfTestReports check whether the seeds were ever initialized and report that at the end of the test. That sort of overhead would be acceptable. Oops, that is not possible, since the randomization objects are separate shared variables and not a singleton. But maybe if we keep throwing things at the wall, something that works will emerge.