
Allow Metrics to be created outside of init context #3702

Open
xresch opened this issue Apr 22, 2024 · 3 comments


xresch commented Apr 22, 2024

Feature Description

Situation:
I want to create trend metrics dynamically during script execution, but I receive the following error:

ERRO[0000] GoError: metrics must be declared in the init context
        at start (file:///C:/Users/a42137/Desktop/StepMonitoringProjects/000_template_k6/src/main/resources/scripts/K6_AcmeScript.js:113:31(12))
        at file:///C:/Users/a42137/Desktop/StepMonitoringProjects/000_template_k6/src/main/resources/scripts/K6_AcmeScript.js:87:8(16)  executor=per-vu-iterations scenario=default source=stacktrace

Data:
In load and performance testing and monitoring, it is a common best practice to create custom duration metrics that measure specific parts of a script. What I do in most tools would look like this in k6 (the code does not work because of this issue):

import { Trend } from 'k6/metrics';

/************************************************************
* Globals
*************************************************************/
var MEASURE_START_TIMES = {};
var MEASURE_TRENDS = {};

/************************************************************
* Main method
*************************************************************/
export default function() {
	
	var success = true;
	var usecaseName = "010_AcmeWebsite"
	var measureName;
	
	//-----------------------------------
	// Call URL
	//-----------------------------------
	measureName = usecaseName+"_010_Open_Homepage";
	start(measureName);
		// calls the URL normally if authMethod is null
		var response = callURLWithAuth(URL, AUTH_METHOD, AUTH_USER, AUTH_PASSWORD); 
	stop(measureName);
	
	//-----------------------------------
	// Check HTTP Status
	//-----------------------------------
	measureName = usecaseName+"_020_Execute_Checks";
	start(measureName);
		checkHttpStatus(response, HTTP_STATUS);
		checkContains(response, CONTAINS);
	stop(measureName);
}

/***********************************************************************
 * Starts a measurement with the given name.
 ***********************************************************************/
function start(measureName) {
	MEASURE_START_TIMES[measureName] = Date.now();
	MEASURE_TRENDS[measureName] = new Trend(measureName);
}

/***********************************************************************
 * Stops a measurement with the given name and reports it to K6 as a 
 * trend. Does nothing if there was no start.
 ***********************************************************************/
function stop(measureName) {
	let trend = MEASURE_TRENDS[measureName];
	
	if(trend != null) {
		let startTime = MEASURE_START_TIMES[measureName];
		let stopTime = Date.now();
		trend.add(stopTime - startTime);
	}
}

The problem with declaring metrics in the init context is that it makes the code harder to write and maintain.
If I want 15 custom duration metrics, I have to initialize every single one of them, and then pass each of those constants to two functions (start/stop) to get proper duration metrics.
This is prone to copy-paste issues and unnecessarily bloats the code.
Here is an example:

import { Trend } from 'k6/metrics';

/************************************************************
* Globals
*************************************************************/
var MEASURE_START_TIMES = {};

var usecaseName = "010_AcmeWebsite"
const MEASURE_010_LOGIN  = new Trend(usecaseName+"_010_Login");
const MEASURE_020_OPEN_TRADES  = new Trend(usecaseName+"_020_Open_Trades");
const MEASURE_030_START_TRADE  = new Trend(usecaseName+"_030_Start_Trade");
const MEASURE_040_SEARCH_EQUITY  = new Trend(usecaseName+"_040_Search_Equity");
// [...] imagine another 10 of those declarations ... 

/************************************************************
* Main method
*************************************************************/
export default function() {
	
	var success = true;

	var measureName;
	
	//-----------------------------------
	// Login
	//-----------------------------------
	start(MEASURE_010_LOGIN);
		// login logic
	stop(MEASURE_010_LOGIN);
	
	//-----------------------------------
	// Check HTTP Status
	//-----------------------------------
	start(MEASURE_020_OPEN_TRADES);
		// open trades logic
	stop(MEASURE_020_OPEN_TRADES);
	
	//-----------------------------------
	// [...] Another 15 of those blocks
	//-----------------------------------
}

/***********************************************************************
 * Starts a measurement with the given name.
 ***********************************************************************/
function start(trend) {
	MEASURE_START_TIMES[trend.name] = Date.now();
}

/***********************************************************************
 * Stops a measurement with the given name and reports it to K6 as a 
 * trend. Does nothing if there was no start.
 ***********************************************************************/
function stop(trend) {
	let startTime = MEASURE_START_TIMES[trend.name];
	
	if(startTime != null) {
		let stopTime = Date.now();
		trend.add(stopTime - startTime);
	}
}

Suggested Solution (optional)

A solution would be to allow creating metrics anywhere, not only in the init context. Basically every tool I have used for testing so far supports this (HP LoadRunner, Silk Performer, Playwright, Selenium, etc.), and creating custom measurement blocks is often considered a best practice.

I am not sure how it would work in k6; solutions I have seen in other tools:
a) A start(measureName)/stop(measureName) pair of functions that allows for easy measurement of time.
b) A factory method like "Metrics.getOrCreateMeasure(trendName)" that returns an instance to report metrics to.
c) A reporting mechanism like "Measurements.reportData(measureName, value)" that adds a value to a dataset which is later included in the output.

I was actually quite surprised that I couldn't find anything like this in k6.
Maybe it already exists and I just haven't looked hard enough. 🤔

Already existing or connected issues / PRs (optional)

No response

mstoykov (Collaborator) commented May 8, 2024

Hi @xresch, sorry for the slow reply 🙇 and thank you for bringing this up.

This restriction has existed since the very first version of k6, and we have carried it over.

It is currently used to do threshold validation and other checks in some places, but arguably it is not relied on throughout the codebase.

From a code perspective, though, lifting it will require quite a lot of refactoring.

cc @grafana/k6-core, @sniku, and @dgzlopes: are there reasons why it works this way? Do we definitely want to keep it?

The recommended practice here is to use tags for the different parts instead of having entirely separate metrics.

About the separate API for a metric factory and co: arguably, if you can only create metrics in the init context and can only emit them outside of it, that seems less useful.

But I am not against a separate issue proposing an additional API for metric definition.

@mstoykov mstoykov removed the triage label May 8, 2024
xresch (Author) commented May 8, 2024

Hi @mstoykov,

thanks a lot for looking into this issue.
An additional API to create metrics during actual script execution would also be a workable solution for me.

codebien (Collaborator) commented

Hi @xresch,
it seems you're mostly measuring durations, which are already available as metrics in k6.

Isn't the group API an option for you? It uses the same tagging principle suggested by @mstoykov: it generates metrics by adding a group tag based on the assigned group name.

If not, can you explain why, please? Do you expect to measure async code?
