Feature Description

When using the HTTP API, k6 generates six metrics measuring the different stages of an HTTP request. From the HTTP built-in metrics docs:
http_req_blocked
http_req_connecting
http_req_tls_handshaking
http_req_sending
http_req_waiting
http_req_receiving
It also creates the http_req_duration metric, which measures ONLY the time the SUT spent processing and responding to the request: the sum of http_req_sending + http_req_waiting + http_req_receiving (i.e., excluding the time spent establishing the connection).
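To illustrate that decomposition, here is a plain JavaScript sketch using an object shaped like k6's Response.timings (the field names are the real k6 ones; the millisecond values are invented for the example):

```javascript
// Invented Response.timings values (milliseconds) for a single request.
const timings = {
  blocked: 2,          // http_req_blocked (includes DNS lookup)
  connecting: 12,      // http_req_connecting (TCP connect)
  tls_handshaking: 25, // http_req_tls_handshaking
  sending: 1,          // http_req_sending
  waiting: 90,         // http_req_waiting (time to first byte)
  receiving: 5,        // http_req_receiving
  duration: 96,        // http_req_duration = sending + waiting + receiving
};

// http_req_duration deliberately excludes the connection stages:
console.log(timings.sending + timings.waiting + timings.receiving); // 96
```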
This blog post describes how other tools, such as JMeter and Gatling, measure response time. These tools measure it differently, including some of the connection-establishment stages.
This confusion about how we measure latency comes up frequently. To align with other tools and/or mitigate the confusion, we could explore creating a new HTTP metric that sums all the stages of the HTTP request - http_req_total_duration?
Suggested Solution (optional)
No response
Already existing or connected issues / PRs (optional)
No response
There are a few problems with this, on both the technical and UX sides. But first, if anyone stumbles on this issue, there is an easy workaround you can use without waiting for us to add this to k6. Just create a simple Trend custom metric and .add() to it all of the values described in the OP from the Response.timings object of the HTTP response.
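To make that workaround concrete, here is a sketch of the summing logic as a plain function over a timings-shaped object. In an actual k6 script you would import Trend from 'k6/metrics', create the metric once in the init context, and call .add() after each request with res.timings; the metric name and the sample values below are illustrative assumptions, not part of k6 itself:

```javascript
// Workaround sketch: sum all six request stages from Response.timings.
// In a real k6 script this would be wired up roughly as:
//   import { Trend } from 'k6/metrics';
//   const totalDuration = new Trend('http_req_total_duration', true); // name is illustrative
//   ...and inside the default function, after each request:
//   totalDuration.add(totalRequestTime(res.timings));
function totalRequestTime(t) {
  return (
    t.blocked +         // includes DNS lookup
    t.connecting +      // TCP connect
    t.tls_handshaking + // TLS handshake
    t.sending +
    t.waiting +
    t.receiving
  );
}

// Example with invented millisecond values:
const example = {
  blocked: 2, connecting: 12, tls_handshaking: 25,
  sending: 1, waiting: 90, receiving: 5,
};
console.log(totalRequestTime(example)); // 135
```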
To get to the UX issues first: in most cases, it's probably a bad idea to add together the time it took the client to establish the connection (DNS resolution + TCP connection + TLS handshaking) and the time it actually took to send the HTTP request and receive the response back (which in k6 is measured by http_req_duration). The time to establish the connection can be quite noisy and inconsistent between different iterations of even the same VU, or even within a single VU iteration, unless users know what they are doing and what and how they are measuring. Some of the sources of inconsistency are:
- we'll soon have HTTP/3 with its own magic optimizations
All of these (and probably other) considerations might cause k6 to legitimately measure valid values for http_req_connecting or http_req_tls_handshaking in one request and then measure 0 for the next request to the same host, if the connection got reused or something got optimized at the protocol level. FWIW, http_req_blocked currently includes the DNS lookup times (though we plan to have a separate metric just for that, see #1011), and it may be similarly affected. This inconsistency makes any analysis or thresholds on a value that includes http_req_connecting, http_req_tls_handshaking and http_req_blocked quite error-prone, unless people really know what they are doing and how k6 (and HTTP clients in general) work.
So, those were the UX/documentation issues - any such http_req_total_duration metric should come with a hefty warning that users need to know what they are doing. On the k6 side, we probably don't want to introduce another built-in Trend metric until we have a way to not enable it by default (#1321). And ideally, we should also reduce the memory needed for Trend metrics (#763) before we add a new one that carries no new information 🤔