
Releases: grafana/k6

v0.29.0 (11 Nov 2020)

k6 v0.29.0 is here! 🎉 It's a feature-packed release with tons of much-requested changes and additions, a lot of them implemented by awesome external contributors! ❤️

As promised in the previous release notes, we're trying to stick to a roughly 8-week release cycle, so you can expect the next k6 version at the start of January 2021, barring any bugfix releases before that.

New features

Initial support for gRPC (#1623)

k6 now supports unary gRPC calls via the new k6/net/grpc built-in module. Streaming RPCs are not yet supported and the JS API is in beta, so there might be slight breaking changes to the API in future k6 versions, but it's a good start on the road to fully supporting this much-requested protocol!

This is a simple example of how the new module can be used with grpcb.in:

import grpc from "k6/net/grpc";

let client = new grpc.Client();
// Download addsvc.proto for https://grpcb.in/, located at:
// https://raw.githubusercontent.com/moul/pb/master/addsvc/addsvc.proto
// and put it in the same folder as this script.
client.load(null, "addsvc.proto");

export default () => {
    client.connect("grpcb.in:9001", { timeout: "5s" });

    let response = client.invoke("addsvc.Add/Sum", {
        a: 1,
        b: 2
    });
    console.log(response.message.v); // should print 3

    client.close();
}

You can find more information and examples of how to use k6's new gRPC testing capabilities in our documentation.

Huge thanks to @rogchap for adding this feature!

New options for configuring DNS resolution (#1612)

You can now control some aspects of how k6 performs DNS resolution! Previously, k6 cached DNS responses indefinitely (#726) and always picked the first resolved IP (#738) for all connections. This caused issues, especially when load testing services that rely on DNS for load balancing or auto-scaling.

For technical reasons explored in #726, k6 v0.29.0 still doesn't respect the actual TTL values of DNS records; that will be fixed in a future k6 version. For now, it simply allows users to manually specify a global static DNS TTL value and resolution strategy. It also has better defaults! By default, the global DNS TTL value is now 5 minutes and, if DNS resolution returns multiple IPs, k6 will pick a random (preferably IPv4) one for each connection.

You can also configure this behavior with the new --dns CLI flag, the K6_DNS environment variable, or the dns script/JSON option. Three DNS resolution options are exposed in this k6 version: ttl, select, and policy.

Possible ttl values are:

  • 0: no caching at all - each request will trigger a new DNS lookup.
  • inf: cache any resolved IPs for the duration of the test run (the old k6 behavior).
  • any time duration like 60s, 5m30s, 10m, 2h, etc.; if no unit is specified (e.g. ttl=3000), k6 assumes milliseconds. The new default value is 5m.

Possible select values are:

  • first - always pick the first resolved IP (the old k6 behavior).
  • random - pick a random IP for every new connection (the new default value).
  • roundRobin - iterate sequentially over the resolved IPs.

Possible policy values are:

  • preferIPv4: use IPv4 addresses, if available, otherwise fall back to IPv6 (the new default value).
  • preferIPv6: use IPv6 addresses, if available, otherwise fall back to IPv4.
  • onlyIPv4: only use IPv4 addresses, ignore any IPv6 ones.
  • onlyIPv6: only use IPv6 addresses, ignore any IPv4 ones.
  • any: no preference, use all addresses (the old k6 behavior).

Here are some configuration examples:

k6 run --dns "ttl=inf,select=first,policy=any" script.js # this is the old k6 behavior
K6_DNS="select=random,ttl=5m,policy=preferIPv4" k6 cloud script.js # new default behavior
# syntax for the JSON config file or for the exported script `options`:
echo '{"dns": {"select": "roundRobin", "ttl": "1h33m7s", "policy": "onlyIPv6"}}' > config.json
k6 run --config "config.json" script.js
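
The same settings can also be set from the script itself via the dns option. A minimal sketch (the specific values here are just illustrative):

export let options = {
    dns: {
        ttl: '5m',
        select: 'random',
        policy: 'preferIPv4',
    },
};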

Support for Go extensions (#1688)

After some discussions (#1353) and exploration of different approaches for Go-based k6 extensions, we've settled on adopting something very similar to Caddy's extensions. In short, xk6 (modeled after xcaddy) is a small stand-alone tool that can build custom k6 binaries with third-party extensions bundled in. The extensions can be simple Git repositories (no central infrastructure needed!) containing Go modules. They are fully compiled, not interpreted, and become part of the final custom k6 binary that users build with xk6.

xk6 is not yet stable or documented, so extension authors will likely struggle until we stabilize and document everything in the coming weeks. The important part is that the k6 changes required for xk6 to work were implemented in #1688, so k6 v0.29.0 is the first version compatible with xk6!

Expect more information soon, but for a brief example, xk6 will work somewhat like this:

xk6 build v0.29.0 --with github.com/k6io/xk6-k8s --with github.com/k6io/xk6-sql@v0.1.1

./k6 run some-script-with-sql-and-k8s.js

Thanks, @andremedeiros, for pushing us to add plugins in k6 and for making a valiant attempt to harness Go's poor plugin API! Thank you, @mardukbp, for pointing us towards the xcaddy approach and explaining its benefits!

Support for setting local IPs, potentially from multiple NICs (#1682)

You can now specify a list of source IPs, IP ranges, and CIDRs for k6 run, from which VUs will make requests, via the new --local-ips CLI flag or K6_LOCAL_IPS environment variable. The IPs are handed out to VUs sequentially, allowing you to distribute load between different local addresses. This option doesn't change anything at the OS level, so the IPs need to already be configured on the machine for k6 to be able to use them.

The biggest use case for this feature is splitting the network traffic from k6 between multiple network adapters, thus potentially greatly increasing the available network throughput. For example, if you have 2 NICs, you can run k6 with --local-ips="<IP-from-first-NIC>,<IP-from-second-NIC>" to balance the traffic equally between them - half of the VUs will use the first IP and the other half will use the second. This can scale to any number of NICs, and you can repeat some local IPs to give them more traffic. For example, --local-ips="<IP1>,<IP2>,<IP3>,<IP3>" will split VUs between 3 different source IPs in a 25%:25%:50% ratio.
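
For example, assuming two local addresses 192.168.0.10 and 192.168.0.11 are already configured on the machine (the addresses here are hypothetical):

k6 run --local-ips="192.168.0.10,192.168.0.11" script.js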

Thanks to @ofauchon, @srguglielmo, and @divfor for working on previous iterations of this!

New option for blocking hostnames (#1666)

You can now block network traffic by hostname with the new --block-hostnames CLI flag / K6_BLOCK_HOSTNAMES environment variable / blockHostnames JS/JSON option. Wildcards are also supported at the beginning of a hostname, allowing you to easily block a domain and all of its subdomains. For example, this will make sure k6 never attempts to connect to any k6.io subdomain (test.k6.io, test-api.k6.io, etc.) or to www.example.com:

export let options = {
  blockHostnames: ["*.k6.io", "www.example.com"],
};

Thanks to @krashanoff for implementing this feature!

UX and enhancements

  • HTTP: The gjson library k6 uses for handling the HTTP Response.json(selector) behavior was updated, so we now support more modifiers like @flatten and multipaths (#1626). Thanks, @sondnm!
  • HTTP: The status text returned by the server can now be accessed from the new Response.status_text field (#1649). Thanks, @lcd1232!
  • HTTP: --http-debug now emits extra UUID values that can be used to match HTTP requests and their responses (#1644). Thanks, @repl-david-winiarski!
  • Logging: A new allowedLabels sub-option is added to the Loki configuration (#1639).
  • Cloud: when aborting a k6 cloud test with Ctrl+C, k6 will now wait for the cloud service to fully abort the test run before returning. A second Ctrl+C will cause it to immediately exit (#1647), (#1705). Thanks, @theerapatcha!
  • JS: k6 will now attempt to recover from Go panics that occur in VU code, so they will be treated similarly to JS exceptions (#1697). This is just a precaution that should never be needed - panics should not happen, and if one occurs, please report it in our issue tracker, since it's most likely a bug in k6.

Bugs fixed!

  • JS: goja, the JS runtime k6 uses, was updated to its latest version, to fix some issues with regular expressions after its previous update (#1707).
  • JS: Prevent loops with --compatibility-mode=extended when Babel can transpile the code but goja can't parse it (#1651).
  • JS: Fixed a bug that rarely caused a context canceled error message to be shown ([#1677](https://github.com/loadimpact/k6/pull...

v0.28.0 (21 Sep 2020)

k6 v0.28.0 is here! 🎉 It's a small release that adds some much-requested features and a few important bugfixes!

Starting with this release, we'll be trying to stick to a new 8-week fixed release schedule for new k6 versions. This release comes ~8 weeks after v0.27.0 was released, and k6 v0.29.0 should be released in mid-November.

New features and enhancements!

Cloud execution logs (#1599)

Logs from distributed k6 cloud test runs will now be shown in the terminal that executed the k6 cloud command, as well as in the k6 cloud web app on app.k6.io! 🎉 This means that, if your script contains console.log() / console.warn() / etc. calls, or some of your requests or iterations fail, you'll be able to see that and debug it much more easily! Even --http-debug data should be proxied, up to 10000 bytes per message. To prevent abuse and avoid overwhelming user terminals, cloud logs are rate-limited at 10 messages per second per instance for now, but that should be more than enough to debug most issues!

This feature is enabled by default, though you can disable it with k6 cloud --show-logs=false script.js.

Pushing k6 logs to loki (#1576)

k6 can now push its execution logs to a loki server! This can be done via the new --log-output CLI flag or the K6_LOG_OUTPUT environment variable option. For example, k6 run --log-output "loki=https://my-loki-server/loki/api/v1/push,limit=100,level=info,msgMaxSize=10000" will push up to 100 k6 log messages per second, of severity INFO and up, truncated to 10000 bytes, to https://my-loki-server.

Optional port to host mappings (#1489)

@calavera extended the host mapping feature: you can now specify different port numbers via the hosts option, like this:

import http from 'k6/http';

export let options = {
    hosts: {
        'test.k6.io': '127.0.0.1:8080',
    },
};

Support for specifying data types to InfluxDB fields (#1395)

@TamiTakamiya added support for specifying the data type (int/float/bool/string) of fields that are emitted to InfluxDB outputs.

In order to specify the data type, you should:

  • Use the K6_INFLUXDB_TAGS_AS_FIELDS environment variable, which specifies which k6 metric tag values should be sent as non-indexable fields (instead of tags) to an InfluxDB output. It is a comma-separated string and has now been extended to optionally allow specifying a data type for each name.
  • Each pair of field name and data type is written in the format (name):(data_type), for example event_processing_time:int.
  • One of four data types (int, float, bool and string) can be specified for each field name.
  • When the colon and data_type are omitted, for example transaction_id, the field is interpreted as a string.

A complete example can look like this: export K6_INFLUXDB_TAGS_AS_FIELDS="vu:int,iter:int,url:string,boolField:bool,floatField:float"

Note: If you have existing InfluxDB databases that contain fields whose data types are different from the ones that you want to save in future k6 test executions, you may want to create a new database or change field names, as InfluxDB currently offers limited support for changing a field's data type. See the InfluxDB documentation for more details.

Support for automatic gzip-ing of the CSV output result (#1566)

@thejasbabu added support for gzip-compressing the file emitted by the CSV output on the fly. To use it, simply append .gz to the end of the file name, like this: k6 run --out csv=test.csv.gz test.js

UX

  • Various spacing and progress bar rendering issues were improved (#1580).
  • The k6 ASCII logo was made a bit more proportional (#1615). Thanks, @rawtaz!
  • The docker-compose example setup from the k6 repo now contains a built-in simple dashboard (#1610). Thanks, @jeevananthank!
  • Some logs now have a source field specifying if a log comes from console, http-debug or stacktrace (when an exception has bubbled to the top of the iteration).

Bugs fixed!

  • Network: IPv6 support was fixed as a part of the new hosts port mapping (#1489). Thanks, @calavera!
  • Metrics: Fixed the wrong name metric tag for redirected requests (#1474).
  • UI: Fixed a divide by zero panic caused by some unusual execution environments that present a TTY, but return 0 for the terminal size (#1581).
  • Config: Fixed the parsing of K6_DATADOG_TAG_BLACKLIST (#1602).
  • Config: Fixed marshaling of tlsCipherSuites and tlsVersion (#1603). Thanks, @berndhartzer!
  • WebSockets: Fixed a ws.SetTimeout() and ws.SetInterval() panic when float values were passed (#1608).

Internals

  • goja, the JavaScript runtime k6 uses, was updated to the latest version. This means that k6 with --compatibility-mode=base now supports some standard library features from ES6 (goja's PR), though no new syntax yet. In future versions we plan to drop some current core.js modules that are no longer needed, which should greatly reduce memory usage per VU (#1588).
  • Go modules are now used to manage the k6 dependencies instead of dep (#1584).

Breaking changes

  • k6 cloud will now proxy execution logs back to the client machine. To disable this behavior, use k6 cloud --show-logs=false.

  • --http-debug request and response dumps are now emitted through the logging sub-system, to facilitate the cloud log proxying (#1577).

v0.27.1 (30 Jul 2020)

k6 v0.27.1 is a minor release with a few bugfixes and almost no functional changes compared to v0.27.0.

The biggest fix was resolving a panic (and some k6 login errors) when k6 was run through Git Bash / Mintty on Windows (#1559).

k6 will now work in those terminals; however, if you're using Git Bash or Mintty as your terminal on Windows, you might not get the best user experience out of k6. Consider using a different terminal like Windows Terminal, PowerShell, or Cmder. Alternatively, to work around the issues with incompatible terminals, you can try running k6 through winpty, which should already be preinstalled in your Git Bash environment: winpty k6 run script.js.

If you're using the Windows Subsystem for Linux (WSL), you are probably going to get a better experience by using the official Linux k6 binary or .deb package. For all other cases of running k6 on Windows, the normal k6 Windows binary / .msi package should work well.

Other minor fixes and changes:

  • The Go version that k6 is compiled with was updated to 1.14.6, to incorporate the latest Go fixes (#1563).
  • If the throw option is enabled, warnings for failed HTTP requests will no longer be logged to the console (#1199).
  • Metric sample packets sent to the cloud with k6 run --out cloud can now be sent in parallel via the new K6_CLOUD_METRIC_PUSH_CONCURRENCY option, with a default value of 1 (#1569).
  • The gracefulRampDown VU requirement calculations for the ramping-vus executor were greatly optimized for large test runs (#1567).
  • Fixed a rare bug where dropped_iterations wouldn't be emitted by the per-vu-iterations executor on time due to a race (#1357).
  • Fixed metrics from setup() and teardown(), including checks, not being correctly shown in local k6 runs (#949).

v0.27.0 (14 Jul 2020)

k6 v0.27.0 is here! 🎉

This is a milestone release containing a major overhaul to the execution subsystem of k6, along with many improvements and bug fixes.

New features and enhancements!

New execution engine (#1007)

After 1.5 years in the making, the k6 team is proud to release the first public version of the new execution engine, offering users new ways of modeling advanced load testing scenarios that can more closely represent real-world traffic patterns.

These new scenarios are entirely optional, and the vast majority of existing k6 scripts and options should continue to work the same as before. There are several minor breaking changes and fixes of previously undefined behavior, so please create a new issue if you run into a problem we haven't explicitly noted as a breaking change.

See the documentation for details and examples, or keep reading for the summary.

New executors

Some of the currently possible script execution patterns were formalized into standalone executors:

  • shared-iterations: a fixed number of iterations are "shared" by all VUs, and the test ends once all iterations are executed. This executor is equivalent to the global vus and iterations (plus optional duration) options.
  • constant-vus: a fixed number of VUs execute as many iterations as possible for a specified amount of time. This executor is equivalent to the global vus and duration options.
  • ramping-vus: a variable number of VUs execute as many iterations as possible for a specified amount of time. This executor is equivalent to the global stages option.
  • externally-controlled: control and scale execution at runtime via k6's REST API or the CLI.

You can still use the global vus, iterations, duration, and stages options; they are not deprecated! They are just transparently converted to one of the above executors under the hood. If your test run needs just a single, simple scenario, you may never need to use more than these shortcut options. For more complicated use cases, however, you can now fine-tune any of these executors with additional options, and use multiple different executors in the same test run, via the new scenarios option, described below.

Additionally, besides the 4 "old" executor types, there are 3 new executors, added to support some of the most frequently requested load testing scenarios that were previously difficult or impossible to model in k6:

  • per-vu-iterations: each VU executes a fixed number of iterations (#381).
  • constant-arrival-rate: iterations are started at a specified fixed rate, for a specified duration. This allows k6 to dynamically change the amount of active VUs during a test run, to achieve the specified amount of iterations per period. This can be very useful for a more accurate representation of RPS (requests per second), for example. See #550 for details.
  • ramping-arrival-rate: a variable number of iterations are executed in a specified period of time. This is similar to the ramping VUs executor, but instead of specifying how many VUs should loop through the script at any given point in time, you specify how many iterations per second k6 should execute at that point in time.

It's important to also note that all of these executors, except the externally-controlled one, can be used both in local k6 execution with k6 run, and in the distributed cloud execution with k6 cloud. This even includes "old" executors that were previously unavailable in the cloud, like the shared-iterations one. Now, you can execute something like k6 cloud --iterations 10000 --vus 100 script.js without any issues.

Execution scenarios and executor options

Multiple execution scenarios can now be configured in a single test run via the new scenarios option. These scenarios can run both sequentially and in parallel, and can independently execute different script functions, have different executor types and execution options, and have custom environment variables and metrics tags.

An example using 3 scenarios:

import http from 'k6/http';
import { sleep } from 'k6';

export let options = {
    scenarios: {
        my_web_test: { // some arbitrary scenario name
            executor: 'constant-vus',
            vus: 50,
            duration: '5m',
            gracefulStop: '0s', // do not wait for iterations to finish in the end
            tags: { test_type: 'website' }, // extra tags for the metrics generated by this scenario
            exec: 'webtest', // the function this scenario will execute
        },
        my_api_test_1: {
            executor: 'constant-arrival-rate',
            rate: 90, timeUnit: '1m', // 90 iterations per minute, i.e. 1.5 RPS
            duration: '5m',
            preAllocatedVUs: 10, // the size of the VU (i.e. worker) pool for this scenario
            maxVUs: 10, // we don't want to allocate more VUs mid-test in this scenario

            tags: { test_type: 'api' }, // different extra metric tags for this scenario
            env: { MY_CROC_ID: '1' }, // and we can specify extra environment variables as well!
            exec: 'apitest', // this scenario is executing different code than the one above!
        },
        my_api_test_2: {
            executor: 'ramping-arrival-rate',
            startTime: '30s', // the ramping API test starts a little later
            startRate: 50, timeUnit: '1s', // we start at 50 iterations per second
            stages: [
                { target: 200, duration: '30s' }, // go from 50 to 200 iters/s in the first 30 seconds
                { target: 200, duration: '3m30s' }, // hold at 200 iters/s for 3.5 minutes
                { target: 0, duration: '30s' }, // ramp down back to 0 iters/s over the last 30 seconds
            ],
            preAllocatedVUs: 50, // how large the initial pool of VUs would be
            maxVUs: 100, // if the preAllocatedVUs are not enough, we can initialize more

            tags: { test_type: 'api' }, // different extra metric tags for this scenario
            env: { MY_CROC_ID: '2' }, // same function, different environment variables
            exec: 'apitest', // same function as the scenario above, but with different env vars
        },
    },
    discardResponseBodies: true,
    thresholds: {
        // we can set different thresholds for the different scenarios because
        // of the extra metric tags we set!
        'http_req_duration{test_type:api}': ['p(95)<250', 'p(99)<350'],
        'http_req_duration{test_type:website}': ['p(99)<500'],
        // we can reference the scenario names as well
        'http_req_duration{scenario:my_api_test_2}': ['p(99)<300'],
    }
};

export function webtest() {
    http.get('https://test.k6.io/contacts.php');
    sleep(Math.random() * 2);
}

export function apitest() {
    http.get(`https://test-api.k6.io/public/crocodiles/${__ENV.MY_CROC_ID}/`);
    // no need for sleep() here, the iteration pacing will be controlled by the
    // arrival-rate executors above!
}

As shown in the example above and the documentation, all executors have some additional options that improve their flexibility and facilitate code reuse, especially in multi-scenario test runs:

  • Each executor has a startTime property, which defines at what time, relative to the beginning of the whole test run, the scenario will start being executed.
  • Executors have a new gracefulStop property that allows for iterations to complete gracefully for some amount of time after the normal executor duration is over (#879, #1033). The ramping-vus executor additionally also has gracefulRampDown, to give iterations time to finish when VUs are ramped down. The default value for both options is 30s, so it's a slight breaking change, but the old behavior of immediately interrupting iterations can easily be restored by setting these options to 0s.
  • Different executors can execute different functions other than the default exported one. This can be specified by the exec option in each scenarios config, and allows for more flexibility in organizing your tests, easier code reuse, building test suites, etc.
  • To allow for even greater script flexibility and code reuse, you can specify different environment variables and tags in each scenario, via the new env and tags executor options respectively.
  • k6 may now emit a new dropped_iterations metric in the shared-iterations, per-vu-iterations, constant-arrival-rate and ramping-arrival-rate executors; this is done if it can't run an iteration on time, depending on the configured rates (for the arrival-rate executors) or scenario maxDuration (for the iteration-based executors), so it's generally a sign of a poor config or an overloaded system under test (#1529).

We've also introduced new --execution-segment and --execution-segment-sequence options, which allow for relatively easy partitioning of test runs across multiple k6 instances. Initially this applies to the test execution (all new executor types are supported!), but opens the door to test data partitioning, an often requested feature. See #997 for more details.
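
For example, to split a test roughly 50/50 between two k6 instances, you could run something like this (a sketch of the segment syntax; adjust the fractions to your setup):

# instance 1
k6 run --execution-segment "0:1/2" --execution-segment-sequence "0,1/2,1" script.js
# instance 2
k6 run --execution-segment "1/2:1" --execution-segment-sequence "0,1/2,1" script.js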

UX

  • CLI: There are separate descriptions and real-time thread-safe progress bars for each individual executor.
  • CLI: Improve module import error message ([#1439](https://github.com/loadimpact/...

v0.26.2 (18 Mar 2020)

k6 v0.26.2 is a minor release that updates the Go version used for the Windows builds to Go 1.13.8. Due to an oversight, previous v0.26 k6 builds for Windows used an old Go version, while builds for other OSes used the correct one. This addresses an issue in the Go net/http package: golang/go#34285.

There are no functional changes compared to v0.26.1.

v0.26.1 (24 Feb 2020)

k6 v0.26.1 is here! This is a minor release that supports the rebranding of LoadImpact to k6, the new k6.io website, and the new k6 cloud service! 🎉

In practical terms, all this means for k6 is that the URLs for cloud tests will point to https://app.k6.io instead of https://app.loadimpact.com. The old URLs (and old k6 versions) will continue to work - for the next 3 months the old app and the new one will work in parallel, and after that period the old app will redirect to the new one. Nothing changes in regards to the k6 open source project and our commitment to it!

You can find more information about the rebranding in our blog post about it: https://k6.io/blog/load-impact-rebranding-to-k6

Changes in this release compared to v0.26.0:

  • Fix how HTTP request timeouts are specified internally. This is not a bug in current k6 releases; it only affects k6 if it's compiled with Go 1.14, which at this time is still not officially released. (#1261)
  • Improve the official docker image to use an unprivileged user. Thanks, @funkypenguin! (#1314)
  • Fix the unintentional sharing of __ENV between VUs, which could result in data races and crashes of k6. (#1329)
  • Update cloud URLs to point to https://app.k6.io instead of https://app.loadimpact.com. (#1335)

v0.26.0 (16 Dec 2019)

k6 v0.26.0 is here! 🎉

This release contains mostly bug fixes, though it also has several new features and enhancements! They include a new JS compatibility mode option, exporting the end-of-test summary to a JSON report file, speedups to the InfluxDB and JSON outputs, http.batch() improvements, a brand new CSV output, multiple layered HTTP response body decompression, being able to use console in the init context, a new optional column in the summary, and Docker improvements!

Thanks to @Sirozha1337, @openmohan, @MMartyn, @KajdeMunter, @dmitrytokarev and @dimatock for contributing to this release!

New features and enhancements!

A new JavaScript compatibility mode option (#1206)

This adds a way to disable the automatic script transformation by Babel (v6.4.2) and loading of core-js (v2) polyfills, bundled in k6. With the new base compatibility mode, k6 will instead rely only on the goja runtime and what is built into k6.
This can be configured through the new --compatibility-mode CLI flag and the K6_COMPATIBILITY_MODE environment variable. The possible values currently are:

  • extended: this is the default and current compatibility mode, which uses Babel and core.js to achieve ES6+ compatibility.
  • base: an optional mode that disables loading of Babel and core.js, running scripts with only goja's native ES5.1+ compatibility. If the test scripts don't require ES6 compatibility (e.g. they were previously transformed by Babel), this option can be used to reduce RAM usage during test runs.

More info on what this means can be found in the documentation.
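
For example, a script that doesn't need any ES6+ features (or was transpiled externally beforehand) could be run like this:

k6 run --compatibility-mode=base script.js
# or, equivalently:
K6_COMPATIBILITY_MODE=base k6 run script.js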

Our benchmarks show a considerable drop in memory usage - around 80% for simple scripts, and around 50% in the case of a 2MB script with a lot of static data in it. The CPU usage is mostly unchanged, except that k6 initializes test runs a lot faster. All of those benefits will be most noticeable if k6 is used with a big number of VUs (1k+). More performance comparisons can be found in #1167.

JSON export of the end-of-test summary report (#1168)

This brings back (from the very early days of k6) the ability to output the end-of-test summary data to a machine-readable JSON file.
This report can be enabled by the --summary-export <file_path> CLI flag or the K6_SUMMARY_EXPORT environment variable. The resulting JSON file will include data for all test metrics, checks and thresholds.
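
For example (the file name is arbitrary):

k6 run --summary-export summary-report.json script.js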

New CSV output (#1067)

There is an entirely new csv output that can be enabled by using the --out csv CLI flag. Two things can be configured: the output file with K6_CSV_FILENAME (by default it's file.csv), and the interval of pushing metrics to disk with K6_CSV_SAVE_INTERVAL (1 second by default). Both of those can be configured via the CLI as well: --out csv=somefile.csv will output to somefile.csv, and --out csv=file_name=somefile.csv,save_interval=2s will also output to somefile.csv, but will flush the data every 2 seconds instead of every second.

The first line of the output is the names of columns and looks like:

metric_name,timestamp,metric_value,check,error,error_code,group,method,name,proto,status,subproto,tls_version,url,extra_tags
http_reqs,1573131887,1.000000,,,,,GET,http://httpbin.org/,HTTP/1.1,200,,,http://httpbin.org/,
http_req_duration,1573131887,116.774321,,,,,GET,http://httpbin.org/,HTTP/1.1,200,,,http://httpbin.org/,
http_req_blocked,1573131887,148.691247,,,,,GET,http://httpbin.org/,HTTP/1.1,200,,,http://httpbin.org/,
http_req_connecting,1573131887,112.593448,,,,,GET,http://httpbin.org/,HTTP/1.1,200,,,http://httpbin.org/,

All thanks to @Sirozha1337!

JSON output optimizations (#1114)

The JSON output no longer blocks the goroutine sending samples to the file, but instead (like all other outputs) buffers the samples and writes them at regular intervals (every 100ms; currently not configurable). It also uses a slightly faster way of encoding the data, which should decrease the memory usage by a small amount.

Another improvement is the ability to compress the generated JSON file by simply adding .gz to the end of the file name. Compressed files are typically 30x smaller.
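
For example, this should write a gzip-compressed JSON results file:

k6 run --out json=results.json.gz script.js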

InfluxDB output improvements (#1113)

The InfluxDB output has been updated to use less memory and try to send smaller and consistent chunks of data to InfluxDB, in order to not drop packets and be more efficient. This is primarily done by sending data in parallel, as this seems to be better from a performance perspective, and more importantly, queuing data in separate packets, so that we don't send the data for a big time period all at once. Also, the used library was updated, which also decreased the memory usage.

Two new options were added:

  • K6_INFLUXDB_PUSH_INTERVAL - configures at what interval the collected data is queued to be sent to InfluxDB. By default this is "1s".
  • K6_INFLUXDB_CONCURRENT_WRITES - configures the number of concurrent write calls to InfluxDB. If this limit is reached the next writes will be queued and made when a slot is freed. By default this is 10.
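
As a rough example of tuning both (the values and the InfluxDB URL are just an illustration):

export K6_INFLUXDB_PUSH_INTERVAL=2s
export K6_INFLUXDB_CONCURRENT_WRITES=16
k6 run --out influxdb=http://localhost:8086/myk6db script.js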

console is now available in the init context (#982)

This wasn't supported for the longest time, which made debugging things outside of VU code much harder, but now it's here! 🎉

In order to get this feature shipped in a timely manner, it currently has a known bug: the output of console calls in the init context will always be written to stderr, even if the --console-output option is specified. This bug is tracked in #1131.

HTTP response body decompression with multiple layered algorithms (#1125)

In v0.25.0, support for compressing request bodies with multiple layered algorithms was added. Now the same is true for decompressing the response bodies k6 receives.

New optional count column in the end-of-test summary (#1143)

The --summary-trend-stats now also recognizes count as a valid column and will output the count of samples in all Trend metrics. This could be especially useful for custom Trend metrics, since with them you no longer need to specify a separate accompanying Counter metric.
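
For example, to add the count column alongside some of the usual Trend statistics:

k6 run --summary-trend-stats "avg,med,p(95),count" script.js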

Docker Compose refactor (#1183)

The example docker-compose setup that enables easy running of InfluxDB+Grafana+k6 was refactored, and all the images were updated to use the latest stable versions.

Thanks, @KajdeMunter!

Also, the Alpine base image version in the k6 Dockerfile was bumped to 3.10. Thanks, @dmitrytokarev!

http.batch() improvements and optimizations (#1259)

We made several small improvements to the mechanism for executing multiple HTTP requests simultaneously from a single VU:

  • Calling http.batch() should now be more efficient, especially for many requests, because of reduced locking, type conversions, and goroutine spawning.
  • The default value for batchPerHost has been reduced from 0 (unlimited) to 6, to more closely match browser behavior. The default value for the batch option remains unchanged at 20.
  • Calling http.batch(arg), where arg is an array, now returns an array. Previously, this returned an object with integer keys, as explained in #767... Now http.batch() returns an array when you pass it an array, and an object when you pass an object (see the example below).
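
Here is a small example of the array form (the URLs are only placeholders):

import http from 'k6/http';

export default function () {
    // Passing an array returns an array of responses, in the same order as the requests.
    let responses = http.batch([
        ['GET', 'https://test.k6.io/'],
        ['GET', 'https://test.k6.io/contacts.php'],
    ]);
    console.log(responses[0].status, responses[1].status);
}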

UX

  • Better timeout messages for setup and teardown timeouts, including hints on how to fix them. (#1173)
  • When a folder is passed to open(), the resulting error message will now include the path to the specified folder. (#1238)
  • The k6 version output will now include more information - the git commit it was built from (in most cases), as well as the used Go version and architecture. (#1235)

Bugs fixed!

  • Cloud: Stop sending metrics to the cloud output when the cloud returns that you have reached the limit. (#1130)
  • JS: Fail a check if an uncaught error is thrown inside of it. (#1137)
  • HTTP: Replace any user credentials in the metric sample tags with * when emitting HTTP metrics. (#1132)
  • WS: Many fixes:
    • return an error instead of panicking if an error occurs during the making of the WebSocket connection (#1127)
    • call the error handler with the actual error when closing the WebSocket, instead of calling it with null (#1118)
    • correctly handle server initiated close (#1186)
  • JSON: Better error messages when parsing JSON fails. Now telling you at which line and row the error is instead of just the offset. Thanks, @openmohan! (#905)
  • HTTP: Use the Request's GetBody so the body can be read multiple times for a single request, as needed for 308 redirects of POST requests and when the server sends GOAWAY with no error. (#1093)
  • JS: Don't export internal Go struct fields of script options. (#1151)
  • JS: Ignore minIterationDuration for setup and teardown. (#1175)
  • HTTP: Return error on any request that returns 101 status code as k6 currently doesn't support any protocol upgrade behavior. (#1172)
  • HTTP: Correctly capture TCP reset by peer and broken pipe errors and give them the appropriate error_code metric tag values. (#1164)
  • Config: Don't interpret non-K6_ prefixed environment variables as k6 configuration, most notably DURATION and ITERATIONS. (#1215)
  • JS/html: Selection.map was not wrapping the nodes it was outputting, which led to wrongly using the internal goquery.Selection instead of k6's Selection. Thanks to @MMartyn for reporting this! (#1198)
  • HTTP: When there are redirects, k6 will now correctly set the cookie for the current URL, instead of for the one the current response is redirecting to. Thanks @dimatock! (#1201)
  • Cloud: Add token to make calls to the cloud API idempotent. (#120...

v0.25.1 (13 Aug 2019)

A minor release that fixes some of the issues in the v0.25.0 release.

Bugs fixed!

  • Config: Properly handle the systemTags JS/JSON option and the K6_SYSTEM_TAGS environment variable. Thanks, @cuonglm! (#1092)
  • HTTP: Fix how request bodies are internally specified so we can properly handle redirects and can retry some HTTP/2 requests. (#1093)
  • HTTP: Fix the handling of response decoding errors and slightly improve the digest auth and --http-debug code. (#1102)
  • HTTP: Always set the correct Content-Length header for all requests. (#1106)
  • JS: Fix a panic when executing archive bundles for scripts with unsuccessful import / require() calls. (#1097)
  • JS: Fix some issues related to the handling of exports corner cases. (#1099)

v0.25.0 (31 Jul 2019)

k6 v0.25.0 is here! 🎉

This release contains mostly bug fixes, though it also has a few new features, enhancements, and performance improvements. These include HTTP request compression, brotli and zstd support, massive VU RAM usage and initialization time decreases, support for importing files via https and file URLs, and opt-in TLS 1.3 support.

Thanks to @THoelzel, @matlockx, @bookmoons, @cuonglm, and @imiric for contributing to this release!

New features and enhancements!

HTTP: request body compression + brotli and zstd decompression (#989, #1082)

Now k6 can compress the body of any HTTP request before sending it (#989). That can be enabled by setting the new compression option in the http.Params object. Doing so will cause k6 to transparently compress the supplied request body and correctly set both Content-Encoding and Content-Length, unless they were manually set in the request headers by the user. The currently supported algorithms are deflate, gzip, brotli and zstd, as well as any combination of them separated by commas (,).

k6 now also transparently decompresses brotli and zstd HTTP responses - previously only deflate and gzip were supported. Thanks, @imiric! (#1082)

import http from 'k6/http';
import { check } from "k6";

export default function () {
    // Test gzip compression
    let gzippedReqResp = http.post("https://httpbin.org/post", "foobar".repeat(1000), { compression: "gzip" });
    check(gzippedReqResp, {
        "request gzip content-encoding": (r) => r.json().headers["Content-Encoding"] === "gzip",
        "actually compressed body": (r) => r.json().data.length < 200,
    });

    // Test br decompression
    let brotliResp = http.get("https://httpbin.org/brotli", {
        headers: {
            "Accept-Encoding": "gzip, deflate, br"
        },
    });
    check(brotliResp, {
        "br content-encoding header": (r) => r.headers["Content-Encoding"] === "br",
        "br confirmed in body": (r) => r.json().brotli === true,
    });

    // Test zstd decompression
    let zstdResp = http.get("https://facebook.com/", {
        headers: {
            "Accept-Encoding": "zstd"
        },
    });
    check(zstdResp, {
        "zstd content-encoding header": (r) => r.headers["Content-Encoding"] === "zstd",
        "readable HTML in body": (r) => r.body.includes("html"),
    });
};

Performance improvement: reuse the parsed core-js library across VUs (#1038)

k6 uses the awesome core-js library to support new JavaScript features. It is included as a polyfill in each VU (i.e. JS runtime) and previously, it was parsed anew for every VU initialization. Now, the parsing result is cached after the first time and shared between VUs, leading to over 2x reduction of VU memory usage and initialization times for simple scripts!

Thanks, @matlockx, for noticing this opportunity for massive optimization!

JS files can now be imported via https and file URLs (#1059)

Previously, k6 had a mechanism for importing files via HTTPS URLs, but required that the used URLs not contain the https scheme. As a move to align k6 more closely with the rest of the JS ecosystem, we now allow and encourage users to use full URLs with a scheme (e.g. import fromurlencoded from "https://jslib.k6.io/form-urlencoded/3.0.0/index.js") when they want to load remote files. file URLs are also supported as another way to load local modules (normal absolute and relative file paths still work) from the local system, which may be especially useful for Windows scripts.

The old way of importing remote scripts from scheme-less URLs is still supported, though except for the GitHub and cdnjs shortcut loaders, it is in the process of deprecation and will result in a warning.

Opt-in support for TLS 1.3 and more TLS ciphers (#1084)

Following its opt-in support in Go 1.12, you can now choose to enable support for TLS 1.3 in your k6 scripts. It won't be used by default, but you can enable it by setting the tlsVersion option (or its max sub-option) to tls1.3:

import http from 'k6/http';
import { check } from "k6";

export let options = {
    tlsVersion: {
        min: "tls1.2",
        max: "tls1.3",
    }
};

export default function () {
    let resp = http.get("https://www.howsmyssl.com/a/check");
    check(resp, {
        "status is 200": (resp) => resp.status === 200,
        "tls 1.3": (resp) => resp.json().tls_version === "TLS 1.3",
    });
};

Also, all cipher suites supported by Go 1.12 are now supported by k6 as well. Thanks, @cuonglm!

Bugs fixed!

  • JS: Many fixes for open(): (#965)

    • don't panic with an empty filename ("")
    • don't make HTTP requests (#963)
    • correctly open simple filenames like "file.json" and paths such as "relative/path/to.txt" as relative (to the current working directory) paths; previously they had to start with a dot (i.e. "./relative/path/to.txt") for that to happen
    • Windows: treat paths starting with / or \ as absolute paths from the current drive
  • HTTP: Always correctly set response.url to the URL that was ultimately fetched (i.e. after any potential redirects), even if there were non-HTTP errors. (#990)

  • HTTP: Correctly detect connection refused errors on dial. (#998)

  • JS: Run imports once per VU. (#975, #1040)

  • Config: Fix blacklistIPs JS configuration. Thanks, @THoelzel! (#1004)

  • HTTP: Fix a bunch of HTTP measurement and handling issues (#1047)

    • the http_req_receiving metric was measured incorrectly (#1041)
    • binary response bodies could get mangled in an http.batch() call (#1044)
    • timed out requests could produce wrong metrics (#925)
  • JS: Many fixes for importing files and for URL imports in archives. (#1059)

  • Config: Stop saving and ignore the derived execution values, which were wrongly saved in archive bundles' metadata.json by k6 v0.24.0. (#1057, #1076)

  • Config: Fix handling of commas in environment variable values specified as CLI flags. (#1077)

Internals

  • CI: removed the gometalinter check in CircleCI, since that project was deprecated; we now rely exclusively on golangci-lint. (#1039)
  • Archive bundles: The support for URL imports included a lot of refactoring and internal k6 changes. This included significant changes in the structure of .tar archive bundles. k6 v0.25.0 is backwards compatible and can execute bundles generated by older k6 versions, but the reverse is not true. (#1059)
  • Archive bundles: The k6 version and the operating system are now saved in the archive bundles' metadata.json file. (#1057, #1059)

Breaking changes

  • Previously, the Content-Length header value was always automatically set by k6 - if the header value was manually specified by the user, it was ignored and silently overwritten. Now, k6 sets the Content-Length value only if it wasn't already set by the user. (#989, #1094)

v0.24.0 (20 Mar 2019)

v0.24.0 is here! 🎉

Another intermediary release that was mostly focused on refactoring and bugfixes, but also has quite a few new features, including the ability to output metrics to StatsD and Datadog!

Thanks to @cheesedosa, @ivoreis, @bookmoons, and @oboukili for contributing to this release!

New Features!

Redirect console messages to a file (#833)

You can now specify a file to which everything logged by console.log() and other console methods will be written. The CLI flag to specify the output file path is --console-output, and you can also do it via the K6_CONSOLE_OUTPUT environment variable. For security reasons, there's no way to configure this from inside the script.

Thanks to @cheesedosa for both proposing and implementing this!

New result outputs: StatsD and Datadog (#915)

You can now output any metrics k6 collects to StatsD or Datadog by running k6 run --out statsd script.js or k6 run --out datadog script.js respectively. Both are very similar, but Datadog has a concept of metric tags, the key-value metadata pairs that will allow you to distinguish between requests for different URLs, response statuses, different groups, etc.

Some details:

  • By default both outputs send metrics to a local agent listening on localhost:8125 (currently only UDP is supported as a transport). You can change this address via the K6_DATADOG_ADDR or K6_STATSD_ADDR environment variables, by setting their values in the format of address:port.
  • The new outputs also support adding a namespace - a prefix before all the metric names. You can set it via the K6_DATADOG_NAMESPACE or K6_STATSD_NAMESPACE environment variables respectively. Its default value is k6. (note the dot at the end).
  • You can configure how often data batches are sent via the K6_STATSD_PUSH_INTERVAL / K6_DATADOG_PUSH_INTERVAL environment variables. The default value is 1s.
  • Another performance tweak can be done by changing the default buffer size of 20 through K6_STATSD_BUFFER_SIZE / K6_DATADOG_BUFFER_SIZE.
  • In the case of Datadog, there is an additional K6_DATADOG_TAG_BLACKLIST configuration option, which is empty by default. This is a comma-separated list of tags that should NOT be sent to Datadog. All other metric tags that k6 emits will be sent.
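
Putting a few of these together (the agent address and namespace are just an illustration):

K6_STATSD_ADDR=statsd.example.com:8125 K6_STATSD_NAMESPACE="myapp.k6." k6 run --out statsd script.js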

Thanks to @ivoreis for their work on this!

k6/crypto: random bytes method (#922)

This feature adds a method that returns an array with a number of cryptographically random bytes. It will either return exactly the number of bytes requested or throw an exception if something went wrong.

import crypto from "k6/crypto";

export default function() {
    var bytes = crypto.randomBytes(42);
}

Thanks to @bookmoons for their work on this!

k6/crypto: add a binary output encoding to the crypto functions (#952)

Besides hex and base64, you can now also use binary as the encoding parameter for the k6 crypto hashing and HMAC functions.
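
A short sketch of what that looks like for a hash (assuming the usual k6/crypto hashing signature of input plus output encoding):

import crypto from "k6/crypto";

export default function () {
    // With "binary", the digest is returned as raw bytes instead of a hex/base64 string.
    let digest = crypto.sha256("hello world!", "binary");
    console.log(digest.length); // 32 bytes for SHA-256
}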

New feature: unified error codes (#907)

Error codes are unique numbers that can be used to identify and handle different application and network errors more easily. For the moment, these error codes are applicable only for errors that happen during HTTP requests, but they will be reused and extended to support other protocols in future k6 releases.

When an error occurs, its code is determined and returned as the error_code field of the http.Response object, and is also attached as the error_code tag to any metrics associated with that request. Additionally, for more details, the error metric tag and http.Response field will still contain the actual string error message.

Error codes for different errors are as distinct as possible, but for easier handling and grouping, codes in different error categories are also grouped in broad ranges. The current error code ranges are:

  • 1000-1099 - General errors
  • 1100-1199 - DNS errors
  • 1200-1299 - TCP errors
  • 1300-1399 - TLS errors
  • 1400-1499 - HTTP 4xx errors
  • 1500-1599 - HTTP 5xx errors
  • 1600-1699 - HTTP/2 specific errors

For a list of all current error codes, see the docs page here.
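
For example, the new fields can be used to log extra details about failed requests (the URL is just a placeholder):

import http from "k6/http";

export default function () {
    let res = http.get("http://httpbin.org/status/503");
    if (res.error_code !== 0) {
        // HTTP 5xx errors fall in the 1500-1599 error code range
        console.warn(`Request failed: error_code=${res.error_code}, error=${res.error}`);
    }
}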

Internals

  • Improvements in the integration with loadimpact.com. (#910 and #934)
  • Most of the HTTP request code has been refactored out of the js packages and is now independent from the goja JS runtime. This was done mostly so we can implement the error codes feature (#907), but will allow us more flexibility in the future. (#928)
  • As a preparation for the upcoming big refactoring of how VUs are scheduled in k6, including the arrival-rate based execution, we've added the future execution configuration framework. It currently doesn't do anything besides warn users that use execution option combinations that won't be supported in future k6 versions. See the Breaking Changes section in these release notes for more information. (#913)
  • Switched to golangci-lint via golangci.com for code linting in this repo. The gometalinter check in CircleCI is still enabled as well, but it will be removed in the following few weeks. (#943)
  • Switched to Go 1.12.1 for building and testing k6, removed official support for 1.10. (#944 and #966)

Bugs fixed!

  • JS: Consistently report setup/teardown timeouts as such and switch the error message to be more expressive. (#890)
  • JS: Correctly exit with a non-zero exit code when setup or teardown times out. (#892)
  • Thresholds: When outputting metrics to loadimpact.com, fix the incorrect reporting of threshold statuses at the end of the test. (#894)
  • UX: --quiet/-q no longer hides the summary stats at the end of the test. When necessary, they can still be hidden via the explicit --no-summary flag. Thanks, @oboukili! (#937)

Breaking changes

None in this release, but in preparation for the next one, some execution option combinations will emit warnings, since they will no longer be supported in future k6 releases. Specifically, you won't be able to simultaneously run k6 with stages and duration set, or with iterations and stages, or with duration and iterations, or with all three. These VU schedulers (and much more, including arrival-rate based ones!) will still be supported in future k6 releases. They will just be independent from each other, unlike their current implementation where there's one scheduler with 3 different conflicting constraints.