timewaste


Lightweight multi-threading capable execution time analyzer / profiler.

This package:

  1. provides clean and focused reports immune to code bundling / optimization;
  2. reports execution statistics of any code sequences, not just functions;
  3. can handle concurrent threads / server endpoints;
  4. can handle callback-based code, like express.js middleware;
  5. monitors execution consistency by reporting "return leaks";
  6. in case of nested calls, both aggregated and self time can be reported;
  7. supports both Node.js and browser environments;
  8. allows performance monitoring and reporting to be controlled/analyzed programmatically.

Note: the Node.js built-in profiler, as well as the Developer Tools of many browsers, may provide all you need without installing anything - so please check those out if you haven't done so.

Install

   yarn add timewaste       # Either so...
   npm i -S timewaste       # or so.

Usage

In order to do anything, timewaste needs you to waste some time spicing your code with its API calls, mostly profBeg() and profEnd().

Here's an example:

import {profBeg, profEnd, profTexts} from 'timewaste'

function a() {
  const h = profBeg('a')
  //  Do something and sometimes call b().
  profEnd(h)
}

function b () {
  const h = profBeg('b')
  //  Do something.
  profEnd(h)
}

for (let i = 0; i < 1000; ++i) a()

console.log(profTexts())

This will produce output similar to the following (the sample below is taken from a larger instrumented program):

*** LEAKS (1):
tag                                                   count
-----------------------------------------------------------
a b c d e f                                             766
a b c d e f g h i j k l m n o p                       10000

RESULTS:
tag   count     avg     time   total_avg   total_time
-----------------------------------------------------
e     10000   16.20   162036       16.20       162036
o     10000    9.89    98884        9.89        98884
 . . .
f       234    4.00        4      936.00          936
j     10000    0.42     4203       14.03       140277
p         0    0.00        0        0.00            0

This tells us that code sequence e, although gobbling up a gargantuan chunk of execution time, might not be the main culprit, because some of its sub-parts, or the functions it called, did most of the work.

Also, there has been some problem with function f, which sometimes (not always, as its count is not 0) has terminated (by throwing an exception, returning, or yielding the thread) without calling profEnd() first. As for p, it has never terminated properly at all.

The leaks are most likely caused by missing something when spicing the target code, not by bugs in the target code as such.

API

Constants

P_TAG, P_TIME, P_AVG, P_TOTAL, P_TOT_AVG, P_COUNT, P_THREADS - numeric indexes of the measures record columns; also applicable to the sortByField and fieldMask arguments.

P_NONE - a special value to prevent any sorting (mostly used internally).

P_HEADERS - a string array containing the headers for the profTexts() function.
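
For example, the constants can be used to look up a column header (a sketch; it assumes P_HEADERS is indexed by the same constants):

import {P_HEADERS, P_COUNT} from 'timewaste'

//  Assumption: the header text for the count column lives at index P_COUNT.
console.log(P_HEADERS[P_COUNT])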

Functions

In the text below, the jsdoc {...} syntax is used sparingly; [type] means that the argument is optional or the value may be undefined.

profBeg(tag: string, threadId: [*]) : {number | boolean | undefined}

Creates a call entry for the main or a secondary thread. Returns a numeric handle on success, false on failure, and undefined when the profiler is disabled.

profEnd(handle: number, threadId: [*]) : {boolean | undefined}

Closes an entry created by profBeg(). Returns true on success, false on failure, and undefined when the profiler is disabled.
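
A single (profBeg(), profEnd()) pair can wrap just a fragment of code - here is a minimal sketch (parse(), transform() and save() are hypothetical helpers, not part of this package):

import {profBeg, profEnd} from 'timewaste'

function handleRequest (payload) {
  const data = parse(payload)        //  not measured

  //  Measure just this code sequence, not the whole function.
  const h = profBeg('transform')
  const result = transform(data)
  if (h) profEnd(h)                  //  h is falsy if profBeg() failed or profiling is disabled

  return save(result)                //  not measured
}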

profEnable(yes=) : {boolean | undefined}

Query and possibly switch the profiler's enabled status, which is initially true. While it is false, the profBeg() and profEnd() functions do nothing and return false. Calling profEnable(false) while there are calls pending will be ignored; an internal error will be registered and undefined returned - see profStatus() for details.
Profiling can be switched off and back on using this function.
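
For instance, a sketch of muting the profiler around a warm-up phase (warmUpCaches() is a hypothetical function whose timings we do not want recorded):

import {profEnable} from 'timewaste'

profEnable(false)
warmUpCaches()                  //  nothing gets recorded here
profEnable(true)

const enabled = profEnable()    //  a call without arguments only queries the current status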

profResults(sortByField: [number], earlierResults: [Object]) : Object

Closes any pending measure entries, then computes and returns the measurement results, possibly combined with earlierResults. Unless sortByField is -1, measures will be sorted according to its value, which should be one of the exported P_... constants; the default is P_AVG. The returned object has the following properties:

  • [errors]: Array<Error>,
  • [leaks]: Array<[path, count]> - cases when profBeg was called but profEnd was not.
  • measures: Array<[]> - profiling results sorted by sortByField.
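
A minimal sketch of analyzing the results programmatically rather than printing them (the measures records are indexed by the P_... constants described above):

import {profResults, P_TAG, P_AVG} from 'timewaste'

const {leaks, measures} = profResults(P_AVG)

if (leaks && leaks.length) {
  console.warn('instrumentation leaks:', leaks)
}

for (const record of measures) {
  console.log(record[P_TAG], record[P_AVG])   //  tag and its average time
}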

profSetup(options:Object=) : Object

Query and possibly change profiler general options, which can be:

  • getTime: {function():*} - function for querying current timestamp;
  • hook: {function():*} - callback to be called before registering an internal error;
  • timeScale: {number|bigint} - must be the same type as getTime() return value;
  • precision: {number=3} - positions after decimal point, used by profTexts();

In a Node.js environment, getTime defaults to microtime and timeScale to BigInt(1e3) (microseconds); in a browser environment, they default to Date.now and 1 (milliseconds), respectively.

Calling this function with an argument will reset all internal data structures. Doing so repeatedly may significantly slow down the code.
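
For example, a sketch of swapping in Node's high-resolution clock (how timeScale relates to the units returned by getTime() is an assumption here):

import {profSetup} from 'timewaste'

profSetup({
  getTime: process.hrtime.bigint,   //  returns nanoseconds as a bigint
  timeScale: BigInt(1e6),           //  assumed: scales nanoseconds down to milliseconds
  precision: 2                      //  two positions after the decimal point in reports
})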

profStatus(details : boolean=) : Object

Query for the general status of the profiling engine. The returned object has the following properties:

  • callDepth: number - number of open synchronous entries;
  • enabled: boolean - internal flag controllable via profEnable();
  • errorCount: number;
  • leakCount: number;
  • threadCount: number;

With details truthy, the following extra properties will be available, too:

  • errors: Error[] - errors caused by bad API calls;
  • openCalls: string[] - currently open call stack as array of tags;
  • openThreads: string[] - list of currently running threads as 'tag>id'.
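
For instance, a sketch of a sanity check before generating a report:

import {profStatus} from 'timewaste'

const status = profStatus(true)

if (status.callDepth > 0 || status.leakCount > 0) {
  console.warn('still open calls:', status.openCalls)
  console.warn('registered errors:', status.errors)
}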

profTexts(sortByField: [number], data: [Object], fieldMask: [number[]]) : string[]

Turns data into a line-by-line text array representing a pretty-printable table. If data is omitted, profResults() is called to get it.

The sortByField argument here is analogous to that of the profResults() function.

When present, fieldMask leaves in only the fields mentioned, or excludes the fields given with a negative index, e.g. profTexts([-P_TOTAL, -P_TIME]). It is not possible to exclude the tag field.
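
A sketch of printing a trimmed-down report, following the parameter order of the signature above:

import {profTexts, P_THREADS} from 'timewaste'

//  Hide the threads column, keep the default sorting and data.
console.log(profTexts(undefined, undefined, [-P_THREADS]).join('\n'))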

In-depth

Environment control

In production mode (e.g. process.env.NODE_ENV === 'production'), the profiler code is never loaded and a do-nothing API is exported instead, so no noticeable computational overhead or extra code will be added.

To enable timewaste even in a production build, set the TIMEWASTE environment variable to true.

Error handling

Code failures and improper use of timewaste may result in errors. Exceptions are thrown on bad calls (e.g. an invalid argument type or format). Other failures, like mismatched profBeg/profEnd pairs, are silently accumulated in internal collections available via profStatus().errors and resettable via profStatus().

Multi-threading

When profiling code with actual multi-threading (like when using web workers) or just asynchronously executed code (like Node.js web server endpoints), it is likely that (profBeg(), profEnd()) call pairs get mismatched, effectively disabling the profiler.
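
In such cases the threadId argument of profBeg() and profEnd() can be used to keep the pairs apart. A minimal sketch for an express.js-style handler (the counter-based id scheme is just an assumption - any value unique per concurrent flow should do):

import {profBeg, profEnd} from 'timewaste'

let requestCounter = 0

function timed (handler) {
  return (req, res, next) => {
    const threadId = ++requestCounter         //  hypothetical per-request id
    const h = profBeg('endpoint', threadId)

    res.on('finish', () => {                  //  fires when the response has been sent
      if (h) profEnd(h, threadId)
    })
    handler(req, res, next)
  }
}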

Developer notes

Any feedback and assistance will be appreciated. Use the GitHub issues list first, and feel free to make a pull request against the development branch in order to contribute.

There is some documentation about the package's inner parts available in the src/*.md files.

Instrumental scripts in package.json

  • clean: erase all temporary files but not installed dependencies,
  • lint: ...sure,
  • purge: erase everything, except stuff kept in VCS,
  • speeds: speed benchmark,
  • test: module tests,
  • test:coverage: "codecov",
  • test1: e.g.: yarn test1 numbers to check for failures w/o generating coverage.