
feat(earliest): Handlers that execute ASAP in the req/res lifecycle #1756

Merged
17 commits merged on Mar 18, 2019

Conversation

@retrohacker (Member) commented Mar 8, 2019

Usage

var restify = require('restify')
var errors = require('restify-errors')

const server = restify.createServer()

var error = new Error();

server.earliest(function(req, res, next) {
  if(server._inflightRequests > 50) {
    res.statusCode = 503
    res.end()
    return next(error);
  }
  return next();
})

server.get('/hello', function(req, res, next) {
  setTimeout(function() {
    res.send(200, { hello: 'world' })
    return next()
  }, 200)
})

server.listen(8000)

Performance for Above Example

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼──────────┼───────────┤
│ Latency │ 3 ms │ 4 ms │ 8 ms  │ 16 ms │ 5.94 ms │ 18.37 ms │ 387.53 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%    │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 13071   │ 13071   │ 32655  │ 33631   │ 31852.14 │ 3575.94 │ 13069   │
├───────────┼─────────┼─────────┼────────┼─────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.52 MB │ 1.52 MB │ 3.8 MB │ 3.91 MB │ 3.71 MB  │ 416 kB  │ 1.52 MB │
└───────────┴─────────┴─────────┴────────┴─────────┴──────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

6602 2xx responses, 948986 non 2xx responses
956k requests in 30.06s, 111 MB read

The attached flamegraph leads me to believe there isn't much room for improvement over this.

@hekike (Member) commented Mar 10, 2019

@retrohacker would it make sense to use the same middleware API that we use for the others (pre, use, etc.)?

So turning this into a new Chain?

@rajatkumar (Member) left a comment

I feel this deviates a little from the existing chained middleware behavior that users have come to expect.

Also, the use case (shedding load immediately) is very specific in nature. I feel the better approach is to allow users to add a single synchronous function returning true/false that determines whether the restify server should take on any more requests for further processing. So rather than introducing a chained-middleware-style function list, I am proposing a synchronous hook that users can latch onto to decide whether the server should process the request. This proposal is specific to the code example in the PR.
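As an illustrative sketch of that proposal (shouldAcceptRequest is a hypothetical name, not an actual restify API; the idea is that restify would call it once per request before doing any other work):

var restify = require('restify');

var server = restify.createServer();

// Hypothetical single synchronous hook: return false to reject the request outright.
server.shouldAcceptRequest = function (req, res) {
  return server.inflightRequests() <= 50;
};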

lib/server.js Outdated
@@ -117,6 +117,7 @@ function Server(options) {

this.onceNext = !!options.onceNext;
this.strictNext = !!options.strictNext;
this.earliestChain = [];
Member

Shouldn't this be a new Chain() instead of an array?

Member

Or is the reason for using an array that the earliest chain middlewares are expected to be synchronous and always return a boolean, which is not the behavior of Chain?

Member Author

Yes, even more so now. We've discovered that (unsurprisingly, in retrospect) breaking the event loop between the load shedding evaluation and the inflight request accounting introduces a race condition. Depending on how the accounting is done, this results in either too many requests being shed or not enough.

Forcing them to be synchronous appears to be effective at maintaining the correct inflight request count under load.
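To make the over-admission case concrete, here is a standalone sketch (plain Node.js, not restify internals): when the shed check and the accounting are separated by a tick, a burst of requests can all pass the check against a stale count.

var http = require('http');

var LIMIT = 50;
var inflight = 0;

http.createServer(function (req, res) {
  // Synchronous check against the current count...
  if (inflight >= LIMIT) {
    res.statusCode = 503;
    return res.end();
  }

  // ...but the accounting is deferred to a later tick, so every request that
  // arrives before these callbacks run is judged against the same stale count.
  setImmediate(function () {
    inflight++;
    setTimeout(function () {
      inflight--;
      res.statusCode = 200;
      res.end('{"hello":"world"}');
    }, 200);
  });
}).listen(8000);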

lib/server.js Outdated
* return true;
* })
*/
Server.prototype.earliest = function earliest() {
Member

I am unclear about the benefit of introducing another phase called earliest. I feel the pre phase should be sufficient; the order in which the pre middlewares are set up should solve this use case. Maybe I am misunderstanding something here. Do you mind helping me understand the reasoning behind it?

Member Author

Yeah, sorry this wasn't clear. I followed up with a comment about the performance delta between the two handlers.

lib/server.js Outdated
// this request before we do any work on it
var earliestChain = self.earliestChain;
for (var i = 0; i < earliestChain.length; i++) {
if (earliestChain[i](req, res) === false) {
Member

Taking the example you mentioned in the code comments:

server.earliest(function(req, res) {
  if (server.inflightRequests > 100) {
    res.statusCode = 503;
    res.end();
    return false;
  }

  return true;
});

If we call the earliestChain functions here and the inflightRequests count is, say, 101, then subsequent requests will always be sent a 503. There won't be any way to track these incoming requests, and no lifecycle events will be generated. Is this what we expect here?

I think at least the after event should be emitted (even if we return false) to help keep track of such immediate shedding.

Member Author

This comes down to performance. I agree in principle, but the trade-off is pretty severe.

@hekike (Member) commented Mar 11, 2019

I'm wondering if a better approach would be to expose handle like Express does
(https://github.com/expressjs/express/blob/master/lib/application.js#L158) and do this:

const http = require('http');
const restify = require('restify');

const httpServer = http.createServer();
const server = restify.createServer();
const PORT = 3000;
const INFLIGHT_REQUESTS_LIMIT = 100;

httpServer.on('request', function onRequest(req, res) {
  if (server.inflightRequests() > INFLIGHT_REQUESTS_LIMIT) {
    res.statusCode = 302;
    return res.end();
  }

  // server.handle(req, res) is the proposed entry point, by analogy with Express
  return server.handle(req, res);
});

httpServer.listen(PORT);

@DonutEspresso (Member) commented Mar 12, 2019

If we call the earliestChain functions here and the inflightRequests count is, say, 101, then subsequent requests will always be sent a 503. There won't be any way to track these incoming requests, and no lifecycle events will be generated. Is this what we expect here?

+1 to @rajatkumar's comments. I think more than anything we want consistency in the metrics and lifecycle events, even if it comes at the cost of some throughput. @retrohacker are we able to quantify the delta between the implementation in this PR vs shedding in the first installed pre handler?

@retrohacker reopened this Mar 12, 2019

@retrohacker (Member Author) commented Mar 12, 2019

@DonutEspresso @rajatkumar

even if it comes at the cost of some throughput.

The difference between pre and earliest when there aren't any other handlers installed isn't super pronounced in overall throughput, but for 200 status codes it very much is.

On my machine, a server under heavy load (200 concurrent connections and an inflight limit of 50) pre drops performance down to about 27 RPS while earliest maintains 225 RPS. For comparison, a server without any load shedding checks handles roughly 230 RPS w/ 50 concurrent connections. earliest has a throughput impact of ~2%, while .pre has an impact of over 88%.

It's important to note that these overheads are calculated without after handlers, so this is just the cost of pushing the request down into restify. Any additional work you attempt to perform for logging/tracing/metrics will cut into the remaining 27 RPS.

I worry the trade-off we would make to get consistency in the metrics and life-cycle events is that we would be unable to withstand any significant increase in traffic, irrespective of how load shedding is tuned. If we want effective load shedding, I don't see a way forward that exercises the restify framework.

.pre

var restify = require('restify')
var errors = require('restify-errors')

const server = restify.createServer()

server.pre(restify.plugins.inflightRequestThrottle({
  limit: 50,
  res: new errors.InternalServerError(),
  server
}))

server.get('/hello', function(req, res, next) {
  setTimeout(function() {
    res.send(200, { hello: 'world' })
    return next()
  }, 200)
})

server.listen(8000)

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬─────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev   │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼─────────┼───────────┤
│ Latency │ 5 ms │ 6 ms │ 11 ms │ 14 ms │ 7.27 ms │ 7.44 ms │ 329.44 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴─────────┴───────────┘
┌───────────┬─────────┬─────────┬─────────┬────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%  │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 8703    │ 8703    │ 27087   │ 29631  │ 25763.34 │ 4026.31 │ 8701    │
├───────────┼─────────┼─────────┼─────────┼────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.96 MB │ 1.96 MB │ 6.12 MB │ 6.7 MB │ 5.82 MB  │ 910 kB  │ 1.96 MB │
└───────────┴─────────┴─────────┴─────────┴────────┴──────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

828 2xx responses, 772047 non 2xx responses
773k requests in 30.07s, 175 MB read

.earliest

var restify = require('restify')
var errors = require('restify-errors')

const server = restify.createServer()

server.earliest(function(req, res) {
  if(server._inflightRequests >= 50) {
    res.statusCode = 503
    res.end()
    return false
  }
  return true
})

server.get('/hello', function(req, res, next) {
  setTimeout(function() {
    res.send(200, { hello: 'world' })
    return next()
  }, 200)
})

server.listen(8000)

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼──────────┼───────────┤
│ Latency │ 2 ms │ 4 ms │ 9 ms  │ 17 ms │ 5.76 ms │ 18.19 ms │ 384.46 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 11727   │ 11727   │ 32863   │ 34527   │ 31961.2 │ 3948.17 │ 11723   │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.37 MB │ 1.37 MB │ 3.82 MB │ 4.02 MB │ 3.72 MB │ 459 kB  │ 1.37 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

6750 2xx responses, 952075 non 2xx responses
959k requests in 30.07s, 112 MB read

none

var restify = require('restify')
var errors = require('restify-errors')

const server = restify.createServer()

server.get('/hello', function(req, res, next) {
  setTimeout(function() {
    res.send(200, { hello: 'world' })
    return next()
  }, 200)
})

server.listen(8000)

Running 30s test @ http://localhost:8000/hello
50 connections

┌─────────┬────────┬────────┬────────┬────────┬───────────┬──────────┬───────────┐
│ Stat    │ 2.5%   │ 50%    │ 97.5%  │ 99%    │ Avg       │ Stdev    │ Max       │
├─────────┼────────┼────────┼────────┼────────┼───────────┼──────────┼───────────┤
│ Latency │ 201 ms │ 212 ms │ 266 ms │ 275 ms │ 216.54 ms │ 16.69 ms │ 299.43 ms │
└─────────┴────────┴────────┴────────┴────────┴───────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 173     │ 173     │ 240     │ 250     │ 229.74  │ 23.01   │ 173     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 28.7 kB │ 28.7 kB │ 39.9 kB │ 41.5 kB │ 38.1 kB │ 3.82 kB │ 28.7 kB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

7k requests in 30.1s, 1.14 MB read

@retrohacker (Member Author)

Assuming the load shedding implementation handles its own accounting, it is still possible to capture statistics about the behavior of load shedding.

For example, you can keep a long-running counter of the number of requests being rejected at the front door, and you can log specific headers (since those are parsed before the Node.js request event fires), all from the earliest handler.
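A sketch of what that could look like with the API from this PR (the x-request-id header is just an example):

var restify = require('restify');

var server = restify.createServer();
var shedCount = 0; // long-running counter of requests rejected at the front door

server.earliest(function shedLoad(req, res) {
  if (server._inflightRequests > 50) {
    shedCount++;
    // Headers are parsed before Node.js emits the request event, so specific
    // headers are available even though restify has done no work yet.
    console.log('shed request', req.headers['x-request-id']);
    res.statusCode = 503;
    res.end();
    return false;
  }
  return true;
});

server.listen(8000);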

It's also worth noting that we already capture a lot of information about these requests at our cloud gateway, and would be able to track load shedding with those metrics as well.

There are alternative methods of gathering insight which are lower overhead than pushing the request through restify.

@retrohacker changed the title from "feat(earliest): Handlers that execute before restify" to "feat(handle): Allow external servers to invoke the restify middleware chain" on Mar 13, 2019
@retrohacker (Member Author)

I've updated this PR to match @hekike's feedback, using the more generic handle implementation.

@retrohacker changed the title from "feat(handle): Allow external servers to invoke the restify middleware chain" back to "feat(earliest): Handlers that execute before restify" on Mar 13, 2019
@retrohacker (Member Author)

Reverting to the originally proposed earliest API due to the scope of work necessary to decouple the HTTP server from the restify server in existing codebases. Requiring that work just to add load shedding seems like high friction.

I still like the handle API, and think we should expose it, but as a more generic alternative to this API where new codebases can opt in to that pattern before heavily investing in restify's internal http server.

@retrohacker (Member Author)

I've been reflecting on your feedback @rajatkumar and @DonutEspresso

I explored Chain as a possible way to stay as similar to the existing chains as possible (pre and after). I uncovered something weird!

Using Chain carries about a 50% impact on overall throughput, yet serves more healthy (2xx) traffic. While digging into why, I discovered that making earliest asynchronous prevents server._inflightRequests from being incremented prior to the load shedding evaluation. So we would accept all 200 concurrent requests, start completing them, shed load until we dropped below 50 again, and then shoot back up to 200 concurrent requests!

Moving the inflight request logic up around the earliest chain in server.js fixed this.

Keeping a consistent interface doesn't appear to have a massive overhead.

Seems like Chain may be a decent trade-off!

chain w/o fix

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬────────┬────────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5%  │ 99%    │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼────────┼────────┼─────────┼──────────┼───────────┤
│ Latency │ 0 ms │ 1 ms │ 210 ms │ 229 ms │ 9.73 ms │ 41.94 ms │ 384.13 ms │
└─────────┴──────┴──────┴────────┴────────┴─────────┴──────────┴───────────┘
┌───────────┬────────┬────────┬─────────┬─────────┬──────────┬──────────┬────────┐
│ Stat      │ 1%     │ 2.5%   │ 50%     │ 97.5%   │ Avg      │ Stdev    │ Min    │
├───────────┼────────┼────────┼─────────┼─────────┼──────────┼──────────┼────────┤
│ Req/Sec   │ 1018   │ 1018   │ 17199   │ 36639   │ 19453.97 │ 10426.39 │ 1018   │
├───────────┼────────┼────────┼─────────┼─────────┼──────────┼──────────┼────────┤
│ Bytes/Sec │ 148 kB │ 148 kB │ 2.04 MB │ 4.29 MB │ 2.29 MB  │ 1.21 MB  │ 148 kB │
└───────────┴────────┴────────┴─────────┴─────────┴──────────┴──────────┴────────┘

Req/Bytes counts sampled once per second.

21084 2xx responses, 562536 non 2xx responses
584k requests in 30.17s, 68.8 MB read

chain w/ fix

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼──────────┼───────────┤
│ Latency │ 3 ms │ 4 ms │ 8 ms  │ 16 ms │ 5.94 ms │ 18.37 ms │ 387.53 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%    │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 13071   │ 13071   │ 32655  │ 33631   │ 31852.14 │ 3575.94 │ 13069   │
├───────────┼─────────┼─────────┼────────┼─────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.52 MB │ 1.52 MB │ 3.8 MB │ 3.91 MB │ 3.71 MB  │ 416 kB  │ 1.52 MB │
└───────────┴─────────┴─────────┴────────┴─────────┴──────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

6602 2xx responses, 948986 non 2xx responses
956k requests in 30.06s, 111 MB read

synchronous

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼──────────┼───────────┤
│ Latency │ 3 ms │ 4 ms │ 13 ms │ 23 ms │ 6.02 ms │ 18.88 ms │ 438.67 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 10911   │ 10911   │ 32111   │ 35551   │ 30608.67 │ 5280.69 │ 10909   │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.27 MB │ 1.27 MB │ 3.74 MB │ 4.14 MB │ 3.56 MB  │ 613 kB  │ 1.27 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴──────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

6718 2xx responses, 911566 non 2xx responses
918k requests in 30.07s, 107 MB read

@retrohacker (Member Author)

No matter what I do with async approaches, I end up in a situation where the accounting for load shedding is wrong. Unless inflight accounting is handled by the load shedding itself, any break in the event loop introduces race conditions.

With inflight accounting happening before the earliest chain, inflight will never go above the limit, but if you receive a full limit's worth of requests in one tick of the event loop, you will shed 100% of them and your service will handle nothing.

With inflight accounting working the way it works today, you can let too much traffic through in one tick of the event loop, saturating the server.

Starting to lean towards the original form of the PR. I've been noodling on using only a single synchronous handler, though that seems to complicate plugins. With a single handler, we will only have one place in the codebase where we can attach an earliest handler; anything else would displace the original handler.

@retrohacker (Member Author) commented Mar 14, 2019

Alright, hope this is the final version of this PR. Sorry for the churn!

After digging in, it seems any attempt at asynchronous accounting of inflight requests is going to introduce race conditions. This appears to be true for our current implementation of InflightRequestThrottle as well, which lets through more traffic than it should.

The only effective way I can find to hold the door on the inflight limit is to make the inflight request accounting and the load shedding synchronous, so both evaluations happen for a single request before yielding to the event loop.

The current form of this PR has strict synchronous load shedding and inflight request accounting.
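Roughly, the ordering looks like this (an illustrative sketch of the request path, not the actual diff; the method name here is made up):

// Hypothetical sketch: accounting and shedding run back to back, synchronously,
// before the request can yield to the event loop.
Server.prototype._onRequest = function _onRequest(req, res) {
    var self = this;

    self._inflightRequests++; // account for the request first

    var earliestChain = self.earliestChain;
    for (var i = 0; i < earliestChain.length; i++) {
        if (earliestChain[i](req, res) === false) {
            self._inflightRequests--; // a shed request no longer counts as inflight
            return; // never enters the pre/use/route chains
        }
    }

    // ...hand the request off to the normal (asynchronous) handler chains...
};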

Here are the benchmarks for this run:

Running 30s test @ http://localhost:8000/hello
200 connections

┌─────────┬──────┬──────┬───────┬───────┬─────────┬──────────┬───────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev    │ Max       │
├─────────┼──────┼──────┼───────┼───────┼─────────┼──────────┼───────────┤
│ Latency │ 3 ms │ 3 ms │ 8 ms  │ 15 ms │ 5.22 ms │ 17.58 ms │ 380.21 ms │
└─────────┴──────┴──────┴───────┴───────┴─────────┴──────────┴───────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 13839   │ 13839   │ 35359   │ 37375   │ 34218.54 │ 4235.28 │ 13833   │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.61 MB │ 1.61 MB │ 4.11 MB │ 4.35 MB │ 3.98 MB  │ 492 kB  │ 1.61 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴──────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

6750 2xx responses, 1019790 non 2xx responses
1027k requests in 30.06s, 119 MB read

@retrohacker changed the title from "feat(earliest): Handlers that execute before restify" to "feat(earliest): Handlers that execute super early" on Mar 14, 2019
@retrohacker changed the title from "feat(earliest): Handlers that execute super early" to "feat(earliest): Handlers that execute ASAP in the req/res lifecycle" on Mar 14, 2019
@DonutEspresso (Member) left a comment

Solid! Barring comments I think this is close to being ready.

Now comes the real discussion: I am not a fan of the earliest naming 😂 (don't hate me). I'm not sure why, but the name just doesn't resonate with me. Some other alternatives: server.first(), or server.firstSync() (if we want to emphasize the sync nature of it).

Thoughts?

* })
*/
Server.prototype.earliest = function earliest() {
var args = Array.prototype.slice.call(arguments);
Member

Since we dropped Node 6, any reason not to use rest parameters here?

assert.arrayOfFunc(func);
this.earliestChain = this.earliestChain.concat(...func);

Member Author

Linting fails.

I originally used const and let everywhere but make prepush failed, so I changed everything back to good ol' ES5.

});
});

test('earliest chain should allow multiple handlers', function(t) {
Member

Looks like we should add a test that confirms inflight numbers are accurate after an earliest rejection event?

Member Author

Ah good point 👍

@retrohacker (Member Author)

👍 on first

In my head firstSync implies this is an alternative to a non-sync method (like writeSync).

lib/server.js Outdated
* Takes one or more functions.
* @returns {Object} returns self
* @example
* sever.first(function(req, res) {
Member

s/sever/server

lib/server.js Outdated
* @returns {Object} returns self
* @example
* sever.first(function(req, res) {
* if(server._inflightRequests > 100) {
Member

Should the sample code be using private vars? Any reason not to use the public inflightRequests() API?
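For comparison, the doc example rewritten against the public API might read (a sketch, assuming the first() naming lands):

var restify = require('restify');

var server = restify.createServer();

server.first(function (req, res) {
  if (server.inflightRequests() > 100) {
    res.statusCode = 503;
    res.end();
    return false;
  }
  return true;
});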

lib/server.js Outdated
*
* The only work restify does for a first handler is to increment the number of
* inflightRequests prior to calling the chain, and decrement that value if the
* handler returns false.
Member

I think we can set more context around this feature and the "bare metal" nature of it. E.g., using server.first() has some trade-offs:

  • Standard restify server lifecycle events like after are not triggered for these requests
  • You do not have access to restify req/res APIs (because we modify the prototype they're there, but without state being set up they're unlikely to work correctly in many scenarios)
  • No req.timers for server.first
  • Domains do not apply, even if you have set the flag

etc., etc. There might be others, but I think it's worth calling these out (DANGER: BREAK GLASS ONLY IN EMERGENCY).
