fix: Support more than 100 long-lived streams #623
@@ -31,6 +31,7 @@ import {
   Timestamp,
 } from '../src';
 import {autoId} from '../src/util';
+import {Deferred} from '../test/util/helpers';

 const version = require('../../package.json').version;

@@ -913,6 +914,37 @@ describe('DocumentReference class', () => {
       maybeRun();
     });
   });

+  it('handles more than 100 concurrent listeners', async () => {
+    const ref = randomCol.doc('doc');
+
+    const emptyResults: Array<Deferred<void>> = [];
+    const documentResults: Array<Deferred<void>> = [];
+    const unsubscribeCallbacks: Array<() => void> = [];
+
+    // A single GAPIC client can only handle 100 concurrent streams. We set
+    // up 100+ long-lived listeners to verify that Firestore pools requests
+    // across multiple clients.
+    for (let i = 0; i < 150; ++i) {
+      emptyResults[i] = new Deferred<void>();
+      documentResults[i] = new Deferred<void>();
+
+      unsubscribeCallbacks[i] = randomCol
+        .where('i', '>', i)
+        .onSnapshot(snapshot => {
+          if (snapshot.size === 0) {
+            emptyResults[i].resolve();
+          } else if (snapshot.size === 1) {
+            documentResults[i].resolve();
+          }
+        });
+    }
+
+    await Promise.all(emptyResults.map(d => d.promise));
+    ref.set({i: 1337});
+    await Promise.all(documentResults.map(d => d.promise));
+    unsubscribeCallbacks.forEach(c => c());
Reviewer: This test verifies that all 150 listeners succeed but doesn't verify that everything has been properly released to the pool. Is it possible to check that pool.size is 150 once the listeners are started and then goes back to zero after?

Reviewer: I worry that this test can succeed even if you remove the line that resolves the lifetime promise.

Author: I added a "shutdown" block to each test that verifies that the operation count goes back to zero. I had to change some of the unit tests to make this work.
+  });
   });
 });
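The pooling behavior the new test exercises can be sketched as follows. This is a hypothetical illustration, not Firestore's actual implementation: the `ClientPool` name, its constructor shape, and the `size` accessor are all assumptions. The idea is that the pool hands out the first client with spare capacity and only creates a new client once every existing one has hit the concurrent-operation limit, so 150 long-lived streams with a limit of 100 per client need two clients rather than failing.

```typescript
// Hypothetical sketch of a client pool that caps concurrent operations per
// client. Not the library's real ClientPool; names and shapes are assumed.
class ClientPool<T> {
  // Tracks how many operations are currently running on each client.
  private activeOperations = new Map<T, number>();

  constructor(
    private concurrentOperationLimit: number,
    private clientFactory: () => T
  ) {}

  // Returns the first client with spare capacity, creating one if needed.
  private acquire(): T {
    for (const [client, count] of this.activeOperations) {
      if (count < this.concurrentOperationLimit) {
        this.activeOperations.set(client, count + 1);
        return client;
      }
    }
    const client = this.clientFactory();
    this.activeOperations.set(client, 1);
    return client;
  }

  private release(client: T): void {
    const count = this.activeOperations.get(client)!;
    this.activeOperations.set(client, count - 1);
  }

  // Runs `op` against a pooled client and releases the slot when it settles.
  async run<V>(op: (client: T) => Promise<V>): Promise<V> {
    const client = this.acquire();
    try {
      return await op(client);
    } finally {
      this.release(client);
    }
  }

  // Number of clients the pool has created so far.
  get size(): number {
    return this.activeOperations.size;
  }
}
```

Under this sketch, 150 operations against a limit of 100 would leave `pool.size` at 2, which is the kind of invariant the review comments above suggest asserting.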
Reviewer: Are streams guaranteed to emit the 'close' event? What happens in the case of an error?

Author: According to https://nodejs.org/api/stream.html, yes:
"A Writable stream will always emit the 'close' event if it is created with the emitClose option."
"A Readable stream will always emit the 'close' event if it is created with the emitClose option."
emitClose defaults to true.

I originally trusted this, but I spent more time and added test asserts. It turns out that the 'close' event is not always emitted. To make the unit and system tests pass, I also have to wait for 'error'/'end', and for 'finish' on writable streams.
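The workaround the author describes — not relying on 'close' alone — can be sketched like this. `streamDone` is a hypothetical helper name, not an API from this repository; it settles on whichever terminal event fires first ('close', 'error', 'end', or 'finish'), matching the set of events the comment says must be awaited.

```typescript
import {PassThrough, Readable, Writable} from 'stream';

// Resolves once the stream reaches any terminal state. 'close' is not
// guaranteed in every case, so we also listen for 'error', for 'end'
// (readable side fully consumed), and for 'finish' (writable side flushed).
// Later events are harmless because a Promise can only settle once.
function streamDone(stream: Readable | Writable): Promise<void> {
  return new Promise<void>(resolve => {
    stream.on('close', () => resolve());
    stream.on('error', () => resolve());
    stream.on('end', () => resolve());
    stream.on('finish', () => resolve());
  });
}

// Usage: ending and draining a PassThrough settles the promise via
// 'finish'/'end' even if 'close' were never emitted.
async function demo(): Promise<void> {
  const stream = new PassThrough();
  const done = streamDone(stream);
  stream.resume(); // consume the readable side so 'end' can fire
  stream.end();    // flush the writable side so 'finish' can fire
  await done;
}
```

Node's built-in `stream.finished()` utility covers similar ground; the sketch above just makes the event set explicit.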