
CPU overload due to wrapper launching new Node processes when an unhandled error occurs #375

Open
jkomoda opened this issue Apr 29, 2024 · 8 comments

Comments

jkomoda (Contributor) commented Apr 29, 2024

Issue:

When running port scans against Node.js apps managed by node-windows, the server's CPU and memory are driven to 100% because node-windows continuously launches new processes after receiving the error:

events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at _errnoException (util.js:1022:11)
    at TCP.onread (net.js:615:25)

The scanner client connects to the server, sends TCP packet data, then disconnects. Each time the disconnect happens, the wrapper catches this error and launches a new process here:

launch('warn', err.message);

How To Reproduce:

  1. Locally, use node-windows to install a Node application that runs a simple HTTP server listening on a specific port
  2. Install the Nessus Expert trial version locally and run a scan that targets the application port
  3. Observe in Task Manager that the wrapper creates multiple processes each time the scanner's TCP client disconnects

Expected Behavior:
The wrapper should handle the ECONNRESET error gracefully and not launch additional processes without killing the previous one.
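A sketch of what such graceful handling could look like on the wrapper side (hypothetical: `launch('warn', err.message)` is the real call site quoted above, but `isBenignSocketError` and `onChildError` are names invented here, not node-windows code):

```javascript
// Sketch: classify errors before deciding to relaunch the child process.
// ECONNRESET/EPIPE from a scanner disconnect are transient and should not
// trigger a new process while the old one is still bound to the port.
const TRANSIENT_CODES = new Set(['ECONNRESET', 'EPIPE', 'ETIMEDOUT']);

function isBenignSocketError(err) {
  return Boolean(err && TRANSIENT_CODES.has(err.code));
}

function onChildError(err, relaunch) {
  if (isBenignSocketError(err)) {
    console.warn('Transient socket error ignored:', err.message);
    return; // keep the existing process; do not spawn another
  }
  relaunch('warn', err.message); // mirrors launch('warn', err.message) in wrapper.js
}
```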

Screenshots:

  1. Create the node server and run it as a node-windows service

[Screenshot 2024-04-25 184934: the service running]

  2. Run the Nessus scans targeting the port and observe the daemon logs showing the TCP clients connecting, sending data, then disconnecting. New processes then try to start up but are unable to because the original process is still running on the same port.

[Screenshot: daemon log output]

[Screenshot: daemon log output, continued]

  3. In Event Viewer, observe the read ECONNRESET error being logged from wrapper.js

[Screenshot 2024-04-25 183526: Event Viewer entry]

  4. Observe Node processes continuously being launched over and over

[Screenshot 2024-04-25 183955: Task Manager process list]

jkomoda (Contributor, Author) commented Apr 29, 2024

@coreybutler is there a way to handle these types of errors properly? For the sake of our port scan tests, I've added this snippet as a work-around:

[Screenshot 2024-04-25 184806: work-around snippet]
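The snippet itself survives only in the screenshot; a work-around of this general shape (hypothetical, not the author's exact code) would register a `process`-level `uncaughtException` handler that swallows ECONNRESET so the wrapper's relaunch logic is never triggered:

```javascript
// Hypothetical work-around: swallow transient ECONNRESET errors at the
// process level so the wrapper never sees the crash and relaunches.
function shouldIgnore(err) {
  return Boolean(err && err.code === 'ECONNRESET');
}

process.on('uncaughtException', (err) => {
  if (shouldIgnore(err)) {
    console.warn('Ignoring ECONNRESET (likely a scanner disconnect):', err.message);
    return; // keep the process alive
  }
  throw err; // anything else remains fatal
});
```

Note that, as the maintainer's reply below-thread suggests, handling the error at its source (the socket) is generally preferable to a blanket `uncaughtException` filter.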

jkomoda changed the title from "CPU overload due to wrapper infinitely launching new Node processes when an unhandled error occurs" to "CPU overload due to wrapper launching new Node processes when an unhandled error occurs" on Apr 29, 2024
coreybutler (Owner) commented:
@jkomoda something seems off here. The wrapper won't relaunch the child process unless it exits. Even when it does, there is exponential backoff. This looks suspiciously like the server is being closed when the client disconnects or when an error occurs, but the process isn't exiting when the server closes (which it should). That would be part of the application code.

I'm not entirely clear on the environment this is operating in, but I'd try running without node-windows to compare. Ultimately, it seems like the errors should be handled in the app though. The uncaughtException is just a failsafe.

jkomoda (Contributor, Author) commented Apr 30, 2024

@coreybutler thanks for the reply. One important thing I left out was that the process runs fine in the console (outside node-windows). It doesn't crash, and we don't seem to be getting any ECONNRESET errors:

[Screenshot 2024-04-30 091114: console output with no ECONNRESET errors]

I see this PR - https://github.com/coreybutler/node-windows/pull/277/files which seems similar to our case. Can you elaborate on these changes?

jkomoda (Contributor, Author) commented Apr 30, 2024

@coreybutler I have tested the code changes in the PR, and it now seems to be working fine. So basically we are leaking "ghost" servers and aren't closing the previous one before creating another, correct?

[Screenshot 2024-04-30 094329: test run after applying the PR changes]

Is it possible to merge that and create a new release so we're able to use it?

coreybutler (Owner) commented:
Honest truth, I completely forgot about that PR. My comments still stand... I don't want to flood the logs unnecessarily. Aside from that, I'm fine with merging that PR. If you want to apply the updates to that PR, I should be able to include them and cut a new release.

Yes, I believe you are correct re: ghost servers.

jkomoda (Contributor, Author) commented May 1, 2024

@coreybutler awesome! I've created a PR which adds an option to ignore warning logs when creating the daemon service - Add daemon option to suppress warning logs #376

I wanted to add it to the existing PR - Prevent security scanner from killing wrapper with ECONNRESET #277 but I have no access to push to his branch, and not sure if/when he will be active.

If you can merge both of these PRs and create a new release, that would be much appreciated! Thanks!

jkomoda (Contributor, Author) commented May 2, 2024

@coreybutler just checking up, is it possible to merge these PRs and cut the new release?

jkomoda (Contributor, Author) commented May 6, 2024

@coreybutler any updates on this?
