Describe the issue
I'm a backend developer. I've been using MockServer extensively to test my backend services by returning specific dummy responses. However, I've hit latency and CPU-utilization bottlenecks as I attempt to scale my testing up to 30M requests per minute.
What you are trying to do
Until now I had been using a relatively small JSON expectations file containing 8 expectations with a combined size of 2 KB. Under load testing it performed reasonably well: p99 response time was around 5 ms at 3.3 million requests per minute (I'm using 4 pods, each with 4 GB RAM and 6 CPUs).
But when my JSON file contains close to 270 mocks with a combined size of around 4.5 MB, including 10-12 large mocks whose responses are in the megabyte range, I see extremely high latency and CPU utilization: p99 rises to 15 s at just over 1 million requests per minute, CPU usage hits 100%, and some requests time out, taking more than 30 s.
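For context, a minimal entry in an expectations initializer file can be sketched as below. The file layout follows the MockServer expectation schema; the specific path and response body here are hypothetical, not taken from my actual file (which is loaded via mockserver.initializationJsonPath):

```shell
# Hypothetical minimal expectations file in the MockServer initializer
# format: a JSON array of expectation objects, each with an httpRequest
# matcher and an httpResponse action. Written to the working directory
# here; in my setup the configured path is under /usr/local/expectations.
cat > expectations.json <<'EOF'
[
  {
    "httpRequest": {
      "method": "GET",
      "path": "/api/user"
    },
    "httpResponse": {
      "statusCode": 200,
      "body": "{\"id\": 1}"
    }
  }
]
EOF
```

My real file holds ~270 such entries, some with multi-megabyte response bodies.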
MockServer version
5.14.0
To Reproduce
Steps to reproduce the issue:
I'm running MockServer directly from the JAR.
This is the config I'm using:
-Dmockserver.propertyFile=/usr/local/mockserver.properties
-Dmockserver.maxExpectations=2000
-Dmockserver.disableSystemOut=true
-Dmockserver.disableLogging=true
-Dmockserver.nioEventLoopThreadCount=200
-Dmockserver.clientNioEventLoopThreadCount=200
-Dmockserver.actionHandlerThreadCount=200
-Dmockserver.maxLogEntries=0
-Dmockserver.initializationJsonPath=/usr/local/expectations/all/expectations.json
-Dmockserver.watchInitializationJson=true
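Putting those flags together, the launch looks roughly like this. This is a sketch: it assumes the published mockserver-netty "jar-with-dependencies" artifact for 5.14.0 and port 1080, neither of which is stated above:

```shell
# Hedged sketch of the standalone launch with the system properties from
# this issue. JAR filename and -serverPort value are assumptions.
java \
  -Dmockserver.propertyFile=/usr/local/mockserver.properties \
  -Dmockserver.maxExpectations=2000 \
  -Dmockserver.disableSystemOut=true \
  -Dmockserver.disableLogging=true \
  -Dmockserver.nioEventLoopThreadCount=200 \
  -Dmockserver.clientNioEventLoopThreadCount=200 \
  -Dmockserver.actionHandlerThreadCount=200 \
  -Dmockserver.maxLogEntries=0 \
  -Dmockserver.initializationJsonPath=/usr/local/expectations/all/expectations.json \
  -Dmockserver.watchInitializationJson=true \
  -jar mockserver-netty-5.14.0-jar-with-dependencies.jar -serverPort 1080
```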
Expected behaviour
I expect MockServer to remain performant in terms of response time and CPU usage.
Any assistance with reducing response time and CPU usage without adding hardware would be highly appreciated.