Substantial overhead when logging Dynamic Metadata in Access Logging #9364
Labels
- daily-update (Issues that require a daily update)
- Prioritized (Indicating issue prioritized to be worked on in RFE stream)
- Type: Bug (Something isn't working)
Gloo Edge Product
Enterprise
Gloo Edge Version
v1.16.6
Kubernetes Version
?
Describe the bug
We found that each
DYNAMIC_METADATA
field in the access log format adds a penalty of ~10 microseconds at P50 and 20+ microseconds at P99 per request. By comparison, a regular "key: value" log field adds ~1.7 microseconds at P50 and 3.3 microseconds at P99. The tests were run on the same cluster using the k6 load-generation tool and an nginx (OpenResty) simulated backend. The results were nearly identical whether the metadata was set via the set_metadata filter or the extProc filter. Disk I/O was ruled out as a bottleneck.
As can be seen in the table, gateway-proxy CPU usage grows sharply with the number of dynamic metadata fields being logged.
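For context, a minimal sketch of the kind of Envoy access log configuration that exercises this path (the metadata namespace and field names below are hypothetical placeholders; the %DYNAMIC_METADATA()% command operator itself is from Envoy's access log format documentation):

```yaml
# Hypothetical access log config sketch. Each %DYNAMIC_METADATA(...)% field
# below incurs the per-field overhead measured above, while plain fields
# such as %REQ(:METHOD)% are comparatively cheap.
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      json_format:
        method: "%REQ(:METHOD)%"
        # namespace/key assumed to be populated by a set_metadata or extProc filter
        md_field_1: "%DYNAMIC_METADATA(example.namespace:field_1)%"
        md_field_2: "%DYNAMIC_METADATA(example.namespace:field_2)%"
```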
Expected Behavior
Logging dynamic metadata in accesslogging should not cause substantial processing overhead.
Steps to reproduce the bug
n/a
Additional Environment Detail
No response
Additional Context
No response