Stats issue: mapped < active when HPA is enabled #2520

Open
pdokoupil opened this issue Aug 14, 2023 · 3 comments
Comments

@pdokoupil

We are observing a mismatch in the reported statistics where Jemalloc reports stats.mapped as smaller than stats.active. So far, we have only observed this after enabling hpa:true. I managed to reproduce the issue without our internal code by adjusting one of Jemalloc's stress tests to use a different configuration; see the patch below.

Any idea what the root cause might be? Thanks!

diff --git a/jemalloc/test/stress/fill_flush.c b/jemalloc/test/stress/fill_flush.c
index a2db044dd5d..191f9ef2ffa 100644
--- a/jemalloc/test/stress/fill_flush.c
+++ b/jemalloc/test/stress/fill_flush.c
@@ -5,6 +5,8 @@
 #define LARGE_ALLOC_SIZE SC_LARGE_MINCLASS
 #define NALLOCS 1000

+const char *malloc_conf = "hpa_hugification_threshold_ratio:1.0,hpa_dirty_mult:0.03,hpa:true,metadata_thp:always,thp:default";
+
 /*
  * We make this volatile so the 1-at-a-time variants can't leave the allocation
  * in a register, just to try to get the cache behavior closer.
@@ -40,6 +42,20 @@ TEST_BEGIN(test_array_vs_item_small) {
 }
 TEST_END

+static void
+validate_stats(void) {
+       size_t sz, active, mapped;
+       sz = sizeof(size_t);
+       expect_d_eq(mallctl("stats.active", (void *)&active, &sz, NULL, 0),
+           0, "Unexpected mallctl() result");
+       expect_d_eq(mallctl("stats.mapped", (void *)&mapped, &sz, NULL, 0),
+           0, "Unexpected mallctl() result");
+
+       expect_zu_lt(active, mapped,
+               "active should be less than mapped");
+
+}
+
 static void
 array_alloc_dalloc_large(void) {
        for (int i = 0; i < NALLOCS; i++) {
@@ -47,6 +63,7 @@ array_alloc_dalloc_large(void) {
                assert_ptr_not_null(p, "mallocx shouldn't fail");
                allocs[i] = p;
        }
+       validate_stats();
        for (int i = 0; i < NALLOCS; i++) {
                sdallocx(allocs[i], LARGE_ALLOC_SIZE, 0);
        }
@Svetlitski
Contributor

Not all of the HPA stats are propagated upwards right now; this is a known deficiency. I'd just like to clarify that the HPA is currently experimental (and hence not documented), so it's reasonable to expect more issues than with Jemalloc's stable features.

@pdokoupil
Author

Not all of the HPA stats are propagated upwards right now; this is a known deficiency. I'd just like to clarify that the HPA is currently experimental (and hence not documented), so it's reasonable to expect more issues than with Jemalloc's stable features.

Thanks for the response. It is definitely understandable that an experimental feature may have issues; I just was not sure whether this is a known issue or something that had not surfaced yet. Since it is known, are you aware of any alternative stats for measuring memory fragmentation/utilization with HPA? One thing I am curious about is evaluating how much HPA would help us reduce fragmentation and improve utilization, but estimating utilization from active and mapped is no longer reliable.
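
For reference, a minimal sketch of the kind of estimate meant here, using jemalloc's non-standard mallctl() API and taking "utilization" as the ratio of stats.active to stats.mapped (that particular metric is an assumption, not something spelled out in this thread):

#include <stdlib.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

/* Estimate heap utilization as stats.active / stats.mapped. */
static double
estimate_utilization(void) {
	/* Advance the epoch so the stats snapshot is refreshed before reading. */
	uint64_t epoch = 1;
	size_t esz = sizeof(epoch);
	mallctl("epoch", (void *)&epoch, &esz, (void *)&epoch, esz);

	size_t active, mapped, sz = sizeof(size_t);
	mallctl("stats.active", (void *)&active, &sz, NULL, 0);
	mallctl("stats.mapped", (void *)&mapped, &sz, NULL, 0);

	return (double)active / (double)mapped;
}

With the HPA enabled and the stats under-propagated as described above, this ratio can come out above 1.0, which is exactly why it stops being a reliable signal.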

@Svetlitski
Contributor

Svetlitski commented Aug 14, 2023

are you aware of any alternative stats for measuring memory fragmentation/utilization with HPA?

Yes. In the "Merged arena stats" section of the Jemalloc statistics output, look at the "HPA Shard Stats" subsection. npageslabs is in units of hugepages (internally, Jemalloc sometimes refers to a hugepage as a pageslab). nactive, ndirty, and nretained are in units of the normal page size (usually the system page size, e.g. 4 KiB on x86-64, unless you explicitly configured Jemalloc to use a different page size).
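
The quickest way to see that subsection is to dump the full statistics report; a minimal sketch, assuming a build with statistics enabled (the default):

#include <jemalloc/jemalloc.h>

int
main(void) {
	/*
	 * Print the human-readable statistics report to stderr. With
	 * hpa:true, the "HPA Shard Stats" subsection appears under
	 * "Merged arena stats"; a non-NULL opts string can be passed to
	 * omit other sections (see the malloc_stats_print() man page).
	 */
	malloc_stats_print(NULL, NULL, NULL);
	return 0;
}

Given the units above, one rough way to gauge hugepage-level utilization would be nactive * page size / (npageslabs * hugepage size), though that interpretation is a reading of the stats rather than something documented.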
