
[Proposal] Add cpu alloc/free callback to support custom memory allocator APIs. #1898

Open

xuhancn opened this issue May 7, 2024 · 3 comments
Labels: enhancement (A feature or an optimization request)

xuhancn commented May 7, 2024

Summary

During our PyTorch development, we found that the Windows system memory allocator performs poorly and slows down overall PyTorch performance. After adding a third-party memory allocator, PyTorch's tensor allocation performance improved. For details, please refer to pytorch/pytorch#102534.

As a PyTorch submodule, oneDNN still uses the system memory allocator to allocate buffers for reorder/reshape operations.
The related code is here:

oneDNN/src/common/utils.cpp

Lines 146 to 170 in 11f5558

void *malloc(size_t size, int alignment) {
    void *ptr;
    if (memory_debug::is_mem_debug())
        return memory_debug::malloc(size, alignment);
#ifdef _WIN32
    ptr = _aligned_malloc(size, alignment);
    int rc = ptr ? 0 : -1;
#else
    int rc = ::posix_memalign(&ptr, alignment, size);
#endif
    return (rc == 0) ? ptr : nullptr;
}

void free(void *p) {
    if (memory_debug::is_mem_debug()) return memory_debug::free(p);
#ifdef _WIN32
    _aligned_free(p);
#else
    ::free(p);
#endif
}

I added some debug logging to confirm this as well.

(build_pytorch) D:\xuhan\build_pytorch\pytorch\third_party\ideep\mkl-dnn>git diff
diff --git a/src/common/utils.cpp b/src/common/utils.cpp
index 37659a5d3e..1d1db40337 100644
--- a/src/common/utils.cpp
+++ b/src/common/utils.cpp
@@ -46,6 +46,38 @@
 #include "cpu/platform.hpp"
 #endif

+#ifdef _WIN32
+#include <debugapi.h>
+#define MAX_MESSAGE_SIZE 4096
+void D4D(LPCSTR szFormat, ...)
+{
+       const CHAR * p_ModuleName = "[pytorch] ";
+       char szMsg[MAX_MESSAGE_SIZE] = { 0 };
+       LPSTR lpsz = szMsg;
+       size_t nLen = 0;
+       int nReturnValue = 0;
+       va_list va;
+       va_start(va, szFormat);
+
+       lstrcatA(lpsz, p_ModuleName);
+
+       nLen = lstrlenA(szMsg);
+       lpsz = szMsg;
+       lpsz += nLen;
+
+       nReturnValue = _vsnprintf_s(lpsz, MAX_MESSAGE_SIZE - nLen, MAX_MESSAGE_SIZE, szFormat, va);
+
+       lstrcatA(szMsg, "\n");
+
+       OutputDebugStringA(szMsg);
+}
+#else
+void D4D(LPCSTR szFormat, ...)
+{
+
+}
+#endif
+
 namespace dnnl {
 namespace impl {

@@ -151,6 +183,7 @@ void *malloc(size_t size, int alignment) {
 #ifdef _WIN32
     ptr = _aligned_malloc(size, alignment);
     int rc = ptr ? 0 : -1;
+    D4D("dnnl malloc: %p - %x", ptr, size);
 #else
     int rc = ::posix_memalign(&ptr, alignment, size);
 #endif
@@ -164,6 +197,7 @@ void free(void *p) {

 #ifdef _WIN32
     _aligned_free(p);
+    D4D("dnnl free: %p", p);
 #else
     ::free(p);
 #endif

(build_pytorch) D:\xuhan\build_pytorch\pytorch\third_party\ideep\mkl-dnn>

On Windows, I tested resnet18; it makes more than 360k malloc/free calls through the system allocator.
Shown below:

[image: profiler output showing the malloc/free call counts]

Problem statement

To investigate the slow memory allocation on Windows, I also wrote a malloc benchmark: https://github.com/xuhancn/bench_malloc
Third-party memory allocator libraries can improve the performance.
This also works well in PyTorch: pytorch/pytorch#102534 (comment)

So, we need a way to let oneDNN use a third-party memory allocator for better performance.

Option 1: Add a memory allocation library as a submodule.

Actually, this is not a good option:

  1. An additional library would bring in more license and security issues.
  2. It is hard to select a single memory allocation library that fits all usage cases.

Option 2: Add cpu alloc/free callbacks to support custom memory allocator APIs.

This is a lightweight way to change the memory allocation implementation:

  1. Add an optional cpu alloc/free callback registration API.
  2. If callback functions are registered, oneDNN uses the custom memory allocator.
  3. If no callback functions are registered, oneDNN uses the default system memory allocator.

Preferred solution

For option 2 above, we can first define the callback function types:

// void* alloc_cpu(size_t size, int alignment);
typedef void* (*t_dnnl_cpu_aligned_malloc)(size_t, int);

// void free_cpu(void* data);
typedef void (*t_dnnl_cpu_free)(void*);

The registration API would be as below:

static t_dnnl_cpu_aligned_malloc g_dnnl_cpu_malloc;
static t_dnnl_cpu_free g_dnnl_cpu_free;

bool register_dnnl_cpu_memory_allocation_apis(
        t_dnnl_cpu_aligned_malloc p_malloc, t_dnnl_cpu_free p_free) {
    if (!p_malloc || !p_free) return false;

    g_dnnl_cpu_malloc = p_malloc;
    g_dnnl_cpu_free = p_free;

    return true;
}
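
For illustration, a caller such as PyTorch could then route oneDNN's temporary buffers through mimalloc. A minimal sketch, assuming the registration API above and the public mimalloc API (mi_malloc_aligned/mi_free); the init_dnnl_custom_allocator wrapper is a hypothetical name:

#include <mimalloc.h>

// Thin wrappers matching the proposed callback signatures.
static void *mimalloc_cpu_malloc(size_t size, int alignment) {
    return mi_malloc_aligned(size, static_cast<size_t>(alignment));
}

static void mimalloc_cpu_free(void *p) {
    mi_free(p);
}

// Register once, before any oneDNN primitive executes, so that every
// buffer is allocated and freed by the same allocator.
void init_dnnl_custom_allocator() {
    register_dnnl_cpu_memory_allocation_apis(
            mimalloc_cpu_malloc, mimalloc_cpu_free);
}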

Reference implementation:

void *malloc(size_t size, int alignment) {
    void *ptr;
    if (memory_debug::is_mem_debug())
        return memory_debug::malloc(size, alignment);

    // malloc callback
    if (g_dnnl_cpu_malloc) return g_dnnl_cpu_malloc(size, alignment);

#ifdef _WIN32
    ptr = _aligned_malloc(size, alignment);
    int rc = ptr ? 0 : -1;
#else
    int rc = ::posix_memalign(&ptr, alignment, size);
#endif

    return (rc == 0) ? ptr : nullptr;
}

void free(void *p) {
    if (memory_debug::is_mem_debug()) return memory_debug::free(p);

    // free callback
    if (g_dnnl_cpu_free) return g_dnnl_cpu_free(p);

#ifdef _WIN32
    _aligned_free(p);
#else
    ::free(p);
#endif
}
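
One possible refinement (my assumption, not part of the proposal itself): holding the callback pointers in std::atomic keeps a registration that races with an in-flight allocation well-defined, since each call loads the pointer exactly once. A sketch of the malloc side:

#include <atomic>
#ifdef _WIN32
#include <malloc.h> // _aligned_malloc
#else
#include <cstdlib> // posix_memalign
#endif

static std::atomic<t_dnnl_cpu_aligned_malloc> g_dnnl_cpu_malloc {nullptr};
static std::atomic<t_dnnl_cpu_free> g_dnnl_cpu_free {nullptr};

void *malloc(size_t size, int alignment) {
    // Load the callback once so a concurrent (re)registration cannot be
    // observed halfway through this call.
    if (auto cb = g_dnnl_cpu_malloc.load(std::memory_order_acquire))
        return cb(size, alignment);

#ifdef _WIN32
    return _aligned_malloc(size, alignment);
#else
    void *ptr;
    return ::posix_memalign(&ptr, alignment, size) == 0 ? ptr : nullptr;
#endif
}

Even so, both callbacks would still need to be registered before the first allocation, since a buffer obtained from the system allocator must not be released through the custom free.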

Additional question:
oneDNN has two malloc/free implementations:

  1. Common: oneDNN/src/common/utils.cpp, Lines 146 to 170 in 11f5558 (quoted in full above).
  2. Graph:

     void *cpu_allocator_t::malloc(size_t size, size_t alignment) {
         void *ptr = nullptr;
         const size_t align = alignment == 0 ? DEFAULT_ALIGNMENT : alignment;
     #ifdef _WIN32
         ptr = _aligned_malloc(size, align);
         int rc = ((ptr) ? 0 : errno);
     #else
         int rc = ::posix_memalign(&ptr, align, size);
     #endif /* _WIN32 */
         return (rc == 0) ? ptr : nullptr;
     }

     void cpu_allocator_t::free(void *p) {
     #ifdef _WIN32
         _aligned_free((void *)p);
     #else
         ::free((void *)p);
     #endif /* _WIN32 */
     }

Do we need to add callbacks for both of them?

CC: @jgong5, @chunyuan-w, @Guobing-Chen

jgong5 commented May 7, 2024

cc @mgouicem

mgouicem (Contributor) commented May 7, 2024

Hi @xuhancn and thanks for the proposal. Some time ago, we decided to rely on pointers pre-allocated by users instead of malloc/free callbacks. There were two main reasons for this:

  • to avoid introducing global state into the library;
  • to simplify usage (the scratchpad is just another memory object, which saves users from writing wrapper functions around their allocators to pass to oneDNN).

In general, memory allocation in oneDNN happens in four places:

  • for memory object allocation. Here users can typically provide their own handle (see this constructor).
  • for scratchpad memory allocation. oneDNN already provides means for users to pass their own memory handle as well (see the scratchpad_mode primitive attribute in this documentation).
  • for small temporary buffers not covered by the scratchpad. This mostly affects gemm functionality as it is not a primitive. We encourage users to rely on the matmul primitive as it has more features; in particular, it is compatible with a user-managed scratchpad (see the sketch after this list).
  • for jitted executable allocation. We typically don't expose this to the user since it involves non-conventional allocations and setting page properties (it does not use the malloc function you highlighted).
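
For reference, here is a minimal sketch of the user-managed scratchpad flow with the matmul primitive, assuming the oneDNN v3.x C++ API; the shapes, and the std::vector standing in for the user's allocator, are illustrative only:

#include <vector>
#include "oneapi/dnnl/dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    memory::desc src_md({8, 64}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc wei_md({64, 32}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc dst_md({8, 32}, memory::data_type::f32, memory::format_tag::ab);

    // Ask the library not to allocate its own scratchpad.
    primitive_attr attr;
    attr.set_scratchpad_mode(scratchpad_mode::user);

    matmul::primitive_desc pd(eng, src_md, wei_md, dst_md, attr);

    // The scratchpad buffer can come from any allocator the user prefers
    // (mimalloc, an arena, ...); a plain std::vector stands in here.
    // Real code should also satisfy oneDNN's alignment expectations
    // (64 bytes is a safe choice on CPU).
    std::vector<char> scratch(pd.scratchpad_desc().get_size());
    memory scratchpad_mem(pd.scratchpad_desc(), eng, scratch.data());

    memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);

    matmul(pd).execute(strm,
            {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_WEIGHTS, wei_mem},
             {DNNL_ARG_DST, dst_mem}, {DNNL_ARG_SCRATCHPAD, scratchpad_mem}});
    strm.wait();
    return 0;
}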

Could you clarify if you are already using the mechanisms above and still see allocation overheads?

xuhancn (Author) commented May 22, 2024

Hi @mgouicem,
Thanks for your comment, and sorry for the delayed reply.
I took some time to write a POC and collect some performance data.

My proposal is indeed meant to optimize the item you mentioned: "for small temporary buffers not covered by the scratchpad. This mostly affects gemm functionality as it is not a primitive. We encourage users to rely on the matmul primitive as it has more features; in particular, it is compatible with a user-managed scratchpad."

The POC PR is here: pytorch/pytorch#126049, which contains:

  1. Adding malloc/free registration functions to dnnl: xuhancn/oneDNN@f5ff0a6...c4d40c6#diff-f41d3e0deddfc14df260aa568dda8ffe7a22b8f6d5db94711fa2b7a64cc0855b
  2. PyTorch registering its mimalloc with dnnl for temporary buffer allocation.

The performance comparison is as follows:

[image: profiling comparison of mkldnn_convolution before and after registering mimalloc]

After registering mimalloc, mkldnn_convolution performance improved by about 0.3s. Could you please help design a memory allocation callback mechanism? It would help PyTorch achieve better performance on Windows; much appreciated.
CC: @jgong5
