Add function to get the amount of memory used by torch tensors #676

Open · wants to merge 1 commit into master
5 changes: 5 additions & 0 deletions lib/TH/THGeneral.c
@@ -268,3 +268,8 @@ double THLog1p(const double x)
return log1p(x);
#endif
}

long THGetHeapSize()
{
return heapSize;
}
1 change: 1 addition & 0 deletions lib/TH/THGeneral.h.in
@@ -51,6 +51,7 @@ TH_API void* THAlloc(long size);
TH_API void* THRealloc(void *ptr, long size);
TH_API void THFree(void *ptr);
TH_API void THSetGCHandler( void (*torchGCHandlerFunction)(void *data), void *data );
TH_API long THGetHeapSize(void);
// this hook should only be called by custom allocator functions
TH_API void THHeapUpdate(long size);

7 changes: 7 additions & 0 deletions utils.c
@@ -227,6 +227,12 @@ static int torch_updateerrorhandlers(lua_State *L)
return 0;
}

static int torch_getmemory(lua_State *L)
{
lua_pushinteger(L, THGetHeapSize());
return 1;
}

static const struct luaL_Reg torch_utils__ [] = {
{"getdefaulttensortype", torch_lua_getdefaulttensortype},
{"isatty", torch_isatty},
@@ -250,6 +256,7 @@ static const struct luaL_Reg torch_utils__ [] = {
{"pointer", luaT_lua_pointer},
{"setheaptracking", torch_setheaptracking},
{"updateerrorhandlers", torch_updateerrorhandlers},
{"getmemory", torch_getmemory},
Contributor:

How about memoryused? It might be more self-explanatory.

Contributor Author:

I agree that simply getmemory is not a good name; memoryused is indeed better.
Any other opinions? cutorch has a getMemoryUsage (which prints the total memory available and the memory used on the GPU). Maybe it would be good to have some uniformity there.

Contributor:

getMemoryUsage sounds good to me as well 😄 I agree that it might be better to keep it similar.

Contributor Author:

But there is a difference between getMemoryUsage and this PR: this PR aims at showing the memory used by torch, not the total memory used. By the way, the same change could be made in cutorch, in which case we would have two functions with a similar purpose that behave differently.

Contributor:

Hm, ok, you're right. Do you have any suggestions for the name? getTorchMemory[Usage]?

Member:

torch.getHeapSize?

Contributor:

Sounds a bit as if it would return Lua's heap size as well, but I like it.

Contributor:

torch.tensormemoryusage_if_youre_not_using_too_many_threads_plusOrMinus100MB()

{NULL, NULL}
};
