
optimizing memory usage #38

Open
ThomasWaldmann opened this issue May 20, 2018 · 2 comments

Comments


ThomasWaldmann commented May 20, 2018

py-esp32-ulp is able to assemble about 3kB of source code when running on a standard ESP32 "WROOM" chip; assembling more raises a MemoryError (out of memory). Larger amounts of source code can be assembled when running on a device with more RAM (e.g. a development machine running the UNIX port of MicroPython).

PR #36 improved this a bit by calling the garbage collector now and then.

The limit looks rather low, but it is not unreasonable: the ULP has only 4kiB of RAM in total, we will likely only be allowed to use ~2kiB of it from MicroPython, and usually some of that is needed as a buffer (thus: not available for code).

(moved from #35)

We can collect ideas here about how to improve memory usage in case we need to.


ThomasWaldmann commented May 20, 2018

The place where it currently runs out of memory is usually a list comprehension doing a transformation

  • from A) a list of strings (source lines, comments already removed)
  • to B) an equivalent list of tuples (label, opcode, args).

Having both A and B in memory at the same time seems to be too much. We could avoid this by restructuring the code to work more line-by-line (doing everything needed to process one source line to binary in one go, so we don't have to collect intermediate results in such big lists).
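As a rough illustration, a line-by-line restructuring could look like a generator that yields one parsed tuple at a time, so no big intermediate list is ever built. All names here are hypothetical (not py-esp32-ulp's real API), and `parse_line` is only a toy parser:

```python
# Hypothetical sketch: process each source line in one go instead of
# materializing both the string list (A) and the tuple list (B).
# parse_line() and assemble_stream() are placeholder names.

def parse_line(line):
    # toy parser: "label: opcode arg1, arg2" -> (label, opcode, args)
    label = None
    if ':' in line:
        label, line = line.split(':', 1)
        label = label.strip()
    parts = line.split(None, 1)
    opcode = parts[0] if parts else ''
    args = tuple(a.strip() for a in parts[1].split(',')) if len(parts) > 1 else ()
    return (label, opcode, args)

def assemble_stream(lines):
    # yield one parsed tuple at a time; nothing is accumulated here
    for line in lines:
        line = line.strip()
        if not line:
            continue
        yield parse_line(line)

for record in assemble_stream(["start: move r0, 0", "jump start"]):
    print(record)
```

A consumer that needs two passes would have to either re-run the generator over the source or store only the (smaller) tuples, which is the trade-off discussed below.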

OR, we could try to free the memory needed for elements of A while building B - and use B as input for pass1 and pass2.
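The "free A while building B" variant could be sketched like this (hypothetical helper, not the project's actual code): iterate A by index and overwrite each consumed entry with None, so the garbage collector can reclaim each source-line string as soon as it has been parsed, keeping peak memory close to one list instead of two.

```python
# Hypothetical sketch: consume list A in place while building B.
# Replacing each entry with None drops the last reference to the
# string, so the GC can reclaim it before the loop finishes.

def transform_in_place(lines, parse):
    result = []
    for i in range(len(lines)):
        result.append(parse(lines[i]))
        lines[i] = None   # free the source-line string right away
    return result

a = ["one two", "three four"]
b = transform_in_place(a, lambda s: tuple(s.split()))
```

On MicroPython this could be combined with an occasional `gc.collect()` (as PR #36 already does elsewhere) to actually reclaim the freed strings during the loop.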

Also we (or the caller of our code) could (after A is computed) try to free the memory where the full source code is stored as one big string.
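For the big-string case, the caller-side sketch is simple (hypothetical names again): once the source has been split into lines, drop the last reference to the original string and let the GC reclaim it.

```python
# Hypothetical sketch: release the full source string once list A exists.
import gc

def load_lines(src):
    # split the big source string into list A
    return src.split('\n')

src = "move r0, 0\njump 0"
lines = load_lines(src)
del src        # caller drops its reference to the big string
gc.collect()   # optional: reclaim immediately, as PR #36 does periodically
```

Note this only helps if no other object (e.g. a caller's variable) still holds a reference to the full source string.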

@mattytrentini

yielding on a line-by-line basis seems like it could be efficient...
