I tried to extract a very large PDF (300 MB, 900+ pictures) and it keeps crashing because it fills up the RAM.
I will take a look at where the memory leak happens, but it is very annoying.
pdfextract is quite an old script, written before I implemented object enumerators or lazy parsing. As such, it probably performs a lot of object copies and does not handle big files well.
Could you share the file you are trying to parse? I will see if I can do something about it. Also, which version of Ruby are you using?
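The difference between eager object copies and lazy enumeration can be sketched in plain Ruby. This is a hypothetical illustration, not origami's actual API: `fake_objects` and `fake_objects_lazy` are stand-ins for a parser walking a PDF's object table.

```ruby
# Eager style: materializes every "object" up front, so a file with
# hundreds of large images keeps them all in memory at once.
def fake_objects(count)
  Array.new(count) { |i| "image-#{i}" }
end

# Lazy style: yields one object at a time; nothing is retained after
# the consumer is done with it, so peak memory stays roughly constant.
def fake_objects_lazy(count)
  Enumerator.new do |yielder|
    count.times { |i| yielder << "image-#{i}" }
  end
end

# With .lazy, only the objects actually consumed are ever produced:
# here the map is applied to just the first two items.
extracted = fake_objects_lazy(1_000_000).lazy.map(&:upcase).first(2)
```

The same consumer code works against either producer; switching extraction scripts over to a lazy enumerator is what keeps a 300 MB input from being copied wholesale into RAM.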