cfb_add and write performance issues #2

Open · rossj opened this issue Apr 7, 2018 · 6 comments

@rossj (Contributor) commented Apr 7, 2018

Hi there,

I'm working on a program that converts .pst files to .msg files, primarily in Node but also in the browser, and it uses this library in a very write-heavy way to save the .msg files. Through testing and profiling, I've noticed a couple of write-related performance issues that I wanted to share.

With some modifications, I've been able to reduce the output generation time of my primary "large" test case (4300 .msg files from 1 .pst) by a factor of 8 from about 16 minutes to 2 minutes (running on Node).

The first issue, which may just be a matter of documentation, is that calling cfb_add repeatedly to add every stream to a new document is very slow, because it calls cfb_gc and cfb_rebuild on every invocation. We switched from cfb_add to pushing directly onto cfb.FileIndex and cfb.FullPaths (and then calling cfb_rebuild once at the end), which reduced the output time from 16 minutes to 3.5 minutes; a sketch of the approach follows.
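
For reference, the direct-push approach looks roughly like this (a minimal sketch; the entry field names and the root-path convention are assumptions about the library's internals rather than documented API, so verify them against the version in use):

var CFB = require("cfb");

// Bypass cfb_add's per-call gc/rebuild by pushing entries directly.
// The entry shape (name/type/content/size) is an assumption about the
// internal representation.
function addStreamFast(cfb, name, content) {
	cfb.FullPaths.push(cfb.FullPaths[0] + name); // FullPaths[0] is assumed to be the root path, e.g. "Root Entry/"
	cfb.FileIndex.push({
		name: name,
		type: 2,            // 2 = stream object in a CFB directory entry
		content: content,
		size: content.length
	});
}

var cfb = CFB.utils.cfb_new();
for(var i = 0; i < 4300; ++i) addStreamFast(cfb, "stream" + i, Buffer.alloc(500, i));
CFB.utils.cfb_rebuild(cfb); // restore directory invariants once, at the end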

The second issue is that the _write and WriteShift functions do not take advantage of Buffer when it is available. By using Buffer.alloc() for the initial creation (which guarantees a zero-filled initialization), along with Buffer.copy for content streams, Buffer.write for hex / utf16le strings, and Buffer's various integer write methods, we were able to further reduce the output time from 3.5 minutes to 2 minutes.
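
For illustration, here is a minimal sketch of the difference (writeUInt32LEManual is a hypothetical stand-in for the library's hand-rolled helpers):

// Hand-rolled little-endian uint32 write, as used on plain arrays.
function writeUInt32LEManual(arr, pos, val) {
	arr[pos]     =  val         & 0xFF;
	arr[pos + 1] = (val >>> 8)  & 0xFF;
	arr[pos + 2] = (val >>> 16) & 0xFF;
	arr[pos + 3] = (val >>> 24) & 0xFF;
}

// Buffer equivalents, implemented natively and typically much faster on Node:
var out = Buffer.alloc(16);          // guaranteed zero-filled
out.writeUInt32LE(0xDEADBEEF, 0);    // native integer write
Buffer.from("hello").copy(out, 4);   // bulk content copy
out.write("0a0b", 8, "hex");         // hex string write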

If you wish, I would be happy to share my changes, or to work on a pull request that uses Buffer functions when available. My current changes don't do any feature detection and instead rely on Buffer always being available (even in the browser we use feross/buffer), so they would need some more work to maintain functionality in non-Buffer environments.

Thanks

@SheetJSDev (Contributor)

On performance: none of our tests or workflows deal with hundreds of files, let alone thousands of files. The function that sees the most write activity reshapes XLS VBA blobs to XLSB form, and the most extreme case I've seen involved about 250 extremely small files, so it's not at all surprising that there are ways to improve performance when adding a large number of files. When this was built, node was still in the 0.8.x series and the node developers were still working out performance kinks.

  1. The original intention was to ensure that the representation was valid and "complete" after each operation; among other things, this ensures that all "parent directory" entries are created. But those are re-created at the end anyway, so removing the GC call makes sense.

  2. Older versions of node didn't automatically zero-fill buffers. I agree Buffer.alloc should be used when available (IIRC it is not available in the 4.x series, so a check is necessary; a detection sketch follows this list). As for the individual utilities like __writeUInt32LE: at the time, the buffer utility functions were dramatically slower than the hand-rolled versions (this was in the 0.8.x and 0.10.x series), and performance may have improved since then.
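
A version-safe allocation helper might look like the following (a minimal sketch, assuming only that Buffer.alloc can be missing on older Node releases; allocZeroed is a hypothetical name):

// Hypothetical helper: fall back to manual zeroing when Buffer.alloc is
// missing (older Node exposes only `new Buffer(n)`, which is not zeroed).
function allocZeroed(n) {
	if(typeof Buffer !== "undefined") {
		if(Buffer.alloc) return Buffer.alloc(n);    // modern Node: zero-filled
		var b = new Buffer(n); b.fill(0); return b; // legacy Node: zero manually
	}
	return new Uint8Array(n); // non-Buffer environments: typed arrays are zeroed
}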

Contributions would be awesome :). To start, we'd accept a PR that just removed the call to cfb_gc within the cfb_add function.

P.S.: Here's a throwback issue about node function performance (nodejs/node-v0.x-archive#7809) affecting the 0.8.x and 0.10.x series.

@rossj (Contributor, Author) commented Apr 7, 2018

Great, thank you for the quick response and background info. I'm sure that my use case is outside of what was originally intended / tested, so no fault to the library for not being optimized just for me :).

Am I correct in thinking that cfb.flow.js is the primary source file, and the other .js files are derived from it in a build step?

@SheetJSDev (Contributor)

The primary source files are the bits files, which are concatenated in a build step. The approach is a bit weird given what people use in 2018, but if you make the changes to cfb.flow.js directly we can always amend the commit to update the bits files.

SheetJSDev added a commit that referenced this issue Apr 9, 2018
- `unsafe` option to `cfb_add` for bulk write (see #2)
- use `lastIndexOf` to save operations in BFP queue
@SheetJSDev (Contributor)

@rossj we just pushed 1.0.6 with the first part guarded behind the option unsafe:true:

CFB.utils.cfb_add(cfb, path, content, {unsafe:true});

Runkit unfortunately imposes a memory limit, but https://runkit.com/5acb0cf21599f20012a3e001/5acb0cf2aeee9400120ba682 should demonstrate 4000 files. It uses a small test script that adds 5000-byte files to the FAT and 500-byte files to the mini FAT:

var CFB = require('./');
var cfb = CFB.utils.cfb_new();
var cnt = 20000;
console.log("alloc", new Date());
var bufs = [];
for(var i = 0; i < cnt; ++i) bufs[i] = [Buffer.alloc(500, i), Buffer.alloc(5000, i)];
console.log("start", new Date());
for(var i = 0; i < cnt; ++i) {
	if(!(i%100)) console.log(i, new Date());
	CFB.utils.cfb_add(cfb, "/short/" + i.toString(16), bufs[i][0], {unsafe:true}); 
	CFB.utils.cfb_add(cfb, "/long/"  + i.toString(16), bufs[i][1], {unsafe:true}); 
}
console.log("prewrite", new Date());
CFB.utils.cfb_gc(cfb);
CFB.writeFile(cfb, "out.bin");
console.log("done", new Date());
var cfb2 = CFB.read("out.bin", {type:"file"});
console.log("read", new Date());
for(var i = 0; i < cnt; i += 100) {
	var file = CFB.find(cfb2, "/short/" + i.toString(16));
	if(0 != Buffer.compare(file.content, bufs[i][0])) throw new Error("short " + i);
	file = CFB.find(cfb2, "/long/" + i.toString(16));
	if(0 != Buffer.compare(file.content, bufs[i][1])) throw new Error("long " + i);
}

@SheetJSDev (Contributor)

@rossj Before a new release is cut, is there any other change you recommend?

@rossj (Contributor, Author) commented Sep 5, 2021

First, sorry for being slow with my PRs. I just submitted a change that uses Buffer.copy for the file contents, which shows performance improvements for my use case. It relies on the output Buffer being zero-filled beforehand. I'm now wondering whether it may be faster to skip the pre-fill entirely and instead use allocUnsafe() with Buffer.fill() to zero only the extra padding / byte-alignment bytes between the file entries (a sketch of the two strategies follows). I'll run some benchmarks to see if one is clearly better than the other.
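
For concreteness, the two strategies side by side (a minimal sketch; the sizes and offsets here are made up for illustration, the real values come from the CFB sector layout):

var content = Buffer.from("hello world");
var totalSize = 64, offset = 8;

// Strategy A: zero-fill the whole output up front, then copy contents in.
var outA = Buffer.alloc(totalSize);
content.copy(outA, offset);

// Strategy B: skip the pre-fill, copy contents, then zero only the padding
// gaps between entries.
var outB = Buffer.allocUnsafe(totalSize);
content.copy(outB, offset);
outB.fill(0, 0, offset);                          // gap before the entry
outB.fill(0, offset + content.length, totalSize); // padding after the entry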
