Batching calls within a lock / scope #28
For some reason, this doesn't work with all kinds of values... or something. I'm trying to make the whole ctx.Create use one handle scope, but started getting fun errors:

^ This was while trying to create a Function (using a Bind function I made that doesn't acquire the scope), but I've seen it with a Date object too.

I was wrong; apparently that doesn't keep the locker or any other scope alive. It appears you don't need the locker or anything of the sort to create all kinds of values, except Date and Function. I don't think I can prevent the locker and scope destructors from firing when calling a different function. My C++ is pretty awful!
Update time: I was getting the scope errors because of switching threads. I made a type:

```go
type handleScope chan func()

func (hs handleScope) Do(f func()) {
	done := make(chan struct{}, 1)
	hs <- func() {
		f()
		done <- struct{}{}
	}
	<-done
}

func (hs handleScope) End() {
	close(hs)
}
```

Example usage:

```go
func (ctx *Context) Create(val interface{}) (*Value, error) {
	scope := ctx.WithScope()
	var v *Value
	var err error
	scope.Do(func() {
		v, _, err = ctx.create(reflect.ValueOf(val))
	})
	scope.End()
	return v, err
}
```

This is not super fast for one value. Hell, it's slower and allocates a bit more. However, for a few values, it makes things faster:

```go
func BenchmarkNewObjectWithoutScope(b *testing.B) {
	ctx := NewIsolate().NewContext()
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		ctx.NewObjectWoutScope()
		ctx.NewObjectWoutScope()
		ctx.NewObjectWoutScope()
		ctx.NewObjectWoutScope()
		ctx.NewObjectWoutScope()
		ctx.NewObjectWoutScope()
	}
}

func BenchmarkNewObjectWithScope(b *testing.B) {
	ctx := NewIsolate().NewContext()
	scope := ctx.WithScope() // don't need the chan here
	todo := func() {
		ctx.NewObjectWScope()
		ctx.NewObjectWScope()
		ctx.NewObjectWScope()
		ctx.NewObjectWScope()
		ctx.NewObjectWScope()
		ctx.NewObjectWScope()
	}
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		scope.Do(todo)
	}
}
```
There is a lot more lock contention with the scoped version. There's a `for range` on a channel that sits single-threaded waiting for data. For my use cases, this is pretty interesting: I have complex objects to pass, and as soon as there are 2-3 sub-values needing creation, it can get pretty slow. I can push to a branch if you're interested in seeing what's happening.
Ok, I woke up realizing that benchmark was unfair: I was not getting and putting the scope back during the benchmark, which the other one does. With that in place, ns/op is very similar, but total ops is almost 10x lower with the batch mode, which surprises me a lot! It is technically a few more cgo calls, but all in all, 2x less time is spent in cgo calls with the batch mode; it's spent elsewhere though. It's a bit hard to know which one is better in this very specific benchmark. I would think it'd be faster to prevent the lock and scope from changing threads all the time.
While I was fiddling with bindings in a different language, I figured out that what was expensive was all the locking and scoping on every call. If you can instead call back into Go while holding a lock and/or a scope, you can really speed things up.

My thinking is that with this, we can batch all value creation (for structs and objects, etc.), or all logically grouped operations, within an isolate lock and/or context scope.
Example implementation:
The C++ used:
Benchmarks:
Results:
Example usage:
edit: posted the comment too quickly by mistake; added code and examples.