GH-2998 native store memory overflow performance #4974

Open

hmottestad wants to merge 10 commits into develop from GH-2998-native-store-memory-overflow-performance

Conversation

hmottestad
Contributor

GitHub issue resolved: #2998

Briefly describe the changes proposed in this PR:


PR Author Checklist (see the contributor guidelines for more details):

  • my pull request is self-contained
  • I've added tests for the changes I made
  • I've applied code formatting (you can use mvn process-resources to format from the command line)
  • I've squashed my commits where necessary
  • every commit message starts with the issue number (GH-xxxx) followed by a meaningful description of the change

@hmottestad
Contributor Author

hmottestad commented May 9, 2024

# JMH version: 1.37
# VM version: JDK 22, OpenJDK 64-Bit Server VM, 22+36
# VM invoker: /Library/Java/JavaVirtualMachines/zulu-22.jdk/Contents/Home/bin/java
# VM options: -Xms500M -Xmx500M -XX:+UseParallelGC
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: <none>
# Measurement: 10 iterations, 1 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData

# Run progress: 0.00% complete, ETA 00:00:00
# Fork: 1 of 1
Iteration   1: 13:14:30.942 [org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG syncing at 450000 triples. max block size: 62357688
13:14:30.944 [org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12752373344023074604
13:15:03.288 [org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG overflow synced to disk
49493.922 ms/op
Iteration   2: 13:15:26.215 [org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] WARN  Rolling back transaction due to connection close
<failure>

java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.base/java.util.Arrays.copyOfRange(Arrays.java:3852)
	at java.base/java.lang.StringLatin1.newString(StringLatin1.java:864)
	at java.base/java.lang.String.substring(String.java:2881)
	at org.eclipse.rdf4j.common.net.ParsedIRI.parseMember(ParsedIRI.java:987)
	at org.eclipse.rdf4j.common.net.ParsedIRI.parseScheme(ParsedIRI.java:848)
	at org.eclipse.rdf4j.common.net.ParsedIRI.parse(ParsedIRI.java:771)
	at org.eclipse.rdf4j.common.net.ParsedIRI.<init>(ParsedIRI.java:268)
	at org.eclipse.rdf4j.rio.helpers.AbstractRDFParser.createURI(AbstractRDFParser.java:351)
	at org.eclipse.rdf4j.rio.helpers.AbstractRDFParser.resolveURI(AbstractRDFParser.java:341)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseURI(TurtleParser.java:944)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseValue(TurtleParser.java:571)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseSubject(TurtleParser.java:398)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseTriples(TurtleParser.java:333)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseStatement(TurtleParser.java:203)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parse(TurtleParser.java:165)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parse(TurtleParser.java:128)
	at org.eclipse.rdf4j.repository.util.RDFLoader.loadInputStreamOrReader(RDFLoader.java:302)
	at org.eclipse.rdf4j.repository.util.RDFLoader.load(RDFLoader.java:229)
	at org.eclipse.rdf4j.repository.util.RDFLoader.load(RDFLoader.java:191)
	at org.eclipse.rdf4j.repository.base.AbstractRepositoryConnection.add(AbstractRepositoryConnection.java:257)
	at org.eclipse.rdf4j.repository.RepositoryConnection.add(RepositoryConnection.java:785)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData(OverflowBenchmarkReal.java:79)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkReal_loadLotsOfData_jmhTest.loadLotsOfData_avgt_jmhStub(OverflowBenchmarkReal_loadLotsOfData_jmhTest.java:236)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkReal_loadLotsOfData_jmhTest.loadLotsOfData_AverageTime(OverflowBenchmarkReal_loadLotsOfData_jmhTest.java:176)
	at java.base/java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(DirectMethodHandle$Holder)
	at java.base/java.lang.invoke.LambdaForm$MH/0x0000038001020800.invoke(LambdaForm$MH)
	at java.base/java.lang.invoke.LambdaForm$MH/0x0000038001020c00.invokeExact_MT(LambdaForm$MH)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invokeImpl(DirectMethodHandleAccessor.java:155)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:527)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:504)




Result "org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkReal.loadLotsOfData":
  49493.922 ms/op


# Run complete. Total time: 00:00:57

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

NOTE: Current JVM experimentally supports Compiler Blackholes, and they are in use. Please exercise
extra caution when trusting the results, look into the generated code to check the benchmark still
works, and factor in a small probability of new VM bugs. Additionally, while comparisons between
different JVMs are already problematic, the performance difference caused by different Blackhole
modes can be very significant. Please make sure you use the consistent Blackhole mode for comparisons.

Benchmark                             Mode  Cnt      Score   Error  Units
OverflowBenchmarkReal.loadLotsOfData  avgt       49493.922          ms/op

Process finished with exit code 0
Benchmark                                               Mode  Cnt      Score       Error  Units
OverflowBenchmarkSynthetic.loadLotsOfDataEmptyStore     avgt   10   8545.636 ±  1747.399  ms/op
OverflowBenchmarkSynthetic.loadLotsOfDataNonEmptyStore  avgt   10  18126.761 ± 12417.598  ms/op
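
For context, these numbers come from JMH runs configured like the header at the top of this comment: average-time mode, no warmup, 10 measurement iterations, and a fixed 500M heap. A minimal sketch of a benchmark class set up that way is below; the annotation values mirror the header, while the class name and method body are hypothetical placeholders for the actual loading code in OverflowBenchmarkReal.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

// Sketch of a JMH benchmark configured like the run header above.
// The annotation values mirror the reported parameters; the body is a
// hypothetical placeholder, not the actual OverflowBenchmarkReal code.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 0)
@Measurement(iterations = 10, time = 1, timeUnit = TimeUnit.MILLISECONDS)
@Fork(value = 1, jvmArgs = { "-Xms500M", "-Xmx500M", "-XX:+UseParallelGC" })
public class OverflowBenchmarkSketch {

    @Benchmark
    public void loadLotsOfData() {
        // placeholder: the real benchmark loads enough data into a
        // NativeStore transaction to trigger the memory overflow path
    }
}
```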

@hmottestad
Contributor Author

# JMH version: 1.37
# VM version: JDK 22, OpenJDK 64-Bit Server VM, 22+36
# VM invoker: /Library/Java/JavaVirtualMachines/zulu-22.jdk/Contents/Home/bin/java
# VM options: -Xms1G -Xmx1G -XX:+UseParallelGC
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: <none>
# Measurement: 10 iterations, 1 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions

# Run progress: 0.00% complete, ETA 00:00:00
# Fork: 1 of 1
Iteration   1: Waiting
Waiting
Waiting
Waiting
Waiting
Removing
Removing
Removing
Removing
Removing
Removed
Removed
Removed
Removed
Removed
Waiting
Waiting
Removing
Removing
Removed
Removed
Waiting
Removing
Removed
Waiting
Removing
Removed
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Added
Done: 0
Done: 1
Done: 2
Done: 3
Done: 4
Done: 5
Done: 6
Done: 7
Done: 8
Done: 9
Done: 10
Done: 11
Done: 12
Done: 13
Done: 14
Done: 15
Done: 16
Done: 17
Added
Waiting
Added
Done: 18
Added
Waiting
Added
Added
Waiting
Removing
Removing
Removing
Added
Removing
Removed
Waiting
Done: 19
Done: 20
Removed
Removed
Waiting
Done: 21
Done: 22
Done: 23
Done: 24
Added
Removed
Waiting
Done: 25
Done: 26
Added
Removing
Removing
Removing
Added
Waiting
Removed
Done: 27
Done: 28
Removed
Waiting
Removed
Done: 29
Done: 30
Done: 31
Done: 32
Added
Waiting
Added
Removing
Removing
Added
Waiting
Removing
Added
Removed
Done: 33
Waiting
Done: 34
Added
Removing
Removed
Done: 35
Waiting
Done: 36
Added
Removed
Done: 37
Waiting
Done: 38
Removing
Added
Removing
Added
Waiting
Removed
Done: 39
Done: 40
Removing
Removing
Added
Waiting
Removed
Done: 41
Done: 42
Added
Waiting
Removed
Done: 43
Done: 44
Removing
Added
Waiting
Removing
Removed
Removed
Done: 45
Waiting
Done: 46
Done: 47
Done: 48
Removing
Added
Removing
Added
Waiting
Removed
Done: 49
Done: 50
Added
Waiting
Removing
Removed
Done: 51
Done: 52
Removing
Added
Waiting
Removing
Removed
Done: 53
Done: 54
Removed
Done: 55
Done: 56
Waiting
Added
Removing
Added
Waiting
Removed
Removed
Done: 57
Done: 58
Done: 59
Done: 60
Removing
Removed
Done: 61
Done: 62
Removing
Removing
Removing
Removed
Done: 63
Done: 64
Removed
Done: 65
Done: 66
Removed
Removed
Removed
Done: 67
Done: 68
Done: 69
Done: 70
Done: 71
Done: 72
Removing
Removing
Removed
Done: 73
Done: 74
13:23:29.397 [pool-2-thread-1] DEBUG syncing at 50000 triples. max block size: 6499288
13:23:29.399 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11307788209192422163
13:23:35.638 [pool-2-thread-1] DEBUG overflow synced to disk
13:23:38.857 [pool-2-thread-1] DEBUG syncing at 30000 triples. max block size: 4877392
13:23:38.857 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1301901278360075168
13:23:44.386 [pool-2-thread-1] DEBUG overflow synced to disk
13:23:46.488 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 5249088
13:23:46.488 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model4298799010314449698
13:23:48.953 [pool-2-thread-1] DEBUG overflow synced to disk
13:23:52.369 [pool-2-thread-1] DEBUG syncing at 30000 triples. max block size: 7266288
13:23:52.369 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model15717526378892665263
13:23:55.897 [pool-2-thread-1] DEBUG overflow synced to disk
13:24:03.921 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 6569288
13:24:03.922 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model5447578019884602512
13:24:06.678 [pool-2-thread-1] DEBUG overflow synced to disk
13:24:10.832 [pool-2-thread-1] DEBUG syncing at 90000 triples. max block size: 5854064
13:24:10.832 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8627601442416399246
13:24:26.184 [pool-2-thread-1] DEBUG overflow synced to disk
13:24:30.150 [pool-2-thread-1] DEBUG syncing at 50000 triples. max block size: 9630456
13:24:30.150 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model18002110255996823034
13:24:37.800 [pool-2-thread-1] DEBUG overflow synced to disk
13:24:39.279 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 8156360
13:24:39.280 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11837075722744670073
13:24:42.075 [pool-2-thread-1] DEBUG overflow synced to disk
13:24:58.171 [pool-2-thread-1] DEBUG syncing at 30000 triples. max block size: 7706944
13:24:58.172 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model3933961900437134182
13:25:04.061 [pool-2-thread-1] DEBUG overflow synced to disk
13:25:20.107 [pool-2-thread-1] DEBUG syncing at 40000 triples. max block size: 5601160
13:25:20.107 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1458292109409091449
13:25:28.324 [pool-2-thread-1] DEBUG overflow synced to disk
13:25:31.672 [pool-2-thread-1] DEBUG syncing at 30000 triples. max block size: 9241776
13:25:31.672 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model17298852515836143206
13:25:38.640 [pool-2-thread-1] DEBUG overflow synced to disk
13:25:58.703 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 8820464
13:25:58.703 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model6899823914189027585
13:26:03.488 [pool-2-thread-1] DEBUG overflow synced to disk
13:26:12.296 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 0
13:26:12.297 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model6835915591582974849
13:26:17.997 [pool-2-thread-1] DEBUG overflow synced to disk
13:26:38.925 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 8529936
13:26:38.925 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1084140372668921997
13:26:44.308 [pool-2-thread-1] DEBUG overflow synced to disk
13:26:52.963 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 14614904
13:26:52.963 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model5278015729073618832
13:27:00.749 [pool-2-thread-1] DEBUG overflow synced to disk
13:27:25.886 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 8515472
13:27:25.887 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12151056399573956302
13:27:33.212 [pool-2-thread-1] DEBUG overflow synced to disk
13:27:42.613 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 0
13:27:42.614 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10794800288332688901
13:27:50.250 [pool-2-thread-1] DEBUG overflow synced to disk
13:28:21.211 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 0
13:28:21.211 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9474274078515848729
13:28:40.530 [pool-2-thread-1] DEBUG overflow synced to disk
13:28:51.945 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 754600
13:28:51.945 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2427271201727630495
13:29:17.652 [pool-2-thread-1] DEBUG overflow synced to disk
13:29:50.282 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 8695312
13:29:50.282 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model17541568617229060875
13:29:57.232 [pool-2-thread-1] DEBUG overflow synced to disk
13:30:09.230 [pool-2-thread-1] DEBUG syncing at 20000 triples. max block size: 2598152
13:30:09.230 [pool-2-thread-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1533127922652456534
13:30:09.513 [pool-2-thread-1] WARN  Rolling back transaction due to connection close
<failure>

java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:185)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_avgt_jmhStub(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:236)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_AverageTime(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:176)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:527)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:504)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:183)
	... 12 more
Caused by: java.lang.OutOfMemoryError: Java heap space
	at java.base/java.util.BitSet.initWords(BitSet.java:171)
	at java.base/java.util.BitSet.<init>(BitSet.java:166)
	at org.eclipse.rdf4j.sail.nativerdf.datastore.HashFile.<init>(HashFile.java:161)
	at org.eclipse.rdf4j.sail.nativerdf.datastore.HashFile.<init>(HashFile.java:101)
	at org.eclipse.rdf4j.sail.nativerdf.datastore.DataStore.<init>(DataStore.java:49)
	at org.eclipse.rdf4j.sail.nativerdf.ValueStore.<init>(ValueStore.java:141)
	at org.eclipse.rdf4j.sail.nativerdf.NativeSailStore.<init>(NativeSailStore.java:97)
	at org.eclipse.rdf4j.sail.nativerdf.NativeSailStore.<init>(NativeSailStore.java:85)
	at org.eclipse.rdf4j.sail.nativerdf.NativeStore$MemoryOverflowIntoNativeStore.createSailStore(NativeStore.java:104)
	at org.eclipse.rdf4j.sail.nativerdf.MemoryOverflowModel.overflowToDisk(MemoryOverflowModel.java:281)
	at org.eclipse.rdf4j.sail.nativerdf.MemoryOverflowModel.checkMemoryOverflow(MemoryOverflowModel.java:265)
	at org.eclipse.rdf4j.sail.nativerdf.MemoryOverflowModel.add(MemoryOverflowModel.java:137)
	at org.eclipse.rdf4j.model.impl.AbstractModel.addAll(AbstractModel.java:129)
	at org.eclipse.rdf4j.sail.base.Changeset.approveAll(Changeset.java:887)
	at org.eclipse.rdf4j.sail.base.Changeset.sinkApproved(Changeset.java:837)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:519)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.compressChanges(SailSourceBranch.java:372)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$2.close(SailSourceBranch.java:252)
	at org.eclipse.rdf4j.sail.base.DelegatingSailDataset.close(DelegatingSailDataset.java:51)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$2.close(SailSourceBranch.java:248)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.endUpdateInternal(SailSourceConnection.java:692)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.endUpdate(AbstractSailConnection.java:668)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.flush(AbstractSailConnection.java:479)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:224)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:169)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent$$Lambda/0x0000400001104220.run(Unknown Source)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)




# Run complete. Total time: 00:06:50

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

NOTE: Current JVM experimentally supports Compiler Blackholes, and they are in use. Please exercise
extra caution when trusting the results, look into the generated code to check the benchmark still
works, and factor in a small probability of new VM bugs. Additionally, while comparisons between
different JVMs are already problematic, the performance difference caused by different Blackhole
modes can be very significant. Please make sure you use the consistent Blackhole mode for comparisons.

Benchmark  Mode  Cnt  Score   Error  Units

Process finished with exit code 0
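
The Caused by section shows the path that triggers the overflow: MemoryOverflowModel.add calls checkMemoryOverflow, which calls overflowToDisk, which creates a fresh disk-backed store. A simplified sketch of that pattern, with an assumed free-memory threshold and hypothetical names rather than rdf4j's actual fields:

```java
// Simplified sketch of the overflow path in the stack trace above
// (add -> checkMemoryOverflow -> overflowToDisk). The threshold and
// field names are assumptions for illustration, not rdf4j's code.
abstract class OverflowModelSketch<T> {

    private static final Runtime RUNTIME = Runtime.getRuntime();
    // assumed: overflow once free heap drops below a quarter of max
    private static final double MIN_FREE_FRACTION = 0.25;

    private boolean overflowed = false;

    final void add(T statement) {
        if (!overflowed) {
            checkMemoryOverflow();
        }
        store(statement);
    }

    private void checkMemoryOverflow() {
        long max = RUNTIME.maxMemory();
        long free = RUNTIME.freeMemory() + (max - RUNTIME.totalMemory());
        if ((double) free / max < MIN_FREE_FRACTION) {
            overflowToDisk(); // swap in a disk-backed store, then keep adding
            overflowed = true;
        }
    }

    abstract void store(T statement); // in-memory or disk-backed sink

    abstract void overflowToDisk(); // e.g. create a temp-dir store
}
```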

@hmottestad
Contributor Author

hmottestad commented May 9, 2024

After

Benchmark                                              Mode  Cnt      Score      Error  Units
OverflowBenchmarkReal.loadLotsOfData                   avgt   10  18801.835 ± 1093.526  ms/op
Benchmark                                               Mode  Cnt      Score      Error  Units
OverflowBenchmarkConcurrent.manyConcurrentTransactions  avgt   10  21941.674 ± 9700.417  ms/op
Benchmark                                               Mode  Cnt     Score     Error  Units
OverflowBenchmarkSynthetic.loadLotsOfDataEmptyStore     avgt   10  3418.238 ± 117.677  ms/op
OverflowBenchmarkSynthetic.loadLotsOfDataNonEmptyStore  avgt   10  3839.169 ±  68.907  ms/op

@@ -68,52 +69,22 @@ class LmdbSailStore implements SailStore {
 	private final ValueStore valueStore;
 
 	private final ExecutorService tripleStoreExecutor = Executors.newCachedThreadPool();
-	private final CircularBuffer<Operation> opQueue = new CircularBuffer<>(1024);
+	private final ArrayBlockingQueue<Operation> opQueue = new ArrayBlockingQueue<>(1024);
Contributor Author


@kenwenzel I found out that ArrayBlockingQueue supports all the features you need here and is more performant, with a lot less busy-waiting.
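
A minimal sketch of the suggested pattern, assuming one consumer thread drains the queue (the Runnable element type and method names are placeholders, not the LmdbSailStore API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the producer/consumer pattern with ArrayBlockingQueue:
// put() and take() block instead of busy-waiting, so neither side spins.
class OpQueueSketch {
    private final BlockingQueue<Runnable> opQueue = new ArrayBlockingQueue<>(1024);

    void submit(Runnable op) throws InterruptedException {
        opQueue.put(op); // blocks while the queue is full
    }

    void drainLoop() throws InterruptedException {
        while (true) {
            Runnable op = opQueue.take(); // blocks while the queue is empty
            op.run();
        }
    }
}
```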

Contributor

kenwenzel commented May 13, 2024


That would be wonderful. Do you have a benchmark for comparing this change?

I've switched to the custom buffer as it was much faster than blocking queues, at least in our write benchmarks.

@hmottestad
Contributor Author

hmottestad commented May 14, 2024

LMDB

Before

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      4.703 ±    0.475  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      6.434 ±    0.209  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     46.115 ±    2.339  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     50.426 ±    5.497  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  13292.776 ± 2328.175  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  13725.802 ± 2071.501  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  29317.312 ± 1147.382  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  33674.556 ± 1213.177  ops/s

After

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      3.968 ±    0.187  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      5.513 ±    0.213  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     38.998 ±    2.844  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     47.498 ±    4.477  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  12167.252 ± 1844.259  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  12603.226 ± 1229.140  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  22407.261 ± 1562.692  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  25590.084 ± 1507.929  ops/s

@kenwenzel
Contributor

LMDB

Before

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      4.703 ±    0.475  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      6.434 ±    0.209  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     46.115 ±    2.339  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     50.426 ±    5.497  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  13292.776 ± 2328.175  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  13725.802 ± 2071.501  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  29317.312 ± 1147.382  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  33674.556 ± 1213.177  ops/s

After

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      3.968 ±    0.187  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      5.513 ±    0.213  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     38.998 ±    2.844  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     47.498 ±    4.477  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  12167.252 ± 1844.259  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  12603.226 ± 1229.140  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  22407.261 ± 1562.692  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  25590.084 ± 1507.929  ops/s

Does that mean we should just stay with the CircularBuffer?

@hmottestad
Contributor Author

LMDB

Before

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      4.703 ±    0.475  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      6.434 ±    0.209  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     46.115 ±    2.339  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     50.426 ±    5.497  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  13292.776 ± 2328.175  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  13725.802 ± 2071.501  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  29317.312 ± 1147.382  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  33674.556 ± 1213.177  ops/s

After

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      3.968 ±    0.187  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      5.513 ±    0.213  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     38.998 ±    2.844  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     47.498 ±    4.477  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  12167.252 ± 1844.259  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  12603.226 ± 1229.140  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  22407.261 ± 1562.692  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  25590.084 ± 1507.929  ops/s

Does that mean we should just stay with the CircularBuffer?

Ahh. That's ops per second, not ms/op, which is what I use in my benchmarks.

@hmottestad
Contributor Author

[Screenshot 2024-05-15 at 06 40 44: IntelliJ inspection warning about a non-atomic update to an atomic field]

@kenwenzel IntelliJ is saying that there is a non-atomic update to an atomic field. In general I'm a bit sceptical about whether this is even thread-safe. If two threads both call add and proceed to `elements[tail] = element`, then one thread will overwrite the other. Is it supposed to be thread-safe?

@kenwenzel
Contributor

[Screenshot 2024-05-15 at 06 40 44] @kenwenzel IntelliJ is saying that there is a non-atomic update to an atomic field. In general I'm a bit sceptical about whether this is even thread-safe. If two threads both call add and proceed to `elements[tail] = element`, then one thread will overwrite the other. Is it supposed to be thread-safe?

In this specific case, tail is only changed in the add method, and the add method is only called by exactly one thread. Hence it shouldn't be a problem: tail can't be changed between the read operation and the update. If two or more threads were to call the add method, then this could lead to problems.
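
A stripped-down sketch of that single-writer reasoning, using hypothetical names rather than the actual CircularBuffer code: the tail field is atomic only so its value is visible to the consumer, and the read-modify-write is safe because exactly one thread ever writes it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative single-producer/single-consumer ring buffer. tail is
// written only by the producer and head only by the consumer, so the
// read-modify-write on tail needs no CAS; the atomic fields only provide
// visibility between the two threads. Not the actual rdf4j CircularBuffer.
class SpscRingSketch<E> {
    private final Object[] elements;
    private final AtomicInteger head = new AtomicInteger(); // consumer-owned
    private final AtomicInteger tail = new AtomicInteger(); // producer-owned

    SpscRingSketch(int capacity) {
        elements = new Object[capacity];
    }

    // Called by exactly one producer thread.
    boolean add(E element) {
        int t = tail.get();
        int next = (t + 1) % elements.length;
        if (next == head.get()) {
            return false; // full
        }
        elements[t] = element;
        tail.set(next); // plain set suffices: no other thread writes tail
        return true;
    }

    // Called by exactly one consumer thread.
    @SuppressWarnings("unchecked")
    E poll() {
        int h = head.get();
        if (h == tail.get()) {
            return null; // empty
        }
        E e = (E) elements[h];
        elements[h] = null;
        head.set((h + 1) % elements.length);
        return e;
    }
}
```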

hmottestad force-pushed the GH-2998-native-store-memory-overflow-performance branch from 72de6f1 to 6668015 on May 17, 2024 at 15:45
@kenwenzel
Contributor

@hmottestad I would propose to keep the CircularBuffer for the LmdbStore.

@hmottestad
Contributor Author

Yeah. I thought the same. I reverted it. I also reverted my attempt at throttling new transactions while a lot of write transactions are running.

I'll try running the benchmarks again and see if things are better. The biggest improvement is still included: supporting addAll through the entire stack, from the Changeset through to the store.
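
To sketch why that matters (illustrative interfaces only, not the actual Changeset or store API): a dedicated addAll lets the batch travel down the stack in one call, so locks and bookkeeping run once per batch instead of once per statement.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Illustrative only: why a dedicated addAll beats a per-statement loop.
interface StatementSink<T> {
    void add(T statement);

    // naive fallback: per-statement calls all the way down the stack
    default void addAll(Collection<? extends T> statements) {
        for (T st : statements) {
            add(st);
        }
    }
}

class BatchingSink<T> implements StatementSink<T> {
    private final List<T> backing = new ArrayList<>();

    @Override
    public synchronized void add(T statement) {
        backing.add(statement);
    }

    @Override
    public synchronized void addAll(Collection<? extends T> statements) {
        backing.addAll(statements); // single lock, single bulk copy
    }
}
```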

hmottestad force-pushed the GH-2998-native-store-memory-overflow-performance branch from 4b15ecd to e60d672 on May 18, 2024 at 20:50
@hmottestad
Contributor Author

hmottestad commented May 18, 2024

LMDB

Before

# JMH version: 1.37
# VM version: JDK 22, OpenJDK 64-Bit Server VM, 22+36
# VM invoker: /Library/Java/JavaVirtualMachines/zulu-22.jdk/Contents/Home/bin/java
# VM options: -Xms500M -Xmx500M -XX:+UseParallelGC
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 1 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkReal.loadLotsOfData

# Run progress: 0.00% complete, ETA 00:00:50
# Fork: 1 of 1
# Warmup Iteration   1: 14:56:39.048 [org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG syncing at 470000 triples. max block size: 58019896
14:56:39.050 [org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model16112335902653993338
14:56:50.867 [org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] DEBUG overflow synced to disk
14:56:57.085 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model16112335902653993338
22690.523 ms/op
# Warmup Iteration   2: 14:57:06.986 [org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkReal.loadLotsOfData-jmh-worker-1] WARN  Rolling back transaction due to connection close
<failure>

java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parseStatement(TurtleParser.java:179)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parse(TurtleParser.java:165)
	at org.eclipse.rdf4j.rio.turtle.TurtleParser.parse(TurtleParser.java:128)
	at 
Benchmark                                               Mode  Cnt     Score     Error  Units
OverflowBenchmarkSynthetic.loadLotsOfDataEmptyStore     avgt    5  2076.443 ± 427.854  ms/op
OverflowBenchmarkSynthetic.loadLotsOfDataNonEmptyStore  avgt    5  2054.220 ± 686.466  ms/op
# JMH version: 1.37
# VM version: JDK 22, OpenJDK 64-Bit Server VM, 22+36
# VM invoker: /Library/Java/JavaVirtualMachines/zulu-22.jdk/Contents/Home/bin/java
# VM options: -Xms1G -Xmx1G -XX:+UseParallelGC
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: <none>
# Measurement: 10 iterations, 1 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions

# Run progress: 0.00% complete, ETA 00:00:00
# Fork: 1 of 1
Iteration   1: 
<failure>

java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
	at 


# Run complete. Total time: 00:00:25

Process finished with exit code 0

After

Benchmark                             Mode  Cnt      Score      Error  Units
OverflowBenchmarkReal.loadLotsOfData  avgt    5  14517.546 ± 1033.874  ms/op
Benchmark                                               Mode  Cnt     Score     Error  Units
OverflowBenchmarkSynthetic.loadLotsOfDataEmptyStore     avgt    5  1903.541 ± 137.691  ms/op
OverflowBenchmarkSynthetic.loadLotsOfDataNonEmptyStore  avgt    5  2128.489 ± 429.895  ms/op
Benchmark                                               Mode  Cnt      Score      Error  Units
OverflowBenchmarkConcurrent.manyConcurrentTransactions  avgt    5  21421.548 ± 5353.966  ms/op
Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      5.187 ±    0.042  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      7.198 ±    0.281  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     48.654 ±    1.525  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     59.381 ±    2.759  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  13737.524 ± 1334.220  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  15103.993 ± 1914.261  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  28982.581 ±  739.827  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  32650.695 ± 1386.783  ops/s

@kenwenzel That's much better, isn't it? Can you review my proposed changes?

@kenwenzel
Contributor

Benchmark                                                      Mode  Cnt      Score      Error  Units
TransactionsPerSecondBenchmarkFoaf.transaction100kx           thrpt    5      5.187 ±    0.042  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction100kxLevelNone  thrpt    5      7.198 ±    0.281  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kx            thrpt    5     48.654 ±    1.525  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10kxLevelNone   thrpt    5     59.381 ±    2.759  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10x             thrpt    5  13737.524 ± 1334.220  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction10xLevelNone    thrpt    5  15103.993 ± 1914.261  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1x              thrpt    5  28982.581 ±  739.827  ops/s
TransactionsPerSecondBenchmarkFoaf.transaction1xLevelNone     thrpt    5  32650.695 ± 1386.783  ops/s

@kenwenzel That's much better, isn't it? Can you review my proposed changes?

Looks impressive, especially for larger transactions. Cool :-)
I can review it on Tuesday or Wednesday.

hmottestad marked this pull request as ready for review on May 19, 2024 at 17:56
@hmottestad
Contributor Author

# JMH version: 1.37
# VM version: JDK 22, OpenJDK 64-Bit Server VM, 22+36
# VM invoker: /Library/Java/JavaVirtualMachines/zulu-22.jdk/Contents/Home/bin/java
# VM options: -Xms1G -Xmx1G -XX:+UseParallelGC
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 1 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions

# Run progress: 0.00% complete, ETA 00:00:50
# Fork: 1 of 1
# Warmup Iteration   1: Waiting
Waiting
Waiting
Waiting
Waiting
Removing
Removing
Removing
Removing
Removing
Removed
Removed
Removed
Removed
Removed
Waiting
Waiting
Removing
Removing
Removed
Waiting
Removing
Removed
Removed
Waiting
Removing
Removed
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Added
Added
Added
Done: 0
Waiting
Done: 1
Done: 2
Done: 3
Done: 4
Done: 5
Done: 6
Done: 7
Added
Waiting
Added
Done: 8
Done: 9
Done: 10
Done: 11
Done: 12
Done: 13
Done: 14
Done: 15
Done: 16
Done: 17
Done: 18
Added
Waiting
Removing
Added
Added
Waiting
20:00:11.415 [pool-2-thread-4] DEBUG syncing at 21024 triples. max block size: 91238488
20:00:11.417 [pool-2-thread-4] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1801634721679733166
20:00:11.455 [pool-2-thread-4] DEBUG overflow synced to disk
Added
Removed
Waiting
Done: 19
Done: 20
Removing
Removing
Added
Removing
Removing
Removing
Removing
Removing
Removed
Waiting
Removed
Removed
Waiting
Removed
Done: 21
Done: 22
Done: 23
Done: 24
Done: 25
Done: 26
Done: 27
Done: 28
Removed
Removed
Waiting
Done: 29
Added
Waiting
Removed
Added
Done: 30
Done: 31
Done: 32
Done: 33
Waiting
Added
Done: 34
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Removing
Removing
Removing
Removing
Added
Removing
Removing
Removed
Waiting
Removed
Removed
Added
Done: 35
Done: 36
Done: 37
Done: 38
Done: 39
Done: 40
Waiting
Added
Waiting
Removed
Done: 41
Done: 42
Removing
Removing
Removed
Done: 43
Waiting
Done: 44
Removed
Done: 45
Done: 46
20:00:15.069 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1801634721679733166
Added
Waiting
Added
Added
Waiting
Added
Removing
Removing
Removing
Removed
Done: 47
Done: 48
Waiting
Removed
Done: 49
Done: 50
Removing
Added
Waiting
Removing
Added
Added
Waiting
Removed
Removed
Removed
Done: 51
Waiting
Done: 52
Done: 53
Done: 54
Done: 55
Done: 56
Added
Waiting
Removing
Added
Removing
Removing
Removed
Waiting
Done: 57
Done: 58
Removing
Removing
Added
Added
Waiting
Removed
Done: 59
Done: 60
Removing
Added
Removed
Done: 61
Done: 62
Removed
Removing
Removed
Removing
Removed
Removed
Done: 63
Done: 64
Done: 65
Done: 66
Done: 67
Done: 68
Done: 69
Done: 70
Removed
Done: 71
Done: 72
Removed
Done: 73
Done: 74
20:00:19.814 [pool-2-thread-6] DEBUG syncing at 24336 triples. max block size: 324724496
20:00:19.815 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7616336677025997270
20:00:19.861 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:20.329 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 290011704
20:00:20.329 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7065644468503654467
20:00:20.347 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:20.568 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 3578600
20:00:20.568 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9443838333431330051
20:00:20.571 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:20.900 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:20.900 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model15136087500597835270
20:00:20.903 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:21.127 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 334204584
20:00:21.127 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14032046180695844920
20:00:21.148 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:21.458 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:21.459 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11080496175998753150
20:00:21.462 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:21.670 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 163940960
20:00:21.671 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model16194605813080865182
20:00:21.684 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:21.923 [pool-2-thread-6] DEBUG syncing at 33792 triples. max block size: 242272480
20:00:21.924 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2853275397390838085
20:00:21.986 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:22.185 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:22.186 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14805444514446560357
20:00:22.188 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:22.383 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 4533128
20:00:22.383 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2705319982152254169
20:00:22.386 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:22.627 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 126544768
20:00:22.627 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model13840871900629982496
20:00:22.643 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:22.819 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 4255640
20:00:22.820 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model15769189949543817883
20:00:22.822 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.057 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 193799608
20:00:23.057 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7229271111946312800
20:00:23.073 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.201 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 40305096
20:00:23.201 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9326775219600630838
20:00:23.216 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.412 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:23.412 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model821615424147579092
20:00:23.415 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.547 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:23.547 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12267616148384262876
20:00:23.550 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.682 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 130534040
20:00:23.683 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14721572460525396380
20:00:23.700 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.827 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:23.828 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2757970750550833814
20:00:23.831 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:23.971 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 127267456
20:00:23.972 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12528140235961627788
20:00:23.990 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:24.125 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:24.125 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12187196228047579919
20:00:24.128 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:24.336 [pool-2-thread-6] DEBUG syncing at 12048 triples. max block size: 432371120
20:00:24.336 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model6501887143558837934
20:00:24.361 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:24.712 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 333732472
20:00:24.712 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model17769483917859515939
20:00:24.727 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:24.881 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 3853408
20:00:24.882 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10071352517425427010
20:00:24.885 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:25.234 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:25.234 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1808107795311486079
20:00:25.237 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:25.377 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 279012912
20:00:25.377 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8464383945077281772
20:00:25.395 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:25.701 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 433111520
20:00:25.702 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11910658397500025411
20:00:25.717 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:25.877 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:25.877 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model13908274443666819515
20:00:25.881 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:26.031 [pool-2-thread-6] DEBUG syncing at 10000 triples. max block size: 350568656
20:00:26.031 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10914584864566791156
20:00:26.048 [pool-2-thread-6] DEBUG overflow synced to disk
20:00:26.208 [pool-2-thread-6] DEBUG syncing at 1024 triples. max block size: 0
20:00:26.208 [pool-2-thread-6] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10268331546843225546
20:00:26.212 [pool-2-thread-6] DEBUG overflow synced to disk
Removed
Done: 75
19536.748 ms/op
# Warmup Iteration   2: 20:00:26.554 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10268331546843225546
20:00:26.556 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9443838333431330051
20:00:26.557 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7065644468503654467
20:00:26.558 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7616336677025997270
20:00:26.560 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10914584864566791156
20:00:26.561 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model13908274443666819515
20:00:26.562 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11080496175998753150
20:00:26.563 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14032046180695844920
20:00:26.565 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model15136087500597835270
20:00:26.566 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14805444514446560357
20:00:26.567 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2853275397390838085
20:00:26.568 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model16194605813080865182
20:00:26.569 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model821615424147579092
20:00:26.571 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9326775219600630838
20:00:26.572 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7229271111946312800
20:00:26.573 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model15769189949543817883
20:00:26.575 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model13840871900629982496
20:00:26.576 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2705319982152254169
20:00:26.577 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2757970750550833814
20:00:26.578 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14721572460525396380
20:00:26.580 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12267616148384262876
20:00:26.582 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model6501887143558837934
20:00:26.586 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12187196228047579919
20:00:26.587 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12528140235961627788
20:00:26.588 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model1808107795311486079
20:00:26.590 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model10071352517425427010
20:00:26.594 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model17769483917859515939
20:00:26.595 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11910658397500025411
20:00:26.597 [Finalizer] DEBUG finalizing /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8464383945077281772
Waiting
Waiting
Waiting
Waiting
Waiting
Removing
Removing
Removing
Removing
Removing
Removed
Removed
Waiting
Removing
Removed
Removed
Removed
Removed
Waiting
Waiting
Removing
Removing
Removed
Removed
Waiting
Removing
Removed
Added
Waiting
Added
Added
Waiting
Added
Added
Added
Done: 0
Done: 1
Done: 2
Waiting
Done: 3
Added
Waiting
Added
Done: 4
Done: 5
Done: 6
Done: 7
Done: 8
Done: 9
Added
Waiting
Added
Done: 10
Done: 11
Done: 12
Done: 13
Done: 14
Done: 15
Done: 16
Done: 17
Done: 18
Added
Waiting
Added
Added
Waiting
Added
Added
Waiting
Removing
Removed
Done: 19
Done: 20
Added
Waiting
Removing
Removing
Removing
Added
Removing
Added
Waiting
Removing
Removing
Removed
Done: 21
Done: 22
Removed
Waiting
Removed
Done: 23
Done: 24
Done: 25
Done: 26
Removing
Added
Waiting
Removed
Done: 27
Done: 28
Added
Waiting
Removed
Removed
Done: 29
Waiting
Done: 30
Done: 31
Done: 32
Added
Removing
Removing
Added
Waiting
Added
Added
Waiting
Removing
Removed
Done: 33
Done: 34
Removing
Added
Waiting
Removing
Removed
Done: 35
Done: 36
Added
Waiting
Removed
Done: 37
Done: 38
Removing
Removed
Done: 39
Done: 40
Waiting
Added
Removing
Removing
Added
Waiting
Removing
Added
Removed
Done: 41
Done: 42
Waiting
Removing
Removed
Done: 43
Done: 44
Added
Waiting
Removed
Done: 45
Done: 46
Removing
Removing
Added
Waiting
Removed
Done: 47
Done: 48
Removed
Waiting
Added
Removed
Waiting
Removing
Added
Removing
Removed
Done: 49
Waiting
Done: 50
Done: 51
Done: 52
Done: 53
Done: 54
Added
Added
Waiting
Removing
Removed
Done: 55
Done: 56
Added
Waiting
Removing
Removing
Removed
Done: 57
Done: 58
Added
Waiting
Added
Removed
Done: 59
Done: 60
Removed
Done: 61
Done: 62
Removing
Removing
Removing
Removed
Done: 63
Done: 64
Removed
Done: 65
Done: 66
Removing
Removed
Done: 67
Done: 68
Removed
Done: 69
Done: 70
Removed
Done: 71
Done: 72
Removed
Done: 73
Done: 74
20:00:41.610 [pool-34-thread-2] DEBUG syncing at 36624 triples. max block size: 251203232
20:00:41.610 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2610626796508350868
20:00:41.987 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:42.270 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 337288160
20:00:42.270 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8115993132568624403
20:00:42.288 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:42.462 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 171192416
20:00:42.463 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12111513509252451619
20:00:42.479 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:42.802 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 2619648
20:00:42.802 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model13352087438677708615
20:00:42.805 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:42.927 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 233444976
20:00:42.927 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12434718361527705922
20:00:42.943 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:43.187 [pool-34-thread-2] DEBUG syncing at 40720 triples. max block size: 69267144
20:00:43.187 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12509936507799076231
20:00:43.258 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:43.513 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 0
20:00:43.514 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model16779303145694506790
20:00:43.516 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:43.684 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 133946728
20:00:43.685 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2060334656005360500
20:00:43.699 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.086 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 75158512
20:00:44.086 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12732083252230079257
20:00:44.102 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.263 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 29714176
20:00:44.264 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12780417967329955728
20:00:44.282 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.415 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1633392
20:00:44.415 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model365473062251351609
20:00:44.418 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.553 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 80857840
20:00:44.553 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8873427288931021385
20:00:44.570 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.695 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 87315768
20:00:44.696 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model5707897861487054140
20:00:44.711 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.839 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1331888
20:00:44.839 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model14301232810193798641
20:00:44.842 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:44.984 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1331880
20:00:44.984 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model17147255067391769665
20:00:44.987 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.117 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 50770528
20:00:45.118 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model7892486797220688653
20:00:45.141 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.275 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1405456
20:00:45.275 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model11531153716567985361
20:00:45.278 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.414 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 97348848
20:00:45.414 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model6126725303065387831
20:00:45.430 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.569 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1405544
20:00:45.570 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model8041106783356222842
20:00:45.573 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.698 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 209142432
20:00:45.698 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model9775552503178479091
20:00:45.715 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.845 [pool-34-thread-2] DEBUG syncing at 10000 triples. max block size: 198519760
20:00:45.846 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model12900328240995780733
20:00:45.864 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:45.996 [pool-34-thread-2] DEBUG syncing at 1024 triples. max block size: 1405464
20:00:45.996 [pool-34-thread-2] DEBUG memory overflow using temp directory /var/folders/0m/x3jzy4t15djd88znchzzx1_h0000gn/T/model2638079235708837379
20:00:45.999 [pool-34-thread-2] DEBUG overflow synced to disk
20:00:46.651 [pool-34-thread-2] ERROR Encountered an unexpected problem while trying to commit
java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:448)
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:432)
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1146)
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:897)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.resizeMap(ValueStore.java:561)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.gcIds(ValueStore.java:903)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.handleRemovedIdsInValueStore(LmdbSailStore.java:458)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:503)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:300)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.autoFlush(SailSourceBranch.java:392)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$1.close(SailSourceBranch.java:197)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:303)
	at org.eclipse.rdf4j.sail.base.UnionSailSource.flush(UnionSailSource.java:74)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.commitInternal(SailSourceConnection.java:506)
	at org.eclipse.rdf4j.sail.lmdb.LmdbStoreConnection.commitInternal(LmdbStoreConnection.java:91)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.commit(AbstractSailConnection.java:524)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:226)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:169)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)
20:00:46.651 [pool-34-thread-2] WARN  Rolling back transaction due to connection close
<failure>

java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:185)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_avgt_jmhStub(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:236)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_AverageTime(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:176)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:527)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:504)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:183)
	... 12 more
Caused by: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.unmatchedUnlockException(ReentrantReadWriteLock.java:448)
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:432)
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1146)
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:897)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.resizeMap(ValueStore.java:561)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.gcIds(ValueStore.java:903)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.handleRemovedIdsInValueStore(LmdbSailStore.java:458)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:503)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:300)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.autoFlush(SailSourceBranch.java:392)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$1.close(SailSourceBranch.java:197)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:303)
	at org.eclipse.rdf4j.sail.base.UnionSailSource.flush(UnionSailSource.java:74)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.commitInternal(SailSourceConnection.java:506)
	at org.eclipse.rdf4j.sail.lmdb.LmdbStoreConnection.commitInternal(LmdbStoreConnection.java:91)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.commit(AbstractSailConnection.java:524)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:226)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:169)
	... 5 more




# Run complete. Total time: 00:00:39

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

NOTE: Current JVM experimentally supports Compiler Blackholes, and they are in use. Please exercise
extra caution when trusting the results, look into the generated code to check the benchmark still
works, and factor in a small probability of new VM bugs. Additionally, while comparisons between
different JVMs are already problematic, the performance difference caused by different Blackhole
modes can be very significant. Please make sure you use the consistent Blackhole mode for comparisons.

Benchmark  Mode  Cnt  Score   Error  Units

Process finished with exit code 0

@kenwenzel I managed to get this error when running one of my benchmarks.

@kenwenzel
Copy link
Contributor

I'll check this. Maybe gcIds is called from the wrong thread.

@kenwenzel
Copy link
Contributor

OK, I think I found it. The method resizeMap always expects a surrounding readTransaction, which is not the case for gcIds.
Solution:
resizeMap should check whether the txn argument is equal to this.writeTxn. If it is, it should leave the readLock alone.
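
For illustration, a minimal sketch of that guard (assumptions: txnLock and writeTxn are the ValueStore fields used in the snippet further down, and resizeMap currently releases the caller's read lock before taking the write lock):

    boolean insideWriteTxn = (txn == writeTxn);
    if (!insideWriteTxn) {
        // only a surrounding read transaction holds the read lock
        txnLock.readLock().unlock();
    }
    txnLock.writeLock().lock();
    try {
        // ... grow the LMDB map ...
    } finally {
        txnLock.writeLock().unlock();
        if (!insideWriteTxn) {
            // restore the caller's read lock
            txnLock.readLock().lock();
        }
    }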

@hmottestad
Copy link
Contributor Author

I've tried just about every permutation of the following:

    if (txn != writeTxn) {
        System.err.println("Unlocking read lock");
        txnLock.readLock().unlock();
    } else {
        System.err.println("Not unlocking read lock");
    }
    txnLock.writeLock().lock();

I've tried moving the write lock into the first "if" clause. I've tried == instead of !=. And all combinations of those.

Either I end up with a deadlock or I end up with an exception.
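
(That matches how java.util.concurrent.locks.ReentrantReadWriteLock behaves: it does not support upgrading a read lock to a write lock, so a thread that still holds the read lock blocks forever in writeLock().lock(), while a thread that unlocks a read lock it does not hold throws the IllegalMonitorStateException seen above. A minimal demonstration, with the two failure modes as mutually exclusive cases:)

    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Case 1: holding the read lock, try to take the write lock
    lock.readLock().lock();
    lock.writeLock().lock(); // deadlock: no read->write upgrade path

    // Case 2 (without Case 1): unlock a read lock this thread never acquired
    lock.readLock().unlock(); // throws IllegalMonitorStateException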

@kenwenzel
Copy link
Contributor

I suppose you also did the same for re-acquiring the read lock a few lines below?

@kenwenzel
Copy link
Contributor

I've pushed a commit that hopefully fixes the problem.

@kenwenzel kenwenzel force-pushed the GH-2998-native-store-memory-overflow-performance branch from 689c0bb to 7694fc9 Compare May 20, 2024 12:46
@kenwenzel
Copy link
Contributor

I've managed to trigger the following error:

19:40:59.062 [pool-2-thread-7] DEBUG memory overflow using temp directory /tmp/model14404253523601709106
19:40:59.751 [pool-2-thread-7] DEBUG overflow synced to disk
19:41:05.285 [pool-2-thread-7] DEBUG syncing at 10000 triples. max block size: 465689360
19:41:05.286 [pool-2-thread-7] DEBUG memory overflow using temp directory /tmp/model15326862463552813733
19:41:15.940 [pool-2-thread-7] WARN  Rolling back transaction due to connection close
19:41:17.848 [org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions-jmh-worker-1] ERROR Encountered an unexpected problem while trying to add a statement
java.lang.IllegalStateException: MemoryOverflowModel is in an inconsistent state
	at org.eclipse.rdf4j.sail.nativerdf.MemoryOverflowModel.getDelegate(MemoryOverflowModel.java:244)
	at org.eclipse.rdf4j.sail.nativerdf.MemoryOverflowModel.iterator(MemoryOverflowModel.java:184)
	at org.eclipse.rdf4j.sail.nativerdf.NativeSailStore$NativeSailSink.approveAll(NativeSailStore.java:457)
	at org.eclipse.rdf4j.sail.base.Changeset.sinkApproved(Changeset.java:857)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:519)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:484)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:299)
	at org.eclipse.rdf4j.sail.base.SnapshotSailStore.close(SnapshotSailStore.java:58)
	at org.eclipse.rdf4j.sail.nativerdf.NativeStore.shutDownInternal(NativeStore.java:384)
	at org.eclipse.rdf4j.sail.helpers.AbstractSail.shutDown(AbstractSail.java:281)
	at org.eclipse.rdf4j.sail.nativerdf.NativeStore.shutDown(NativeStore.java:410)
	at org.eclipse.rdf4j.sail.helpers.SailWrapper.shutDown(SailWrapper.java:115)
	at org.eclipse.rdf4j.sail.helpers.SailWrapper.shutDown(SailWrapper.java:115)
	at org.eclipse.rdf4j.sail.helpers.SailWrapper.shutDown(SailWrapper.java:115)
	at org.eclipse.rdf4j.sail.helpers.SailWrapper.shutDown(SailWrapper.java:115)
	at org.eclipse.rdf4j.sail.helpers.SailWrapper.shutDown(SailWrapper.java:115)
	at org.eclipse.rdf4j.repository.sail.SailRepository.shutDownInternal(SailRepository.java:172)
	at org.eclipse.rdf4j.repository.base.AbstractRepository.shutDown(AbstractRepository.java:48)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:195)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_avgt_jmhStub(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:236)
	at org.eclipse.rdf4j.sail.nativerdf.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_AverageTime(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:176)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:527)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:504)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
<failure>

I've also seen that the first overflow to disk for the NativeStore starts only after the last "Done:" output:

Done: 74
19:39:07.827 [pool-2-thread-7] DEBUG syncing at 39696 triples. max block size: 2260072
19:39:07.833 [pool-2-thread-7] DEBUG memory overflow using temp directory /tmp/model7390359989828614421
19:39:11.186 [pool-2-thread-7] DEBUG overflow synced to disk

@hmottestad
Copy link
Contributor Author

Bummer. I had seen that before but thought I had fixed it. Probably need to figure out how to reproduce it a bit more reliably.

As for the point where things overflow: what I've noticed is that when there are a lot of transactions that add and remove data, the transaction isolation accumulates layers of Changeset objects. Once the last transaction finishes, we get a cascade of Changesets that compress by merging the top Changeset into the one below it, in a loop, until the final Changeset flushes to the native store.

During this compression the models within the Changesets grow larger with every merge, so it quickly leads to a memory overflow.
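
(Schematically, and only as a toy sketch of the cascade just described, not the real SailSourceBranch code; the types and structure here are hypothetical:)

    // each Set stands in for the model of one committed Changeset layer
    Deque<Set<Statement>> layers = new ArrayDeque<>();
    while (layers.size() > 1) {
        Set<Statement> top = layers.pop();
        layers.peek().addAll(top); // the lower model grows with every merge
    }
    // the single remaining model is what finally flushes to the native store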

@kenwenzel
Copy link
Contributor

I'm also experimenting with using only the GC load to find the right point for overflowing to disk:
https://github.com/kenwenzel/rdf4j/blob/e15bf0df4918d90972094f33ce5fadc06ddcbb86/core/sail/lmdb/src/main/java/org/eclipse/rdf4j/sail/lmdb/MemoryOverflowModel.java#L88

This mostly works, but sometimes the trigger comes too late and the GC takes a very long time.

@hmottestad
Copy link
Contributor Author

That's really cool. I didn't know about the notification system for GCs. Maybe you can try accumulating how much time is spent in garbage collection over the past 10 seconds and treat more than five seconds in total as high GC load. G1 prefers frequent, short GCs over long ones, so that might be why it sometimes triggers too late.

@kenwenzel
Copy link
Contributor

kenwenzel commented May 22, 2024

@hmottestad Maybe you can test the following branch by running org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent:
https://github.com/kenwenzel/rdf4j/tree/GH-4554-fix-memory-overflow

It includes your suggestion above for computing the GC load, as well as the proactive closing of models.
At least on my computer it shows stable behavior without long GC stalls.

All optimisations are useless if we can't close the temporary models early.

The running time on my computer is much longer than yours:

Benchmark                                               Mode  Cnt       Score   Error  Units
OverflowBenchmarkConcurrent.manyConcurrentTransactions  avgt       114014.992          ms/op

@kenwenzel
Copy link
Contributor

kenwenzel commented May 22, 2024

I've also ported the logic on top of your work within the following branch:
https://github.com/kenwenzel/rdf4j/tree/GH-2998-native-store-memory-overflow-performance-lmdb

The following is an excerpt from:
https://github.com/kenwenzel/rdf4j/blob/cd5bc79107894c0b762d3445de5a9781538cd939/core/sail/lmdb/src/main/java/org/eclipse/rdf4j/sail/lmdb/MemoryOverflowModel.java#L84

List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
RuntimeMXBean runtimeMXBean = ManagementFactory.getRuntimeMXBean();
for (GarbageCollectorMXBean gcBean : gcBeans) {
	NotificationEmitter emitter = (NotificationEmitter) gcBean;
	emitter.addNotificationListener((notification, o) -> {
		long uptimeInMillis = runtimeMXBean.getUptime();
		while (! gcInfos.isEmpty()) {
			if (uptimeInMillis - gcInfos.peek().getEndTime() > 5000) {
				gcSum -= gcInfos.poll().getDuration();
			} else {
				break;
			}
		}

		// extract garbage collection information from notification.
		GarbageCollectionNotificationInfo gcNotificationInfo = GarbageCollectionNotificationInfo.from((CompositeData) notification.getUserData());
		GcInfo gcInfo = gcNotificationInfo.getGcInfo();
		gcInfos.add(gcInfo);
		gcSum += gcInfo.getDuration();
		if (System.currentTimeMillis() - lastGcUpdate > 10000) {
			overflow = false;
		}
		if (! overflow) {
			// maximum heap size the JVM can allocate
			long maxMemory = RUNTIME.maxMemory();

			// total currently allocated JVM memory
			long totalMemory = RUNTIME.totalMemory();
			// amount of memory free in the currently allocated JVM memory
			long freeMemory = RUNTIME.freeMemory();

			// estimated memory used
			long used = totalMemory - freeMemory;

			// amount of memory the JVM can still allocate from the OS (upper boundary is the max heap)
			long freeToAllocateMemory = maxMemory - used;

			// try to prevent OOM
			overflow = freeToAllocateMemory < MIN_AVAILABLE_MEM_BEFORE_OVERFLOWING;
			lastGcUpdate = System.currentTimeMillis();
		}
		if (! overflow && gcSum > 2500) {
			overflow = true;
			lastGcUpdate = System.currentTimeMillis();
		}
	}, null, null);
}

It uses a combination of available memory and GC load and it also seems to work without explicitly closing the temporary models.

@hmottestad
Copy link
Contributor Author

hmottestad commented May 23, 2024

I couldn't get your branch to work, but I made some modifications:


    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import javax.management.NotificationEmitter;
    import javax.management.openmbean.CompositeData;
    import com.sun.management.GarbageCollectionNotificationInfo;
    import com.sun.management.GcInfo;

    // sliding window: 100 buckets of 10 ms each, covering the past second
    private static final int[] GC_LOAD = new int[100];
    private static int prevBucket;

    private static volatile boolean highGcLoad = false;
    private static final List<GcInfo> gcInfos = new CopyOnWriteArrayList<>();

    static {
        List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gcBean : gcBeans) {
            NotificationEmitter emitter = (NotificationEmitter) gcBean;
            emitter.addNotificationListener((notification, o) -> {
                int currentBucket = (int) ((System.currentTimeMillis() / 10) % 100);

                // zero out the buckets we skipped since the previous notification
                while (currentBucket != prevBucket) {
                    prevBucket = (prevBucket + 1) % 100;
                    GC_LOAD[prevBucket] = 0;
                }

                // account the queued GC durations to the current bucket
                while (!gcInfos.isEmpty()) {
                    GcInfo gcInfo = gcInfos.remove(0);
                    GC_LOAD[currentBucket] += (int) gcInfo.getDuration();
                }

                // extract garbage collection information from the notification
                GarbageCollectionNotificationInfo gcNotificationInfo = GarbageCollectionNotificationInfo
                        .from((CompositeData) notification.getUserData());
                gcInfos.add(gcNotificationInfo.getGcInfo());

                // total GC time over the past second
                long gcSum = 0;
                for (int i : GC_LOAD) {
                    gcSum += i;
                }
                System.out.println("gcSum: " + gcSum);
                if (gcSum > 250) {
                    System.out.println("High GC load");
                    highGcLoad = true;
                } else {
                    System.out.println("Low GC load");
                    highGcLoad = false;
                }
            }, null, null);
        }
    }

Essentially I'm tracking the total time spent in GC over the past second in a sliding window made of 100 buckets of 10 ms each.

That brings it down to 18785.157 ms/op

But it's really inconsistent. One benchmark run will be fine, and then the next suddenly starts running out of memory. I also still get the inconsistent-state exception, even though everything is wrapped in synchronized. That one is really bizarre.

@kenwenzel
Copy link
Contributor

Maybe you can also try the other branch from my previous message. It combines the GC load with the memory usage to find a good moment for overflowing.

@hmottestad
Copy link
Contributor Author

Btw, the running time will heavily depend on the speed of your SSD and the operating system you are running. I'm on a MacBook, and macOS is notorious for ignoring fsync requests. If you are using Linux, it's much more likely that it will actually wait for fsync to complete.

@kenwenzel
Copy link
Contributor

It's Ubuntu in a VirtualBox under Windows 🙈

@hmottestad
Copy link
Contributor Author

I assume VirtualBox is limiting your IO speed.

@hmottestad
Copy link
Contributor Author

Btw, I'm getting this exception every once in a while:

09:24:29.101 [pool-2-thread-7] ERROR Encountered an unexpected problem while trying to commit
java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.sail.lmdb.LmdbUtil.E(LmdbUtil.java:56)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.endTransaction(ValueStore.java:1132)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.resizeMap(ValueStore.java:570)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.lambda$gcIds$12(ValueStore.java:905)
	at org.eclipse.rdf4j.sail.lmdb.LmdbUtil.readTransaction(LmdbUtil.java:78)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.readTransaction(ValueStore.java:794)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.gcIds(ValueStore.java:903)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.handleRemovedIdsInValueStore(LmdbSailStore.java:462)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:507)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:300)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.autoFlush(SailSourceBranch.java:392)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$1.close(SailSourceBranch.java:197)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:303)
	at org.eclipse.rdf4j.sail.base.UnionSailSource.flush(UnionSailSource.java:74)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.commitInternal(SailSourceConnection.java:506)
	at org.eclipse.rdf4j.sail.lmdb.LmdbStoreConnection.commitInternal(LmdbStoreConnection.java:91)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.commit(AbstractSailConnection.java:524)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:226)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:170)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)
09:24:29.101 [pool-2-thread-7] WARN  Rolling back transaction due to connection close
<failure>

java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.eclipse.rdf4j.repository.RepositoryException: org.eclipse.rdf4j.sail.SailException: java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:186)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_avgt_jmhStub(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:236)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.jmh_generated.OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.manyConcurrentTransactions_AverageTime(OverflowBenchmarkConcurrent_manyConcurrentTransactions_jmhTest.java:176)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:527)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:504)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)
Caused by: java.util.concurrent.ExecutionException: org.eclipse.rdf4j.repository.RepositoryException: org.eclipse.rdf4j.sail.SailException: java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.manyConcurrentTransactions(OverflowBenchmarkConcurrent.java:184)
	... 12 more
Caused by: org.eclipse.rdf4j.repository.RepositoryException: org.eclipse.rdf4j.sail.SailException: java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:228)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:170)
	... 5 more
Caused by: org.eclipse.rdf4j.sail.SailException: java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:517)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:300)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.autoFlush(SailSourceBranch.java:392)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$1.close(SailSourceBranch.java:197)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:303)
	at org.eclipse.rdf4j.sail.base.UnionSailSource.flush(UnionSailSource.java:74)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.commitInternal(SailSourceConnection.java:506)
	at org.eclipse.rdf4j.sail.lmdb.LmdbStoreConnection.commitInternal(LmdbStoreConnection.java:91)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.commit(AbstractSailConnection.java:524)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.commit(SailConnectionWrapper.java:150)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:226)
	... 6 more
Caused by: java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.sail.lmdb.LmdbUtil.E(LmdbUtil.java:56)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.endTransaction(ValueStore.java:1132)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.resizeMap(ValueStore.java:570)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.lambda$gcIds$12(ValueStore.java:905)
	at org.eclipse.rdf4j.sail.lmdb.LmdbUtil.readTransaction(LmdbUtil.java:78)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.readTransaction(ValueStore.java:794)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.gcIds(ValueStore.java:903)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.handleRemovedIdsInValueStore(LmdbSailStore.java:462)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:507)
	... 20 more

@hmottestad
Copy link
Contributor Author

I'm also getting a GC overhead exception even though there is plenty of free memory:

gcSum: 5076
Max memory: 910.5
Total memory: 910.5
Free memory: 308.9868698120117
Used memory: 601.5131301879883
Free to allocate memory: 308.9868698120117
MB free: 308.9868698120117
High GC load
<failure>

java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded

@kenwenzel
Copy link
Contributor

MDB_BAD_TXN: Transaction must abort, has a child, or is invalid

I think this is the same error as the weird problem with the NativeStore: IO operations fail for some reason, but the errors are somehow swallowed in the NativeStore.

@hmottestad
Copy link
Contributor Author

It seems to happen inside the new ID GC code where you fixed the lock issue by adding a read transaction.

@kenwenzel
Copy link
Contributor

kenwenzel commented May 23, 2024

Yes, that's correct. The question is: does it always happen when resizeMap is required within gcIds, or only from time to time?
A quick fix could be to disable GC via the enableGc flag, as in my branch above.

@kenwenzel
Copy link
Contributor

@hmottestad I can't reproduce it with my branch
https://github.com/kenwenzel/rdf4j/tree/GH-2998-native-store-memory-overflow-performance-lmdb
even if lmdbSailStore.enableGc = true in LmdbStore.

Maybe there is some LMDB modification operation that results in an error like MDB_MAP_FULL and the error you are seeing is just a consequence of the previous one.

@kenwenzel kenwenzel closed this May 23, 2024
@kenwenzel kenwenzel reopened this May 23, 2024
@hmottestad
Copy link
Contributor Author

java.io.IOException: MDB_BAD_TXN: Transaction must abort, has a child, or is invalid
	at org.eclipse.rdf4j.sail.lmdb.LmdbUtil.E(LmdbUtil.java:56)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.endTransaction(ValueStore.java:1132)
	at org.eclipse.rdf4j.sail.lmdb.ValueStore.commit(ValueStore.java:1160)
	at org.eclipse.rdf4j.sail.lmdb.LmdbSailStore$LmdbSailSink.flush(LmdbSailStore.java:504)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.flush(SailSourceBranch.java:300)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch.autoFlush(SailSourceBranch.java:392)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$2.close(SailSourceBranch.java:253)
	at org.eclipse.rdf4j.sail.base.DelegatingSailDataset.close(DelegatingSailDataset.java:51)
	at org.eclipse.rdf4j.sail.base.SailSourceBranch$2.close(SailSourceBranch.java:248)
	at org.eclipse.rdf4j.sail.base.SailSourceConnection.endUpdateInternal(SailSourceConnection.java:692)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.endUpdate(AbstractSailConnection.java:668)
	at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.flush(AbstractSailConnection.java:479)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.sail.helpers.SailConnectionWrapper.flush(SailConnectionWrapper.java:243)
	at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.commit(SailRepositoryConnection.java:224)
	at org.eclipse.rdf4j.sail.lmdb.benchmark.OverflowBenchmarkConcurrent.lambda$manyConcurrentTransactions$3(OverflowBenchmarkConcurrent.java:170)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1570)

I reverted the changes you did to fix the read lock issue and added a try catch rollback to my benchmark code instead. I'm still running into this MDB_BAD_TXN issue. There also doesn't seem to be any way to handle the MDB_BAD_TXN issue, since even calling a rollback afterwards causes the entire JVM to exit with 134.

@hmottestad
Copy link
Contributor Author

@kenwenzel I have come to the conclusion that ParallelGC is very hard to deal with when trying to predict that we are running low on memory. When I switched to G1GC, things became a lot more consistent and I'm able to reliably overflow to disk before running out of memory. I also tried increasing the number of transactions in my benchmark, and the performance scaled quite well without running out of memory.
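
(For reference, the collector can be pinned per benchmark fork with standard JVM flags; a sketch using JMH's @Fork annotation, with the heap sizes from the log at the top assumed:)

    import org.openjdk.jmh.annotations.Fork;

    @Fork(value = 1, jvmArgsAppend = { "-Xms500M", "-Xmx500M", "-XX:+UseG1GC" })
    public class OverflowBenchmarkConcurrent { /* ... */ }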

@kenwenzel
Copy link
Contributor

kenwenzel commented May 23, 2024

The wrapping readTransaction is absolutely required for gcIds (we could also commit this as a separate PR). I'm just thinking that MDB_BAD_TXN could be the result of a previous (hidden) MDB_MAP_FULL that has not yet been detected.

I've seen some places where mdb_put and mdb_get are called in ValueStore without being wrapped in E(...), which would detect errors while executing the function.

Furthermore, we could also move resizeMap into the inner loop of gcIds to minimize the impact of guessing the required memory size incorrectly.
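
(For illustration, the kind of wrapping being discussed, assuming LmdbUtil.E turns a non-zero LMDB return code into an IOException, as the stack traces above suggest; variable names are illustrative:)

    // before: a failed put (e.g. MDB_MAP_FULL) is silently ignored and can
    // poison the transaction, surfacing later as MDB_BAD_TXN
    mdb_put(writeTxn, dbi, keyVal, dataVal, 0);

    // after: the error surfaces immediately as an IOException
    E(mdb_put(writeTxn, dbi, keyVal, dataVal, 0));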

@hmottestad
Copy link
Contributor Author

I wrapped everything I could in E(...) and now the MDB_BAD_TXN doesn't happen anymore :) I probably wrapped more than is needed, but at least we now know that this is where the problem lies.

@kenwenzel
Copy link
Contributor

kenwenzel commented May 23, 2024

Thank you! A small note from my side: I would propose not wrapping the _get methods, as that will have a performance impact and the return values are already tested. But checking modifications (put, delete) is really good!
BTW, you also wrapped some calls where the return values are already tested explicitly.
-> Maybe we can move that patch/commit into a separate PR before merging this one?

@hmottestad hmottestad force-pushed the GH-2998-native-store-memory-overflow-performance branch from 4dfd13b to c9734dc Compare May 25, 2024 17:31
@@ -953,8 +958,10 @@ protected void deleteValueToIdMappings(MemoryStack stack, long txn, Collection<L
 	for (Long id : ids) {
 		idVal.mv_data(id2data(idBb.clear(), id).flip());
 		// id must not have a reference count and must have an existing value
-		if (mdb_get(writeTxn, refCountsDbi, idVal, ignoreVal) != 0 &&
-				mdb_get(txn, dbi, idVal, dataVal) == 0) {
+		int a = E(mdb_get(writeTxn, refCountsDbi, idVal, ignoreVal)); // this is where I get MDB_BAD_TXN
Copy link
Contributor Author

@kenwenzel I fairly consistently get MDB_BAD_TXN here

Copy link
Contributor

I think it could be due to a previous MDB_MAP_FULL. Can you try moving resizeMap into the inner loop of gcIds?
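
(Sketching that suggestion; the real gcIds loop isn't shown here, so the loop shape, the resizeMap signature, and requiredSize are assumptions:)

    for (Long id : ids) {
        // grow the map before each deletion if needed, so an underestimated
        // up-front resize cannot poison the whole transaction with MDB_MAP_FULL
        resizeMap(txn, requiredSize);
        idVal.mv_data(id2data(idBb.clear(), id).flip());
        E(mdb_del(writeTxn, dbi, idVal, null));
    }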
