
When performing a full backup of the MicroStream storage, it fails and reports "Java technical array capacity limit of max signed 32 bit integer value exceeded: 2340984838" #611

Open
yqbjtu opened this issue Jul 11, 2023 · 5 comments
Labels
bug Something isn't working

Comments


yqbjtu commented Jul 11, 2023

Environment Details

  • MicroStream Version: 08.01.00-MS-GA
  • JDK version: 17.04
  • OS: Ubuntu 18.04.6 LTS, CentOS 7
  • Used frameworks: Spring Boot 2.6.8

Describe the bug

I use MicroStream 8.0.1 with a LazyArrayList as a member of my data root.

When I call the backup API, it fails.
The LazyArrayList typically holds about 160,000 elements; new elements are added and old ones are removed dynamically, but its size never exceeds 200,000 at any point in time.

My data root looks like this:

public class CellConnRoot2 implements RegisterStorageService
{
    HashMap<String, LazyArrayList<ClassB>> tables;

    HashMap<String, LazyArrayList<ClassA>> historyTables;
}
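
For context, a minimal sketch of the usage pattern described above: elements are appended, old ones removed, and the modified list stored again. The names root, storageManager, newB and oldB are hypothetical and only illustrate the pattern, not the actual application code:

    // Hypothetical usage pattern for one of the lazy lists in the data root.
    LazyArrayList<ClassB> table = root.tables.get("someTable");
    table.add(newB);                // new element is added dynamically
    table.remove(oldB);             // old element is removed dynamically
    storageManager.store(table);    // persist the modified list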

Include stack traces or command outputs

one.microstream.storage.exceptions.StorageException: Problem in channel #0
        at one.microstream.storage.types.StorageChannelTask$Abstract.checkForProblems(StorageChannelTask.java:114)
        at one.microstream.storage.types.StorageChannelTask$Abstract.waitOnCompletion(StorageChannelTask.java:176)
        at one.microstream.storage.types.StorageRequestAcceptor$Default.waitOnTask(StorageRequestAcceptor.java:162)
        at one.microstream.storage.types.StorageRequestAcceptor$Default.exportChannels(StorageRequestAcceptor.java:246)
        at one.microstream.storage.types.StorageConnection$Default.exportChannels(StorageConnection.java:586)
        at one.microstream.storage.types.StorageConnection.exportChannels(StorageConnection.java:287)
        at one.microstream.storage.types.StorageConnection$Default.issueFullBackup(StorageConnection.java:563)
        at one.microstream.storage.embedded.types.EmbeddedStorageManager$Default.issueFullBackup(EmbeddedStorageManager.java:524)
        at one.microstream.storage.types.StorageConnection.issueFullBackup(StorageConnection.java:219)
        at ai.momenta.osmdb.cellconn.CellConnDB.backupMeta(CellConnDB.java:414)
        at ai.momenta.osmdb.cellconn.CellConnDB.asyncRun(CellConnDB.java:198)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: one.microstream.storage.exceptions.StorageException: null
        at one.microstream.storage.types.StorageFile$Abstract.copyTo(StorageFile.java:284)
        at one.microstream.storage.types.StorageFileManager$Default.lambda$exportData$2(StorageFileManager.java:1297)
        at one.microstream.afs.types.AFS.executeWriting(AFS.java:248)
        at one.microstream.afs.types.AFS.executeWriting(AFS.java:235)
        at one.microstream.storage.types.StorageFileManager$Default.exportData(StorageFileManager.java:1296)
        at one.microstream.storage.types.StorageChannel$Default.exportData(StorageChannel.java:621)
        at one.microstream.storage.types.StorageRequestTaskExportChannels$Default.internalProcessBy(StorageRequestTaskExportChannels.java:63)
        at one.microstream.storage.types.StorageRequestTaskExportChannels$Default.internalProcessBy(StorageRequestTaskExportChannels.java:26)
        at one.microstream.storage.types.StorageChannelTask$Abstract.processBy(StorageChannelTask.java:252)
        at one.microstream.storage.types.StorageChannel$Default.work(StorageChannel.java:409)
        at one.microstream.storage.types.StorageChannel$Default.run(StorageChannel.java:492)
        ... 1 common frames omitted
Caused by: one.microstream.exceptions.ArrayCapacityException: Java technical array capacity limit of max signed 32 bit integer value exceeded: 2329107894
        at one.microstream.X.checkArrayRange(X.java:154)
        at one.microstream.memory.XMemory.allocateDirectNative(XMemory.java:1086)
        at one.microstream.afs.types.AIoHandler$Abstract.copyGeneric(AIoHandler.java:301)
        at one.microstream.afs.types.AIoHandler$Abstract.copyFrom(AIoHandler.java:937)
        at one.microstream.afs.types.AWritableFile.copyFrom(AWritableFile.java:75)
        at one.microstream.storage.types.StorageFile$Abstract.copyTo(StorageFile.java:280)
        ... 11 common frames omitted


Screenshot of the exception: https://github.com/microstream-one/microstream/assets/3291079/2cdf42da-cd90-4db4-aab6-f9ffa3b1894e

MicroStream data directory:
[root@x meta]# ls -al
total 36
drwxr-xr-x. 3 root root 4096 Jun 15 19:33 .
drwxr-xr-x. 9 root root 4096 Jul 11 10:13 ..
drwxr-xr-x. 2 root root 4096 Jul 11 10:52 channel_0
-rw-r--r--. 1 root root 23069 Jun 29 14:24 PersistenceTypeDictionary.ptd
[root@x]# cd channel_0/
[root@x channel_0]# ls -al
total 2418900
drwxr-xr-x. 2 root root 4096 Jul 11 10:52 .
drwxr-xr-x. 3 root root 4096 Jun 15 19:33 ..
-rw-r--r--. 1 root root 135397049 Jul 11 13:15 channel_0_1869.dat
-rw-r--r--. 1 root root 2341540368 Jul 11 13:15 transactions_0.sft
[root@x channel_0]# du -sh
2.4G .
[root@x channel_0]# ls -sh
total 2.4G
130M channel_0_1869.dat 2.2G transactions_0.sft

My MicroStream storage is opened like this:
public EmbeddedStorageManager openByPath(String graphDbPath, boolean enableGC)
{
    StorageEntityCache.Default.setGarbageCollectionEnabled(enableGC);

    var builder = EmbeddedStorageConfiguration.Builder();
    var foundation = builder
        .setStorageDirectory(graphDbPath)
        .setChannelCount(1)
        .setDataFileMaximumSize(ByteSize.New(2, ByteUnit.GB))
        .createEmbeddedStorageFoundation();

    foundation.onConnectionFoundation(BinaryHandlersJDK8::registerJDK8TypeHandlers);

    return foundation.createEmbeddedStorageManager().start();
}

To Reproduce

Insert many elements into the data root's LazyArrayList, then call the backup like this:

storageService.getStorage().issueFullBackup(NioFileSystem.New().ensureDirectoryPath(fullDirForMetaBackup.toString()));
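
A minimal end-to-end sketch of these reproduction steps, assuming the openByPath method shown above; the storage path, backup path, and the way the root is obtained are illustrative assumptions:

    // Open the storage and obtain the data root (cast is illustrative).
    EmbeddedStorageManager storage = openByPath("/coredata/metalist/meta", true);
    CellConnRoot2 root = (CellConnRoot2) storage.root();

    // Grow one of the lazy lists so the storage files become large.
    LazyArrayList<ClassA> history = root.historyTables.get("history");
    for (int i = 0; i < 160_000; i++)
    {
        history.add(new ClassA());
    }
    storage.store(history);

    // Trigger the full backup, which fails with the ArrayCapacityException
    // once a storage file has grown past 2 GB.
    storage.issueFullBackup(NioFileSystem.New().ensureDirectoryPath("/backup/meta"));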

Expected behavior

The MicroStream storage files are backed up successfully.



hg-ms commented Jul 11, 2023

The error is caused by a storage file that is larger than 2 GB (Integer.MAX_VALUE bytes).
The backup fails because it cannot allocate the required DirectByteBuffer for a file of that size.

The file may have grown that large because the internal housekeeping did not have enough time to split it after a large store operation before the backup was triggered.
It may help to trigger the housekeeping manually after such large writes by calling issueFullGarbageCollection() and issueFullFileCheck(), or to increase the housekeeping budget via the configuration (see the sketch below and https://docs.microstream.one/manual/storage/configuration/housekeeping.html).
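
For illustration, a minimal sketch of both suggestions; storageManager stands for the running EmbeddedStorageManager, and the durations are example values rather than recommendations:

    // Trigger housekeeping manually after a large store operation,
    // so oversized data files can be split before a backup is issued.
    storageManager.issueFullGarbageCollection();
    storageManager.issueFullFileCheck();

    // Or give housekeeping more room via the configuration builder:
    var foundation = EmbeddedStorageConfiguration.Builder()
        .setStorageDirectory(graphDbPath)
        .setHousekeepingInterval(Duration.ofSeconds(1))     // how often housekeeping runs
        .setHousekeepingTimeBudget(Duration.ofMillis(100))  // time budget per run
        .createEmbeddedStorageFoundation();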

However, the storage should handle that case by itself without failing.

Many thanks for your error report.

hg-ms added the bug label on Jul 11, 2023

yqbjtu commented Jul 11, 2023

It seems that the issueFullGarbageCollection() method does not take effect.

My code is like this:

storageService.getStorage().issueGarbageCollection(TimeUnit.HOURS.toNanos(2)); // GC with a 2-hour time budget (in nanoseconds)
storageService.getStorage().issueFullGarbageCollection();
storageService.getStorage().issueFullFileCheck();
storageService.getStorage().issueFileCheck(TimeUnit.HOURS.toNanos(1));         // file check with a 1-hour time budget


yqbjtu commented Jul 11, 2023

When I changed my MicroStream opening code as shown below, it still does not take effect; transactions_0.sft is still about 2.2 GB.

root@hdmap-testing-team:/coredata/metalist/meta/channel_0]ls -al
total 2436320
drwxr-xr-x 2 root root 4096 Jul 11 15:48 .
drwxr-xr-x 3 root root 4096 Jul 11 10:06 ..
-rw-r--r-- 1 root root 152576032 Jul 11 19:21 channel_0_1870.dat
-rw-r--r-- 1 root root 2342198768 Jul 11 19:21 transactions_0.sft

  public EmbeddedStorageManager openByPath(String graphDbPath, boolean enableGC)
  {
    StorageEntityCache.Default.setGarbageCollectionEnabled(enableGC); // TODO: change from the global default to a per-instance setting

    var builder = EmbeddedStorageConfiguration.Builder();
    var foundation = builder
      .setStorageDirectory(graphDbPath)
      .setChannelCount(1)
      .setDataFileMaximumSize(ByteSize.New(2, ByteUnit.GB))
      .setHousekeepingTimeBudget(Duration.of(2, ChronoUnit.HOURS))
      .setHousekeepingInterval(Duration.of(1, ChronoUnit.SECONDS)) // interval value assumed; the original snippet omitted the argument
      .createEmbeddedStorageFoundation();

    foundation.onConnectionFoundation(BinaryHandlersJDK8::registerJDK8TypeHandlers);

    return foundation.createEmbeddedStorageManager().start();
  }


yqbjtu commented Jul 11, 2023

Could we fix this bug as soon as possible? I depend heavily on MicroStream.


hg-ms commented Jul 11, 2023

If the storage has been shut down without incomplete writes, you can delete the transaction log file (transactions_0.sft). The storage will start up without it; it just cannot do "rollbacks" of incomplete writes.
You can be sure that all writes are complete when the storageManager.shutdown() call has returned successfully.
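
A minimal sketch of that procedure using java.nio.file, assuming storageManager is the running EmbeddedStorageManager and taking the channel directory from the listing above; only do this after a clean shutdown:

    // A successful shutdown() means all pending writes have completed.
    if (storageManager.shutdown())
    {
        // Remove the oversized transaction log; the storage can start without it,
        // it only loses the ability to roll back incomplete writes.
        Files.deleteIfExists(
            Path.of("/coredata/metalist/meta/channel_0/transactions_0.sft")
        );
    }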

I’ll see what we can do to fix that problem soon.
