Added FlushIndexTask to flush documents at index thread level. #2349

Open

wants to merge 2 commits into master
Conversation

balmukundblr

Description

Long completion time for the CloseIndex call.

Once the AddDoc task completes, the benchmark algorithm calls the ForceMerge/CloseIndex task, which completes all pending flushes. Because flushes during the CloseIndex call run sequentially, they take a long time to complete and delay the overall indexing time. While indexing 1 million documents with reuters21578 (plain-text documents derived from the reuters21578 corpus), we observed that the CloseIndex call takes around 35% of the total time.

Solution

We developed a new FlushIndexTask, which uses the IndexWriter.flushNextBuffer() Lucene API to flush documents at the index-thread level without impacting any other indexing threads. Adding this task to the algorithm file, immediately after the AddDoc task, ensures that all documents are flushed before the ForceMerge/CloseIndex task is called.
With this solution in place, the CloseIndex task time was reduced significantly, and total indexing time improved as well.
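
For illustration, here is a minimal sketch of what such a task might look like on top of the benchmark module's PerfTask/PerfRunData API. The class name follows the PR title, but the body below is an assumption; the actual implementation in this PR may differ.

package org.apache.lucene.benchmark.byTask.tasks;

import org.apache.lucene.benchmark.byTask.PerfRunData;
import org.apache.lucene.index.IndexWriter;

/** Hypothetical sketch: flushes the next pending in-memory buffer from the calling index thread. */
public class FlushIndexTask extends PerfTask {

  public FlushIndexTask(PerfRunData runData) {
    super(runData);
  }

  @Override
  public int doLogic() throws Exception {
    IndexWriter writer = getRunData().getIndexWriter();
    if (writer != null) {
      // flushNextBuffer() writes the largest pending in-memory buffer to disk
      // without blocking other indexing threads.
      writer.flushNextBuffer();
    }
    return 1;
  }
}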

Tests

Since we are using the existing Lucene API flushNextBuffer(), it is already covered by existing test cases.
- Passed existing tests

Checklist

Please review the following and check all that apply:

  • I have reviewed the guidelines for How to Contribute and my code conforms to the standards described there to the best of my ability.
  • [ ] I have created a Jira issue and added the issue ID to my pull request title.
  • I have given Solr maintainers access to contribute to my PR branch. (optional but recommended)
  • I have developed this patch against the master branch.
  • I have run ./gradlew check.
  • I have added tests for my changes.
  • I have added documentation for the Ref Guide (for Solr changes only).

@mikemccand (Member) left a comment:

This looks great! This IndexWriter.flushNextBuffer() API is a relatively recent addition to Lucene and did not exist when we were actively iterating on the benchmarks module!

I left some small comments, and +1 to push this once we resolve those!

@@ -53,7 +53,8 @@ log.queries=true

{ "Populate"
CreateIndex
[{ "MAddDocs" AddDoc } : 5000] : 4
#[{ "MAddDocs" AddDoc } : 5000] : 4
[{ {{"MAddDocs" AddDoc } : 5000} FlushIndex } ] : 20
@mikemccand (Member) commented:

Maybe 1) remove the old (commented out) line entirely? And 2), 20 threads seems a bit much for a "typical" box now? Maybe go to 8 instead?

Hmm, it would be nice if benchmarks made the local machine's CPU concurrency available (Runtime.getRuntime().availableProcessors()) so we could set this dynamically. But that is for another day!
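
As an editorial aside, here is a tiny, self-contained illustration of the JVM call referenced above; nothing like this is wired into the benchmarks module, and the class name is made up:

public class CpuCountExample {
  public static void main(String[] args) {
    // Ask the JVM how many logical CPUs are available; a benchmark driver could,
    // hypothetically, use this value to size the indexing thread count instead of
    // hard-coding a number like 20 or 8.
    int cpuCount = Runtime.getRuntime().availableProcessors();
    System.out.println("available processors: " + cpuCount);
  }
}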

@balmukundblr (Author) replied:


  1. Sure Mike, we will remove the old (commented-out) line entirely.
  2. We will change the number of parallel indexing threads to 8 to work well on a typical box.

@mikemccand (Member) commented:

Thanks @balmukundblr this looks great! Could you please open a new PR on the new Lucene GitHub repo? https://github.com/apache/lucene

Thanks!

@balmukundblr mentioned this pull request Apr 29, 2021
@balmukundblr (Author) replied:


As you suggested, I've raised a new PR (apache/lucene#116).
