allow concurrent MediaInfoStore retrieval #4639
Conversation
@SurfaceS Therefore the above example could be written this way:
No lock needed, as far as I can see ... I will scan the class for more places to optimize and will commit them to the other PR.
One more thing, please. If you don't mind, it would be nice to work on the same branch. Otherwise, we will probably get merge conflicts, and I am usually the one who runs behind you, because you're merging your changes to main before my PR.
I added an optimized version to PR #4635. I think that should do it and prevent NPEs.
You should split your code into multiple PRs, because some solutions may be merged and some not.
Yeah, thanks, but I think you got the idea behind it. Should work now.
True, and that's why we use
Latency-wise, it is nearly zero improvement (really relative/subjective), because the
In the code proposed here, the STORE is locked only while doing something with it (get/set/clear, and so on). Before, it was locked while parsing: that was the blocking part.
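To illustrate the point above, here is a hedged sketch of the locking change being described. The real `MediaInfoStore` API differs; `parse()` here is a stand-in for the slow, I/O-bound media parsing, and the class and method names are illustrative, not the project's actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class StoreLockSketch {
	private static final Map<String, String> STORE = new HashMap<>();

	// Before: the slow parse ran while holding the store lock, so every
	// other thread touching the STORE waited for the whole parse.
	static String getBefore(String filename) {
		synchronized (STORE) {
			String info = STORE.get(filename);
			if (info == null) {
				info = parse(filename); // slow I/O under the lock
				STORE.put(filename, info);
			}
			return info;
		}
	}

	// After: the lock is held only for the short map operations; the
	// parse itself runs outside it.
	static String getAfter(String filename) {
		synchronized (STORE) {
			String cached = STORE.get(filename);
			if (cached != null) {
				return cached;
			}
		}
		String info = parse(filename); // slow I/O outside the lock
		synchronized (STORE) {
			// Another thread may have parsed concurrently; keep
			// whichever result was stored first.
			return STORE.computeIfAbsent(filename, f -> info);
		}
	}

	private static String parse(String filename) {
		return "mediainfo:" + filename; // stand-in for real parsing
	}

	public static void main(String[] args) {
		System.out.println(getAfter("movie.mkv"));
	}
}
```

The trade-off in `getAfter` is that two threads may parse the same file concurrently; the per-filename lock discussed later in this PR avoids that duplication.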
As far as I know, the ConcurrentHashMap synchronizes only on the bucket the object lies in. This is of course a much better approach than syncing on the HashMap itself. Otherwise this class would be useless ...
And this doesn't need the sync, I believe ...
The benefit is, when you access objects that lie in different buckets, this implementation is much faster. And the benefit is measurable, since the profiler was pointing to this implementation as a bottleneck.
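A hedged sketch of the bucket (bin) locking point above: individual ConcurrentHashMap operations contend only per bin, so different keys rarely block each other, but a compound check-then-act sequence is still not atomic. `computeIfAbsent` makes it atomic per key. The names here are illustrative, not from the project.

```java
import java.util.concurrent.ConcurrentHashMap;

public class BinLockDemo {
	static final ConcurrentHashMap<String, Integer> MAP = new ConcurrentHashMap<>();

	// Not atomic: another thread can clear() or put() between the calls.
	static Integer getOrCreateRacy(String key) {
		if (!MAP.containsKey(key)) {
			MAP.put(key, create(key));
		}
		return MAP.get(key); // may be null if a clear() slipped in between
	}

	// Atomic per key: the mapping function runs under the bin lock,
	// while threads working on other keys still proceed in parallel.
	static Integer getOrCreateAtomic(String key) {
		return MAP.computeIfAbsent(key, BinLockDemo::create);
	}

	static Integer create(String key) {
		return key.length(); // stand-in for expensive creation
	}

	public static void main(String[] args) {
		System.out.println(getOrCreateAtomic("movie.mkv")); // prints 9
	}
}
```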
However guys ... I'm off for the weekend. Will be back on Monday.
On this file, the bottleneck is here:
Removing the synchronized removes the bottleneck, but adds this unwanted issue. That's why this PR adds the synchronized on the filename. About
That's why I doubt the real benefit of using
Yeah. I think this is solved in 517d264. You might want to have a look at the current PR implementation.
I hate to ruin the party, but there's a reason why media parsing is serialized in the first place. I doubt there's much performance gain to be had here unless there is some other "issue" at hand.
The reason is that media parsing is I/O bound. It's not the time the CPU spends to analyze that matters, it's the time it takes to retrieve the data (in almost all cases at least, unless maybe you're using a RAM drive or similar). Asking a hard drive to retrieve multiple files in parallel is slower than doing it serially, because a hard drive is fastest when reading large chunks of data, not doing "random access". I'm pretty sure the same is true for SSDs. Thus, running multiple parsing processes will slow things down, not speed them up. The only exception is where the data resides on different devices. I'd say that is a quite rare circumstance that doesn't outweigh the slowdown caused during "normal" parsing with multiple files on the same device.
I'd say that the order/priority of the parsing probably is "the problem", not the fact that it's running sequentially.
@Nadahar You're completely right ... the issue is the startup experience. If the media scanner is building up the database and a user makes a browse request at that time, the browse request and the library scanner block each other. The same goes for multiple browse requests. This ought to be solved in #6535.
included in v14.1
Allow concurrent MediaInfoStore retrieval. Less blocking (only on the same filename), but keeps synchronized where needed.
For @ik666: why synchronized?
STORE.containsKey(filename) && STORE.get(filename).get() != null
Thread1 => STORE.containsKey(filename) = true
Thread2 => STORE.clear()
Thread1 => STORE.get(filename).get() != null => throws a NullPointerException.
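The race above can be sketched as follows, along with two ways around it. This is a hedged illustration, not the project's actual code: the map and value types are assumptions (the double `.get()` in the example suggests the STORE holds weak or soft references), and the per-filename lock mirrors the "synchronized on filename" approach this PR describes.

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FilenameLockSketch {
	private static final Map<String, WeakReference<Object>> STORE = new ConcurrentHashMap<>();

	// Racy: Thread2 may clear() between containsKey and get, so
	// STORE.get(filename) returns null and .get() throws an NPE.
	static boolean hasLiveEntryRacy(String filename) {
		return STORE.containsKey(filename) && STORE.get(filename).get() != null;
	}

	// Lock-free fix: read the reference once, then null-check it.
	static boolean hasLiveEntry(String filename) {
		WeakReference<Object> ref = STORE.get(filename);
		return ref != null && ref.get() != null;
	}

	// The PR's approach (sketched): serialize compound operations per
	// filename by locking on the interned string, so threads handling
	// different files don't block each other.
	static boolean hasLiveEntryPerFile(String filename) {
		synchronized (filename.intern()) {
			WeakReference<Object> ref = STORE.get(filename);
			return ref != null && ref.get() != null;
		}
	}

	public static void main(String[] args) {
		Object value = new Object(); // strong ref keeps the entry live
		STORE.put("movie.mkv", new WeakReference<>(value));
		System.out.println(hasLiveEntry("movie.mkv"));   // prints true
		System.out.println(hasLiveEntry("missing.mkv")); // prints false
	}
}
```

Note that `String.intern()` as a lock object works but shares the JVM-wide string pool; a dedicated per-key lock map is a common alternative when that is a concern.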