Package with offline links never "finishes" #4392
Those are the relevant lines in `pyload/src/pyload/core/datatypes/pyfile.py`, lines 10 to 26 (at a00b2af).
This change considers (permanently) offline links as finished. Although they have not technically finished downloading, they also never will, so there is no reason not to dispatch `package_finished`. This allows plugins and scripts to further process the files. Note that this does not adjust the progress display in the queue, allowing users to check why their progress never finishes and decide what to do with the package or links in question. Fixes pyload#4392
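The idea behind the PR can be sketched in a few lines. This is a hypothetical illustration, not pyload's actual implementation; the numeric status codes (`FINISHED`, `SKIPPED`, `OFFLINE`, `QUEUED`) are assumptions for the example, not verified values from pyload's source:

```python
# Assumed status codes for illustration only -- not pyload's real constants.
FINISHED, OFFLINE, QUEUED, SKIPPED = 0, 1, 3, 4

def package_is_finished(file_statuses):
    """Treat a package as finished when every file is finished, skipped,
    or permanently offline (an offline file will never download anyway)."""
    return all(s in (FINISHED, SKIPPED, OFFLINE) for s in file_statuses)

# An offline link no longer blocks completion:
print(package_is_finished([FINISHED, SKIPPED, OFFLINE]))  # True
# A still-queued link does:
print(package_is_finished([FINISHED, QUEUED, OFFLINE]))   # False
```

The key point is that only the completion check changes; the progress counter shown in the queue would still exclude offline links.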
From my point of view the behaviour of pyload is correct.
You won't, because the extraction will simply fail on a missing part. Also remember that pyload handles mirrored files across different sources. So say you have … the user will then have to periodically check manually whether the package finished or not, just to discover that it didn't finish because a functionally irrelevant link was offline.
Which is effectively doing the same thing, just manually. You're treating the remaining offline links as if they hadn't been present in the first place; treating them as finished results in the same outcome. On that note, please take a look at my proposed PR #4394. If you have any thoughts on where treating them as not finished has any advantages, I'd very much like to know, so I can take this into account.
@mihawk90 I get your point, and in the case of mirrored links it could be a possible way. AND I really like the idea of using the error output from extracting. Back to treating an offline link as finished: the only safe way would be if pyload could check which file the link is for. But if I understand correctly, this is impossible when the link is offline!?
Then you either didn't read the PR description properly or I'm not understanding your workflow. As noted there:
This covers exactly what you are saying. When you open the queue it will still say e.g. 80/81. So this makes me wonder how else you check whether a package is finished, if not by the progress display. The PR covers treating the package as finished programmatically to trigger events; visually, to the user, there is (intentionally) no difference. On the other end of the spectrum it helps users who never check their queue and instead rely on notifications being triggered. As it currently stands, packages with offline links will never trigger notifications because the offline links are... well, not "finished", and so the event to trigger notifications is never emitted. The PR allows those users to receive notifications, and then potentially check the progress (they'll have to check the queue for cleanup anyway).
I don't really see how, to be honest. The package and all its links remain in the queue since they aren't deleted. It changes nothing in the workflow for the user; it does, however, allow users to receive the event via notifications, or to have the already (actually) finished files extracted. As noted on the PR, though, the best approach is probably giving users an option in the settings, which I haven't done yet (because I was waiting for feedback).
@mihawk90 ...my fault, I really skimmed over this part of the PR description...
With this I think it could be a usable solution. But
contradicts
This could be confusing. The trigger is "package finished" and the notification will say something similar.
It might contradict the way it was worded, yes; however, what I meant was never checking the queue while the downloads are running, i.e. until they get a finished notification. Either way, yes: the way it currently is, it would simply trigger the finished notification the same way notifications are triggered now when everything is done (and personally I would argue offline links are also done, because there's nothing else to do with them). That being said, this could be adjusted in the future. The relevant dispatch is in `pyload/src/pyload/core/managers/addon_manager.py`, lines 227 to 233 (at a65a968).
So it's possible to print whether there were offline links and how many. Not sure that's in scope for that PR though.
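A minimal sketch of what such a notification message could look like. The function name and the numeric status code are made up for illustration and are not part of pyload's API:

```python
# Assumed status code for an offline link -- illustration only.
OFFLINE = 1

def notify_package_finished(package_name, file_statuses):
    """Build a 'package finished' message that also reports how many
    links in the package were offline rather than downloaded."""
    offline = sum(1 for s in file_statuses if s == OFFLINE)
    msg = f"Package finished: {package_name}"
    if offline:
        msg += f" ({offline} offline link(s) skipped)"
    return msg

print(notify_package_finished("example_pack", [0, 0, 1, 1, 0]))
# Package finished: example_pack (2 offline link(s) skipped)
```

That way the wording of the notification itself would already hint that not everything was actually downloaded.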
Description
I have a package in my queue where 3 out of 24 links are offline. The rest of them have been downloaded; however, the package never enters the "finished" state and therefore never fires an event for the `ExtractArchive` plugin to pick up and do the extraction. I'm not sure whether this is a bug or intentional, since offline links are obviously not downloaded. However, that being the case, they also will never be downloaded and finished, so pyload should treat them as finished and at least attempt extraction. If extraction fails, then that's just that; at that point it doesn't matter whether the links are treated as finished or not, since it doesn't change the situation. All it does is save time by automating an extraction step that would otherwise have to be done manually.
Debug log
My log is 17k lines and I don't know where exactly the last link finished... I can however say that the package name only appears in the log where the package was created, whereas normally finished packages are logged with `Package finished: <packagename>`. Here is, however, a screenshot for illustration:
As you can see on the progress bar 3 links are "unfinished", and those are exactly the 3 offline links.
It should be noted that in this case those were links that would have been skipped due to the file already existing (or rather, having been downloaded from another hoster), so the package is finished regardless (but of course pyload can't know that when the offline link doesn't provide the filename).
Additional references
I dug through the code a little and the issue seems to lie in `pyload/src/pyload/core/managers/file_manager.py`, lines 596 to 607 (at a65a968).
Now, L600 calls `get_unfinished` in `pyload/src/pyload/core/database/file_database.py`, lines 400 to 409 (at a65a968).
Unfortunately I didn't find any documentation on what those statuses `(0, 4, 13)` are, but I'm assuming something like `downloading`, `finished`, and `skipped`. So I guess this would need a fourth filter for `offline`; however, I don't know what that value would be, and I also couldn't test it even if I knew ( #4391 ).