Error, the webdriver won't quit correctly. #7
I'm able to reproduce the issue using your code. Thanks for providing that. I had a lot of trouble with this method since async destructors are not currently supported in the language. There is an interesting blog post about it here: https://boats.gitlab.io/blog/post/poll-drop/. I have tried using a tokio runtime inside this drop() method, but tokio complains about creating a runtime within a runtime, which seems like an arbitrary restriction to me, but maybe there is a good reason for it. Previously I used an async_std runtime to do block_on() here, which worked fine but feels nasty. I'm not sure what futures::block_on does, but that seems to be the issue here. Are you able to use the threaded_scheduler as a workaround for now?
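For reference, a minimal sketch of that workaround, assuming the thirtyfour API of this era (`WebDriver::new` taking a URL and capabilities) and tokio 0.2, where `#[tokio::main]` selects the threaded scheduler by default; the Selenium URL and target page are placeholders:

```rust
use thirtyfour::prelude::*;

// tokio 0.2: #[tokio::main] uses the threaded scheduler by default,
// which avoids the deadlock triggered by WebDriver's Drop impl.
#[tokio::main]
async fn main() -> WebDriverResult<()> {
    let caps = DesiredCapabilities::chrome();
    let driver = WebDriver::new("http://localhost:4444/wd/hub", &caps).await?;
    driver.get("https://www.rust-lang.org").await?;
    Ok(())
}
```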
I've added another potential workaround: as of 0.9.1 you will be able to manually call `quit()`.
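In code, the explicit teardown would look something like this (a sketch, with `driver` being a `WebDriver` instance as in the example above):

```rust
// Explicitly end the session instead of relying on Drop.
driver.quit().await?;
```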
I will leave this issue open since the underlying bug is still not resolved. Perhaps someone can provide a cleaner solution for this.
I've updated the code to version 0.9.1 and it works fine.
If someone has a potential solution for this, a PR would be most welcome. Thanks!
I've run into a similar issue with one of my projects before. Not sure if it is the best solution, but I just (In the end, I implemented a process-wide job system which handles the finalising of jobs, including dropping values, but that is totally overkill and unviable here.)
It should be possible to run a closure on a new executor and block, but I'm not sure why that doesn't work. It's possible that reqwest also spawns a new task internally; that would make it deadlock. For now the workaround is to manually close the session just before the WebDriver goes out of scope.
I've done some more digging on this. Spawning a new task won't help unless you allow enough time for tokio to run that task. If your WebDriver goes out of scope at the end of your main function, you're probably out of luck. I don't really like this approach because it's really unclear if/when the browser will actually close. There's no way to access the join handle even if you wanted to. I've also tried:
The deadlock only happens when using the basic (single-threaded) runtime. It may be related to the way reqwest manages its connection pool using spawned futures. I haven't tried any other async HTTP clients. The short story is that what we really want are async destructors, but who knows whether such a feature will ever exist in Rust.
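To illustrate why the fire-and-forget variant is so unsatisfying, here is a hypothetical, self-contained sketch; the `Session` type and its cleanup are made up for illustration and are not thirtyfour's actual code:

```rust
// Hypothetical sketch: a Drop impl that spawns the cleanup
// instead of blocking on it.
struct Session {
    id: String,
}

impl Drop for Session {
    fn drop(&mut self) {
        let id = self.id.clone();
        // Fire off the cleanup and hope the runtime polls it before the
        // process exits. The JoinHandle is discarded, so there is no way
        // to wait for (or even observe) completion.
        tokio::spawn(async move {
            // e.g. send DELETE /session/{id} here
            println!("cleaning up session {}", id);
        });
    }
}

#[tokio::main]
async fn main() {
    let _session = Session { id: "abc123".into() };
    // _session is dropped here; main may return before the spawned task
    // ever runs, so the cleanup is not guaranteed to happen.
}
```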
The question we should probably ask ourselves here is whether a session should actively be closed when it is dropped.
Let me explain: in general, I'd expect as few side effects as possible when calling a function, unless its signature or documentation makes them explicitly clear. The fact that a destructor performs asynchronous "side effects" which, compounding the issue, make network calls may be considered undesirable and somewhat of an anti-pattern. Imagine PR #38, where a session is persisted to disk. That feature would be completely useless if the destructor ALWAYS[1], inevitably, makes a network call to nuke the session. So in that regard, I'd argue that closing the session should be a deliberate act by the user and not something the destructor does.

Welcome to the flip side. From an "out-in-the-wild" perspective it obviously makes sense to automagically end the session when the linked object is dropped, especially since this tends to cause real headaches when people use remote infrastructure for sessions and occupy idle sessions because they forgot a `quit()` call.

So overall, the best middle ground might be to remove the `Drop` implementation. Additionally, we could add a section in the documentation explaining that sessions must be closed explicitly.

[1] I may be exaggerating a bit, but you get the gist.
I only just read this now, but I have independently reached the same conclusion: remove the `Drop` implementation.

I asked a question on the tokio repo yesterday regarding alternative solutions and came away from the discussion with the understanding that Rust's async really prefers you to be explicit about what you want to do and to await any operations performed. This agrees with what you said about performing work such as network requests in a destructor being an anti-pattern. I see the point and it makes sense. It's just really uncomfortable to imagine trying to explain this to someone coming to Rust from another language. Then again, Rust often coerces you into certain patterns "for your own good", so maybe it's not that unusual.

I don't mind your suggestion about logging an error if the WebDriver was dropped without calling `quit()`.
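For illustration, a hypothetical sketch of such a warning; the struct, the `quit_called` flag, and the message wording are all made up here, not thirtyfour's actual code:

```rust
// Hypothetical sketch: warn on drop instead of performing network I/O.
struct WebDriverHandle {
    quit_called: bool,
}

impl Drop for WebDriverHandle {
    fn drop(&mut self) {
        if !self.quit_called {
            eprintln!(
                "WebDriver dropped without quit(); the browser session may stay open"
            );
        }
    }
}
```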
What should the message say? What log level? Is it really necessary?
In my opinion, it isn't necessary to log it. If someone expects the browser to close, they'll probably search for a solution, so maybe it's better to document it instead :)
I agree with the sentiment, especially because adding a log message would definitely require a method to suppress it for those few people intentionally omitting it. However, especially for people new to Selenium, the fact that you explicitly have to quit the driver, or else it will stick around for potentially a long time, should not be neglected. We should make sure that every example includes the explicit call to `quit()`.
Documentation updated.
Looks good to me 👍
Drive-by comment re the example in the README: if
I was testing the async API with runtime support.
A very strange behaviour appeared: the code below never returns at the end of the block_on closure.
The issue was tested with versions 0.8 and 0.9 of thirtyfour, tokio 0.2.19, and the Selenium docker command given in the README.
After some investigation, the problem seems to come from the drop() function (webdriver.rs:159).
For some unknown reason, the block_on(self.quit()) call loops without returning.
I tried to explicitly call driver.quit() before the end of the closure, but it didn't solve the problem.
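The original code block from this report wasn't captured in the thread. Below is a hypothetical minimal reproduction along the lines described, assuming thirtyfour 0.9's API, tokio 0.2's basic scheduler, and the standard local Selenium URL:

```rust
use thirtyfour::prelude::*;

fn main() -> WebDriverResult<()> {
    // tokio 0.2 basic (single-threaded) scheduler, the configuration
    // under which the hang was observed.
    let mut rt = tokio::runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()
        .expect("failed to build tokio runtime");
    rt.block_on(async {
        let caps = DesiredCapabilities::chrome();
        let driver = WebDriver::new("http://localhost:4444/wd/hub", &caps).await?;
        driver.get("https://www.rust-lang.org").await?;
        Ok(())
        // `driver` goes out of scope here; in 0.8/0.9 its Drop impl calls
        // block_on(self.quit()), which never returns on this runtime.
    })
}
```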
Thanks