CAproto asyncio client going into endless loop on teardown when used with pytest #847
codedump pushed a commit to codedump/caproto that referenced this issue on Apr 22, 2024.
I've uploaded my patch here (please let me know if you'd like this handled differently). The full error message when the pytest fails is this (previously impossible to see because of the endless loop):
Cheers,
Hello,

here is a bug in the `caproto.asyncio` infrastructure. Consider the following pytest unit test, based on caproto:

As it is (i.e. spawning an asyncio IOC with the PV `"catest:foo"` and reading it with the synchronous client), the test behaves as it should. (There are a number of deprecation warnings about API elements of more recent Python versions, but that isn't an issue -- yet.)

Now if you change the line that says `if False:` to `if True:`, i.e. deactivating the synchronous client and building an asyncio client context instead, eventually this happens: the same error is repeated over and over again, in an endless loop.
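The failure mode can be reproduced in miniature with plain asyncio, independently of caproto. This is a minimal sketch, not caproto's actual code: the queue-less loop and the raised error are stand-ins. A receive loop that swallows `RuntimeError` and `continue`s will spin busily once its underlying resource is permanently broken, because every subsequent iteration fails the same way.

```python
import asyncio

async def receive_loop(max_spins: int = 5) -> int:
    """Sketch of a client loop that catches errors and continues.

    Once the event loop backing the transport is gone, every iteration
    raises the same RuntimeError again, so `continue` turns teardown
    into a busy loop. `max_spins` bounds this demo; the real loop has
    no such bound, hence the endless repetition.
    """
    spins = 0
    while spins < max_spins:
        try:
            # Stand-in for reading from a circuit whose event loop
            # has already been torn down.
            raise RuntimeError("Event loop is closed")
        except RuntimeError:
            spins += 1
            continue  # <- this is the pattern that never terminates
    return spins

print(asyncio.run(receive_loop()))  # prints: 5
```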
It isn't difficult to trace this to line `caproto/asyncio/client.py:1225`, and explicitly catching a `RuntimeError` there and breaking the loop actually kind-of fixes the issue. But then this happens: a bunch of errors come up, to my understanding related to the fact that there are still asyncio tasks running that haven't been properly shut down. The unit test does what it should, though (the output is similar to the synchronous case) and concludes as passing. It's just the giant error message that's messy.
Adding a `ctx.disconnect()` to the relevant part of the test makes the output of the `pytest` run look more sane. This happens both with the patched version of CAproto (i.e. the one with my ad-hoc `...except RuntimeError` modification) and with pristine CAproto from today's checkout.
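Until the intended semantics are settled, the defensive pattern in a test is to treat `disconnect()` as mandatory teardown, e.g. via `try`/`finally` (or the teardown phase of a pytest fixture). A sketch with a hypothetical `FakeContext` standing in for the asyncio client context -- the class is invented for illustration, not caproto's implementation:

```python
import asyncio

class FakeContext:
    """Hypothetical stand-in for an asyncio client context: it owns a
    background task that outlives the test body unless disconnect()
    is awaited."""

    def __init__(self) -> None:
        self._task = asyncio.create_task(self._background())

    async def _background(self) -> None:
        while True:
            await asyncio.sleep(3600)

    async def disconnect(self) -> None:
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass

async def test_body() -> bool:
    ctx = FakeContext()
    try:
        await asyncio.sleep(0)  # ... talk to the IOC here ...
    finally:
        await ctx.disconnect()  # explicit teardown; nothing left pending
    return ctx._task.cancelled()

print(asyncio.run(test_body()))  # prints: True
```

The `finally` guarantees teardown even when the test body raises, which is exactly the situation where dangling tasks otherwise pile up across a pytest session.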
I'm not sure what the correct behavior (from `caproto` or from the programmer :-) ) would be. I think I can identify at least the following issues:

- The `continue` statement in `asyncio/client.py`, which makes the loop go on on `RuntimeError`: it behaves correctly within a regular application (e.g. a caproto client application, or a caproto IOC which also monitors some other PV, thus also serving as an async client); it doesn't crash. But it crashes in a pytest. I think that a program that runs "successfully" should also pass a pytest.
- Is `await`-ing a `ctx.disconnect()` on a client an official requirement now? Or not?
- The error that comes up when no `ctx.disconnect()` was issued (the `Task was destroyed but it is pending!` error when using with pytest) feels like it shouldn't be there. There's likely a `.cancel()` to be issued, and an `asyncio.CancelledError` to be caught in there somewhere.
- What about removing the `...except RuntimeError` in the main asyncio client loop? Obviously it would outright crash more often, but... is there a known case where this kind of crash is not desired? Is there a reason why every other error is accepted and `continue`'d upon, or is it just a temporary development measure trying to figure out what could go wrong?

I can invest a bit of time in trying to properly fix this, but I need a bit of insight as to the inventor's intention here :-) Does anyone (@tacaswell?) have any pointers as to how to unwind this (except for "make sure your applications call `ctx.disconnect()`") on the CAproto side? Is just looking for the rogue task, starting in `VirtualCircuitManager._command_queue_loop()`, essentially all that should be done?

Thanks & Cheers,
F.