
Unexpected behavior of async response_handler callback #256

Open
returnnullptr opened this issue Feb 15, 2022 · 9 comments · Fixed by #258

Comments

@returnnullptr

Describe the bug

Changes made in #248 break the async response_handler callback behavior.

To Reproduce

Define an asynchronous response handler:

@uplink.response_handler
async def custom_response_handler(response):
    return "result"

Define a consumer method decorated this way:

class ExampleConsumer(uplink.Consumer):
    @custom_response_handler
    @uplink.get("/")
    def consumer_method(self):
        pass

Call the method:

async def main(base_url: str):
    async with aiohttp.ClientSession() as session:
        example_consumer = ExampleConsumer(
            base_url=base_url,
            client=uplink.AiohttpClient(session),
        )

        # expected: "result", actual: <coroutine object custom_response_handler at 0x7f7c7ccbc8c0>
        # In 0.9.5, asynchronous callbacks worked as expected
        result = await example_consumer.consumer_method()

        # double await works, but this is unexpected
        result = await (await example_consumer.consumer_method())

Since 0.9.6, the awaited method call returns a coroutine.

Expected behavior

A consumer method decorated this way should return the result with a single await.

Additional context

The aiohttp client is used.

@prkumar
Owner

prkumar commented Feb 20, 2022

Hey, @returnnullptr - thanks for reporting! I think you caught something deeper. #258 should fix this.

prkumar added a commit that referenced this issue Feb 22, 2022
Async callbacks are being wrapped by ThreadedCoroutine when they shouldn't be. For context, the purpose of ThreadedCoroutine is to allow sync callbacks to work with the aiohttp client. When the callback is an async function, this condition should skip that wrapping.

See #256
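
Roughly, the intended check could look like this (an illustrative sketch only, not the actual uplink source; ThreadedCoroutine is assumed to take the callback as its constructor argument):

import asyncio

def wrap_callback(callback):
    # Async callbacks are already awaitable by aiohttp's event loop,
    # so they should pass through untouched.
    if asyncio.iscoroutinefunction(callback):
        return callback
    # Sync callbacks get wrapped so they can be awaited alongside aiohttp.
    return ThreadedCoroutine(callback)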
@prkumar prkumar reopened this Feb 22, 2022
@prkumar prkumar mentioned this issue Mar 7, 2022
@HarvsG

HarvsG commented Mar 8, 2022

This also seems to have broken my app. I find that calls I await return a coroutine object and have to be awaited again.

@prkumar
Owner

prkumar commented Mar 8, 2022

Hey, @HarvsG - Got it. The fix will be part of v0.9.7, which I'll be releasing this week. ETA is March 11th.

@HarvsG

HarvsG commented Mar 8, 2022

Great, my app actually targets uplink 0.9.4, but it is used within Home Assistant, which seems to be forcing it to use a newer version with the bug.

Edit: In the meantime I have fixed the issue by ssh-ing into my machine, then running pip install uplink==0.9.5 in the relevant container.

@prkumar
Owner

prkumar commented Mar 12, 2022

v0.9.7 is live now and includes the fix for this issue.

@HarvsG

HarvsG commented Mar 13, 2022

Hmm, still getting the same error with 0.9.7.

@prkumar
Owner

prkumar commented Mar 13, 2022

@HarvsG - Interesting. I might have missed something, or you might be running into a similar issue with a different cause. Could you check out this test case and let me know if it captures your usage? (i.e., invoking a method wrapped by a response handler with an async HTTP client)

@jamur2

jamur2 commented Sep 27, 2022

This might be a different issue altogether, in which case I can file a separate ticket, but we've found a similar issue with the usage of the aiohttp client and uplink >= 0.9.6.

In our system, we enqueue many (tens of thousands of) requests and execute them with a concurrency of, say, 100. For example:

client = AiohttpClient(
    aiohttp.ClientSession(
        connector=aiohttp.TCPConnector(limit=100),
        timeout=aiohttp.ClientTimeout(total=600),
    )
)

In short, we've noticed that when using an aiohttp client in conjunction with the @retry decorator, after the first retry event, performance slows to a trickle (multiple minutes between requests) and eventually seems to stop entirely.
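
Our consumers are decorated essentially like this (a simplified sketch; the endpoint and retry arguments stand in for our real ones):

import uplink

class ExampleApi(uplink.Consumer):
    # Placeholder endpoint; in reality we have many such methods.
    @uplink.retry(max_attempts=3)
    @uplink.get("items/{item_id}")
    def get_item(self, item_id):
        pass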

We've added logging and found that these retried requests don't spend that time waiting for a connection or anything; once the on_request_start event is fired for the retried request, they immediately get a connection and execute the request. It just takes forever for them to be requeued and for on_request_start to fire.

When using the aiohttp client with uplink 0.9.5, the retry logic works as expected and is fully performant even in the case of occasional retries due to server/network errors.

I'm not sure whether this is related to the response_handler callback issue described here, but the timing and the connection to aiohttp made us think it could be.

@speksen-kb01

speksen-kb01 commented Jan 30, 2024

Using converters with response handlers seems to be another problem. I took the simple test case and rewrote it into something like this to reproduce the issue:

import asyncio

import pydantic
import uplink
import uplink.converters.pydantic_  # noqa
from uplink.clients.aiohttp_ import AiohttpClient

class MyModel(pydantic.BaseModel):
    something: str

@uplink.response_handler
async def simple_async_handler(response):
    return {"something": "somestr"}

class Calendar(uplink.Consumer):
    @simple_async_handler
    @uplink.get("todos/{todo_id}")
    def get_todo(self, todo_id) -> MyModel:
        pass

async def main():
    calendar = Calendar(base_url="https://example.com/", client=AiohttpClient())
    # Run
    response = await calendar.get_todo(todo_id=1)

asyncio.run(main())

Running this results in:

pydantic_core._pydantic_core.ValidationError: 1 validation error for mymodel
  Input should be a valid dictionary or instance of mymodel [type=model_type, input_value=<coroutine object simple_...r at 0x000001D09BE2B700>, input_type=coroutine]

I tracked it down to here:

Line 102 of uplink.hooks.TransactionHookChain.handle_response

    def handle_response(self, consumer, response):
        for hook in self._response_handlers:
            response = hook.handle_response(consumer, response)
        return response

hook.handle_response is simple_async_handler in this case, which is a coroutine function, yet its result isn't awaited. Therefore response becomes a coroutine object.

Maybe this could be fixed by introducing an async version of the Converter class that enables us to await inside the convert method.
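
Alternatively, the hook chain itself could switch to an async path once a handler returns a coroutine. A rough sketch of that idea (not the actual uplink code; the _handle_response_async helper is made up):

import inspect

def handle_response(self, consumer, response):
    handlers = list(self._response_handlers)
    for index, hook in enumerate(handlers):
        response = hook.handle_response(consumer, response)
        if inspect.iscoroutine(response):
            # An async handler returned a coroutine: finish the rest of the
            # chain asynchronously so the caller gets a single awaitable.
            return self._handle_response_async(consumer, response, handlers[index + 1:])
    return response

async def _handle_response_async(self, consumer, response, remaining):
    response = await response
    for hook in remaining:
        response = hook.handle_response(consumer, response)
        if inspect.iscoroutine(response):
            response = await response
    return response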
