
Better Error Messages. #244

Open
tacozMacleo opened this issue Aug 29, 2023 · 8 comments
Labels
enhancement New feature or request

Comments

@tacozMacleo

With given code:

import httpx
from respx.router import MockRouter


class TestErrorFeedback:
    def test_missing_path(self, respx_mock: MockRouter) -> None:
        url: str = 'https://example.com'

        respx_mock.get(url=url + '/hey').respond(status_code=200)
        httpx.get(url=url)

    def test_missing_header(self, respx_mock: MockRouter) -> None:
        url: str = 'https://example.com'
        headers: dict[str, str] = {'session': 'TheSuperSecretKey'}

        respx_mock.get(url=url, headers=headers).respond(status_code=200)
        httpx.get(url=url)

    def test_missing_params(self, respx_mock: MockRouter) -> None:
        url: str = 'https://example.com/'
        params: dict[str, str | int] = {'expand': 0}

        respx_mock.get(url=url, params=params).respond(status_code=200)
        httpx.get(url=url)

    def test_missing_json(self, respx_mock: MockRouter) -> None:
        url: str = 'https://example.com/'
        json: dict[str, str | int] = {"code": 42}

        respx_mock.get(url=url, json=json).respond(status_code=200)
        httpx.get(url=url)

it gives the same error for all test cases:
respx.models.AllMockedAssertionError: RESPX: <Request('GET', 'https://example.com/')> not mocked!.

It would be nice if it reported where the difference occurred.

@lundberg
Owner

Do you mean show all existing/added mock routes that didn't match the request?

@tacozMacleo
Author

Well, the first step would be to show all the data set on the request that was made (with the JSON/content shortened if too long).

Then it would be a matter of prioritizing what is most likely right/wrong, so the comparison can be split up into something like a binary tree.

No need to show GET routes if it was a POST request that was made.
If the HTTP method & URL fit, no need to show the other URLs.
If the HTTP method, URL & parameters fit, no need to show those that do not, and so on, all the way down until only one element is left (or until there is only one thing off for multiple candidates).
Then list the request on top, with the possible candidates below it. Maybe with a diff comparison like pytest does:

Received: GET; url: https://www.example.com; params: {expand: 0}; headers: {'session': 'TheSuperSecretKey'}; data: {}
Expected: GET; url: https://www.example.com; params: {expand: 0}; headers: {'session': 'NotTheWantedKey.'} ; data: {}
                                                                                        +++   ++++++
Expected: GET; url: https://www.example.com; params: {expand: 0}; headers: {'session': 'SomethingWrong'}   ; data: {}
                                                                                        +++++ ++++ +++
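The `+` markers above could be produced with a small helper. A minimal sketch — the function name and shape are my own illustration, not respx API:

```python
def marker_line(received: str, expected: str) -> str:
    """Put a '+' under each character position where the two strings differ."""
    width = max(len(received), len(expected))
    return "".join(
        # Positions past the end of the shorter string also count as differing.
        "+" if i >= len(received) or i >= len(expected) or received[i] != expected[i]
        else " "
        for i in range(width)
    )
```

The returned line would be printed directly under each `Expected:` line, e.g. `marker_line("'TheSuperSecretKey'", "'NotTheWantedKey.'")`.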

@lundberg
Owner

In your example you only have 1 mocked route per test case, e.g.
respx_mock.get(url=url + '/hey').respond(status_code=200)

A common pattern is to have a respx mock router that holds all available endpoints for the external api that is mocked. When a httpx request is made all routes/patterns are tested and it's only when no match is found that the assertion error mentioned is triggered. This means that it's not possible to show a single non-matching mock route, it will have to print all added mocks which may be a bit bloated?

@tacozMacleo
Author

tacozMacleo commented Aug 31, 2023

It will help a lot just to print the full request with all the data set.

So if a request was made like:

httpx.get(
    url='https://example.com/hey',
    params={'expand': 0},
    headers={'session': 'TheSuperSecretKey'},
    json={"code": 42},
)

And it did not match anything mocked, I will get an error with all the values set like:
<Request('GET', 'https://example.com/hey', params: {'expand':0}, headers: {'session': 'TheSuperSecretKey'}, json: {"code": 42})>

Then it can be enhanced by reporting the first element on the priority list that did not match.
So if no GETs were mocked, it will say: 'No GET was mocked'.
If GET was mocked but the URL was not, it will say: 'URL was not mocked', and so on.

Then it does not matter how many routes were mocked; it will only ever be two lines of error message.
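The priority-list idea could be prototyped with plain dictionaries. A rough sketch under my own assumptions — this is not respx's matching code, and real respx routes use pattern objects rather than dicts:

```python
def first_mismatch(request: dict, routes: list[dict]) -> str:
    """Name the first attribute, in priority order, on which no route matched."""
    priority = ["method", "url", "params", "headers"]
    candidates = routes
    for attr in priority:
        # Keep only the routes that agree with the request on this attribute.
        matching = [r for r in candidates if r.get(attr) == request.get(attr)]
        if not matching:
            return f"No mocked route matched the request {attr}"
        candidates = matching
    return "At least one route matched all checked attributes"
```

For example, `first_mismatch({"method": "GET", "url": "https://example.com/"}, [{"method": "POST", "url": "https://example.com/"}])` would name `method` as the first mismatch, regardless of how many routes were registered.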

@lundberg
Owner

Fair enough. Currently we're relying on the httpx representation of their Request instance ..

raise AllMockedAssertionError(f"RESPX: {request!r} not mocked!")

To print more details about the actual request, we'll need to write our own httpx request repr string, which would be doable. Not sure though how or if it should be limited, e.g. like you're suggesting with post body etc.

@tacozMacleo
Author

Have had a look into the structure of httpx.Request, and all the needed info is nicely exposed.
With the exception of the content/data/files/json, which is hidden in an internal variable, unless one wants to iterate through it one byte at a time: https://github.com/encode/httpx/blob/053bc57c3799801ff11273dd393cb0715e63ecf9/httpx/_content.py#L33

So the hardest part will be finding a string structure people can read fast and still have all the needed info. 😅

If you are okay with a httpx.Request to error-string converter function with intended use as:

    msg: str = generate_error_msg(request)
    raise AllMockedAssertionError(msg)

Maybe something like:

def generate_error_msg(rqt: httpx.Request) -> str:
    limit: int = 25
    end_len: int = 10
    start_len: int = limit - end_len - 3

    # Truncate long bodies: keep the start and the end, elide the middle.
    stream_data = (
        f"{rqt.stream._stream}"
        if len(rqt.stream._stream) <= limit else
        f"{rqt.stream._stream[:start_len]}...{rqt.stream._stream[-end_len:]}"
    )

    rqt_repr: str = (
        f"{rqt.method!r}, "
        f"{str(rqt.url)!r}, "
        f"{rqt.headers!r}, "
        f"ByteStream({stream_data})"
    )
    return f"RESPX: Mocked request not found.\nRequest({rqt_repr})"

I do not like accessing the stream data via rqt.stream._stream.
But if we want to avoid loops, I cannot see another way...

Should result in:

RESPX: Mocked request not found.
Request('GET', 'https://example.com/hey?expand=0', Headers({'session': 'TheSuperSecretKey'}), ByteStream(b'{"code": 42}'))

With long json:

RESPX: Mocked request not found.
Request('GET', 'https://example.com/hey?expand=0', Headers({'session': 'TheSuperSecretKey'}), ByteStream(b'{"code": 42,"long_text":"just something looooo'...b'ssage_id":8365}'))

@lundberg
Owner

lundberg commented Sep 1, 2023

IMO content/stream is not suited for output since it can be "whatever" and potentially give problems, e.g. binary, multipart, encoding, streaming etc.

Do not like accessing the stream data like: rqt.stream._stream.

Agree, we should not touch private stuff.

The current repr of an httpx.Request, mainly httpx.URL, shows the full path and query/params. That leaves only the headers not shown, and headers are also kind of cluttered with automatic headers like host, content-type etc.
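Filtering out the automatic headers before printing would be straightforward. A sketch under my own assumptions — both the function name and the set of headers httpx adds by default are mine, and the set is not exhaustive:

```python
# Headers httpx typically sets on its own (assumed list, not exhaustive).
AUTO_HEADERS = {
    "host", "accept", "accept-encoding", "connection",
    "user-agent", "content-length", "content-type",
}

def caller_headers(headers: dict) -> dict:
    """Keep only the headers the caller set explicitly."""
    return {k: v for k, v in headers.items() if k.lower() not in AUTO_HEADERS}
```

E.g. `caller_headers({"Host": "example.com", "session": "TheSuperSecretKey"})` would keep only the `session` header for display.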

It would be nice if it wrote where the different occurred.

Isn't the failing test case enough here? Or is it maybe the stack trace that is skewed due to the mocking and should be looked at if we can do something there?

@tacozMacleo
Author

tacozMacleo commented Sep 4, 2023

Like I said, the only other way is a loop. We can always change it to:

    # Iterating the stream yields byte chunks; take enough chunks, then cut at the limit.
    s_data = b"".join(itertools.islice(rqt.stream, limit))[:limit]
    stream_data = f"{s_data}..."

The content/stream might be "whatever", but since it is printed as bytes (which is what the example did), it does not print any non-printable or formatting characters, but their hex codes instead.

The headers are cluttered?
If that is a problem, the automatically added headers can just be removed, or better, be used to parse the content/stream.

Or is it maybe the stack trace that is skewed due to the mocking and should be looked at if we can do something there?

I do not see how that could be made to help..

But the case still stands that the only thing missing from the library is good error messages.
Optimally, the error message alone should enable you to fix the problem, no matter which end of the code you are working from...
