oauth2-token-introspection-oss - problem with long tokens #26

Open
pexi opened this issue Oct 15, 2019 · 8 comments

@pexi

pexi commented Oct 15, 2019

Tested the example configuration, and nginx seems to cut off the end of the token sent for introspection to the OAuth server.

I enabled debug logging and can see that the JS script calls /_oauth2_send_introspection_request with the full token in place. But when the request is sent to the OAuth server, the content length is trimmed to 1263 characters instead of the token's 1660 characters.
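For reference, the internal location being exercised presumably looks something like this. This is only a sketch: the location name comes from the debug log above, while the upstream URL (borrowed from the Keycloak endpoint discussed later in this thread) and the exact directives are placeholders, not the verbatim reference config.

    # Sketch only; not the verbatim oauth2-token-introspection-oss config.
    location = /_oauth2_send_introspection_request {
        internal;
        proxy_method POST;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        # The njs script puts token=<access_token> in the request body; the
        # full 1660-character token should reach the OAuth server intact.
        proxy_pass http://127.0.0.1:8080/auth/realms/dev/protocol/openid-connect/token/introspect;
    }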

@lcrilly
Collaborator

lcrilly commented Oct 18, 2019

Thanks for reporting this. Will investigate.

@idavollen

idavollen commented Jan 3, 2020

I've encountered another issue: it seems that nginx also adds extra line breaks to the incoming long access token. Therefore, although the newly generated OAuth2 access token is still fresh and valid, the proxy_pass for _oauth2_send_introspection_request always returns

{active: false}

However, when the same access token is used with Postman against the same introspection endpoint, http://localhost:8080/auth/realms/dev/protocol/openid-connect/token/introspect, it successfully retrieves the info for the subject concerned.

To further confirm this, I explicitly added a new location on the same virtual host (127.0.0.1:9590) in nginx, which proxies to the same introspection endpoint, http://localhost:8080/auth/realms/dev/protocol/openid-connect/token/introspect:

    location /auth/ {
        proxy_set_header Host 127.0.0.1:8080;
        proxy_pass http://127.0.0.1:8080;
    }

When I posted the same access token with Postman to http://localhost:9590/auth/realms/dev/protocol/openid-connect/token/introspect, the same issue was reproduced.

To investigate this, I enabled io.undertow.server.handlers.RequestDumpingHandler in Keycloak and found that nginx actually splits the long access token across multiple lines (inserting extra line breaks), which results in the active: false response from Keycloak when introspection is proxied to Keycloak via nginx.

The following examples show what the POST data looks like, first as proxied by nginx and then as POSTed directly to the introspection endpoint with Postman (note the line breaks in the first dump):

token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJUUUNjeXdscHlHdmhCWl9wSXFQN21mckVaaXl0VFl3NmFDUkY5V3h2V29rIn0.eyJqdGkiOiJhNmFmNGE1MS1mMDZkLTRhMjctODZiZS1mMDAxZWI4NGJmYmUiLCJleHAiOjE1NzgwNDMzMjUsIm5iZiI6MCwiaWF0IjoxNTc4MDQzMDI1LCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvZGV2IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImJiYWZiNDlkLWMxYzUtNDQwYS04OTEyLTNiNj
g4ZWQzOGEwNiIsInR5cCI6IkJlYXJlciIsImF6cCI6ImVtcGxveWVlLXNlcnZpY2UiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiI3ZjdjNDMwNy00OTI4LTRjMmMtYTFlOC05Y2FlMDYwMjhmY2UiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLC
J2aWV3LXByb2ZpbGUiXX0sImVtcGxveWVlLXNlcnZpY2UiOnsicm9sZXMiOlsidW1hX3Byb3RlY3Rpb24iXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsImNsaWVudElkIjoiZW1wbG95ZWUtc2VydmljZSIsImNsaWVudEhvc3QiOiJsb2NhbGhvc3QiLCJ1c2VyX25hbWUiOiJzZXJ2aWNlLWFjY291bnQtZW1wbG95ZWUtc2VydmljZSIsInByZWZlcnJlZF91c2VybmFtZSI6InNlcnZpY2UtYWNjb3VudC1lbXBsb3llZS1zZXJ2aWNlIiwiY2
xpZW50QWRkcmVzcyI6IjEyNy4wLjAuMSIsImVtYWlsIjoic2VydmljZS1hY2NvdW50LWVtcGxveWVlLXNlcnZpY2VAcGxhY2Vob2xkZXIub3JnIn0.PGI4TVlPWtQx1bhK7LPsS24TcHLAlRG1kRawaRpcO1AbDHenwa41Mg0HtriZdA_jSxhGTeYKLq-ygAppvnl7b7jKza_pWXdDVbRt7Ko88UPetmQuXIPA7C7yHkL_gQrL1XsYIyvvkxAnL_6w2odVA-OPb3zeC-ZVLcPQz6kcoyoE7BXi5GupBBsC1fwDXmwI8jk8DzbzRS8lsidnZJVS73ngJoD4x29u6LBXj2ZnmK7ZqMQyOQWBy9jAGy
CMNPwuF-wOFVG9vODfnZdgUDrJSZmsuA173AVqcdJl52l_42COYgoQxhCGFKnb22comRVeWlAj_bMBAsUH04HRLTJfyQ

token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJUUUNjeXdscHlHdmhCWl9wSXFQN21mckVaaXl0VFl3NmFDUkY5V3h2V29rIn0.eyJqdGkiOiJhNmFmNGE1MS1mMDZkLTRhMjctODZiZS1mMDAxZWI4NGJmYmUiLCJleHAiOjE1NzgwNDMzMjUsIm5iZiI6MCwiaWF0IjoxNTc4MDQzMDI1LCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvYXV0aC9yZWFsbXMvZGV2IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImJiYWZiNDlkLWMxYzUtNDQwYS04OTEyLTNiNjg4ZWQzOGEwNiIsInR5cCI6IkJlYXJlciIsImF6cCI6ImVtcGxveWVlLXNlcnZpY2UiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiI3ZjdjNDMwNy00OTI4LTRjMmMtYTFlOC05Y2FlMDYwMjhmY2UiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX0sImVtcGxveWVlLXNlcnZpY2UiOnsicm9sZXMiOlsidW1hX3Byb3RlY3Rpb24iXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsImNsaWVudElkIjoiZW1wbG95ZWUtc2VydmljZSIsImNsaWVudEhvc3QiOiJsb2NhbGhvc3QiLCJ1c2VyX25hbWUiOiJzZXJ2aWNlLWFjY291bnQtZW1wbG95ZWUtc2VydmljZSIsInByZWZlcnJlZF91c2VybmFtZSI6InNlcnZpY2UtYWNjb3VudC1lbXBsb3llZS1zZXJ2aWNlIiwiY2xpZW50QWRkcmVzcyI6IjEyNy4wLjAuMSIsImVtYWlsIjoic2VydmljZS1hY2NvdW50LWVtcGxveWVlLXNlcnZpY2VAcGxhY2Vob2xkZXIub3JnIn0.PGI4TVlPWtQx1bhK7LPsS24TcHLAlRG1kRawaRpcO1AbDHenwa41Mg0HtriZdA_jSxhGTeYKLq-ygAppvnl7b7jKza_pWXdDVbRt7Ko88UPetmQuXIPA7C7yHkL_gQrL1XsYIyvvkxAnL_6w2odVA-OPb3zeC-ZVLcPQz6kcoyoE7BXi5GupBBsC1fwDXmwI8jk8DzbzRS8lsidnZJVS73ngJoD4x29u6LBXj2ZnmK7ZqMQyOQWBy9jAGyCMNPwuF-wOFVG9vODfnZdgUDrJSZmsuA173AVqcdJl52l_42COYgoQxhCGFKnb22comRVeWlAj_bMBAsUH04HRLTJfyQ

@xeioex

xeioex commented Feb 7, 2020

@idavollen Can you share your nginx.conf, nginx -V and njs version please?

I use nginx/1.17.0 and njs 0.3.8. Cannot reproduce it.

@idavollen

@xeioex I've fixed my issue by adding the following:

    proxy_request_buffering on;

@xeioex

xeioex commented Feb 18, 2020

@idavollen

You mean inside the "/_oauth2_send_introspection_request" location, right?

proxy_request_buffering is on by default; did you have proxy_request_buffering off inherited from the http or server level?
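For illustration, this is the kind of inheritance I mean (a hypothetical config, not taken from your setup):

    http {
        proxy_request_buffering off;   # inherited by every server/location below
        server {
            listen 9590;
            location /_oauth2_send_introspection_request {
                proxy_request_buffering on;   # per-location override restores the default
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }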

@idavollen

@xeioex I didn't explicitly turn it off. However, it works well after I added the following two lines inside /_oauth2_send_introspection_request:


        proxy_set_header Host localhost:8080;
        proxy_request_buffering on;
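So the whole location would now look roughly like this (a sketch using the host, port, and realm mentioned earlier in this thread; any other directives from the example config are omitted):

        location /_oauth2_send_introspection_request {
            internal;
            proxy_set_header Host localhost:8080;
            proxy_request_buffering on;
            proxy_pass http://127.0.0.1:8080/auth/realms/dev/protocol/openid-connect/token/introspect;
        }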

@xeioex

xeioex commented Feb 19, 2020

@idavollen

Thanks for the response. Can you verify it again, please (by commenting out proxy_request_buffering on)? The reason I'm asking is that proxy_request_buffering is on by default.

@idavollen

@xeioex
I've tracked it down: it has nothing to do with request buffering after all; what matters is this config line:

    proxy_set_header Host localhost:8080;
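That would fit the symptoms: the tokens dumped above carry "iss":"http://localhost:8080/auth/realms/dev", and Keycloak (unless configured with a fixed frontend URL) derives its issuer from the incoming Host header, so an introspection request arriving with a different Host can make the token look like it was issued elsewhere and yield {active: false}. A minimal sketch under that assumption:

        location /_oauth2_send_introspection_request {
            # Make the proxied Host match the issuer embedded in the token
            # ("iss": "http://localhost:8080/auth/realms/dev") so Keycloak
            # recognizes the token as its own.
            proxy_set_header Host localhost:8080;
            proxy_pass http://127.0.0.1:8080/auth/realms/dev/protocol/openid-connect/token/introspect;
        }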
