
It's easy to DDOS your implementation #21

Open
ruslantalpa opened this issue Jun 8, 2020 · 7 comments

Comments

@ruslantalpa

Any PostgREST application relying on a pre-request function for authentication is trivial to DDoS, and there is no way to block it.
PostgREST still has PostgREST/postgrest#1094, but at least that can be mitigated easily at the proxy level.
You've fixed a small problem (a JWT can't be invalidated — which can easily be worked around and only affects one account) and created a big one: anyone without a valid session can mount a DDoS attack. Your service is trivially DDoS-able, because db-connection slots are very limited and absolutely every request, even a bad one, needs to execute this function.

Secondary point: I know the pgcrypto docs say gen_random_bytes is cryptographically strong, but I think the way you are using it is not.

from https://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/port/pg_strong_random.c

Our definition of "strong" is that it's suitable for generating random
salts and query cancellation keys, during authentication.

That function (and the underlying libraries it relies on) does not have enough entropy to be secure. (I am not a crypto expert, nor do I have experience in the field, but) if it were this trivial to generate random session IDs, PHP would just use urandom output as session IDs (as is done here at the core) and not mix in things like the user's IP, the time, etc. as entropy when generating the session ID (and even then there have been plenty of vulnerabilities in this area in the past).
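For scale, here is a back-of-envelope check of what the entropy claim amounts to. This is a sketch under the assumption that the token is 32 bytes drawn from a healthy CSPRNG (the real question raised above is whether gen_random_bytes actually delivers that); the numbers below are illustrative, not from the thread.

```python
import math

# Assumption (hypothetical, for illustration): a session token is 32 bytes
# from a CSPRNG, e.g. gen_random_bytes(32), with the OS entropy source healthy.
token_bytes = 32
entropy_bits = token_bytes * 8  # 256 bits

# Expected work factor for an attacker guessing blindly, even if a million
# sessions are live at once (any hit counts as success).
live_sessions = 1_000_000
guesses_needed = 2 ** entropy_bits / live_sessions

print(entropy_bits)               # 256
print(math.log2(guesses_needed))  # ~236 bits of work remaining
```

If the RNG really is cryptographically strong, guessing is infeasible regardless of extra inputs like IP or time; the debate above is about whether that premise holds.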

As a conclusion: this is a good/interesting "exercise", but as implemented here it is far less "secure" than plain JWTs (and certainly not a "recommended" way to implement sessions).
If you are really worried about the "revocation problem" of JWTs (which IMO is way overblown* in practice), you can trivially use something like https://github.com/bungle/lua-resty-session and replace JWTs on the fly with plain cookies at the proxy level.

* Assuming JWTs are stored as cookies and have a reasonable expiration time, like 1–2 hours, they would be extremely hard for an attacker to get their hands on (just like cookies). Even if a JWT is compromised, things like linking it to the IP from which it was generated drastically reduce what the attacker can do with a leaked JWT (and thus you don't even need to implement a "blacklist", which again is not rocket science).
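The "link the token to its originating IP" idea from the footnote can be sketched with nothing but the standard library. This is an illustrative HMAC-signed token, not a real JWT implementation; the secret, field layout, and TTL are all assumptions made up for the example.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # hypothetical key, for illustration only


def issue_token(user: str, ip: str, ttl: int = 7200) -> str:
    # Bind the token to the client IP and a 2-hour expiry, then sign it.
    exp = str(int(time.time()) + ttl)
    payload = f"{user}|{ip}|{exp}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def verify_token(token: str, ip: str) -> bool:
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
    except Exception:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    user, bound_ip, exp = payload.decode().split("|")
    # Reject tokens presented from a different IP or after expiry.
    return bound_ip == ip and int(exp) > time.time()


t = issue_token("alice", "203.0.113.7")
print(verify_token(t, "203.0.113.7"))   # True
print(verify_token(t, "198.51.100.9"))  # False: leaked token fails elsewhere
```

A stolen token is useless from another IP, which is the footnote's point about not even needing a blacklist.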

@steve-chavez
Collaborator

@ruslantalpa pgbouncer with transaction pooling (pending PostgREST/postgrest#1011) could neutralize the issue of reaching the max_connections limit.

I think systems should be able to scale down as well as up. Sessions are easier to reason about, and this approach plus pgbouncer could be well suited for intranet applications (this could be stated up front in the docs).

Still, your concerns are real. Maybe PostgREST can help with sessions. I would have to look into it more.

@ruslantalpa
Author

pgbouncer does not help with anything here. As far as making sure max_connections is never reached, the PostgREST pool serves the exact same purpose; pgbouncer/a connection pool cannot magically create more parallel db capacity. With this setup I can easily saturate the db-pool value (or the equivalent pgbouncer value) by making requests to any URL (it does not even have to be a valid one); the effect will be exactly the same as reaching max_connections: every other valid request will have to wait in line for the malicious ones to finish (which logically results in timeouts).
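The saturation math here is simple enough to write down. The pool size and per-request auth cost below are illustrative assumptions, not figures from the thread:

```python
# With a fixed pool, an attacker only needs pool_size / request_time
# requests per second to keep every slot permanently occupied.
pool_size = 10           # assumed db-pool (or pgbouncer pool) slots
auth_roundtrip_s = 0.05  # assumed cost of the pre-request auth check

attack_rps = pool_size / auth_roundtrip_s
print(attack_rps)  # 200.0 requests/second — trivial from a single machine
```

Adding more pool slots only moves the number linearly, which is why scaling up doesn't change the asymmetry between a cheap HTTP request and an expensive db-side check.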

I think systems should be able to scale down as well as to scale up

Scaling up/down cannot help with this: it's dirt cheap for an attacker to make an HTTP request, and it's very expensive for you to check session validity in the db (you are doing 4 roundtrips to the db). He does not need to be authenticated, he does not need to know valid session IDs, and he does not even need valid URLs. I've never seen compromised accounts in practice (on my services) because of CSRF or similar session-related issues, but I've seen two modest (maybe 3k rps) DDoS attacks against a marketing WP Engine site (with full-page cache enabled, nothing reaching the db), and we could not keep the site up even with country-wide blocks in the firewall. We were very grateful our API was not attacked.

Sessions are easier to reason about

While I don't see what is complicated about JWTs, that might be true for other people (though I suspect the "easy reasoning" actually comes from ready-made libraries that you can just call, which handle all the complexity for you, and not from sessions being less complex/easier to reason about than JWTs at their core).

this approach plus pgbouncer could be well suited for intranet applications

The original motivation for this approach was that "JWTs cannot be implemented securely because they cannot be revoked". If you now say this setup is only right for intranet situations, then JWT revocation in that setting is even less of an issue, so I see nothing gained by going this route.

Sessions are great and all, but this approach is not it. Also, PostgREST is "stateless", and the minute PostgREST starts to "support sessions" internally it loses that.

The right approach for sessions is to let the proxy manage them and transparently replace (or generate on the fly) JWTs for proxy–PostgREST communication.

@steve-chavez
Collaborator

it's very expensive for you to check session validity in the db (you are doing 4 roundtrips to the db)

This could probably be improved in the authenticate code. It should not do extra queries (or any other logic) if request.cookie.session_token is null.
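The shape of that early exit can be sketched as follows. This is an illustrative Python rendering of the idea (the real authenticate code is a Postgres pre-request function; `lookup_role` is a hypothetical stand-in for the session-table query):

```python
def lookup_role(token: str) -> str:
    # Hypothetical stand-in for the real db lookup against the sessions table.
    return "webuser" if token == "valid" else "anon"


def authenticate(cookies: dict) -> str:
    token = cookies.get("session_token")
    if not token:
        # Early exit: tokenless requests fall through to the anon role
        # without issuing any lookup queries.
        return "anon"
    return lookup_role(token)  # only here do we touch the db


print(authenticate({}))                          # anon
print(authenticate({"session_token": "valid"}))  # webuser
```

This removes the per-request db cost for requests that carry no cookie at all, though a flood of requests with random (invalid) tokens would still reach the lookup.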

but i've seen 2 modest (maybe 3k rps) ddos attacks against a marketing wpengine site (with full page cache enabled, nothing reaching the db) and we could not keep the site up even with country wide blocks in the firewall.

Under these scenarios, wouldn't every unauthenticated (anon) pgrst route put the db at peril?

The only alternative I see here is caching, in which case PostgREST/postgrest#1176 could help.

But uncacheable requests that always hit the origin server (the db), like login/signup, would always carry the risk of DDoS. In which case, maybe pg shouldn't be in charge of doing login/signup?

@ruslantalpa
Author

Under these scenarios, wouldn't every unauthenticated(anon) pgrst route put the db at peril?

If an API route is public (and it needs to be public), then there is no difference between PostgREST and any other classic API implementation: a request comes in, and it needs to reach the db. To protect such cases, every case is different — some need caching, some need rate limiting — none of which is the task of PostgREST. Yes, these types of URLs need some defense, but they are the exception, not the rule.
The auth as implemented here turns every private URL (which generally doesn't need defending at the proxy, since you can just ban one user) into a public one.

But uncacheable requests that always hit the origin server(db) like login/signup would always carry the risk of ddos

Login/signup routes (which generally are the only public routes) are very easy to defend (at the proxy level), since you can impose drastic limits on calling them (like no more than 1 rps, and after 3 failed attempts... ban), even for valid users, so I see no risk here. You cannot have such restrictions on other URLs.
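The policy described — at most 1 rps per IP plus a ban after 3 failures — is easy to sketch. In practice this would live in the proxy (nginx/lua, Kong, etc.), not application code; the window, threshold, and names below are illustrative assumptions:

```python
from collections import defaultdict

WINDOW_S = 1.0      # assumed: at most one attempt per second per IP
MAX_FAILURES = 3    # assumed: ban after three failed logins

last_seen = {}                 # ip -> time of last accepted attempt
failures = defaultdict(int)    # ip -> failed-login count
banned = set()


def allow_login_attempt(ip: str, now: float) -> bool:
    if ip in banned:
        return False
    if ip in last_seen and now - last_seen[ip] < WINDOW_S:
        return False  # more than 1 rps from this IP: reject
    last_seen[ip] = now
    return True


def record_failure(ip: str) -> None:
    failures[ip] += 1
    if failures[ip] >= MAX_FAILURES:
        banned.add(ip)


print(allow_login_attempt("198.51.100.1", 0.0))   # True
print(allow_login_attempt("198.51.100.1", 0.5))   # False (inside 1s window)
for _ in range(3):
    record_failure("198.51.100.1")
print(allow_login_attempt("198.51.100.1", 10.0))  # False (banned)
```

Because the check happens before any db work, a flood against the login route costs the attacker requests but costs the origin almost nothing — which is the asymmetry the comment relies on.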

@steve-chavez
Collaborator

login/signup routes (which generally are the only public routes)

Hmm.. but since we allow anon (no-JWT) routes, every pgrst route is public by default. Even if the requests fail because of lack of permissions, they still execute the query and start a transaction.

Rejecting requests without an Authorization header at the proxy has been the recommended solution for this until now. But I'm thinking we should offer a config option for requiring a JWT on every request, otherwise rejecting it without touching the db.

(Comment with a related request: PostgREST/postgrest-docs#326 (comment))

In this case a JWT would also be required for anon access, so setting this JWT would still be the job of the proxy. However, we could save the custom logic that would otherwise have to be written for each proxy (Nginx, Kong, Caddy, etc.).
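The gate being proposed is small: check for a bearer token before a connection is ever taken from the pool. This is a sketch of the idea only — `JWT_REQUIRED` is an imagined config flag, not an actual PostgREST setting:

```python
from typing import Optional, Tuple

JWT_REQUIRED = True  # hypothetical config option, for illustration


def pre_db_gate(headers: dict) -> Optional[Tuple[int, str]]:
    """Return an error response, or None to let the request reach the db."""
    auth = headers.get("Authorization", "")
    if JWT_REQUIRED and not auth.startswith("Bearer "):
        # Rejected before any transaction starts or pool slot is used.
        return (401, "JWT required")
    return None


print(pre_db_gate({}))                                     # (401, 'JWT required')
print(pre_db_gate({"Authorization": "Bearer abc.def.ghi"}))  # None
```

The point of building it in rather than leaving it to the proxy is exactly what the comment says: one implementation instead of per-proxy custom logic.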

(like no more than 1 rps, and after 3 failed... ban)

Idea: if every anon gets a different JWT, this logic could be included in the db login/signup functions.

@ruslantalpa
Author

This JWT-for-anon idea is the wrong direction, IMO. Everything is fine as it is. Don't go down the route (like Hasura, for example) of trying to implement everything internally — things that are better handled in the proxy or the db. PostgREST should (and that was always the plan) do its part and empower the db and the proxy to do their jobs, and should only take on tasks that cannot be done in other parts of the stack (like prepared statements, aggregates, multi-table inserts, PostGIS filters — those are truly useful features that open up whole new use cases; the other things are mostly complexity without much return).

@EdmundsEcho

EdmundsEcho commented Mar 1, 2022

I've been using this to accelerate development. Leveraging the inherent power of Postgres as you have is very attractive.

The discussion regarding DDOS attacks is helpful to me.

A couple of observations/questions

  1. Can much of the issue be addressed in a Kubernetes context, whereby the app provides a service that is only accessible within the cluster network?

  2. To protect against DDoS, isn't throttling a generic solution? What prevents that approach here?

  3. Am I missing a bigger point?

- E
