
Connection pooling in Cluster mode #124

Open
VishalGupta04 opened this issue Dec 17, 2018 · 10 comments

@VishalGupta04

Hello,

I have two questions related to the lua-cassandra driver, specifically in Cluster mode:

  1. I see another issue with the same heading but don't see any details around it. Can you please clarify whether connection pooling is supported and, if so, how to set the maximum number of connections and other connection details?
  2. The README, under the cluster section, says it supports "all features from the cassandra module", but I don't see support for the prepare function call in cluster mode. Please advise.

Thanks.

@thibaultcha
Owner

thibaultcha commented Dec 18, 2018

Hi,

  1. Can you please clarify whether connection pooling is supported and, if so, how to set the maximum number of connections and other connection details?

Setting a maximum number of concurrent connections is not supported by OpenResty as of today. Connection pooling is for idle connections only. Connection queuing will be supported in the next OpenResty release.

As of today, you can use a request-aware LB policy (req_rr or req_dc_rr), which ensures that the driver uses the same connection when several queries are made during the same request/response lifecycle. Not only does this reduce the number of handshakes, it can also reduce the number of concurrent connections open against Cassandra at a given time. Of course, there ultimately isn't a hard limit as of today (as stated above), so it'll depend on your traffic pattern and query frequency.
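For illustration, a minimal sketch of creating a Cluster with a request-aware LB policy might look like the following; the resty.cassandra.policies.lb.req_rr module path and the option names are assumptions based on the policy names mentioned above, so check the driver documentation for the exact spelling:

local Cluster = require "resty.cassandra.cluster"
-- assumed module path for the request-aware round-robin policy
local req_rr = require "resty.cassandra.policies.lb.req_rr"

local cluster, err = Cluster.new {
  shm = "cassandra",                -- lua_shared_dict declared in nginx.conf
  contact_points = { "127.0.0.1" },
  lb_policy = req_rr.new(),         -- reuse the same peer within a request
}
if not cluster then
  ngx.log(ngx.ERR, "could not create cluster: ", err)
end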

  2. The README, under the cluster section, says it supports "all features from the cassandra module", but I don't see support for the prepare function call in cluster mode. Please advise.

The cluster module supports auto-preparation of queries by simply specifying the { prepared = true } query option. See how the cluster:execute() documentation mentions that it receives such an options table (so do cluster:iterate() and cluster:batch()).

E.g.

local args = nil -- bind arguments for the query, if any
local options = { prepared = true }
local rows, err = cluster:execute("SELECT * FROM my_table", args, options)

@VishalGupta04
Author

Thanks for your quick response. I made the suggested changes, but I still see many socket timeouts. I am running performance tests and seeing that the driver creates a new connection to the DB server for almost every request.
Any suggestions would be highly appreciated. Thanks.

@thibaultcha
Owner

I cannot help you without seeing some code. Most likely, you are doing something wrong; OpenResty programming is tricky. Are you correctly closing opened connections, or keeping them alive? Are you properly reusing the same connection across a given request?

How many connections are opened against your nodes also depends on the traffic pattern. If the number of connections you open concurrently is higher than the size of your idle connection pool, there is a high chance of opening a new connection for each query. But again, it is more likely that you are misusing this driver - OpenResty programming can be quite tricky.
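To illustrate the pattern in question (a sketch, not code from this issue), correct single-host usage with the cassandra module roughly follows the connect/execute/setkeepalive cycle below; the host, port and table name are placeholders:

local cassandra = require "cassandra"

-- one peer per request; host and port are placeholders
local peer = cassandra.new {
  host = "127.0.0.1",
  port = 9042,
}
peer:settimeout(1000)

local ok, err = peer:connect()
if not ok then
  ngx.log(ngx.ERR, "could not connect: ", err)
  return ngx.exit(500)
end

local rows, err = peer:execute("SELECT * FROM my_table")
if not rows then
  ngx.log(ngx.ERR, "query failed: ", err)
end

-- return the connection to the idle pool instead of closing it,
-- so subsequent requests can reuse it
local ok, err = peer:setkeepalive()
if not ok then
  ngx.log(ngx.ERR, "could not set keepalive: ", err)
end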

@MaximeFrancoeur

@thibaultcha Any update about:

Setting a maximum number of concurrent connections is not supported by OpenResty as of today. Connection pooling is for idle connections only. Connection queuing will be supported in the next OpenResty release (we are working on the release itself, which should be soon, but please do not ask "when" :)).

Thanks

@thibaultcha
Owner

@MaximeFrancoeur Sure, connection queuing is available since OpenResty 1.15.8.1. See the new pool_size and backlog options of the tcpsock:connect method.
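For reference, those options belong to the raw ngx_lua cosocket API (they are not currently exposed through lua-cassandra); a sketch of what they look like, with values chosen purely as examples:

-- raw ngx_lua cosocket API, not the lua-cassandra API; pool, pool_size and
-- backlog are connect() options available since OpenResty 1.15.8.1
local sock = ngx.socket.tcp()
local ok, err = sock:connect("127.0.0.1", 9042, {
  pool = "cassandra:127.0.0.1:9042",  -- explicit pool name (optional)
  pool_size = 30,                     -- max connections kept in this pool
  backlog = 64,                       -- queue further connect attempts once
                                      -- pool_size connections are in use
})
if not ok then
  ngx.log(ngx.ERR, "connect failed: ", err)
end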

@MaximeFrancoeur

@thibaultcha Do you have documentation about this for lua-cassandra?
Or how can we create a Cassandra connection pool?

@thibaultcha
Owner

@MaximeFrancoeur Connection pools are already maintained as per the default behaviour documented in the ngx_http_lua module's README. If you want more control over the connection pools, PRs are welcome.

@MaximeFrancoeur

@thibaultcha Just to be sure: when we create a Cassandra Cluster, it creates a pool with zero connections. When the first request comes in, it creates a connection with a 60-second idle timeout.
If the connection in the pool is already taken by another request, it creates another one, up to a maximum of 30 (the default value).

Is this right?

@thibaultcha
Owner

@MaximeFrancoeur Pools are created by calling ngx_lua's tcpsock:setkeepalive.
If you use the Cluster module, it calls cassandra:setkeepalive() (see here), which itself calls host:setkeepalive() (see here).
By default, pools are maintained per host:port tuple and contain up to 30 connections (there should thus be one connection pool per Cassandra peer the Cluster module is connecting to). Since OpenResty 1.15.8.1, it is possible to customize the pools, including their size and the size of the backlog queue; see the options of the tcpsock:connect method.
Note that OpenResty 1.13.6.2 and earlier used to specify the pool_size option in the setkeepalive() method (see an earlier version of the ngx_lua documentation).

As of today, this driver does not support customization of the connection pool (hence my comment for a PR above), but default pools are still in effect.

Finally, note that connection pools can also be customized (albeit globally) via the lua_socket_pool_* directives of the ngx_lua module, e.g. lua_socket_pool_size.
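For completeness, a sketch of those global directives in nginx.conf; the values shown are simply the ngx_lua defaults (30 idle connections per pool, 60-second keepalive timeout):

http {
    # global defaults for every cosocket connection pool created by ngx_lua,
    # which includes the pools used by lua-cassandra
    lua_socket_pool_size 30;
    lua_socket_keepalive_timeout 60s;

    # ... rest of the http block
}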

@thibaultcha
Owner

A note on my previous comment in this thread:

Setting a maximum number of concurrent connections is not supported by OpenResty as of today.

It is now supported as of OpenResty 1.15.8.1 (as mentioned in my comment above). See the backlog option of the tcpsock:connect() method.
