
Implement passive stateful tracking of backend server availability #186

Open
Castaglia opened this issue Nov 14, 2020 · 1 comment

@Castaglia (Owner) commented Nov 14, 2020

The scope of this feature is to implement passive tracking (i.e. keeping of state) of the availability of backend servers, per the discussion here: #144 (comment)

By "passive", I mean that mod_proxy should record, in its database tables, when it fails to connect to a backend server (and the failure reason), and use that information for subsequent connections to that same backend, to e.g. skip that backend and use another. Thus mod_proxy will "passively" watch/track connections as they are triggered by frontend client connections, as opposed to a more active approach (which will be the focus of a future ticket/enhancement).

This feature will need to take into account a group/pool of backend addresses, as returned by a single DNS query, e.g. multiple A, SRV, TXT records. Should the entire group/URL be treated as unavailable if only one of them has issues?

In addition, the implementation will need to account for the decaying value of this state over time. Consider connecting to a particular backend, which then fails, and is marked for skipping. For how long should it be skipped? If all discovered backend addresses are marked as "unavailable", should mod_proxy try connecting to one of them anyway?

@wasabii commented Nov 21, 2020

To weigh in: you can make this as complicated or as simple as you want. Some proxies in other fields treat the entire backend as one sorted pool: failing connections move to the bottom for a time.

Others (Azure Application Gateway, for instance) allow nested pools, with complicated failure states. This is useful for geo-distribution. For instance, you might have an East US pool and a West US pool, combined into a single root pool. The top-level pool selection might be based on Geo-IP, while the nested pools select based on health checks and weight or speed tests.

I don't think that level of complication is necessary for ProFTPD. I'd probably form one flat pool of IP addresses: simply recurse down the configured hostnames until there are none left.

For instance, say I configure ftp://a.domain.com and ftp://b.domain.com, where a.domain.com has an SRV record that returns 2 records, each of whose CNAMEs resolves to 4 IP addresses, and b.domain.com is a plain A record. I'd just expand that out to 9 IP addresses in total. I guess you'd have to figure out how to sort based on SRV weight, alongside A records which carry no such field.
