
Support port ranges or whole IPs in services #23864

Open
adimania opened this issue Apr 5, 2016 · 160 comments
Assignees
Labels
area/kube-proxy kind/feature Categorizes issue or PR as related to a new feature. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@adimania
Contributor

adimania commented Apr 5, 2016

There are several applications, like SIP or RTP apps, which need a lot of ports to run multiple calls or media streams. Currently there is no way to specify a port range in the spec, so essentially I have to do this:

          - name: sip-udp5060
            containerPort: 5060
            protocol: UDP
          - name: sip-udp5061
            containerPort: 5061
            protocol: UDP

Doing the above for 500 ports is not pretty. Can we have a way to allow port ranges, like 5060-5160?
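For illustration, the kind of syntax being asked for might look like the sketch below. This is hypothetical only: no range field (the containerPortRange name is made up here) exists in the Pod or Service API today.

    ports:
      # Hypothetical syntax only; no such range field exists in the API.
      - name: sip-udp
        containerPortRange: 5060-5160
        protocol: UDP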

@bprashanth
Contributor

FYI, you don't actually need to specify a port in the RC unless you're going to target it through a Service, but I guess this is a pain to do if you have even O(10) ports. I wonder if there's an easy client-side solution that doesn't involve changing the Service (something like kubectl expose --port 3000-3020).
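For context, a purely client-side expansion like that would just generate an ordinary Service with one entry per port, along these lines (a sketch; the service name, selector, and port numbers are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: sip-media            # illustrative name
    spec:
      selector:
        app: sip-media           # assumes pods labeled app=sip-media
      ports:
        - name: udp-3000
          protocol: UDP
          port: 3000
          targetPort: 3000
        - name: udp-3001
          protocol: UDP
          port: 3001
          targetPort: 3001
        # ...one entry per port in the range, which is exactly what becomes unwieldy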

@bprashanth bprashanth added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Apr 6, 2016
@thockin
Member

thockin commented Apr 6, 2016

The problem with port ranges is that the userspace kube-proxy can't handle them, and that is still the fallback path. Until/unless we can totally EOL that, we're rather limited.

Aside from that, I don't immediately see a reason against port ranges.

@adimania
Contributor Author

adimania commented Apr 6, 2016

I do need to target the ports through a Service since that is the only way outside users would be able to place a call via SIP

@thockin
Member

thockin commented Apr 6, 2016

The short story is it doesn't work right now. Changing it is not impossible, but we would need to define what happens when the external LB and/or kube-proxy doesn't support ranges. It's not on the roadmap, though, so it would either have to be contributed or escalated.

@bgrant0607 bgrant0607 removed the team/ux label Apr 6, 2016
@antonmry

antonmry commented May 8, 2016

I'm starting to work on this issue. I will try to implement just the logic in the kubectl package to map port ranges to individual ports, which seems like an easy and useful workaround. Later I will check the LB and/or kube-proxy option.

Golang and Kubernetes are new to me, so any help, ideas, or guidance will be welcome.

@lavalamp
Member

@antonmry What API changes are you looking to do to support this? I glanced at your dev branch and it looks like there's a lot more copy-paste going on than I'd expect. I think I can save you effort if you talk out your desired changes here first.

@bgrant0607
Member

@antonmry Please see:
https://github.com/kubernetes/kubernetes/tree/master/docs/devel/README.md
https://github.com/kubernetes/kubernetes/blob/master/docs/devel/faster_reviews.md
https://github.com/kubernetes/kubernetes/blob/master/docs/api.md
https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md
https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api_changes.md

In general, if you're new to Kubernetes, an API change is not a good place to start. If you're looking for a way to contribute, please check out issues labeled "help-wanted":
https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted

It was also stated above that we are not ready to accept this change.

@thockin
Member

thockin commented May 16, 2016

This change, should we decide to do it, doesn't warrant a new API. It's just some field changes to Services and maybe Pods. The right place to start is with a proposal doc that details the change, the API compatibility concerns, the load-balancer implementation concerns, etc.

@brendandburns
Contributor

@bgrant0607 @antonmry needs this change to make TeleStax (an RTC framework) work well in Kubernetes, so this isn't a "for fun" request, but rather one that's needed to support their application well.

I agree w/ @lavalamp and @thockin that there are simpler designs, and that a lightweight design proposal is the right way to get to agreement on design quickly.

@bgrant0607 bgrant0607 changed the title Cannot allow a port range in replication controller Cannot allow a port range in service May 16, 2016
@bgrant0607 bgrant0607 changed the title Cannot allow a port range in service Support port ranges in services May 16, 2016
@bgrant0607
Member

@brendandburns I assumed there was a reason behind the request, but API changes are tricky and expensive. This particular change has been discussed since #1802.

In the best case, any API change would appear in 1.4. If it is blocking an application now, I suggest finding a solution that doesn't require an API change, potentially in parallel with an API and design proposal.

As mentioned above, ports don't need to be specified on pods. The trick is how to LB to them. Perhaps @thockin or @bprashanth has a suggestion.
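To illustrate the point about pod ports: a Service can target a port even when the pod spec declares no containerPort at all, since containerPort entries are essentially informational. A minimal sketch, with illustrative names and a placeholder image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sip-app                # illustrative
      labels:
        app: sip-app
    spec:
      containers:
        - name: sip
          image: example/sip-app   # placeholder image
          # no ports: section needed; the Service below still reaches port 5060
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sip
    spec:
      selector:
        app: sip-app
      ports:
        - name: sip-udp
          protocol: UDP
          port: 5060
          targetPort: 5060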

@brendandburns
Contributor

@bgrant0607 totally agree that this is targeted at 1.4; I just wanted to give context on the motivation for the change.

@lavalamp
Member

Yeah. Like @thockin is hinting, I also expected to see this start out as annotations on pod and/or service objects.

(as much as I hate that this is how new fields start, this is how new fields start.)

@antonmry

Hi @bgrant0607, @thockin, @brendandburns, @lavalamp

I'm new to Kubernetes development, so I was trying different things to test and see how the API works. As soon as I realized the complexity of the issue, it was clear that none of my dev branches were intended to be a proposal or anything similar, so please ignore them.

As far as I understand from your comments, for this issue it's better to have a new annotation than a new API version. I started a new API version as a way to keep my development totally separate and then start the discussion. Also, since containerPort is mandatory, even with an annotation to indicate the range I don't see how to avoid the mandatory containerPort without changing the API. How can I do that?

Finally, please feel free to manage this issue independently of the work I'm doing here. I would like to contribute, and I appreciate your guidance and help, but I don't expect you to accept my solution or a hypothetical PR, even though I would like to get to that point.

@lavalamp
Member

One idea is to put the beginning of the range in the current field, and use an annotation of the form alpha-range-end-<port name>=<range end value> or some variation thereof, depending on how exactly the mapping will work. This is kinda hokey but should do the trick.
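A sketch of how that might look on a Service, purely hypothetical: the alpha-range-end-<port name> annotation was never implemented, and nothing in Kubernetes reads it.

    apiVersion: v1
    kind: Service
    metadata:
      name: rtp-media                    # illustrative
      annotations:
        # hypothetical annotation; no component consumes this
        alpha-range-end-rtp: "5160"
    spec:
      selector:
        app: rtp-media
      ports:
        - name: rtp                      # range start stays in the existing field
          protocol: UDP
          port: 5060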

@ex3ndr

ex3ndr commented Sep 5, 2016

We also need to expose a large port range (10k+ ports) for RTP/WebRTC and SIP frameworks. Since the SIP stack is usually not stable and in some parts very unpredictable, this is really needed. BTW, exposing a port range is not quite what most of us want: we have dedicated IPs for media traffic, and it would be nice to just map all traffic for a specific IP to a service, something like a "dmz", with the ability to expose port ranges for it.

@thockin
Member

thockin commented Sep 6, 2016

I'd be OK to see some alpha feature enabling ranges (including whole IP)

@e-nDrju

e-nDrju commented Oct 3, 2016

+1 to this idea. Specifying port ranges in service declarations is mandatory for any VoIP-related application, and it would be great to have this done. Has anyone figured out a temporary workaround for this? Thanks in advance.

@bpulito

bpulito commented Oct 13, 2016

+1. We also need support for a large port range to enable a media processing engine running in a Docker container that is part of a Pod. As others have mentioned, this is needed to process RTP media streams. The SIP protocol is limited to 5060 and 5061, so we don't need it for the SIP signaling; the issue is with the standard RTP port range, which is 16384-32767. It's important to understand that these ports are typically allocated dynamically when a new media session is being processed, but they need to be accessible on the public IP associated with the container.

I agree that it's critical for anyone wishing to use Kubernetes to orchestrate a VoIP service that includes any form of media processing, including gateways, call recording services, MCUs, etc. I'm new to Kubernetes, but it seems that this is one of the few things holding us back from using it at this point. I'm interested in helping here as well.

@jeremyong

+1. Welp, also ran into this with a media server application (we need host networking to have a stable static IP, and clients connect on the standard RTP port range).
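The host-networking workaround being described is roughly the pod spec below, which bypasses Services entirely for the media ports (a sketch; the name and image are placeholders, and it ties the pod to its node's IP and port space):

    apiVersion: v1
    kind: Pod
    metadata:
      name: media-server                 # illustrative
    spec:
      hostNetwork: true                  # pod shares the node's network namespace
      containers:
        - name: media
          image: example/media-server    # placeholder image
          # RTP ports (e.g. 16384-32767) are reachable directly on the node's IP,
          # subject to external firewall rules; no Service or kube-proxy involved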

@cammm

cammm commented Nov 2, 2016

Yes please, we would like this for a custom game server proxy which maps a range of external ports to petsets inside the cluster. UDP traffic, very similar to voice or media server applications I suppose.

Bonus points for being able to expose this service and have the GKE firewall rules work with the port range also.

@thockin
Member

thockin commented Feb 2, 2023

For future readers:

Port ranges specifically are problematic because implementations like IPVS do not support them well and because NodePorts are very limited.

Doing whole-IP forwarding (either L4 or L3) is plausible but the work on that stalled (kubernetes/enhancements#2611) for lack of someone to drive it.

It may be better to focus on Gateway API (https://gateway-api.sigs.k8s.io/) as the vehicle to enable this, long term, but it doesn't seem impossible to do it in Service, if we can justify the use.
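For readers exploring the Gateway API route: today a Gateway listener still carries a single port, so ranges or whole-IP forwarding would need an extension there too. A minimal sketch of the current shape, assuming an experimental UDPRoute implementation and an illustrative GatewayClass name:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: media-gateway                # illustrative
    spec:
      gatewayClassName: example-class    # assumes such a class exists
      listeners:
        - name: sip
          protocol: UDP
          port: 5060                     # one port per listener; no range support
    ---
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: UDPRoute                       # experimental-channel resource
    metadata:
      name: sip
    spec:
      parentRefs:
        - name: media-gateway
          sectionName: sip
      rules:
        - backendRefs:
            - name: sip-service          # illustrative backend Service
              port: 5060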

@thockin thockin changed the title Support port ranges in services Support port ranges or whole IPs in services Feb 2, 2023
@thockin thockin removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Feb 16, 2023
@thockin
Member

thockin commented Feb 16, 2023

Revisiting this old issue after discussion at sig-net today.

We still think this is valuable. Port ranges have some implementability problems. Whole IPs have some API problems. Of the two, I think API problems are probably more tractable.

We should probably implement this as a GatewayClass rather than literally in Service API. (@robscott @danwinship)

Like so many things, it needs a champion to drive it.

@aojea
Member

aojea commented Feb 16, 2023

Like so many things, it needs a champion to drive it.

/unassign @prameshj
/assign

not committing to the short term, but maybe in the long term

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2023
@briantopping
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2023
@shaneutt
Member

@briantopping could you please add some context to your update?

@briantopping
Contributor

Hi @shaneutt, no problem. A backlog project of mine has been to host a SIP telephone system in Kubernetes, and it doesn't seem like we are there yet. Is there another way to resolve this issue? It seems like this issue would have been closed if so.

@shaneutt
Member

Thanks for the update!

@thockin pointed out that we could potentially sort this out in Gateway API, which is an increasingly preferable place to put functionality of various sorts due to its much wider API surface area compared to Service, and its multitude of implementations. I agree that it's worth looking into this over there, but as to his last point: we are still looking for a champion to drive this forward, someone to take this on and work towards its future (whether that be in Service or Gateway API, which is something they'll need to navigate).

@briantopping
Contributor

Got it, makes great sense! At the very least I was keeping the issue from getting lost, and I agree that there's no reason to implement this in Service if it can be resolved in the Gateway API. For our use case there's no reason to divert resources that could be better spent over there.

Maybe this issue and its history should be moved to that project, then annotated with a brief narrative that it came from the Service background, and/or labelled.

I wonder if there are a significant number of issues that would qualify for this treatment?

I haven't read the full initiative there yet, but I'm already a huge fan. Whatever we can do to avoid losing the momentum that issues like this have built over the years would benefit the direction it takes.

Happy to chat on Slack as well if there's some place I can help.

@shaneutt
Member

We appreciate the interest, if you have some time to help move it forward that would be great!

Maybe this issue and it's history should be moved to that project, then annotated with a brief narrative that it came from the Service background and/or labelled.

Sure, in Gateway API we have a process called GEP that is inspired by KEP, but with a very heavy preference towards small PRs along an iterative graduation path. You'll wanna start by floating the idea around in the community (on Slack, like you said, or preferably at our weekly community meetings, where the agenda is open) to gather some thoughts and try to build consensus. Once you've started to get some signal from the community, a GEP to make a proposal and highlight the problem to be solved might be a good next step.

I wonder if there are a significant number of issues that would qualify for this treatment?

Indeed, this is what I meant by "increasingly preferable": we are finding ourselves fatigued and guarded against Service changes as it's ended up having scope problems. We are starting to think more and more about how the multi-resource and wider scoped Gateway API may be a better place for relevant functionality going forward.


Let us know how we can support you in driving this forward if you decide you want to dig in! #sig-network-gateway-api on Kubernetes slack is our main channel for help and ideas.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2024