
Allow RegExp or wildcard for ingress domain and/or path #41881

Closed
deitch opened this issue Feb 22, 2017 · 115 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/network Categorizes an issue or PR as relevant to SIG Network. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@deitch
Contributor

deitch commented Feb 22, 2017

Is this a request for help?

No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

  • ingress wildcard
  • ingress regexp

Is this a BUG REPORT or FEATURE REQUEST?

Feature request, although if it already exists and I just missed it, then a documentation bug.


As far as I can tell, k8s ingress supports (in theory) subdomain wildcards for ingresses (assuming the given controller supports them, of course), such as *.mydomain.com. There appear to be some confusing issues around this (re: #39622), but in theory it is supported.

  1. I cannot find this anywhere in the official documentation. Is it documented?
  2. Is there support for general wildcards?

The specific use case is as follows. I have a set of k8s resources being deployed to multiple environments, e.g. prod, uat, qa. Each one has its own inbound root domain, e.g. prod.mydomain.com, qa.mydomain.com, uat.mydomain.com.

When I create my k8s resources, I want to use identical ones between all the environments. If I have 3 exposed services (e.g. web, api, magic) in each domain, routed by subdomain, I shouldn't require 3 separate ingresses, one for each environment, per exposed service. Instead, I want to do something like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: "web.*"
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80

(repeat for services api and magic)

This allows me to tell my devs:

  1. Pick your environment with kubectl config set-context prod (or uat or qa)
  2. Deploy consistently with kubectl apply -f configs/

Did I miss that somewhere? Or is it not currently supported?

@cmluciano

wildcard support is in kubernetes/ingress-nginx#8

@deitch
Contributor Author

deitch commented Feb 22, 2017

@cmluciano thanks. I saw that earlier, but didn't think it answered this, because I thought:

  1. It mainly refers to the nginx implementation as an ingress controller, not the ingress spec.
  2. It allows wildcards for the _sub_domain, but doesn't appear to allow them for the parent domain.

I want to be able to do

host: "web.*"

And thus have any request whose host header starts with web. match.

@deitch
Contributor Author

deitch commented Feb 27, 2017

@cmluciano I tried it, and got an invalid spec error.

@hjacobs

hjacobs commented Feb 27, 2017

I think the Ingress spec should clearly state how to treat wildcards/regexp. We have our own Ingress controller and HTTP proxy (Skipper), i.e. we would like to have a clear spec to implement (and not do our own potentially incompatible extension).

@deitch
Contributor Author

deitch commented Feb 27, 2017

@hjacobs I looked at the source code, and it seems only to allow it as the left-most element.

Personally, I would be content to have host: myhost and have the ingress controller know to append .mysub.mydomain.com. But I have yet to find a controller that supports that.

@klausenbusk
Contributor

+1 for this feature, although I don't need regex. I just want a way to specify multiple hosts for an ingress.

Like: kubernetes/ingress-nginx#87

Nginx supports it (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name); this would remove quite a bit of boilerplate in my app. I would make a PR, but Go is not my competency.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - hosts:
    - 1.myawesomesite.co
    - 2.myawesomesite.co
    http:
      paths:
      - path: /
        backend:
          serviceName: awesomesite-web
          servicePort: 80
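For what it's worth, the current spec can already express the multi-host case, though only verbosely, by repeating the whole rule once per host:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: 1.myawesomesite.co
    http:
      paths:
      - path: /
        backend:
          serviceName: awesomesite-web
          servicePort: 80
  - host: 2.myawesomesite.co
    http:
      paths:
      - path: /
        backend:
          serviceName: awesomesite-web
          servicePort: 80

That is exactly the boilerplate the proposed hosts list would eliminate.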

@deitch
Contributor Author

deitch commented Apr 7, 2017

No additional thoughts on this?

@rdsubhas

rdsubhas commented May 2, 2017

Yep, this is somewhat essential for using minikube or kubernetes from dev to prod. We have dozens of developers using minikube and hosted/remote kubernetes VMs with IPs. Not everyone can be given full domain names (/etc/hosts overrides generally just shove this down the line into a DNS problem).

Almost all routers that I know of (haproxy, nginx, etc.) allow routing based on trailing wildcards, i.e. only the subdomain matters and the domain doesn't. But here it's the reverse: the domain matters and only the subdomain is allowed a wildcard, which seems to be a not-so-common routing policy. I believe this is because Ingress was designed with some specific limitations in mind - like the ambiguity around how many trailing labels a wildcard should match, since a wildcard in a domain matches only up to a dot.

Would make a lot of sense to expose trailing wildcards, and allow the Ingress controller and/or application developer and/or cluster operator to take the final call, instead of having an unnecessary limitation.

@deitch
Contributor Author

deitch commented May 2, 2017

Thanks @rdsubhas ; well explained.

@rdsubhas

rdsubhas commented May 2, 2017

@deitch until then, if anyone needs to do this in the official nginx ingress, here is a shortcut:

  • Set host: <just the subdomain> in Ingress rule, e.g. host: myapp
  • And then use an init container in the nginx deployment (or daemonset) to replace server_name {{$server.Hostname}}; in the default nginx template with server_name {{$server.Hostname}} {{$server.Hostname}}.*;
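A rough sketch of that init container (the image, template paths, and volume names below are assumptions; the exact template location varies by ingress-nginx version, so check yours):

# Hypothetical init container for the nginx ingress controller Deployment.
# Assumes the stock template is mounted at /original and the controller
# reads the patched copy from /patched.
initContainers:
- name: patch-nginx-template
  image: busybox
  command:
  - sh
  - -c
  - |
    # rewrite "server_name {{$server.Hostname}};" to also match "{{$server.Hostname}}.*"
    sed 's|server_name {{$server.Hostname}};|server_name {{$server.Hostname}} {{$server.Hostname}}.*;|' \
      /original/nginx.tmpl > /patched/nginx.tmpl
  volumeMounts:
  - name: original-template
    mountPath: /original
  - name: patched-template
    mountPath: /patched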

@deitch
Contributor Author

deitch commented May 3, 2017

@rdsubhas an init container where? In the nginx Deployment / DaemonSet (depending on your deployment preference)? I kind of did something similar with traefik, changing their template. Still, it shouldn't be this hard.

@robermorales
Contributor

I used this container in the past with this purpose, and it did the job.

https://hub.docker.com/r/jwilder/nginx-proxy/

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@xiangpengzhao
Contributor

/sig network

@k8s-ci-robot k8s-ci-robot added the sig/network Categorizes an issue or PR as relevant to SIG Network. label Jun 17, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 17, 2017
@deitch
Contributor Author

deitch commented Aug 25, 2017

Any updates here?

@rdsubhas I am using traefik in one use case (nginx in another), and I did something similar. Actually, since nginx uses go templates, I was able to make the transformation right in the template itself.

The problem I now have is with letsencrypt: most plugins (like traefik's built-in ACME support or kube-lego) pull the Host: right out of the Ingress resource (probably correctly). That means any such transformation is going to be missed, since it happens later. Now I need to build support in for that.

Is there any way to create "automatic transform on demand" for resources? E.g. my cluster will transform the Ingress resource upon receipt to modify the Host: section?

@debianmaster

+1 for this

@kedare

kedare commented Nov 14, 2017

I confirm this is a huge issue for us.
What is sad is that our ingress controller does support regex matching, or a wildcard at any position in the domain, but the configuration gets blocked because it doesn't pass validation at the Kubernetes level.
Maybe there would be a flag like rawHost or bypassHostValidation to pass the value directly to the ingress controller without any kind of restrictions from Kubernetes?

@deitch
Contributor Author

deitch commented Nov 14, 2017

Ours does as well. Well, we use traefik, and no big deal to change the config to support it.

As a workaround, we use a string that would never be used anywhere in our .yml files (anydomain123, for what it's worth), and then have our traefik config parse and replace it with a wildcard.

Maybe there would be a flag like rawHost or bypassHostValidation

Or, just let it go through no matter what.

@Preskton

Preskton commented Feb 9, 2018

This is extremely important to us as well -- our user story goes something like this:

As an Application Engineer, I want ingress rules to match on the start of a host pattern (like foo.*) so that I can use the same ingress definition across environments (foo.dev, foo.qa, etc.) without transformation, significantly reducing the risk of error when switching across environments. Without this feature, I'm not deploying the same thing to every environment, which can lead to errors and confusion during cluster operations and deployments.

There's talk in this thread about "the spec" and "the code" - I could probably figure out the code, but how does "the spec" get approved/changed/whatever'd so that we can update the code to match our desired behavior?

@deitch
Contributor Author

deitch commented Feb 10, 2018

I, too, would be happy to do a PR, but only if someone can point me to what is needed to change the spec and then the code, and, more importantly, confirm that it is supported in principle.

@DavidWylie

DavidWylie commented Feb 19, 2018

This is a feature which would make ingress genuinely useful. Without it, the yaml definition file for an app (ingress + deployment + service) cannot be environment independent. Without it, you end up with a proxy application / loadbalancer in front of the ingress just to change the url to a common non-routable value.

Just supporting * as a single label in the domain would be sufficient, i.e.:
svc.*.domain

Or support the domain as a separate field, e.g. for x.y.com:
Host: x
Domain: y.com

With the ability to load the domain from a ConfigMap, the domain would then be handled in a similar way to the application config.
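As a sketch, that proposed split might look like the following (the domain field is hypothetical and does not exist in any released Ingress spec; service and domain names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web                 # stable, environment-independent part
    domain: qa.mydomain.com   # hypothetical field; could be loaded per environment from a ConfigMap
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80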

@rdsubhas

rdsubhas commented Feb 19, 2018

I think, from a spec perspective, having it as an FQDN helps a lot of "standard / kubernetes compliant" resources to happen. Although the API is called Ingress, this has a wider impact than just routing: if the domain is unspecified, then things like issuing letsencrypt certificates or creating DNS names for services are going to become really hard. Different Kubernetes cloud providers would start interpreting the spec differently if it were no longer an FQDN, which could lead to fragmentation (wildcards vs regexes, PCRE regex vs POSIX regex, etc.).

While it's true that development is hard, supporting domain prefixes could be done today. If we specify host: foo, it is indeed considered a valid FQDN. We can instrument the ingress controller (or dns/ssl controllers) to automatically interpret the host as foo.<real-domain>. As a cluster admin, we presently patch the nginx ingress controller to add our production suffixes automatically if not present, and our engineers only use subdomains.

Just two cents: maybe pressuring downstream API controllers (like dns/ssl/ingress controllers, and maybe google cloud ingress ;) to support suffixes/prefixes on the host is a better, longer-term solution than making the spec itself wide open to interpretation and stringly typed.

@DavidWylie

@rdsubhas Looking into the issues from some of the implementors of the ingress controller spec, their almost-standard response is that it's not in the spec, so they don't want to differ from it.
Even just a standard field or annotation would be enough to get some of them moving in a shared direction. You seem to have hit the nail on the head with the foo. prefix: with that, the real domain is a separate piece of data which, as an optional thing, could be a string or loaded from a ConfigMap.

@deitch
Contributor Author

deitch commented Feb 19, 2018

Looks like I stirred up a bit of a hornet's nest with this one... which means it matters to people.

Perhaps the strongest sign is that every time I interact with someone who deploys to kubernetes clusters with an automated CD pipeline across multiple environments, we always end up discussing, "so how did you hack around the Ingress Host: limitation?" If so many people are doing it, then it already is fragmenting, but at a per-deployment level. Fragmenting across ingress controllers might be an improvement. :-)

If one of the key goals of kubernetes is making services easy to deploy - "automating devops", as in Kelsey's interview last week, if you will - then the goal is practical: how do I make deploying services config-driven, reliable, reproducible, and scalable? Requiring the config for each exposed service to differ from one environment to another makes an automated CD pipeline brittle and extremely difficult (and hacked and fragmented). I need to maintain separate config files when they differ only by the ending of the domain.

I am unclear as to why the spec for an FQDN must be the spec for specifying what service X should respond to. (I also just violated the rule about not ending a sentence with a preposition, oops :-) )

Agreed wholeheartedly: it needs a standard, and the current one is forcing people to break it in implementation. I don't think a standard that wasn't written for specifying services deployed and identified dynamically in multiple environments is the best one to use. If there isn't a better one, let's define one. If you want to call it HostRegex:, that works too. Call it HostPrefix, or anything you want.

maybe pressuring downstream API controllers

Isn't the way to pressure IngressController creators by defining a spec?

In any case, I much appreciate the feedback and time spent on this issue.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 25, 2021
@deitch
Contributor Author

deitch commented Dec 25, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 25, 2021
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 25, 2022
@deitch
Contributor Author

deitch commented Mar 25, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 25, 2022
@gp187

gp187 commented Apr 18, 2022

What would be super helpful is matching the subdomain to the service name.

Someone visits app.example.com, and the Ingress routes to the service named app:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: serviceX-i
  namespace: examples
spec:
  rules:
  - host: "*.example.com" # if dns = app.example.com
    http:
      paths:
      - backend:
          service:
            name: $1 # then name is `app`
            port:
              number: 3010
        path: /

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2022
@r-ising

r-ising commented Jul 17, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2022
@deitch
Contributor Author

deitch commented Oct 15, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 13, 2023
@r-ising

r-ising commented Jan 13, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 13, 2023
@thockin
Member

thockin commented Feb 2, 2023

Ingress has ImplementationSpecific path-types, which is the only reasonable way to do this.

Ingress also supports wildcards for hostname matching.

Beyond that, Ingress as an API is effectively "done". Further work needs to aim at the Gateway API https://gateway-api.sigs.k8s.io/
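For reference, the hostname wildcard form that Ingress v1 does support is a single leftmost label, which matches exactly one DNS label (so *.foo.com matches bar.foo.com, but not baz.bar.foo.com or foo.com); the service name below is illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-host
spec:
  rules:
  - host: "*.foo.com"   # leftmost-label wildcard; must be quoted in YAML
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service   # example service name
            port:
              number: 80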

@thockin thockin closed this as completed Feb 2, 2023
@chrisjohnson

chrisjohnson commented Feb 2, 2023

I think there's some misunderstanding: this feature has not been implemented. A path match does nothing to help with matching the host; ImplementationSpecific does not affect the host: "web.*" field.

Also, as mentioned 2 years ago, the hostname field does not support wildcards in this position; it only supports them as the leftmost label. Here is an example wildcard that it does not support:

host: "service-name.*"

@thockin
Member

thockin commented Feb 2, 2023

Sorry, I used the wrong code for closing this.

Further work needs to aim at the Gateway API https://gateway-api.sigs.k8s.io/

There's not much interest in adding non-portable extensions to Ingress when Gateway is just about here.

@thockin thockin closed this as not planned Won't fix, can't repro, duplicate, stale Feb 2, 2023
@ztec

ztec commented Aug 28, 2023

There's not much interest in adding non-portable extensions to Ingress when Gateway is just about here.

I don't know the Gateway API yet; I will look into it. But I find it a bit harsh to cut off any development and evolution of an established and widely used API just because a new one is almost here.

Is the Ingress API deprecated?

@thockin
Member

thockin commented Aug 28, 2023

Anything we add to the Ingress API still needs to be implemented by controller implementations, which have indicated very low desire to double-track such work.

If a significant corpus of Ingress implementations could easily support this, I'm not going to hard-block it. I would need to see that laid out, along with someone thinking through the design here (probably not hard; similar to how PathType works, I would guess, without thinking too hard on it).
