Inspect loadbalancer address from Envoy ingress objects #5987

Open

hligit wants to merge 2 commits into main from watch-ingress-status

Conversation

@hligit commented Nov 25, 2023

Contour sets the load balancer address in the Ingress/HTTPProxy object's status. The address is extracted from the Envoy Service object's status. This doesn't work in configurations where the Envoy Service is of type ClusterIP and an Envoy Ingress object is used for inbound traffic. This patch adds support for inspecting the Envoy Ingress object to set the load balancer address.

A new flag --load-balancer-status is introduced, as suggested by @tsaarni in #5083 (comment)

Flag --load-balancer-status takes a value in one of the following formats:

  1. hostname:fqdn1[,fqdn2]: provides the same functionality as --ingress-status-address.
  2. service:namespace/name: the same as the envoy-service-name and envoy-service-namespace flags combined.
  3. ingress:namespace/name: a new feature for getting the address from Ingress objects.

Since the new flag covers the functionality of some existing flags, if --load-balancer-status is specified together with --ingress-status-address or envoy-service-name/namespace, only the value of --load-balancer-status is used and a warning message is logged.
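A minimal sketch of how a value in these three formats could be parsed; the type and function names here are illustrative assumptions, not this PR's actual implementation:

package main

import (
    "fmt"
    "strings"
)

// lbStatusSource is a hypothetical parsed form of the flag value.
type lbStatusSource struct {
    Kind  string // "hostname", "service", or "ingress"
    Value string // comma-separated FQDNs, or "namespace/name"
}

func parseLoadBalancerStatus(s string) (lbStatusSource, error) {
    // Split on the first ":" so values like "service:ns/name" keep the slash.
    kind, value, found := strings.Cut(s, ":")
    if !found || value == "" {
        return lbStatusSource{}, fmt.Errorf("invalid value %q: expected <kind>:<value>", s)
    }
    switch kind {
    case "hostname", "service", "ingress":
        return lbStatusSource{Kind: kind, Value: value}, nil
    default:
        return lbStatusSource{}, fmt.Errorf("unknown kind %q in %q", kind, s)
    }
}

func main() {
    src, err := parseLoadBalancerStatus("ingress:projectcontour/envoy")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", src) // {Kind:ingress Value:projectcontour/envoy}
}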

This is based on kahirokunn's PR #5083.

Fixes #4952

@hligit hligit requested a review from a team as a code owner November 25, 2023 10:58
@hligit hligit requested review from stevesloka and sunjayBhatia and removed request for a team November 25, 2023 10:58

Hi @hligit! Welcome to our community and thank you for opening your first Pull Request. Someone will review it soon. Thank you for committing to making Contour better. You can also join us on our mailing list and in our channel in the Kubernetes Slack Workspace.

@hligit hligit force-pushed the watch-ingress-status branch 2 times, most recently from 293a3da to 98835b0 Compare November 26, 2023 11:07

codecov bot commented Nov 28, 2023

Codecov Report

Attention: Patch coverage is 19.87952%, with 133 lines in your changes missing coverage. Please review.

Project coverage is 78.28%. Comparing base (4aee35a) to head (9640ce1).
Report is 263 commits behind head on main.

Additional details and impacted files


@@            Coverage Diff             @@
##             main    #5987      +/-   ##
==========================================
- Coverage   78.62%   78.28%   -0.34%     
==========================================
  Files         138      138              
  Lines       19631    19809     +178     
==========================================
+ Hits        15434    15508      +74     
- Misses       3894     3998     +104     
  Partials      303      303              
Files | Coverage Δ
cmd/contour/servecontext.go | 83.70% <100.00%> (+0.03%) ⬆️
internal/provisioner/model/names.go | 100.00% <100.00%> (ø)
...ernal/provisioner/objects/deployment/deployment.go | 89.07% <100.00%> (+0.33%) ⬆️
pkg/config/parameters.go | 86.06% <ø> (ø)
internal/k8s/statusaddress.go | 60.78% <0.00%> (-19.12%) ⬇️
cmd/contour/serve.go | 20.98% <28.71%> (+1.63%) ⬆️

... and 11 files with indirect coverage changes

@tsaarni (Member) left a comment


Hi @hligit, thanks for contributing, and for continuing @kahirokunn's work!

I had a small initial finding while doing some quick manual testing. Please check the comments inline!

Since the new flag covers the functionality of some existing flags, if --load-balancer-status is specified together with --ingress-status-address or envoy-service-name/namespace, only the value of --load-balancer-status is used and a warning message is logged.

I think that for backwards compatibility, even in the case of configuration options like these, it is "safer" to give priority to the old method rather than the other way around. Often it may not make a big difference, but in this case, since the new option gets assigned default values, the old option just stops working. It may be simplest to swap the precedence and prioritise the old options first.
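A sketch of one possible ordering along those lines; all names are illustrative, and only the ordering matters. The envoy-service flags end up as the final fallback because they always carry defaults, a point discussed further below:

package main

import "fmt"

// resolveStatusSource sketches the suggested precedence: the long-standing
// --ingress-status-address wins, then the new flag, and the envoy-service
// flags act only as the fallback since they always have default values.
func resolveStatusSource(ingressStatusAddress, envoyServiceKey, loadBalancerStatus string) string {
    switch {
    case ingressStatusAddress != "":
        return "static addresses from --ingress-status-address"
    case loadBalancerStatus != "":
        return "source described by --load-balancer-status"
    default:
        return "watch Envoy Service " + envoyServiceKey
    }
}

func main() {
    fmt.Println(resolveStatusSource("", "projectcontour/envoy", ""))
}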

Comment on lines 743 to 769
} else {
    if lbAddress := contourConfiguration.Ingress.StatusAddress; len(lbAddress) > 0 {
        s.log.WithField("loadbalancer-address", lbAddress).Info("Using supplied information for Ingress status")
        lbsw.lbStatus <- parseStatusFlag(lbAddress)
    } else {
        // Register an informer to watch envoy's service if we haven't been given static details.
        serviceHandler := &k8s.ServiceStatusLoadBalancerWatcher{
            ServiceName: contourConfiguration.Envoy.Service.Name,
            LBStatus:    lbsw.lbStatus,
            Log:         s.log.WithField("context", "serviceStatusLoadBalancerWatcher"),
        }

        var handler cache.ResourceEventHandler = serviceHandler
        if contourConfiguration.Envoy.Service.Namespace != "" {
            handler = k8s.NewNamespaceFilter([]string{contourConfiguration.Envoy.Service.Namespace}, handler)
        }

        if err := s.informOnResource(&corev1.Service{}, handler); err != nil {
            s.log.WithError(err).WithField("resource", "services").Fatal("failed to create services informer")
        }

        s.log.WithField("envoy-service-name", contourConfiguration.Envoy.Service.Name).
            WithField("envoy-service-namespace", contourConfiguration.Envoy.Service.Namespace).
            Info("Watching Service for Ingress status")
    }
}

Member


If we parse the old options into the envoyLoadBalancerStatus struct as well, we could handle them in the same code path as the new options (above this block), and this code could be deleted.
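A sketch of that mapping: translating the legacy flags into the same "kind:value" form the new flag uses, so a single code path can consume both. The function name and the details of envoyLoadBalancerStatus are assumptions, not this PR's code:

package main

import "fmt"

// legacyOptionsToStatusFlag maps the old flags onto the new flag's syntax:
// --ingress-status-address becomes the "hostname:" form, and the
// envoy-service-name/namespace pair becomes the "service:" form.
func legacyOptionsToStatusFlag(ingressStatusAddress, svcNamespace, svcName string) string {
    if ingressStatusAddress != "" {
        return "hostname:" + ingressStatusAddress
    }
    return fmt.Sprintf("service:%s/%s", svcNamespace, svcName)
}

func main() {
    fmt.Println(legacyOptionsToStatusFlag("", "projectcontour", "envoy")) // service:projectcontour/envoy
}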

Author


Good idea. I've changed it as suggested, and also extracted the block into its own function.

ServiceName: contourConfiguration.Envoy.Service.Name,
LBStatus:    lbsw.lbStatus,
Log:         s.log.WithField("context", "serviceStatusLoadBalancerWatcher"),

if contourConfiguration.Envoy.LoadBalancer != "" {
Member


When the user does not create a ContourConfig CR, this will still never be an empty string: it gets unconditionally set here with this value via this execution path, which fills in the defaults. As a result, the old options never take effect, even when the new option was not explicitly set.

Author


Good catch! Thanks! I have removed the value-setting in Defaults().
I also changed the order of the flags: first check whether ingress-status-address is set, then the new flag, and lastly envoy-service-name and envoy-service-namespace, because those always have default values set.

Do you think this is good enough, or should we check whether the envoy service flags are explicitly set and have them take precedence over the new flag?
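For the "explicitly set" question, Go's standard flag package illustrates the general technique via flag.Visit, which walks only the flags the user actually passed. Contour's CLI uses kingpin rather than the standard library, so this is a sketch of the idea, not a drop-in change:

package main

import (
    "flag"
    "fmt"
)

func main() {
    envoyServiceName := flag.String("envoy-service-name", "envoy", "name of the Envoy service")
    flag.Parse()

    // flag.Visit only visits flags that were set on the command line,
    // which distinguishes an explicit value from a default.
    setByUser := map[string]bool{}
    flag.Visit(func(f *flag.Flag) { setByUser[f.Name] = true })

    if setByUser["envoy-service-name"] {
        fmt.Println("user explicitly chose the Envoy service:", *envoyServiceName)
    } else {
        fmt.Println("envoy-service-name is only a default; the new flag could take precedence")
    }
}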

@hligit (Author) commented Dec 5, 2023

Thanks @tsaarni for the review and the guidance! Sorry for my late response!

I've addressed the comments in amended commits. There is one open question regarding flag precedence in my reply to your inline comment.

@hligit hligit requested a review from tsaarni December 5, 2023 22:16
Signed-off-by: Haitao Li <hli@atlassian.com>
@@ -253,6 +253,16 @@ type EnvoyConfig struct {
// +optional
Service *NamespacedName `json:"service,omitempty"`

// LoadBalancer specifies how Contour should set the ingress status address.
// If provided, the value can be in one of the formats:
// - hostname:<address,...>: Contour will use the provided comma-separated list of addresses directly.
Member


Currently we support setting the ingress status address via a config flag/parameter here, and it can be an IP or a hostname.

It would be good to keep that functionality (and not just support hostnames) if we intend to deprecate the old method of configuring this.

It looks like that might just mean a naming change here and elsewhere, since the existing helpers are being used.
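The existing behaviour referred to can be sketched like this: each comma-separated address populates either the IP or the Hostname field of a LoadBalancerIngress entry, decided with net.ParseIP. This mirrors the idea rather than reproducing Contour's exact parseStatusFlag code:

package main

import (
    "fmt"
    "net"
    "strings"

    corev1 "k8s.io/api/core/v1"
)

// statusFromAddresses builds a LoadBalancerStatus from a comma-separated
// list, treating anything that parses as an IP address as an IP and
// everything else as a hostname.
func statusFromAddresses(addresses string) corev1.LoadBalancerStatus {
    var status corev1.LoadBalancerStatus
    for _, addr := range strings.Split(addresses, ",") {
        addr = strings.TrimSpace(addr)
        if addr == "" {
            continue
        }
        if net.ParseIP(addr) != nil {
            status.Ingress = append(status.Ingress, corev1.LoadBalancerIngress{IP: addr})
        } else {
            status.Ingress = append(status.Ingress, corev1.LoadBalancerIngress{Hostname: addr})
        }
    }
    return status
}

func main() {
    fmt.Printf("%+v\n", statusFromAddresses("203.0.113.10, lb.example.com"))
}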

github-actions bot

The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 14d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Ensure your PR is passing all CI checks. PRs that are fully green are more likely to be reviewed. If you are having trouble with CI checks, reach out to the #contour channel in the Kubernetes Slack workspace.
  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2024
@kahirokunn

keep

@tsaarni tsaarni removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 16, 2024

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 6, 2024

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2024
@kahirokunn

keep

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 24, 2024

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2024
@kahirokunn

keep

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2024

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 12, 2024

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2024
@kahirokunn

keep

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2024

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 20, 2024
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

support getting LoadBalancerStatus from an Ingress (e.g. AWS ALB) or Gateway
4 participants