
OCPBUGS-30233: Filter IPs in majority group validation #6094

Draft · wants to merge 1 commit into base: master
Conversation


@CrystalChun CrystalChun commented Mar 18, 2024

https://issues.redhat.com/browse/OCPBUGS-30233
Filter the IP addresses used when creating connectivity
groups down to those that belong within the machine network
CIDRs (if any exist), to prevent the "belongs-to-majority"
validation from failing. Previously, when the IPs were not
filtered, the validation failed whenever hosts had IPs on
additional NICs that could not reach the other hosts, even
though those IPs were never used by the cluster.
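The filtering described in the commit message can be sketched as follows. This is a minimal standalone Go illustration, not the actual assisted-service code; `ipInAnyCIDR` and the sample data are hypothetical:

```go
package main

import (
	"fmt"
	"net"
)

// ipInAnyCIDR reports whether ip falls inside any of the given CIDRs.
// Hypothetical helper name; this is not the assisted-service code.
func ipInAnyCIDR(ip string, cidrs []string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, cidr := range cidrs {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err == nil && ipnet.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	machineNetworks := []string{"192.168.127.0/24"}
	hostIPs := []string{"192.168.127.10", "10.0.0.5"}
	var kept []string
	for _, ip := range hostIPs {
		// Filter only when machine networks exist; otherwise keep every IP.
		if len(machineNetworks) == 0 || ipInAnyCIDR(ip, machineNetworks) {
			kept = append(kept, ip)
		}
	}
	fmt.Println(kept) // [192.168.127.10]
}
```

Skipping the filter entirely when no machine networks are configured matches the UMN concern raised later in this thread.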

List all the issues related to this PR

  • New Feature
  • Enhancement
  • Bug fix
  • Tests
  • Documentation
  • CI/CD

What environments does this code impact?

  • Automation (CI, tools, etc)
  • Cloud
  • Operator Managed Deployments
  • None

How was this code tested?

  • assisted-test-infra environment
  • dev-scripts environment
  • Reviewer's test appreciated: My test environment isn't capable of installing more than a SNO
  • Waiting for CI to do a full test run
  • Manual (Elaborate on how it was tested)
  • No tests needed

Checklist

  • Title and description added to both commit and PR.
  • Relevant issues have been associated (see CONTRIBUTING guide)
  • This change does not require a documentation update (docstring, docs, README, etc)
  • Does this change include unit-tests (note that code changes require unit-tests)

Reviewers Checklist

  • Are the title and description (in both PR and commit) meaningful and clear?
  • Is there a bug required (and linked) for this change?
  • Should this PR be backported?

/cc @zaneb

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 18, 2024
@openshift-ci openshift-ci bot requested a review from zaneb March 18, 2024 21:49
@openshift-ci-robot

@CrystalChun: This pull request references Jira Issue OCPBUGS-30233, which is invalid:

  • expected the bug to target the "4.16.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

https://issues.redhat.com/browse/OCPBUGS-30233
For the majority group validation, use only the filtered IP addresses of hosts that belong within the primary machine network CIDR, to prevent failing validations from IPs that aren't required.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Mar 18, 2024
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 18, 2024

openshift-ci bot commented Mar 18, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci bot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Mar 18, 2024

openshift-ci bot commented Mar 18, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CrystalChun

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 18, 2024
@CrystalChun
Contributor Author

@zaneb would this change cover the issue? If not, may I request your advice?

```go
primaryMachineCIDR := ""
if network.IsMachineCidrAvailable(cluster) {
	primaryMachineCIDR = network.GetMachineCidrById(cluster, 0)
}
```
Member

Is there any reason to limit to a single MachineNetwork CIDR here?
This already will not work for dual stack clusters, because CreateL3MajorityGroup() is called for both IPv4 and IPv6, but this only passes the IPv4 MachineNetwork.
#6071 will allow multiple machine networks of each address family as well.

I would prefer to pass all of the MachineNetworks and check if each IP is in any of them.
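A membership check against all machine networks, as suggested here, could look roughly like the sketch below (`inAnyMachineNetwork` is an illustrative name, not the assisted-service API). Because `net.IPNet.Contains` never matches across address families, one loop handles IPv4 and IPv6 machine networks uniformly:

```go
package main

import (
	"fmt"
	"net"
)

// inAnyMachineNetwork checks an IP against every configured machine
// network, regardless of address family. Illustrative name only.
func inAnyMachineNetwork(ipStr string, machineNetworks []string) bool {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false
	}
	for _, cidr := range machineNetworks {
		if _, ipnet, err := net.ParseCIDR(cidr); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Dual-stack example: one IPv4 and one IPv6 machine network.
	nets := []string{"192.168.127.0/24", "fd2e:6f44:5dd8::/64"}
	fmt.Println(inAnyMachineNetwork("192.168.127.8", nets))     // true
	fmt.Println(inAnyMachineNetwork("fd2e:6f44:5dd8::5", nets)) // true
	fmt.Println(inAnyMachineNetwork("10.0.0.1", nets))          // false
}
```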

Contributor Author


Gotcha, I modified it to check IPs across multiple machine networks. I think the function I use to get all of the cluster's machine network CIDRs should encompass both address families?

internal/cluster/cluster.go (outdated review thread, resolved)
@zaneb
Member

zaneb commented Mar 19, 2024

would this change cover the issue?

I have difficulty following the connectivity groups code, but it seems plausible. If we ignore the extra IP addresses at the point where we're reading the inventory then it should work as if those interfaces did not exist, which is what I think we want.

@ori-amizur
Contributor

This change is incorrect since we don't have machine-networks in multi-node UMN. So there is no reason to limit the connectivity to machine-networks.
In general the concept is that connectivity checks all possible connections.
The belongs-to-majority-group validation checks whether there are possible connections between hosts.
@zaneb are you sure that the requested change does not allow potential failures to occur?

```diff
@@ -482,7 +482,13 @@ func newL3QueryFactory(hosts []*models.Host, family AddressFamily) (hostQueryFac
 if err != nil {
 	return nil, err
 }
 value[ip.String()] = true
+ipInCidr, err := IpInCidr(ip.String(), primaryMachineCIDR)
```
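For reference, an `IpInCidr`-style helper amounts to parsing the CIDR and testing membership. The sketch below approximates the semantics with Go's standard `net` package; the real helper's signature and error handling may differ:

```go
package main

import (
	"fmt"
	"net"
)

// ipInCidr approximates an IpInCidr-style helper: parse the CIDR and
// test whether the IP belongs to it.
func ipInCidr(ipStr, cidr string) (bool, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false, fmt.Errorf("invalid IP address %q", ipStr)
	}
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	return ipnet.Contains(ip), nil
}

func main() {
	in, _ := ipInCidr("192.168.127.5", "192.168.127.0/24")
	out, _ := ipInCidr("192.168.145.9", "192.168.127.0/24")
	fmt.Println(in, out) // true false
}
```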
Contributor


This misses the whole point of creating connectivity-groups. The connectivity-groups should provide all connectivity information. Then the validation should filter whether it needs L2, L3, or specific networks.

Contributor Author


I see... I couldn't find another place to filter IP addresses 😅 Do you know of another place where we can filter them?

Contributor


Actually you are right. If we want to limit the addresses this is probably the place.
In general, in UMN any IP from any network can be part of the cluster. Therefore, no machine-networks are required for multi-node UMN.
Since no machine-network should be provided in the install-config, OCP doesn't expect them either.
So even if we limit the addresses we use in connectivity-check, does OCP limit its activity to these addresses?

Contributor Author


Hmm, not too sure. @zaneb do you know if OCP will use these addresses? Also, if no machine networks are required for UMN, then how do we filter them?

Member


OCP doesn't really seem to use the machine-networks except for validations in the installer.

In UMN hosts are allowed to be in different networks. Because assisted has a limitation of only 1 machine-network, I guess we just allow none to be specified. #6071 removes that limitation. We still don't want any of the machine networks to overlap with e.g. the clusterNetwork or serviceNetwork, so this enables us to do all of the validations.

In any event, the agent installer always passes the machineNetwork(s), even with UMN.

Contributor Author


So this change would work well for both cases where either multiple machine networks are specified or none are specified.

@ori-amizur
Contributor

/test?


openshift-ci bot commented Mar 19, 2024

@ori-amizur: The following commands are available to trigger required jobs:

  • /test e2e-agent-compact-ipv4
  • /test edge-assisted-operator-catalog-publish-verify
  • /test edge-ci-index
  • /test edge-e2e-ai-operator-ztp
  • /test edge-e2e-ai-operator-ztp-sno-day2-workers
  • /test edge-e2e-ai-operator-ztp-sno-day2-workers-late-binding
  • /test edge-e2e-metal-assisted
  • /test edge-e2e-metal-assisted-4-11
  • /test edge-e2e-metal-assisted-4-12
  • /test edge-e2e-metal-assisted-cnv
  • /test edge-e2e-metal-assisted-lvm
  • /test edge-e2e-metal-assisted-odf
  • /test edge-images
  • /test edge-lint
  • /test edge-subsystem-aws
  • /test edge-subsystem-kubeapi-aws
  • /test edge-unit-test
  • /test edge-verify-generated-code
  • /test images
  • /test mce-images

The following commands are available to trigger optional jobs:

  • /test e2e-agent-ha-dualstack
  • /test e2e-agent-sno-ipv6
  • /test edge-e2e-ai-operator-disconnected-capi
  • /test edge-e2e-ai-operator-ztp-3masters
  • /test edge-e2e-ai-operator-ztp-capi
  • /test edge-e2e-ai-operator-ztp-compact-day2-masters
  • /test edge-e2e-ai-operator-ztp-compact-day2-workers
  • /test edge-e2e-ai-operator-ztp-disconnected
  • /test edge-e2e-ai-operator-ztp-hypershift-zero-nodes
  • /test edge-e2e-ai-operator-ztp-multiarch-3masters-ocp
  • /test edge-e2e-ai-operator-ztp-multiarch-sno-ocp
  • /test edge-e2e-ai-operator-ztp-node-labels
  • /test edge-e2e-ai-operator-ztp-sno-day2-masters
  • /test edge-e2e-ai-operator-ztp-sno-day2-workers-ignitionoverride
  • /test edge-e2e-metal-assisted-4-13
  • /test edge-e2e-metal-assisted-4-14
  • /test edge-e2e-metal-assisted-4-15
  • /test edge-e2e-metal-assisted-bond
  • /test edge-e2e-metal-assisted-bond-4-14
  • /test edge-e2e-metal-assisted-day2
  • /test edge-e2e-metal-assisted-day2-arm-workers
  • /test edge-e2e-metal-assisted-day2-single-node
  • /test edge-e2e-metal-assisted-external
  • /test edge-e2e-metal-assisted-external-4-14
  • /test edge-e2e-metal-assisted-ipv4v6
  • /test edge-e2e-metal-assisted-ipv6
  • /test edge-e2e-metal-assisted-kube-api-late-binding-single-node
  • /test edge-e2e-metal-assisted-kube-api-late-unbinding-ipv4-single-node
  • /test edge-e2e-metal-assisted-kube-api-net-suite
  • /test edge-e2e-metal-assisted-mce-4-11
  • /test edge-e2e-metal-assisted-mce-4-12
  • /test edge-e2e-metal-assisted-mce-4-13
  • /test edge-e2e-metal-assisted-mce-4-14
  • /test edge-e2e-metal-assisted-mce-4-15
  • /test edge-e2e-metal-assisted-mce-sno
  • /test edge-e2e-metal-assisted-metallb
  • /test edge-e2e-metal-assisted-none
  • /test edge-e2e-metal-assisted-onprem
  • /test edge-e2e-metal-assisted-single-node
  • /test edge-e2e-metal-assisted-static-ip-suite
  • /test edge-e2e-metal-assisted-static-ip-suite-4-14
  • /test edge-e2e-metal-assisted-tang
  • /test edge-e2e-metal-assisted-tpmv2
  • /test edge-e2e-metal-assisted-upgrade-agent
  • /test edge-e2e-nutanix-assisted
  • /test edge-e2e-nutanix-assisted-2workers
  • /test edge-e2e-nutanix-assisted-4-14
  • /test edge-e2e-oci-assisted
  • /test edge-e2e-oci-assisted-4-14
  • /test edge-e2e-oci-assisted-iscsi
  • /test edge-e2e-vsphere-assisted
  • /test edge-e2e-vsphere-assisted-4-12
  • /test edge-e2e-vsphere-assisted-4-13
  • /test edge-e2e-vsphere-assisted-4-14
  • /test edge-e2e-vsphere-assisted-umn
  • /test push-pr-image

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-assisted-service-master-e2e-agent-compact-ipv4
  • pull-ci-openshift-assisted-service-master-edge-ci-index
  • pull-ci-openshift-assisted-service-master-edge-e2e-ai-operator-ztp
  • pull-ci-openshift-assisted-service-master-edge-e2e-metal-assisted
  • pull-ci-openshift-assisted-service-master-edge-images
  • pull-ci-openshift-assisted-service-master-edge-lint
  • pull-ci-openshift-assisted-service-master-edge-subsystem-aws
  • pull-ci-openshift-assisted-service-master-edge-subsystem-kubeapi-aws
  • pull-ci-openshift-assisted-service-master-edge-unit-test
  • pull-ci-openshift-assisted-service-master-edge-verify-generated-code
  • pull-ci-openshift-assisted-service-master-images
  • pull-ci-openshift-assisted-service-master-mce-images

In response to this:

/test?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ori-amizur
Contributor

/test edge-e2e-metal-assisted-none

@openshift-ci openshift-ci bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Mar 20, 2024
@CrystalChun
Contributor Author

/test edge-e2e-metal-assisted-none

@ori-amizur
Contributor

/test edge-e2e-metal-assisted-none

It seems that the test passed, but the machine-networks are empty. See:
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openshift_assisted-service/6094/pull-ci-openshift-assisted-service-master-edge-e2e-metal-assisted-none/1770287733328056320/artifacts/e2e-metal-assisted-none/assisted-common-gather/artifacts/2024-03-20_03-51-58_82a1c33a-f02c-4afb-81e0-838e61b5069e/metadata.json

So I wonder where the machine-networks came from? This needs to be checked.

@ori-amizur
Contributor

It seems there is something wrong with the connectivity-groups.
The result of connectivity-groups is:

```json
{
  "192.168.127.0/24": ["6043b820-6b7c-4b3b-a5c0-ca0f2b1a82dc", "899a891c-ed6a-4e67-ba58-72588a95a9e0", "c5895010-acda-4155-9b8d-7a4de74a82e3"],
  "192.168.145.0/24": [],
  "IPv4": ["31b68c6d-cd3a-494c-8fb6-68784dda95ae", "6043b820-6b7c-4b3b-a5c0-ca0f2b1a82dc", "899a891c-ed6a-4e67-ba58-72588a95a9e0", "c5895010-acda-4155-9b8d-7a4de74a82e3", "feeccca3-46e8-45c6-9ca0-fbde91638fb7"],
  "IPv6": ["31b68c6d-cd3a-494c-8fb6-68784dda95ae", "6043b820-6b7c-4b3b-a5c0-ca0f2b1a82dc", "899a891c-ed6a-4e67-ba58-72588a95a9e0", "c5895010-acda-4155-9b8d-7a4de74a82e3", "feeccca3-46e8-45c6-9ca0-fbde91638fb7"]
}
```

So there is IPv6 connectivity but there are no IPv6 addresses.

@ori-amizur
Contributor

/test edge-unit-test


codecov bot commented Mar 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 68.36%. Comparing base (100a86f) to head (0b7cbc0).
Report is 23 commits behind head on master.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #6094      +/-   ##
==========================================
+ Coverage   68.35%   68.36%   +0.01%     
==========================================
  Files         239      239              
  Lines       35524    35584      +60     
==========================================
+ Hits        24283    24328      +45     
- Misses       9132     9144      +12     
- Partials     2109     2112       +3     
| File | Coverage | Δ |
|---|---|---|
| internal/cluster/cluster.go | 66.42% <ø> | +0.29% ⬆️ |
| internal/network/connectivity_groups.go | 87.32% <100.00%> | +0.04% ⬆️ |
| internal/network/machine_network_cidr.go | 57.25% <100.00%> | +1.58% ⬆️ |

... and 13 files with indirect coverage changes

@CrystalChun
Contributor Author

> It seems there is something wrong with the connectivity-groups. The result of connectivity-groups is: [the JSON above]
>
> So there is IPv6 connectivity but there are no IPv6 addresses.

Oh, I think it's because of the original condition here: https://github.com/openshift/assisted-service/pull/6094/files#diff-e0b0e8e6b4de6465aa429384eadda9ed02b2ceb1300e6b95a2f89b8a0858f658R442
I changed it so it no longer returns early when there are no IPv6 addresses.

@CrystalChun
Contributor Author

/test edge-unit-test

@CrystalChun
Contributor Author

/test edge-e2e-metal-assisted-none

@ori-amizur ori-amizur left a comment


This change is incorrect because for multi-node UMN there are no machine-networks. Even if we add it and it works, we cannot force it, because this change is not backward compatible.
If you still want it, you might need to either not filter anything when there are no machine-networks, or create a new factory for this and use it only when machine-networks are available. In that case it should not be used for UMN.

```diff
 	}
 }
-if len(addresses) == len(foundAddresses) {
+if len(foundAddresses) > 0 && len(addresses) == len(foundAddresses) {
```
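The added guard matters because the equality check is vacuously true for empty slices: a host reporting no addresses of a given family satisfies `len(addresses) == len(foundAddresses)` (0 == 0) and gets counted as connected, which is how hosts with no IPv6 addresses ended up in the IPv6 majority group. A standalone illustration (not the actual query factory code):

```go
package main

import "fmt"

// connectedBuggy reports whether every expected address was reached.
// It wrongly returns true when both lists are empty.
func connectedBuggy(addresses, foundAddresses []string) bool {
	return len(addresses) == len(foundAddresses)
}

// connectedFixed requires at least one found address, matching the diff.
func connectedFixed(addresses, foundAddresses []string) bool {
	return len(foundAddresses) > 0 && len(addresses) == len(foundAddresses)
}

func main() {
	var noV6 []string // host reports no IPv6 addresses at all
	fmt.Println(connectedBuggy(noV6, nil)) // true: spurious IPv6 connectivity
	fmt.Println(connectedFixed(noV6, nil)) // false
}
```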
Contributor

Nice. Good catch

internal/network/connectivity_groups.go (outdated review thread, resolved)
@CrystalChun
Contributor Author

/test edge-e2e-metal-assisted-none

@CrystalChun
Contributor Author

CrystalChun commented Mar 21, 2024

Looks like the connectivity groups are accurate now:

```json
"connectivity_majority_groups": {
  "192.168.127.0/24": ["7f4b73b0-14d4-4695-a185-2b62666ee175", "b1860fd5-3a8a-45c9-bcc5-8b41a7ed21be", "b8e6124e-ff3c-4958-86ae-f326607f56df"],
  "192.168.145.0/24": [],
  "IPv4": ["366d5130-0dd2-4543-b20b-9c518dbecc9e", "6593e35f-4ee2-41ad-bcb7-f88bf14bdd1b", "7f4b73b0-14d4-4695-a185-2b62666ee175", "b1860fd5-3a8a-45c9-bcc5-8b41a7ed21be", "b8e6124e-ff3c-4958-86ae-f326607f56df"],
  "IPv6": []
}
```

@CrystalChun
Contributor Author

> This change is incorrect because for multi-node UMN there are no machine-networks. Even if we add it and it works we cannot force it because this change is not backward compatible.
> If you still want it, you might need to either do not filter anything if there are no machine-networks, or create a new factory for this, and use it only in cases when there are available machine-networks. In this case it should not be used for UMN.

Modified the filter to only be used if there are machine networks!
https://github.com/openshift/assisted-service/pull/6094/files#diff-e0b0e8e6b4de6465aa429384eadda9ed02b2ceb1300e6b95a2f89b8a0858f658R486

@CrystalChun CrystalChun marked this pull request as ready for review March 21, 2024 17:44
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 21, 2024
@CrystalChun
Contributor Author

/retest


openshift-ci bot commented Mar 22, 2024

@CrystalChun: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@zaneb
Member

zaneb commented Mar 27, 2024

> In general the concept is that connectivity checks all possible connections.
> The belongs-to-majority-group checks if there is possible connections between hosts.

Right now it actually checks that there are no impossible connections between hosts. That's a very high bar.

> @zaneb are you sure that the requested change does not allow potential failures to occur?

Failures could occur if the actual routes set up don't result in traffic going over the network explicitly specified as the machine network, and also don't result in a successful connection over the networks the traffic is actually sent over. But as you said, in standard assisted installations the machine network is not provided anyway. So this change will only affect ABI.

@zaneb
Member

zaneb commented Mar 28, 2024

It appears that when UMN is enabled (so no VIPs) the interface used for the Node IP is the one that has the default route.
If we had the full routing information we could limit to that interface and not have to rely on the user input being correct. Do we?

@ori-amizur
Contributor

> It appears that when UMN is enabled (so no VIPs) the interface used for the Node IP is the one that has the default route. If we had the full routing information we could limit to that interface and not have to rely on the user input being correct. Do we?

I believe it needs to be tested

@ori-amizur
Contributor

> Right now it actually checks that there are no impossible connections between hosts. That's a very high bar.

It is all a matter of what OCP dictates.

> Failures could occur if the actual routes set up don't result in traffic going over the network explicitly specified as the machine network, and also don't result in a successful connection over the networks the traffic is actually sent over.

There might be other reasons for connectivity failures. Besides, the edge cases need to be tested in order to verify that this change doesn't lead to installation failures.

> But as you said, in standard assisted installations the machine network is not provided anyway. So this change will only affect ABI.

We currently do not limit adding machine-networks in UMN. We are not aware if the invoker is ABI.

@CrystalChun
Contributor Author

/hold
This change doesn't actually act correctly with UMN and multiple machine networks defined

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 1, 2024
@CrystalChun CrystalChun marked this pull request as draft April 1, 2024 23:10
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 1, 2024
@CrystalChun
Contributor Author

CrystalChun commented Apr 1, 2024

Also there's currently a validation that would fail this scenario (UMN, multiple machine nets)

```go
if len(machineNetworks) > 1 {
	err := errors.Errorf("Single-stack cluster cannot contain multiple Machine Networks")
	return err
}
```

No more than one machine network can be defined if it's not dual-stack
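That validation can be illustrated with a small sketch. `validateMachineNetworks` and `isDualStack` are hypothetical stand-ins for the real assisted-service checks (which use their own types and `errors.Errorf` from pkg/errors); here dual-stack simply means both an IPv4 and an IPv6 machine network are present:

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// isDualStack is a rough stand-in: dual-stack means the machine networks
// include both an IPv4 and an IPv6 CIDR.
func isDualStack(cidrs []string) bool {
	var v4, v6 bool
	for _, c := range cidrs {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			continue
		}
		if ip.To4() != nil {
			v4 = true
		} else {
			v6 = true
		}
	}
	return v4 && v6
}

// validateMachineNetworks mirrors the quoted check: a non-dual-stack
// cluster may declare at most one machine network.
func validateMachineNetworks(machineNetworks []string) error {
	if !isDualStack(machineNetworks) && len(machineNetworks) > 1 {
		return errors.New("Single-stack cluster cannot contain multiple Machine Networks")
	}
	return nil
}

func main() {
	fmt.Println(validateMachineNetworks([]string{"192.168.127.0/24"}))                        // <nil>
	fmt.Println(validateMachineNetworks([]string{"192.168.127.0/24", "192.168.145.0/24"}))    // error (two single-stack networks)
	fmt.Println(validateMachineNetworks([]string{"192.168.127.0/24", "fd2e:6f44:5dd8::/64"})) // <nil>
}
```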

@zaneb
Member

zaneb commented Apr 4, 2024

> > Right now it actually checks that there are no impossible connections between hosts. That's a very high bar.
>
> It is all a matter of what OCP dictates.

If the nodes can talk to each other on their kubelet IPs then OCP will be happy.

Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • do-not-merge/hold: Indicates that a PR should not merge because someone has issued a /hold command.
  • do-not-merge/work-in-progress: Indicates that a PR should not merge because it is a work in progress.
  • jira/invalid-bug: Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • size/M: Denotes a PR that changes 30-99 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

4 participants