Add pod-to-pod strict mode tests #1947

Draft: wants to merge 4 commits into main
Conversation

@3u13r (Contributor) commented Sep 1, 2023

This PR implements the pod-to-pod strict mode tests in the cilium-cli test suite. Currently, there are similar tests inside the legacy Vagrant test suite (https://github.com/cilium/cilium/blob/836598a317565d4c53c678bb09173ae9fee5f54b/test/k8s/datapath_configuration.go#L387).

With this PR we could deprecate and remove the Vagrant tests, provided we add strict-mode versions to Cilium's conformance CI. The difference is that the Vagrant tests control the Kubernetes environment and Cilium configuration for each test, which makes it easier to cover different Cilium configurations.

My question is: if the Vagrant tests are "legacy", what is the best way to implement potentially disruptive tests that need to make changes to the K8s environment? It seems that this test suite was originally developed for tests that users can execute in their production environment without any disruption. I guess this is the discussion you are having right now: cilium/design-cfps#9 (comment).

Refactor parts of the test into reusable functions that can be called
from future tests as well.

Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
Now cilium-endpoint-slice and strict mode for WireGuard encryption
are detected.

Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
Previously only changes to the nodes were allowed.
Now, the test is also able to change the whole cluster's state.
Those tests should mainly be used on ephemeral test clusters and NOT
be executed in a production environment. This is also why the flag is
hidden.

Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
This implements the pod-to-pod encryption test as part of the CLI.
Previously, this test was part of the now-outdated Vagrant test suite
of cilium/cilium.

The test changes parts of the Cilium deployment and should therefore
only be executed in non-production environments. This is ensured by
guarding the test behind the hidden --include-unsafe-tests flag.

Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
@3u13r requested review from a team as code owners September 1, 2023 13:08
@brb self-requested a review September 1, 2023 13:21
@gandro (Member) left a comment

Thanks for the PR! I think --include-unsafe-tests is the right way to ensure those tests are not run by accident. Long-term, there will probably be another solution, but for now that's fine.

I do think that we should think about some other way to artificially delay IPCache propagation (e.g. by having some knob inside cilium-agent, e.g. a special test mode that delays the creation of CEPs or something), since the whole CES/Operator dance is potentially fragile and a rather roundabout way of achieving what we actually want. Having some test mode that delays IPCache updates could also be useful to test other policy related functionality (e.g. identity confusion or deny policy). But again, for now the approach taken in this PR is fine with me.
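
To make the idea concrete, here is a small self-contained Go sketch of such a knob. None of this is cilium-agent code: the ipcacheUpdater interface, the delayedUpdater wrapper and the test-only delay option are hypothetical names, and the sketch only shows the pattern of gating an artificial IPCache-update delay behind a test mode.

```go
package main

import (
	"fmt"
	"time"
)

// ipcacheUpdater is a hypothetical stand-in for whatever component applies
// IPCache updates inside the agent.
type ipcacheUpdater interface {
	Upsert(ip string, identity uint32)
}

// delayedUpdater wraps an updater and sleeps before forwarding each update
// when a test-only delay is configured, e.g. via a hidden agent option.
type delayedUpdater struct {
	inner ipcacheUpdater
	delay time.Duration // hypothetical test-only delay setting
}

func (d *delayedUpdater) Upsert(ip string, identity uint32) {
	if d.delay > 0 {
		time.Sleep(d.delay)
	}
	d.inner.Upsert(ip, identity)
}

type realUpdater struct{}

func (realUpdater) Upsert(ip string, identity uint32) {
	fmt.Printf("ipcache upsert: %s -> identity %d\n", ip, identity)
}

func main() {
	// With the knob set, the remote pod's entry arrives "late", which is
	// exactly the window a strict-mode test wants to observe.
	u := &delayedUpdater{inner: realUpdater{}, delay: 2 * time.Second}
	u.Upsert("10.0.1.23", 4242)
}
```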

@brb (Member) commented Sep 4, 2023

Thanks for the PR!

I wonder whether it's possible to implement these tests without causing disruptions to an existing Cilium installation. My main worry is that it might cause flakiness for other tests.

Thinking out loud: in the existing WG strict-mode test we stop cilium-operator and remove some artifacts (CiliumEndpoint CRs) to simulate an IPcache delay, so that for a remote CE there is no corresponding IPcache entry. This results in the WG strict feature dropping the packets, as the strict CIDR is set to the podCIDR.
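
For readers unfamiliar with that flow, here is a rough, self-contained Go sketch of the steps described above, written against client-go and the dynamic client rather than the cilium-cli test framework. The namespace "cilium-test", the kube-system/cilium-operator deployment and the endpoint name "echo-other-node" are placeholders chosen for illustration, not values taken from the PR.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	dyn := dynamic.NewForConfigOrDie(cfg)
	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system") // assumed namespace

	// 1. Scale cilium-operator down to 0 so deleted CiliumEndpoints are not recreated.
	scale, err := deployments.GetScale(ctx, "cilium-operator", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	savedReplicas := scale.Spec.Replicas
	scale.Spec.Replicas = 0
	if _, err := deployments.UpdateScale(ctx, "cilium-operator", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}

	// 2. Delete the CiliumEndpoint of the destination pod (names are placeholders).
	//    With the operator stopped there is no corresponding IPcache entry on the
	//    remote side, so WireGuard strict mode should drop traffic to that pod,
	//    since the strict CIDR is set to the podCIDR.
	cepGVR := schema.GroupVersionResource{Group: "cilium.io", Version: "v2", Resource: "ciliumendpoints"}
	if err := dyn.Resource(cepGVR).Namespace("cilium-test").Delete(ctx, "echo-other-node", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}

	// 3. ...run the connectivity probe here and expect it to fail...

	// 4. Restore the operator so the endpoint and IPcache entry come back.
	scale, err = deployments.GetScale(ctx, "cilium-operator", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = savedReplicas
	if _, err := deployments.UpdateScale(ctx, "cilium-operator", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("cilium-operator scale restored")
}
```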

The easiest way to simulate such an IPcache delay is to implement the test as a control-plane e2e test, which is currently being discussed. TL;DR: start two instances of cilium-agent in two connected network namespaces from Go, and then mock / modify the IPcache update path of one of the agents to introduce delays (cc @margamanterola / @joamaki / @ti-mo).

I'm not sure we are going to have such control-plane tests available soon, so we need to decide whether to merge this PR and risk flakiness, or live for now with the legacy Vagrant / Ginkgo tests. I'm leaning towards the latter.

@margamanterola (Member) commented

@brb Could you elaborate on the flakiness risk? Is there any way we could avoid that risk that doesn't require the control plane tests?

@brb (Member) commented Sep 5, 2023

@margamanterola The test needs to scale the cilium-operator replica count down to 0, then remove some CRs, run the tests, and bring the cilium-operator replica count back to its previous value. The problem I see is that some CLI test cases might be missing proper wait statements, which could result in flakiness (missing waits are notorious in Cilium / the CLI). Another issue is that we assume the CLI connectivity tests won't modify Cilium installations. But if you think that control-plane e2e won't happen soon, then we should consider that option.
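
As an illustration of the kind of wait statement meant here, a small self-contained sketch using wait.PollUntilContextTimeout from apimachinery (available in recent releases); the namespace, deployment name and expected replica count are assumptions, and this is not the helper used in the PR itself.

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForOperator blocks until the cilium-operator deployment reports the
// expected number of ready replicas, or the timeout expires.
func waitForOperator(ctx context.Context, cs kubernetes.Interface, want int32) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			dep, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "cilium-operator", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return dep.Status.ReadyReplicas == want, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForOperator(context.Background(), cs, 2); err != nil {
		log.Fatalf("cilium-operator did not become ready again: %v", err)
	}
	log.Println("cilium-operator restored")
}
```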

@3u13r (Contributor, Author) commented Sep 5, 2023

I see potential sources of flakiness in:

  • the test not resetting the operator scale correctly
  • not being able to execute tests concurrently, which I guess is also not possible for the other network policy tests. The tests in this PR additionally ignore any K8s namespace boundary.

If waits are missing in the other tests, we can always add verification that everything was restored correctly. We kind of do this already, since we first test that the traffic is blocked, and then also test that the connection succeeds after the operator has been scaled up again and the endpoint we previously deleted has been restored.

To summarize: we delete one endpoint and explicitly wait for its regeneration. We can still have missing endpoints but I don't think the risk is much higher than when e.g. new pods are spawned.

I mostly ported the test:

  • to migrate away from the legacy tests should they be shut down in the future
  • to get a feel for how to develop strict-mode tests, in order to develop a node-to-node strict mode via TDD

If there's another test framework/suite that better fits the requirements of these tests, I'm happy to wait.

@derailed (Contributor) left a comment

@3u13r Nice work! Thank you for this PR! Just a few small picks...

}

func (s *podToPodStrictEncryption) Run(ctx context.Context, t *check.Test) {
ct := t.Context()

nit: do we need this tmp var?

ct := t.Context()
client := ct.RandomClientPod()

var server check.Pod

nit: Better to use a pointer for this so we can check the outcome of the loop, i.e. what happens if we could not locate a matching pod?
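
A self-contained sketch of the suggested pattern, using a stand-in pod type instead of check.Pod so it runs on its own; all names here are illustrative.

```go
package main

import "fmt"

type pod struct {
	Name     string
	NodeName string
}

func main() {
	pods := []pod{{Name: "echo-a", NodeName: "node-1"}, {Name: "echo-b", NodeName: "node-2"}}
	clientNode := "node-1"

	// Use a pointer so "no match found" is distinguishable from a zero-valued pod.
	var server *pod
	for i := range pods {
		if pods[i].NodeName != clientNode {
			server = &pods[i]
			break
		}
	}
	if server == nil {
		fmt.Println("fatal: could not locate a server pod on a different node")
		return
	}
	fmt.Println("selected server pod:", server.Name)
}
```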

return savedReplicas
}
waitForIPCacheEntry := func(clientPod, dstPod *check.Pod) {
timeout := time.After(20 * time.Second)

nit: Perhaps using consts here for the timeout and sleep durations would make it easier to adjust in the future?

break
}

time.Sleep(500 * time.Millisecond)

nit: Per above... constant?
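
A minimal sketch of what the two nits above suggest; the constant names are hypothetical and not taken from the PR.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical names; the idea is to pull the magic durations out of
// waitForIPCacheEntry so they can be adjusted in one place.
const (
	ipCacheWaitTimeout       = 20 * time.Second
	ipCacheWaitRetryInterval = 500 * time.Millisecond
)

func main() {
	fmt.Println(ipCacheWaitTimeout, ipCacheWaitRetryInterval)
}
```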

@brb (Member) commented Sep 7, 2023

@3u13r Are you in Cilium's Slack? Let's discuss it in #development channel (I am martynas there).

})
}

func testNoTrafficLeak(ctx context.Context, t *check.Test, s check.Scenario,
// PodToPodEncryption is a test case which checks the following:
// - There is a connectivity between pods on different nodes when any

Suggested change
// - There is a connectivity between pods on different nodes when any
// - There is connectivity between pods on different nodes when any

// PodToPodEncryption is a test case which checks the following:
// - There is a connectivity between pods on different nodes when any
// encryption mode is on (either WireGuard or IPsec).
// - No unencrypted packet is leaked.

Could you clarify what is meant by 'leaked' here?

@brb marked this pull request as draft September 11, 2023 14:15
@brb removed their request for review December 4, 2023 09:22