Declarative test setup #9283
Conversation
Visit the preview URL for this PR (updated for commit f784298): https://gloo-edge--pr9283-npolshak-declarative-7t317rco.web.app (expires Tue, 16 Apr 2024 16:29:47 GMT) 🔥 via Firebase Hosting GitHub Action 🌎 Sign: 77c2b86e287749579b7ff9cadb81e099042ef677
@@ -921,6 +921,18 @@ build-test-chart: ## Build the Helm chart and place it in the _test directory
	helm package --destination $(TEST_ASSET_DIR) $(HELM_DIR)
	helm repo index $(TEST_ASSET_DIR)

# Sets up environment for running e2e tests
.PHONY: setup-declarative-env
I had imagined that the benefit of a declarative environment is that the config we supply/declare is what defines what should be run. What if, instead of having env variables that gate the steps, the processor of the config just says "if docker is set to true, build the images", "if the chart needs to be built, build the chart", etc.? That way we isolate complexity to this declarative setup.
Yeah, the separate docker/helm/declarative-setup targets are how GME handles it. I think it's possible to have the declarative setup also build the images and package the helm chart, but it's a little tricky: we'd want it to be identical to what we run in CI, so we'd want to somehow reuse the make targets. I don't think it makes sense to hard-code all the image/helm chart paths in the declarative setup code, since then they could drift.
test/setup/cmd/setup.go
Outdated
}

var istioctlBinary string
if os.Getenv(defaults.IstioctlVersionEnv) != "" {
How do we separate what is defined in the config and what is defined in the command line? I would have imagined that we could instead have the config support a definition of the istio version and we could parse it from there. Is there a reason we wouldn't want to do that?
Personally, I think having Istio set in the env makes it easier to switch between versions with just export ISTIOCTL_VERSION=1.18.2. But maybe it doesn't make as much sense on the edge side if we're not switching between Istio versions that often?
I like having often changed values (cluster names, Istio versions, image versions, etc.) set as env values, and then values that are not changed as often (test images, etc.) hard coded.
	kubeConfig string
)

if cluster.KindConfig != nil {
More and more I kind of think we should split environment setup into 2 separate pieces of config:
- My cluster(s) definition
- My infrastructure setup (helm charts, additional services)
I think this will make it easier to support non-kind environments, and also allow devs to easily re-use cluster that may exist.
The declarative setup does already re-use clusters that exist. If the KindConfig is not set, it'll just reuse the cluster with the same name from your kubeconfig. Eventually I think we'd want to add support for loading images built from source to remote registries (AWS, GCP, etc.), but if you have your kubeconfig pointing to a real cluster, this setup should still work (if you've manually uploaded the images)!
Having separate configs makes it a little harder to read the config for multi-cluster setups (that's why we have it in one file in GME: https://github.com/solo-io/gloo-mesh-enterprise/blob/main/test/gloo-core/gloo-core-multi-cluster.yaml#L8). Maybe on the edge side it makes more sense to separate the files, but I think that would still make it tricky to test multi-cluster fed setups.
test/setup/helpers/helpers.go
Outdated
}

// Merge maps, merge priority is left to right (i.e. right maps will overwrite left maps)
func MergeValueMaps(maps ...map[string]interface{}) map[string]interface{} {
I think these utilities could be defined outside of our test package, and in a top level utils package, what do you think?
Yep agreed! The merge functionality is pretty useful!
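For context, a minimal sketch of what such a merge helper could look like. This is an assumption based only on the doc comment above ("merge priority is left to right"), not the PR's actual implementation; the recursive handling of nested maps is how Helm-style value merges typically behave.

```go
package main

import "fmt"

// MergeValueMaps merges maps left to right: keys in later (right) maps
// overwrite keys in earlier (left) maps, and nested maps are merged
// recursively rather than replaced wholesale. A hedged sketch only.
func MergeValueMaps(maps ...map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for _, m := range maps {
		for k, v := range m {
			// If both sides are maps, recurse so sibling keys survive.
			if nested, ok := v.(map[string]interface{}); ok {
				if existing, ok := out[k].(map[string]interface{}); ok {
					out[k] = MergeValueMaps(existing, nested)
					continue
				}
			}
			out[k] = v
		}
	}
	return out
}

func main() {
	defaults := map[string]interface{}{
		"gloo": map[string]interface{}{"replicas": 1, "logLevel": "info"},
	}
	overrides := map[string]interface{}{
		"gloo": map[string]interface{}{"replicas": 3},
	}
	// → map[gloo:map[logLevel:info replicas:3]]
	fmt.Println(MergeValueMaps(defaults, overrides))
}
```

Being side-effect-free (it never mutates its inputs) is what makes this kind of helper a good candidate for a top-level utils package.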
var components []string
var lock sync.Mutex

func ListTimers() {
In our codegen library, we have https://github.com/solo-io/solo-kit/blob/main/pkg/code-generator/metrics/metrics.go. Perhaps we could re-use it?
Or we can move these to the metrics util? Do you have an example that uses the solo-kit metrics? All of the examples seem to initialize the metrics aggregator in init() and then call it only once per operation like here? Versus we want to time it per cluster:
timerFn := helpers.TimerFunc((fmt.Sprintf("[%s] %s helm installation", cluster.GetName(), name)))
defer timerFn()
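Putting the fragments above together, here is a hedged sketch of how a per-cluster timer like this could work (the `components`/`lock` names come from the diff; the body of `TimerFunc` is an assumption):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var components []string
var lock sync.Mutex

// TimerFunc returns a closure that, when invoked (typically via defer),
// records how long the named step took. Sketch only, not the PR's code.
func TimerFunc(name string) func() {
	start := time.Now()
	return func() {
		lock.Lock()
		defer lock.Unlock()
		components = append(components, fmt.Sprintf("%s: %s", name, time.Since(start)))
	}
}

// ListTimers prints every recorded timing, e.g. at the end of setup.
func ListTimers() {
	lock.Lock()
	defer lock.Unlock()
	for _, c := range components {
		fmt.Println(c)
	}
}

func main() {
	done := TimerFunc("[cluster-1] gloo helm installation")
	time.Sleep(5 * time.Millisecond)
	done()
	ListTimers()
}
```

Because each call to `TimerFunc` captures its own start time, the same helper can time many clusters concurrently, which is the per-cluster use case described above.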
Logger *zap.SugaredLogger
}

func (o Opts) InstallChart(ctx context.Context, chart *types.Chart, cluster *kubernetes.Cluster) error {
A lot of this mirrors our CLI installer (which uses helm under the hood): https://github.com/solo-io/gloo/blob/main/projects/gloo/cli/pkg/cmd/install/installer.go. I think we should try to consolidate these as much as possible, and ideally the logic for the installer should live in the CLI, and we can just invoke a top-level go function from here.
I don't think so; this is installing any chart defined in the config file, so it needs to be more flexible than the CLI installer. If I wanted to install GME or Otel alongside Edge, this helm installer would work for that.
The user-facing glooctl CLI tool depends on all of these imports:
"github.com/solo-io/gloo/install/helm/gloo/generate"
"github.com/solo-io/gloo/pkg/cliutil"
"github.com/solo-io/gloo/pkg/cliutil/helm"
"github.com/solo-io/gloo/pkg/version"
"github.com/solo-io/gloo/projects/gloo/cli/pkg/cmd/options"
"github.com/solo-io/gloo/projects/gloo/cli/pkg/constants"
"github.com/solo-io/gloo/projects/gloo/cli/pkg/helpers"
This file is agnostic to any gloo dependencies or constants and is only used for test env setup. It's not meant to be a released cli tool.
	"github.com/solo-io/go-utils/contextutils"
)

func Install(
Should this be defined in the Istio commands of our CLI: https://github.com/solo-io/gloo/blob/main/projects/gloo/cli/pkg/cmd/istio/root.go? Again, we can just call the function we need instead of relying on the CLI artifact, but it might allow us to easily re-use code.
I'd say no, mostly because the istioctl CLI tool should be responsible for installing Istio. Even on the gloo-mesh side, meshctl doesn't install Istio. I think it would give glooctl too much responsibility if we owned the Istio installation, and it would be more work to maintain.
Having this as a non-user-facing testing tool gives us flexibility, but doesn't require that we maintain it in a real released artifact (so we don't have to test across Istio versions, etc.)
Closing in favor of new framework
The declarative environment builder tool can set up Kubernetes test cluster(s) with IstioOperators, Helm charts, and any applications required for testing. It is based on the declarative setup tool used in GME.
How do I get started?
A make target is provided to run the setup. A config file needs to be provided via the CONFIG variable. To test the simple gloo gateway setup:
This can also be run directly with:
Make sure the images and helm chart are present before running the go command directly.
What happens if I already have a (partial) setup?
Running the declarative setup will not automatically fail if you have different resources already set up. This is slightly different from the regression tests, which require you to set SKIP_TEARDOWN or SKIP_INSTALL. For example, the setup will run helm upgrade -i and upgrade the existing installation.
How do I write my own config?
Configuration is typically provided via a yaml file. Usage is usually coupled with make targets, since a lot of default environment variables are defined there.
The configuration file is broken into a list of clusters, each with a distinct set of capabilities:
- Istio installation uses the ISTIO_VERSION environment variable, or the installed istioctl binary if it is not set.
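Since the full schema isn't shown here, the shape of such a config can only be sketched. The fragment below is purely illustrative: every field name is an assumption, grounded only in the capabilities mentioned in this PR (a list of clusters, an optional KindConfig, Istio via operator, and arbitrary Helm charts).

```yaml
# Hypothetical config sketch -- field names are illustrative, not the PR's schema.
clusters:
  - name: kind-cluster-1
    # If kindConfig is omitted, the cluster with this name from your
    # kubeconfig is reused instead of creating a new kind cluster.
    kindConfig:
      nodeImage: kindest/node:v1.28.0
    istioOperator:
      # Version comes from ISTIO_VERSION, falling back to the local istioctl.
      profile: minimal
    charts:
      - name: gloo
        namespace: gloo-system
        values:
          gloo:
            logLevel: debug
```

A multi-cluster setup would simply add more entries under `clusters`, which is the readability argument made above for keeping everything in one file.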