Declarative test setup #9283

Closed
wants to merge 46 commits into from

Conversation

npolshakova
Contributor

@npolshakova commented Mar 28, 2024

The declarative environment builder tool can set up Kubernetes test cluster(s) with IstioOperators, Helm charts, and any applications required for testing. This is based on the declarative setup tool used in GME.

How do I get started?

A make target is provided to run the setup. A config file needs to be provided via the CONFIG variable.

To test the simple gloo gateway setup:

CONFIG=test/setup/example_configs/gloo-gateway-setup.yaml make setup-declarative-env

This can also be run directly with:

go run ./test/setup setup -c test/setup/example_configs/gloo-gateway-setup.yaml

Make sure the images and helm chart are present before running the go command directly.

What happens if I already have a (partial) setup?

Running the declarative setup will not automatically fail if you already have different resources set up. This is slightly different from the regression tests, which require you to set SKIP_TEARDOWN or SKIP_INSTALL.

For example:

  1. If you have a kind cluster already running, the declarative setup will use the existing kind cluster.
  2. If you already have a namespace, the declarative setup will use the existing namespace, apply any labels defined in the config file, and continue.
  3. If you already have Gloo installed, the declarative setup will run helm upgrade -i and upgrade the existing installation.

How do I write my own config?

Configuration is provided via a YAML file. Usage is usually coupled with make targets,
since many of the default environment variables are defined there.

The configuration file is broken into a list of clusters, each with a distinct set of capabilities (a fuller example is sketched after the field descriptions below):

clusters:
    - name: management-cluster
      management: true
      kindConfig:
        ...
      istioOperators:
        ...
      charts:
        ...
      namespaces:
        ...
      apps:
        ...
      images:
        ...
  • kindConfig: The configuration for the KinD cluster.
  • istioOperators: A list of IstioOperator configs to install into the cluster. Installation uses the istioctl version set by the ISTIO_VERSION environment variable, or the locally installed istioctl binary if the variable is not set.
  • charts: Helm charts to install into the cluster.
    • name: name of the chart.
    • namespace: namespace to install the chart into.
    • path: if set, the chart is installed from the local filesystem. Otherwise, it is installed from the configured Helm repo.
    • values: values to pass to the chart.
  • namespaces: namespaces to create in the cluster. Labels can be specified on each namespace.
  • apps: applications to install into the cluster. Follows the Kubernetes resource APIs and includes Deployments, ServiceAccounts, Services, ConfigMaps, etc.
  • images: images to load into the cluster. If an image override is defined on the Helm chart, the setup command will also load that image into the kind cluster, so images only need to be defined in one place.
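
For illustration, a fuller single-cluster config might look like the sketch below. The chart path, image tag, namespace labels, and IstioOperator spec are placeholders (an apps section is omitted), and the exact shape of each field should be checked against the files in test/setup/example_configs:

clusters:
    - name: management-cluster
      management: true
      kindConfig:
        kind: Cluster
        apiVersion: kind.x-k8s.io/v1alpha4
      istioOperators:
        - apiVersion: install.istio.io/v1alpha1
          kind: IstioOperator
          metadata:
            name: example-istio
            namespace: istio-system
          spec:
            profile: minimal
      namespaces:
        - name: gloo-system
          labels:
            istio-injection: enabled
      charts:
        - name: gloo
          namespace: gloo-system
          path: _test/gloo-test-chart.tgz
          values:
            gateway:
              enabled: true
      images:
        - quay.io/solo-io/gloo:1.0.0-ci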

@npolshakova requested a review from a team as a code owner March 28, 2024 14:53
@github-actions bot added the keep pr updated (signals bulldozer to keep pr up to date with base branch) label Mar 28, 2024

github-actions bot commented Mar 28, 2024

Visit the preview URL for this PR (updated for commit f784298):

https://gloo-edge--pr9283-npolshak-declarative-7t317rco.web.app

(expires Tue, 16 Apr 2024 16:29:47 GMT)

🔥 via Firebase Hosting GitHub Action 🌎

Sign: 77c2b86e287749579b7ff9cadb81e099042ef677

@@ -921,6 +921,18 @@ build-test-chart: ## Build the Helm chart and place it in the _test directory
helm package --destination $(TEST_ASSET_DIR) $(HELM_DIR)
helm repo index $(TEST_ASSET_DIR)

# Sets up environment for running e2e tests
.PHONY: setup-declarative-env
Contributor

I had imagined that the benefit of a declarative environment is that the config we supply/declare is what defines what should be run. What if, instead of having env variables that gate the steps, the processor of the config just says "if docker is set to true, build the images", "if the chart needs to be built, build the chart", etc.? That way we isolate the complexity to this declarative setup.

Contributor Author

Yeah, the separate docker/helm/declarative-setup is how GME handles it. I think it's possible to have the declarative setup also build the images and package the Helm chart, but it's a little tricky because I think we'd want it to be identical to what we run in CI, so we'd want to somehow reuse the make targets. I don't think it makes sense to hard-code all the image/Helm chart paths in the declarative setup code, since then that could drift.

}

var istioctlBinary string
if os.Getenv(defaults.IstioctlVersionEnv) != "" {
Contributor

How do we separate what is defined in the config from what is defined on the command line? I would have imagined that the config could instead support a definition of the Istio version and we could parse it from there. Is there a reason we wouldn't want to do that?

Contributor Author

Personally, I think having the Istio version set in the env makes it easier to switch between versions with just export ISTIOCTL_VERSION=1.18.2. But maybe it doesn't make as much sense on the edge side if we're not switching between Istio versions that often?

I like having often-changed values (cluster names, Istio versions, image versions, etc.) set as env values, and values that don't change as often (test images, etc.) hard-coded.

kubeConfig string
)

if cluster.KindConfig != nil {
Contributor

More and more I kind of think we should split environment setup into 2 separate pieces of config:

  1. My cluster(s) definition
  2. My infrastructure setup (helm charts, additional services)

I think this will make it easier to support non-kind environments, and also allow devs to easily re-use clusters that may already exist.

Contributor Author

The declarative setup does already re-use clusters that exist. If the KindConfig is not set, it'll just reuse the cluster with the same name from your kubeconfig. Eventually I think we'd want to add support for loading images built from source into remote registries (AWS, GCP, etc.), but if you have your kubeconfig pointing to a real cluster, this setup should still work (if you've manually uploaded the images)!

Contributor Author

Having separate configs makes it a little harder to read the config for multi-cluster setups (that's why we have it in one file in GME: https://github.com/solo-io/gloo-mesh-enterprise/blob/main/test/gloo-core/gloo-core-multi-cluster.yaml#L8). Maybe on the edge side it makes more sense to separate the files, but I think that would still make it tricky to test multi-cluster fed setups.

}

// Merge maps, merge priority is left to right (i.e. right maps will overwrite left maps)
func MergeValueMaps(maps ...map[string]interface{}) map[string]interface{} {
Contributor

I think these utilities could be defined outside of our test package, in a top-level utils package. What do you think?

Contributor Author

Yep agreed! The merge functionality is pretty useful!
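
For context, the comment on MergeValueMaps above describes a left-to-right merge where later maps overwrite earlier ones. A minimal sketch of such a utility could look like the following; the recursive handling of nested maps here is illustrative and may differ from the implementation in this PR:

// MergeValueMaps merges maps left to right: keys from later maps overwrite
// keys from earlier maps.
func MergeValueMaps(maps ...map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for _, m := range maps {
		for k, v := range m {
			// Illustrative: if both the existing and the new value are maps,
			// merge them recursively instead of overwriting wholesale.
			if existing, ok := out[k].(map[string]interface{}); ok {
				if next, ok2 := v.(map[string]interface{}); ok2 {
					out[k] = MergeValueMaps(existing, next)
					continue
				}
			}
			out[k] = v
		}
	}
	return out
}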

var components []string
var lock sync.Mutex

func ListTimers() {
Contributor

In our codegen library, we have https://github.com/solo-io/solo-kit/blob/main/pkg/code-generator/metrics/metrics.go. Perhaps we could re-use it?

Contributor Author

Or we could move these to the metrics util? Do you have an example that uses the solo-kit metrics? All of the examples seem to initialize the metrics aggregator in init() and then call it only once per operation, like here, whereas we want to time it per cluster:

timerFn := helpers.TimerFunc((fmt.Sprintf("[%s] %s helm installation", cluster.GetName(), name)))
defer timerFn()
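
For reference, helpers.TimerFunc as used above returns a function that is deferred to record the elapsed time of a single operation. A minimal sketch of that shape (using the standard-library log and time packages; the real helper in this PR may record metrics differently) could be:

import (
	"log"
	"time"
)

// TimerFunc returns a function that, when invoked (typically via defer),
// logs how long the labeled operation took.
func TimerFunc(label string) func() {
	start := time.Now()
	return func() {
		log.Printf("%s took %s", label, time.Since(start))
	}
}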

Logger *zap.SugaredLogger
}

func (o Opts) InstallChart(ctx context.Context, chart *types.Chart, cluster *kubernetes.Cluster) error {
Contributor

A lot of this mirrors our CLI installer (which uses Helm under the hood): https://github.com/solo-io/gloo/blob/main/projects/gloo/cli/pkg/cmd/install/installer.go. I think we should try to consolidate these as much as possible; ideally the logic for the installer should live in the CLI, and we can just invoke a top-level Go function from here.

Contributor Author

I don't think so; this installs any chart defined in the config file. It should be more flexible than the CLI installer. If I wanted to install GME or OTel alongside Edge, this Helm installer would work for that.

The user-facing glooctl CLI tool depends on all of these imports:

	"github.com/solo-io/gloo/install/helm/gloo/generate"
	"github.com/solo-io/gloo/pkg/cliutil"
	"github.com/solo-io/gloo/pkg/cliutil/helm"
	"github.com/solo-io/gloo/pkg/version"
	"github.com/solo-io/gloo/projects/gloo/cli/pkg/cmd/options"
	"github.com/solo-io/gloo/projects/gloo/cli/pkg/constants"
	"github.com/solo-io/gloo/projects/gloo/cli/pkg/helpers"

This file is agnostic to any Gloo dependencies or constants and is only used for test env setup. It's not meant to be a released CLI tool.

"github.com/solo-io/go-utils/contextutils"
)

func Install(
Contributor

Should this be defined in the Istio commands of our CLI: https://github.com/solo-io/gloo/blob/main/projects/gloo/cli/pkg/cmd/istio/root.go? Again, we can just call the function we need instead of relying on the CLI artifact, but it might allow us to easily re-use code.

Contributor Author

I'd say no, mostly because the istioctl CLI tool should be responsible for installing Istio. Even on the gloo-mesh side, meshctl doesn't install Istio. I think it would give glooctl too much responsibility if we owned the Istio installation, and it would be more work to maintain.

Having this as a non-user-facing testing tool gives us flexibility, but doesn't require us to maintain it in a real released artifact (so we don't have to test across Istio versions, etc.).

@sam-heilbron added the work in progress (signals bulldozer to keep pr open, don't auto-merge) label and removed the keep pr updated (signals bulldozer to keep pr up to date with base branch) label Apr 19, 2024
@npolshakova
Contributor Author

Closing in favor of new framework
