Share Outputs from AWS/AZURE Components with other Components #857

Open
nichtraunzer opened this issue Oct 13, 2022 · 9 comments
@nichtraunzer
Member

nichtraunzer commented Oct 13, 2022

Recently we were contacted by a use-case developer asking for the following feature:

For instance, we have a front-end (an EDP-managed OpenShift Angular app) that needs to know the URL of our back-end API (an API Gateway deployed via the AWS Quickstarter (Terraform)).

The current cloud components do not support that feature, but I'd like to propose the following solution:
Assume we have an ODS project X, which contains one AWS component called be (backend), a frontend component fe, and an mro component (release manager).

  • Since the pipeline for ods-infra components will be executed for each environment (dev/test/prod) individually, we can write component outputs into a ConfigMap during each deployment stage.
  • To preserve uniqueness, the ConfigMap entry shall follow a pattern: the component's name plus a suffix, e.g. be-outputs for our example. This entry will contain the outputs in JSON format (a sketch of such a write step follows below).
  • We only need to take care that the ods-infra component pipeline is executed before the frontend and mro components.
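
A minimal sketch of what that write step could look like, assuming the deploy stage has oc access to the target namespace (the component name be and the PROJECT/ENVIRONMENT variables are placeholders):

    # Hypothetical write step after `terraform apply`: publish the
    # component's outputs into a ConfigMap named <component>-outputs.
    terraform output -json > be-outputs.json
    oc create configmap be-outputs \
      --from-file=outputs.json=be-outputs.json \
      -n "${PROJECT}-${ENVIRONMENT}" \
      --dry-run=client -o yaml | oc apply -f -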

What do you think? Any suggestions or alternative solutions?

(Image attached for better illustration of the proposed setup.)

@nichtraunzer nichtraunzer added the enhancement New feature or request label Oct 13, 2022
@nichtraunzer nichtraunzer self-assigned this Oct 13, 2022
@metmajer
Member

Love this, @nichtraunzer. Use of ConfigMaps makes absolute sense. Ideally, Terraform outputs marked as sensitive would go into a Secret vs. a ConfigMap.

@nichtraunzer
Member Author

Good point ... managing sensitive information ... that requires additional brainpower 🙈

@metmajer
Member

metmajer commented Oct 13, 2022

@nichtraunzer I assume we would only need to separate the sensitive from the non-sensitive outputs (an output is marked accordingly when the sensitive property is set on it in outputs.tf) into a Secret and a ConfigMap, respectively. I suggest playing around with the jq CLI a bit for this. Maybe something for Alberto?
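
A quick sketch of what that jq split could look like (file names are placeholders; terraform output -json marks each output with a sensitive flag):

    # Split terraform outputs by their `sensitive` flag into two files:
    # config.json feeds a ConfigMap, secret.json feeds a Secret.
    terraform output -json > outputs.json
    jq 'with_entries(select(.value.sensitive | not) | .value = .value.value)' outputs.json > config.json
    jq 'with_entries(select(.value.sensitive) | .value = .value.value)' outputs.json > secret.json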

@albertpuente

I like the idea. Does this mean, though, that BuildConfigs for components with dependencies will need to be changed by the users to mount the output ConfigMaps coming from preceding components?
As a dumber alternative, we could just have a configuration volume always mounted for the ods-infra BuildConfig that users can use to store I/O, and leave it to the users to decide on naming conventions and the process.

@metmajer
Member

@albertpuente volume mounts would also need to be part of a deployment descriptor. I like your idea, yet the only benefit I see is that a single shared volume could hold several files produced by various producers. However, this would require that such a volume does get mounted, otherwise the producer of a file would fail. No?

@albertpuente

If we are strictly speaking of the ods-infra quickstarter, it only comes with a BuildConfig in this case; there is no odsQuickstarterStageCreateOpenShiftResources in its Jenkinsfile (it does not create or use any other resource, such as a Deployment, as far as I can see).
As long as the volume claim is mounted as ReadWriteMany, it will be up to the different builds to use it properly.
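
A rough sketch of that shared-volume alternative, assuming a ReadWriteMany-capable storage class (all names are placeholders):

    # Hypothetical shared claim that ods-infra builds could mount for I/O.
    oc apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ods-infra-io
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    EOF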

@metmajer
Member

@albertpuente true! On the other hand, consumers would need to ingest the file(s) on the shared volume and apply the values at initialisation time. With a ConfigMap, we could have all configurations applied as environment variables, which would be in line with 12-factor app best practices. We should also consider ease of use.
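
For illustration, consuming such a ConfigMap as environment variables could be as simple as this (the deployment and ConfigMap names are placeholders):

    # Inject all keys of the be-outputs ConfigMap as env vars into the
    # fe deployment, in line with 12-factor configuration.
    oc set env deployment/fe --from=configmap/be-outputs -n "${PROJECT}-${ENVIRONMENT}"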

@tbugfinder
Contributor

"OT" - ACK also leverages ConfigMaps for tracking of resources.

@paulmayer
Contributor

paulmayer commented Sep 27, 2023

I had a need for this, so I implemented it today (if the solution sounds appealing overall, I'll see if I can find time to contribute it back to the QS template):

  1. Create a Helm chart in the QS (in /chart following ODS convention)

The chart contains:

  • a passthroughConfigmap resource with

    data:
      config.json: |-
    {{ .Files.Get .Values.configPassthrough.path | nindent 4 }}

  • a passthroughSecret resource with

    data:
      secret.json: |-
        {{ .Files.Get .Values.secretPassthrough.path | b64enc }}

  • a toggle in chart/values.yaml that can be used to switch the configmap/secret passthrough on or off:

    # Pass through non-critical values from
    # the terraform outputs via a file in configPassthrough.path
    configPassthrough:
      enabled: true
      path: outputs/config.json

    # Pass through sensitive values from
    # the terraform outputs via a file in secretPassthrough.path
    secretPassthrough:
      enabled: true
      path: outputs/secret.json
  2. A number of additional Makefile variables

    HELM_CHART_DIR := ./chart
    OUTPUT_DIR := outputs
    OUTPUT_NAME := output.json
    CONFIG_NAME := config.json
    SECRET_NAME := secret.json
    VALUES_NAME := values.json
    OPENSHIFT_NAMESPACE ?=
    COMPONENT_ID ?=
  • OUTPUT_DIR is where the terraform outputs are stored
  • OUTPUT_NAME is the name of the terraform output file in JSON format (terraform output -json)
  • CONFIG_NAME is the name under OUTPUT_DIR where the non-sensitive outputs are stored
  • SECRET_NAME is the name under OUTPUT_DIR where sensitive outputs are stored
  • VALUES_NAME is a reformatted key: value version of the terraform outputs that's passed as a helm values file (see more below; this is the main extension point where users can hook in)
  • OPENSHIFT_NAMESPACE and COMPONENT_ID are required to install/upgrade the helm release in the target namespace
  3. A modified make deploy target

Relevant bits:

	TF_IN_AUTOMATION=1 TF_WORKSPACE="$(TF_WORKSPACE)" terraform output -json > "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(OUTPUT_NAME)"
	cat "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(OUTPUT_NAME)" | jq 'to_entries | map(select(.value.sensitive == false)) | map({key: .key} + {value: .value.value}) | from_entries' > "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(CONFIG_NAME)"
	cat "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(OUTPUT_NAME)" | jq 'to_entries | map(select(.value.sensitive == true)) | map({key: .key} + {value: .value.value}) | from_entries' > "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(SECRET_NAME)"
	cat "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(OUTPUT_NAME)" | jq 'to_entries | map({key: .key} + {value: .value.value}) | from_entries' > "$(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(VALUES_NAME)"

This produces the terraform output and stores the JSON in $(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(OUTPUT_NAME). Subsequently, jq is used to produce JSON-formatted key: value mappings of the sensitive and non-sensitive content. $(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(VALUES_NAME) contains both the sensitive and non-sensitive information without the type/sensitive metadata (just key: value).
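
To illustrate the three derived files with made-up data (hypothetical outputs, not from a real project):

    # Input (terraform output -json):
    #   { "api_url": { "sensitive": false, "type": "string", "value": "https://api.example.com" },
    #     "api_key": { "sensitive": true,  "type": "string", "value": "s3cr3t" } }
    # config.json:  { "api_url": "https://api.example.com" }
    # secret.json:  { "api_key": "s3cr3t" }
    # values.json:  { "api_url": "https://api.example.com", "api_key": "s3cr3t" }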

	HELM_DIFF_IGNORE_UNKNOWN_FLAGS=true helm diff upgrade \
		-n $(OPENSHIFT_NAMESPACE) \
		$(COMPONENT_ID) \
		$(HELM_CHART_DIR) \
		--install \
		--atomic \
		--no-color \
		--three-way-merge \
		--normalize-manifests \
		-f $(HELM_CHART_DIR)/values.yaml \
		-f $(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(VALUES_NAME) \
		--set configPassthrough.path=$(OUTPUT_DIR)/$(CONFIG_NAME) \
		--set secretPassthrough.path=$(OUTPUT_DIR)/$(SECRET_NAME)

	HELM_DIFF_IGNORE_UNKNOWN_FLAGS=true helm upgrade \
		-n $(OPENSHIFT_NAMESPACE) \
		$(COMPONENT_ID) \
		$(HELM_CHART_DIR) \
		--install \
		--atomic \
		-f $(HELM_CHART_DIR)/values.yaml \
		-f $(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(VALUES_NAME) \
		--set configPassthrough.path=$(OUTPUT_DIR)/$(CONFIG_NAME) \
		--set secretPassthrough.path=$(OUTPUT_DIR)/$(SECRET_NAME)

This produces a diff and then deploys the chart resources using helm. Importantly, we are passing -f $(HELM_CHART_DIR)/$(OUTPUT_DIR)/$(VALUES_NAME), which exposes all terraform outputs as values to helm. This means users can extend the chart to deploy any other resource in their OpenShift namespace.
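
For example, a user-added template in the chart could then reference any terraform output directly as a Helm value (api_url is a hypothetical output name):

    # In a user-added chart template, a terraform output named api_url
    # is available like any other value:
    apiUrl: {{ .Values.api_url }}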

  4. Wrap the infrastructure stage in withEnv

Unfortunately, we need to ensure that COMPONENT_ID and OPENSHIFT_NAMESPACE are set (they are available in the context but, I believe, not available to make targets unless the shared lib is updated accordingly):

    withEnv([
      "OPENSHIFT_NAMESPACE=${context.projectId}-${context.environment}",
      "COMPONENT_ID=${context.componentId}"
    ]) {
      odsComponentStageInfrastructure(context, [cloudProvider: 'AWS'])
    }

Limitations:

  • Obviously this implicitly relies on a SA with permissions to deploy in the respective namespaces (not sure whether that might be an issue with prod?)
  • It also relies on the jenkins-agent having all the required tools (e.g. helm) - not sure whether there are any plans to "slim down" the cloud agents
  • Probably there is a "leaner" way that also uses terraform to provision the secret/configmap (and potentially additional resources)?
  • Not atomic
