Terraform Kubernetes Engine Module

This module handles opinionated Google Cloud Platform Kubernetes Engine cluster creation and configuration with Node Pools, IP MASQ, Network Policy, etc. Beta features are enabled in this submodule. The resources/services/activations/deletions that this module will create/trigger are:

  • Create a GKE cluster with the provided addons
  • Create GKE Node Pool(s) with provided configuration and attach to cluster
  • Replace the default kube-dns configmap if stub_domains are provided
  • Activate network policy if network_policy is true
  • Add ip-masq-agent configmap with provided non_masquerade_cidrs if configure_ip_masq is true

Sub modules are provided for creating private clusters, beta private clusters, and beta public clusters as well. Beta sub modules allow for the use of various GKE beta features. See the modules directory for the various sub modules.
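
For example, switching from the beta public cluster to a private cluster mostly amounts to pointing the module source at the corresponding sub module and supplying its private-cluster inputs. The following is a minimal sketch; the enable_private_nodes, enable_private_endpoint, and master_ipv4_cidr_block input names are assumptions based on the private cluster sub modules and should be verified against that sub module's README for your version:

module "gke_private" {
  source            = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  project_id        = "<PROJECT ID>"
  name              = "gke-private-1"
  region            = "us-central1"
  network           = "vpc-01"
  subnetwork        = "us-central1-01"
  ip_range_pods     = "us-central1-01-gke-01-pods"
  ip_range_services = "us-central1-01-gke-01-services"

  # Private-cluster specific inputs (names assumed; check the sub module's variables.tf)
  enable_private_nodes    = true
  enable_private_endpoint = false
  master_ipv4_cidr_block  = "10.0.0.0/28"
}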

Compatibility

This module is meant for use with Terraform 0.12. If you haven't upgraded and need a Terraform 0.11.x-compatible version of this module, the last released version intended for Terraform 0.11.x is 3.0.0.

Usage

There are multiple examples included in the examples folder but simple usage is as follows:

module "gke" {
  source                     = "terraform-google-modules/kubernetes-engine/google//modules/beta-public-cluster"
  project_id                 = "<PROJECT ID>"
  name                       = "gke-test-1"
  region                     = "us-central1"
  zones                      = ["us-central1-a", "us-central1-b", "us-central1-f"]
  network                    = "vpc-01"
  subnetwork                 = "us-central1-01"
  ip_range_pods              = "us-central1-01-gke-01-pods"
  ip_range_services          = "us-central1-01-gke-01-services"
  http_load_balancing        = false
  horizontal_pod_autoscaling = true
  network_policy             = true
  istio                      = true
  cloudrun                   = true
  dns_cache                  = false

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "n1-standard-2"
      node_locations     = "us-central1-b,us-central1-c"
      min_count          = 1
      max_count          = 100
      local_ssd_count    = 0
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "project-service-account@<PROJECT ID>.iam.gserviceaccount.com"
      preemptible        = false
      initial_node_count = 80
    },
  ]

  node_pools_oauth_scopes = {
    all = []

    default-node-pool = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = true
    }
  }

  node_pools_metadata = {
    all = {}

    default-node-pool = {
      node-pool-metadata-custom-value = "my-node-pool"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = true
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}

Then run the following commands from the root folder:

  • terraform init to get the plugins
  • terraform plan to see the infrastructure plan
  • terraform apply to apply the infrastructure build
  • terraform destroy to destroy the built infrastructure

Inputs

Name Description Type Default Required
add_cluster_firewall_rules Create additional firewall rules bool "false" no
authenticator_security_group The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com string "null" no
basic_auth_password The password to be used with Basic Authentication. string "" no
basic_auth_username The username to be used with Basic Authentication. An empty value will disable Basic Authentication, which is the recommended configuration. string "" no
cloudrun (Beta) Enable CloudRun addon string "false" no
cluster_autoscaling Cluster autoscaling configuration. See more details object <map> no
cluster_ipv4_cidr The IP address range of the kubernetes pods in this cluster. Default is an automatically assigned CIDR. string "null" no
cluster_resource_labels The GCE resource labels (a map of key/value pairs) to be applied to the cluster map(string) <map> no
config_connector (Beta) Whether ConfigConnector is enabled for this cluster. bool "false" no
configure_ip_masq Enables the installation of IP masquerading, which is usually no longer required when using aliased IP addresses. IP masquerading uses a kubectl call, so when you have a private cluster, you will need access to the API server. string "false" no
create_service_account Defines if service account specified to run nodes should be created. bool "true" no
database_encryption Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: "ENCRYPTED"; "DECRYPTED". key_name is the name of a CloudKMS key. object <list> no
default_max_pods_per_node The maximum number of pods to schedule per node string "110" no
description The description of the cluster string "" no
disable_legacy_metadata_endpoints Disable the /0.1/ and /v1beta1/ metadata server endpoints on the node. Changing this value will cause all node pools to be recreated. bool "true" no
dns_cache (Beta) The status of the NodeLocal DNSCache addon. bool "false" no
enable_binary_authorization Enable BinAuthZ Admission controller string "false" no
enable_intranode_visibility Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network bool "false" no
enable_kubernetes_alpha Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days. bool "false" no
enable_network_egress_export Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic. bool "false" no
enable_resource_consumption_export Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. bool "true" no
enable_shielded_nodes Enable Shielded Nodes features on all nodes in this cluster bool "true" no
enable_vertical_pod_autoscaling Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it bool "false" no
firewall_inbound_ports List of TCP ports for admission/webhook controllers list(string) <list> no
firewall_priority Priority rule for firewall rules number "1000" no
gce_pd_csi_driver (Beta) Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. bool "false" no
grant_registry_access Grants the created cluster-specific service account the storage.objectViewer role. bool "false" no
horizontal_pod_autoscaling Enable horizontal pod autoscaling addon bool "true" no
http_load_balancing Enable HTTP load balancer addon bool "true" no
identity_namespace Workload Identity namespace. (Default value of enabled automatically sets project based namespace [project_id].svc.id.goog) string "enabled" no
initial_node_count The number of nodes to create in this cluster's default node pool. number "0" no
ip_masq_link_local Whether to masquerade traffic to the link-local prefix (169.254.0.0/16). bool "false" no
ip_masq_resync_interval The interval at which the agent attempts to sync its ConfigMap file from the disk. string "60s" no
ip_range_pods The name of the secondary subnet ip range to use for pods string n/a yes
ip_range_services The name of the secondary subnet range to use for services string n/a yes
issue_client_certificate Issues a client certificate to authenticate to the cluster endpoint. To maximize the security of your cluster, leave this option disabled. Client certificates don't automatically rotate and aren't easily revocable. WARNING: changing this after cluster creation is destructive! bool "false" no
istio (Beta) Enable Istio addon string "false" no
istio_auth (Beta) The authentication type between services in Istio. string "AUTH_MUTUAL_TLS" no
kalm_config (Beta) Whether KALM is enabled for this cluster. bool "false" no
kubernetes_version The Kubernetes version of the masters. If set to 'latest' it will pull latest available version in the selected region. string "latest" no
logging_service The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none string "logging.googleapis.com/kubernetes" no
maintenance_end_time Time window specified for recurring maintenance operations in RFC3339 format string "" no
maintenance_recurrence Frequency of the recurring maintenance window in RFC5545 format. string "" no
maintenance_start_time Time window specified for daily or recurring maintenance operations in RFC3339 format string "05:00" no
master_authorized_networks List of master authorized networks. If none are provided, disallow external access (except the cluster node IPs, which GKE automatically whitelists). object <list> no
monitoring_service The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none string "monitoring.googleapis.com/kubernetes" no
name The name of the cluster (required) string n/a yes
network The VPC network to host the cluster in (required) string n/a yes
network_policy Enable network policy addon bool "true" no
network_policy_provider The network policy provider. string "CALICO" no
network_project_id The project ID of the shared VPC's host (for shared vpc support) string "" no
node_metadata Specifies how node metadata is exposed to the workload running on the node string "GKE_METADATA_SERVER" no
node_pools List of maps containing node pools list(map(string)) <list> no
node_pools_labels Map of maps containing node labels by node-pool name map(map(string)) <map> no
node_pools_metadata Map of maps containing node metadata by node-pool name map(map(string)) <map> no
node_pools_oauth_scopes Map of lists containing node oauth scopes by node-pool name map(list(string)) <map> no
node_pools_tags Map of lists containing node network tags by node-pool name map(list(string)) <map> no
node_pools_taints Map of lists containing node taints by node-pool name object <map> no
non_masquerade_cidrs List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading. list(string) <list> no
pod_security_policy_config enabled - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created. object <list> no
project_id The project ID to host the cluster in (required) string n/a yes
region The region to host the cluster in (optional if zonal cluster / required if regional) string "null" no
regional Whether this is a regional cluster (zonal cluster if set false. WARNING: changing this after cluster creation is destructive!) bool "true" no
registry_project_id Project holding the Google Container Registry. If empty, we use the cluster project. If grant_registry_access is true, storage.objectViewer role is assigned on this project. string "" no
release_channel (Beta) The release channel of this cluster. Accepted values are UNSPECIFIED, RAPID, REGULAR and STABLE. Defaults to UNSPECIFIED. string "null" no
remove_default_node_pool Remove default node pool while setting up the cluster bool "false" no
resource_usage_export_dataset_id The ID of a BigQuery Dataset for using BigQuery as the destination of resource usage export. string "" no
sandbox_enabled (Beta) Enable GKE Sandbox (Do not forget to set image_type = COS_CONTAINERD to use it). bool "false" no
service_account The service account to run nodes as if not overridden in node_pools. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. string "" no
skip_provisioners Flag to skip all local-exec provisioners. It breaks stub_domains and upstream_nameservers variables functionality. bool "false" no
stub_domains Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server map(list(string)) <map> no
subnetwork The subnetwork to host the cluster in (required) string n/a yes
upstream_nameservers If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf list(string) <list> no
zones The zones to host the cluster in (optional if regional cluster / required if zonal) list(string) <list> no
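
Some of the object- and map-typed inputs above are easiest to understand from a worked example. The sketch below illustrates database_encryption using the {state = string, key_name = string} format documented in the table, cluster_resource_labels, and a master_authorized_networks entry; the cidr_block/display_name element keys and the KMS key path are assumptions and placeholders, so verify them against variables.tf for your module version:

module "gke" {
  # ... required inputs (project_id, name, network, subnetwork, ip ranges, etc.) as in the Usage section ...

  # Application-layer Secrets Encryption with a CloudKMS key (placeholder key path)
  database_encryption = [
    {
      state    = "ENCRYPTED"
      key_name = "projects/<PROJECT ID>/locations/us-central1/keyRings/<RING>/cryptoKeys/<KEY>"
    },
  ]

  # GCE resource labels applied to the cluster
  cluster_resource_labels = {
    environment = "dev"
    team        = "platform"
  }

  # Restrict access to the master to a known range (element keys assumed)
  master_authorized_networks = [
    {
      cidr_block   = "10.60.0.0/17"
      display_name = "corp-vpn"
    },
  ]
}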

Outputs

Name Description
ca_certificate Cluster ca certificate (base64 encoded)
cloudrun_enabled Whether CloudRun is enabled
dns_cache_enabled Whether DNS Cache is enabled
endpoint Cluster endpoint
horizontal_pod_autoscaling_enabled Whether horizontal pod autoscaling is enabled
http_load_balancing_enabled Whether HTTP load balancing is enabled
identity_namespace Workload Identity namespace
intranode_visibility_enabled Whether intra-node visibility is enabled
istio_enabled Whether Istio is enabled
location Cluster location (region if regional cluster, zone if zonal cluster)
logging_service Logging service used
master_authorized_networks_config Networks from which access to master is permitted
master_version Current master kubernetes version
min_master_version Minimum master kubernetes version
monitoring_service Monitoring service used
name Cluster name
network_policy_enabled Whether network policy is enabled
node_pools_names List of node pools names
node_pools_versions List of node pools versions
pod_security_policy_enabled Whether pod security policy is enabled
region Cluster region
release_channel The release channel of this cluster
service_account The service account used by default for running nodes, if not overridden in node_pools.
type Cluster type (regional / zonal)
vertical_pod_autoscaling_enabled Whether vertical pod autoscaling is enabled
zones List of zones in which the cluster resides
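
The endpoint and ca_certificate outputs are commonly used to configure a Kubernetes provider against the newly created cluster. A minimal sketch, assuming the module instance is named gke as in the Usage section and that a google_client_config data source supplies the access token:

data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false # only needed on kubernetes provider versions prior to 2.0
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}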

node_pools variable

The node_pools variable takes the following parameters:

Name Description Default Requirement
accelerator_count The number of the guest accelerator cards exposed to this instance 0 Optional
accelerator_type The accelerator type resource to expose to the instance " " Optional
enable_secure_boot Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. false Optional
enable_integrity_monitoring Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created. true Optional
auto_repair Whether the nodes will be automatically repaired true Optional
autoscaling Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage true Optional
auto_upgrade Whether the nodes will be automatically upgraded true (if cluster is regional) Optional
disk_size_gb Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB 100GB Optional
disk_type Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd') pd-standard Optional
effect Effect for the taint Required
image_type The image type to use for this node. Note that changing the image type will delete and recreate all nodes in the node pool COS Optional
initial_node_count The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. Defaults to the value of min_count " " Optional
key The key required for the taint Required
local_ssd_count The amount of local SSD disks that will be attached to each cluster node 0 Optional
machine_type The name of a Google Compute Engine machine type n1-standard-2 Optional
max_count Maximum number of nodes in the NodePool. Must be >= min_count 100 Optional
max_pods_per_node The maximum number of pods per node in this cluster null Optional
max_surge The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater. 1 Optional
max_unavailable The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater. 0 Optional
min_count Minimum number of nodes in the NodePool. Must be >=0 and <= max_count. Should be used when autoscaling is true 1 Optional
name The name of the node pool Required
node_count The number of nodes in the nodepool when autoscaling is false. Otherwise defaults to 1. Only valid for non-autoscaling clusters Required
node_locations The list of zones in which the cluster's nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. Defaults to cluster level node locations if nothing is specified " " Optional
node_metadata Options to expose the node metadata to the workload running on the node Optional
preemptible A boolean that represents whether or not the underlying node VMs are preemptible false Optional
sandbox_type Sandbox to use for pods in the node pool Required
service_account The service account to be used by the Node VMs " " Optional
tags The list of instance tags applied to all nodes Required
value The value for the taint Required
version The Kubernetes version for the nodes in this pool. Should only be set if auto_upgrade is false " " Optional
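
Putting several of these parameters together, a GPU-enabled node pool with Shielded VM options might look like the sketch below. The accelerator_type value is a placeholder (accelerator availability depends on the selected zones), and the other inputs mirror the Usage section:

module "gke" {
  # ... other inputs as in the Usage section ...

  node_pools = [
    {
      name                        = "gpu-pool"
      machine_type                = "n1-standard-4"
      min_count                   = 0
      max_count                   = 3
      accelerator_count           = 1
      accelerator_type            = "nvidia-tesla-t4" # placeholder; choose a type available in your zones
      image_type                  = "COS"
      enable_secure_boot          = true
      enable_integrity_monitoring = true
      auto_repair                 = true
      auto_upgrade                = true
    },
  ]
}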

Requirements

Before this module can be used on a project, you must ensure that the following pre-requisites are fulfilled:

  1. Terraform and kubectl are installed on the machine where Terraform is executed.
  2. The Service Account you execute the module with has the right permissions.
  3. The Compute Engine and Kubernetes Engine APIs are active on the project you will launch the cluster in.
  4. If you are using a Shared VPC, the APIs must also be activated on the Shared VPC host project and your service account needs the proper permissions there.

The project factory can be used to provision projects with the correct APIs active and the necessary Shared VPC connections.

Software Dependencies

Kubectl

Terraform and Plugins

Configure a Service Account

In order to execute this module you must have a Service Account with the following project roles:

  • roles/compute.viewer
  • roles/compute.securityAdmin (only required if add_cluster_firewall_rules is set to true)
  • roles/container.clusterAdmin
  • roles/container.developer
  • roles/iam.serviceAccountAdmin
  • roles/iam.serviceAccountUser
  • roles/resourcemanager.projectIamAdmin (only required if service_account is set to create)

Additionally, if service_account is set to create and grant_registry_access is requested, the service account requires the following role on the registry_project_id project:

  • roles/resourcemanager.projectIamAdmin
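
If the executing Service Account's roles are themselves managed with Terraform, that grant could be expressed as in the sketch below; the project and service account values are placeholders:

resource "google_project_iam_member" "registry_iam_admin" {
  project = "<REGISTRY PROJECT ID>"
  role    = "roles/resourcemanager.projectIamAdmin"
  member  = "serviceAccount:<TERRAFORM SA>@<PROJECT ID>.iam.gserviceaccount.com"
}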

Enable APIs

In order to operate with the Service Account you must activate the following APIs on the project where the Service Account was created:

  • Compute Engine API - compute.googleapis.com
  • Kubernetes Engine API - container.googleapis.com
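
These APIs can be enabled from the Cloud Console, with gcloud, or with Terraform itself; a minimal sketch using the google_project_service resource:

resource "google_project_service" "gke_apis" {
  for_each = toset([
    "compute.googleapis.com",
    "container.googleapis.com",
  ])

  project            = "<PROJECT ID>"
  service            = each.value
  disable_on_destroy = false
}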

File structure

The project has the following folders and files:

  • /: root folder
  • /examples: Examples for using this module and its sub modules.
  • /helpers: Helper scripts.
  • /scripts: Scripts for specific tasks related to the module (see the Infrastructure section of this file).
  • /test: Folders with files for testing the module (see the Testing section of this file).
  • /main.tf: main file for the public module, contains all the resources to create.
  • /variables.tf: Variables for the public cluster module.
  • /output.tf: The outputs for the public cluster module.
  • /README.MD: This file.
  • /modules: Private and beta sub modules.