gke-kubernetes-tf-starter

Terraform module to provision a battle-tested, batteries-included and secure GCP GKE cluster with nginx-ingress and fully automated DNS (external-dns) and TLS/SSL certificate management (cert-manager + Let's Encrypt).

The module deploys the following resources:

  • Custom VPC Network with:
    • GKE subnet (including pods and services secondary ranges)
    • CloudNat [optional]
  • OpenVPN Server [optional]
    • With built-in user management (more details below) controllable through variables
  • Cloud DNS zones
    • Controlled dynamically through variables
  • GKE Standard cluster with:
    • Optional regional or zonal cluster modes
    • Optional private cluster type
    • Dynamic node pools controlled through the variables
    • Dynamic cluster autoscaler configuration
    • HPA enabled
    • VPA disabled
    • Removed default node pools
    • Master Authorized Networks - GKE subnet by default, with additional networks optionally configurable through a variable
  • Helm charts deployed in the cluster [optional]: nginx-ingress (external and internal), external-dns and cert-manager (see the sketch after this list)
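
These charts are toggled through the helm_* variables described in the Inputs section below. A minimal sketch of the relevant overrides, with illustrative values only:

helm_deploy_enabled                 = true
helm_external_nginx_ingress_enabled = true
helm_internal_nginx_ingress_enabled = true
helm_cert_manager_issuer_email      = "<issuer_email>"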

Provisioning

The module uses a GCP Storage Bucket as the state backend.

It is recommended to use tfenv and set up the Terraform version defined in the .terraform-version file of each environment module.

Additionally, gcloud configured with appropriate permissions on the project is required in order to provision any resources.
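
For example, Application Default Credentials for the Google provider can be prepared with the standard gcloud commands (the project ID is a placeholder):

gcloud auth application-default login
gcloud config set project <gcp_project_id>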

Development

Local

The tools referenced in .pre-commit-config.yaml have to be installed in order to run pre-commit successfully.

Set up the git hooks with pre-commit install. To force the pre-commit checks on all files, run pre-commit run -a.
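
As a quick reference, the same steps in shell form:

pre-commit install
pre-commit run -a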

Github Actions

This module uses GitHub Actions to run pre-commit on push and pull-request events. The workflows can be found in the .github/workflows directory.

Deployment

The following steps are required to deploy:

terraform init -backend-config="bucket=<state_storage_bucket_name>"

terraform apply

The module sets common values for the variables in terraform.auto.tfvars; additional variable overrides might be required, as shown in the examples below:

Private cluster

# Variables needed for deployment of the private cluster
provider_project_id    = "<gcp_project_id>"
cloud_dns_zone_domains = [
  "<domain_name1>"
]

helm_cert_manager_issuer_email = "<issuer_email>"

openvpn_users = ["<vpn_username1>"]

If the host that runs terraform apply does not have direct access to the VPC, it is recommended to initially set helm_deploy_enabled to false, as a private cluster is only reachable through a VPC or VPN connection and the Helm deployments would time out. After the deployment succeeds, connect the host to the VPC/VPN and run apply again with helm_deploy_enabled set to true.
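
A sketch of that two-phase flow, assuming the flag is passed on the command line instead of being edited in terraform.auto.tfvars:

# Phase 1: provision the VPC, VPN and private cluster without the Helm releases
terraform apply -var="helm_deploy_enabled=false"

# Phase 2: once the host is connected to the VPC/VPN, deploy the Helm charts too
terraform apply -var="helm_deploy_enabled=true"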

Public cluster

# Variables needed for deployment of the public cluster
provider_project_id    = "<gcp_project_id>"
cloud_dns_zone_domains = [
  "<domain_name1>"
]

helm_cert_manager_issuer_email = "<issuer_email>"

openvpn_users = ["<vpn_username1>"]

gke_additional_master_authorized_networks = [
  {
    cidr_block   = "<trusted_cidr>"
    display_name = "<user_friendly_name>"
  }
]
gke_private_cluster_enabled               = false

It is required to add the CIDR of the host that runs terraform apply to the gke_additional_master_authorized_networks list in order to be able to deploy the Helm charts (if enabled).
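
One way to determine that CIDR, assuming the host's public IP should be allowed as a single /32 (ifconfig.me is just one of several services that echo the caller's IP):

# Prints the public IP of the host; append /32 to use it as cidr_block
curl -s https://ifconfig.me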

OpenVPN

This module can optionally also deploy an OpenVPN server that can be used to access any VPC-internal resources.

VPN users are managed directly in Terraform through the openvpn_users variable, a list of usernames.
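
For example, with hypothetical usernames (each entry results in one client profile):

openvpn_users = ["alice", "bob"]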

After a successful Terraform apply, the OpenVPN client configuration files can be found in the local openvpn directory.

The actual private key used for the OpenVPN server is stored in the Terraform state, and all user profiles can be retrieved at any time simply by running terraform apply again.

By default, all the client traffic is routed through the VPN server.

The openvpn output directory is listed in .gitignore so that sensitive data such as private keys does not end up versioned in the git repository.
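
A generated profile can then be used with the standard OpenVPN client. The file name below is an assumption based on the one-profile-per-username layout, not a guarantee of this module:

sudo openvpn --config openvpn/<vpn_username1>.ovpn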

Ingress, DNS and SSL/TLS

The default ingress controller is nginx-ingress; if no explicit annotations are set, this is the ingress controller that will be used. Alternatively, it is possible to use the built-in GCE ingress by setting the annotation kubernetes.io/ingress.class: "gce" on the Ingress resource.

Appropriate Cloud DNS zones will be created for the domains specified in var.cloud_dns_zone_domains, and external-dns is configured to create the DNS records automagically.

SSL/TLS Certificate management is handled by cert-manager through Letsencrypt ACME cluster issuer.

All of the above allows a seamless ingress setup that automatically handles path-based routing, SSL/TLS certificates and Cloud DNS record management.

Public (External) Ingress examples

Example Ingress manifests utilizing all three components:

Nginx Ingress Controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a
  annotations:
    external-dns.alpha.kubernetes.io/hostname: service-a.example.com.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: service-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginxsvc
                port:
                  number: 80
  tls:
    - hosts:
        - service-a.example.com
      secretName: service-a-example-com

The above manifest will expose the service nginxsvc on service-a.example.com with HTTPS.

GCE (GCP native) Ingress Controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-b
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "<global_static_ip_name>"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    external-dns.alpha.kubernetes.io/hostname: "service-b.example.com."
spec:
  rules:
    - host: service-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginxsvc
                port:
                  number: 80
  tls:
    - hosts:
        - service-b.example.com
      secretName: service-b-example-com

<global_static_ip_name> needs to be replaced with the name of an actual reserved global static IP!

The above manifest will expose the service nginxsvc on service-b.example.com with HTTPS.

Private (Internal) Ingress examples

Nginx Ingress Controller

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-c-internal
  annotations:
    external-dns.alpha.kubernetes.io/hostname: service-c.internal.example.com.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx-internal
  rules:
    - host: service-c.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginxsvc
                port:
                  number: 8080
  tls:
    - hosts:
        - service-c.internal.example.com
      secretName: service-c-internal-example-com

The above manifest will expose the service nginxsvc internally on service-c.internal.example.com with HTTPS.

GCE (GCP native) Internal Ingress Controller

Using the gcp-internal ingress is possible, but it requires additional setup such as proxy-only subnets (sketched below). More details can be found in the official documentation.

For most use cases, the nginx-internal ingress is a much simpler solution.
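
For reference, a minimal Terraform sketch of such a proxy-only subnet, assuming the google provider version pinned by this module; the name, region, CIDR and network reference are placeholders rather than module outputs:

# Proxy-only subnet used by GCP internal HTTP(S) load balancers
resource "google_compute_subnetwork" "proxy_only" {
  name          = "proxy-only-subnet"
  region        = "europe-west3"
  network       = "<vpc_network_self_link>"
  ip_cidr_range = "10.129.0.0/23"
  purpose       = "INTERNAL_HTTPS_LOAD_BALANCER"
  role          = "ACTIVE"
}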

Module details

Requirements

Name Version
terraform >=1.1.5
helm 2.4.1
kubectl 1.13.1
kubernetes 2.7.1

Providers

Name Version
google 3.90.1
kubernetes 2.7.1

Modules

Name Source Version
cloud-nat registry.terraform.io/terraform-google-modules/cloud-nat/google ~> 2.0.0
helm_charts ./modules/helm-charts n/a
kubernetes-engine registry.terraform.io/terraform-google-modules/kubernetes-engine/google//modules/private-cluster 18.0.0
network registry.terraform.io/terraform-google-modules/network/google 4.0.1
openvpn registry.terraform.io/DeimosCloud/openvpn/google 1.2.4

Resources

Name Type
google_compute_firewall.openvpn resource
google_compute_router.this resource
google_dns_managed_zone.domains resource
kubernetes_cluster_role_binding.admin resource
google_client_config.default data source
google_compute_zones.available data source

Inputs

  • app-bu: Identifier of the owner (either an Application or Business Unit). Type string, default "ops".
  • cloud_dns_zone_domains: List of the domains that should have Cloud DNS zones created. Type list(string), required.
  • cloudnat_enabled: Flag to enable or disable deployment of the CloudNat (required with private cluster). Type bool, default true.
  • env: Environment identifier for the resources. Type string, default "dev".
  • gke_additional_master_authorized_networks: List of the additional Master Authorized Networks for the GKE cluster. Type list(object({ cidr_block = string, display_name = string })), default [].
  • gke_cluster_admins: List of users that will have a cluster-admin role binding created. Type list(string), default [].
  • gke_cluster_autoscaling: Cluster autoscaling configuration. Type object({ enabled = bool, min_cpu_cores = number, max_cpu_cores = number, min_memory_gb = number, max_memory_gb = number, gpu_resources = list(object({ resource_type = string, minimum = number, maximum = number })) }), required.
  • gke_initial_node_count: Number of the cluster nodes deployed initially in the default node pool. Type number, default 0.
  • gke_kubernetes_version: Version of Kubernetes to run on the GKE cluster. Type string, default "latest".
  • gke_maintenance_start_time: UTC time for the maintenance window of the GKE cluster. Type string, default "04:00".
  • gke_node_pools: Node pools to be created for the GKE cluster. Type list(object({ name = string, machine_type = string, preemptible = bool, disk_type = string, disk_size_gb = number, autoscaling = bool, auto_repair = bool, sandbox_enabled = bool, cpu_manager_policy = string, cpu_cfs_quota = bool, enable_integrity_monitoring = bool, enable_secure_boot = bool, image_type = string })), required.
  • gke_private_cluster_enabled: Flag to either enable private endpoint and nodes, or use a regular public endpoint and nodes with public IPs. Type bool, default true.
  • gke_regional_cluster_enabled: Flag to enable regional (true) or zonal (false) mode for the cluster. Type bool, default false.
  • helm_cert_manager_issuer_email: Email to be configured for Let's Encrypt ACME notifications (ignored with helm_deploy_enabled false). Type string, required.
  • helm_deploy_enabled: Flag to enable or disable deployment of the helm-charts module into the cluster. Type bool, default true.
  • helm_external_nginx_ingress_enabled: Flag to enable or disable deployment of the external nginx-ingress controller (ignored with helm_deploy_enabled false). Type bool, default true.
  • helm_internal_nginx_ingress_enabled: Flag to enable or disable deployment of the internal nginx-ingress controller (ignored with helm_deploy_enabled false). Type bool, default true.
  • openvpn_users: List of the OpenVPN users to be created (if the list is empty, the OpenVPN instance will not be created). Type list(string), default [].
  • prefix: Prefix to add to the resources. Type string, default "starter".
  • provider_project_id: Project to be used within the GCP provider. Type string, required.
  • provider_region: Region to be used within the GCP provider. Type string, default "europe-west3".
  • subnet_cidrs: Object with mappings for the subnet CIDRs based on the context. Type object({ gke = string, gke_services = string, gke_pods = string }), required.
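
A minimal tfvars sketch covering just the required inputs above; every value (CIDRs, machine type, pool settings) is an illustrative placeholder to adapt, not a recommendation:

provider_project_id    = "<gcp_project_id>"
cloud_dns_zone_domains = ["<domain_name1>"]

helm_cert_manager_issuer_email = "<issuer_email>"

subnet_cidrs = {
  gke          = "10.0.0.0/20"
  gke_services = "10.1.0.0/20"
  gke_pods     = "10.4.0.0/14"
}

gke_cluster_autoscaling = {
  enabled       = false
  min_cpu_cores = 0
  max_cpu_cores = 0
  min_memory_gb = 0
  max_memory_gb = 0
  gpu_resources = []
}

gke_node_pools = [
  {
    name                        = "default-node-pool"
    machine_type                = "e2-standard-2"
    preemptible                 = true
    disk_type                   = "pd-standard"
    disk_size_gb                = 50
    autoscaling                 = true
    auto_repair                 = true
    sandbox_enabled             = false
    cpu_manager_policy          = "static"
    cpu_cfs_quota               = true
    enable_integrity_monitoring = true
    enable_secure_boot          = true
    image_type                  = "COS_CONTAINERD"
  }
]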

Outputs

Name Description
gke_cluster_name Name of the created GKE cluster.
vpc_network_self_link Self-link of the created network.
vpc_subnet_self_links Self-links of the created network subnets.
