GCE Managed Instance Group module

This module allows creating a managed instance group supporting one or more application versions via instance templates. Optionally, a health check and an autoscaler can be created, and the managed instance group can be configured to be stateful.

This module can be coupled with the compute-vm module, which manages instance templates, and with the net-ilb module to assign the MIG to a backend wired to an Internal Load Balancer. The first use case is shown in the examples below.
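The second use case can be sketched along these lines; the net-ilb attribute names below are assumptions for illustration, so check that module's documentation for its actual interface. The group is taken from this module's group_manager output:

```hcl
# Hypothetical sketch: wiring the MIG into an Internal Load Balancer
# backend via the net-ilb module. Attribute names on the net-ilb side
# are assumptions, not the verified interface.
module "ilb" {
  source     = "./fabric/modules/net-ilb"
  project_id = "my-project"
  region     = "europe-west1"
  name       = "ilb-test"
  network    = var.vpc.self_link
  subnetwork = var.subnet.self_link
  backends = [{
    failover       = false
    group          = module.nginx-mig.group_manager.instance_group
    balancing_mode = "CONNECTION"
  }]
}
```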

Stateful disks can be created directly, as shown in the last example below.

Examples

This example shows how to manage a simple MIG that leverages the compute-vm module to manage the underlying instance template. The following sub-examples will only show how to enable specific features of this module, and won't replicate the combined setup.

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 2
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
}
# tftest modules=2 resources=2

Multiple versions

If multiple versions are desired, define additional compute-vm modules for the extra instance templates used by each version (not shown here), and reference them like this:

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  versions = {
    canary = {
      instance_template = module.nginx-template.template.self_link
      target_type       = "fixed"
      target_size       = 1
    }
  }
}
# tftest modules=2 resources=2
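In a real multi-version setup, the canary version would typically reference its own template; a second compute-vm module along the same lines as above would provide it (the canary-template name is illustrative):

```hcl
# Illustrative second template for the canary version, mirroring the
# attributes of the nginx-template module shown above.
module "canary-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "canary-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}
```

Its module.canary-template.template.self_link would then replace the default template in the canary version's instance_template.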

Health check and autohealing policies

Autohealing policies can use an externally defined health check, or have this module auto-create one:

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  auto_healing_policies = {
    health_check      = module.nginx-mig.health_check.self_link
    initial_delay_sec = 30
  }
  health_check_config = {
    type    = "http"
    check   = { port = 80 }
    config  = {}
    logging = true
  }
}
# tftest modules=2 resources=3
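Alternatively, auto-healing can point at an externally defined health check instead of the auto-created one; in this sketch the google_compute_health_check.existing resource is assumed to be managed elsewhere:

```hcl
module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  auto_healing_policies = {
    # health check managed outside this module (assumed to exist)
    health_check      = google_compute_health_check.existing.self_link
    initial_delay_sec = 30
  }
}
```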

Autoscaling

The module can create and manage an autoscaler associated with the MIG. When using autoscaling, leave the target_size variable unset or set it to null. The example below shows a CPU utilization autoscaler; the other available modes are load balancing utilization and custom metric, matching the underlying autoscaler resource.

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  autoscaler_config = {
    max_replicas                      = 3
    min_replicas                      = 1
    cooldown_period                   = 30
    cpu_utilization_target            = 0.65
    load_balancing_utilization_target = null
    metric                            = null
  }
}
# tftest modules=2 resources=3

Update policy

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  update_policy = {
    type                 = "PROACTIVE"
    minimal_action       = "REPLACE"
    min_ready_sec        = 30
    max_surge_type       = "fixed"
    max_surge            = 1
    max_unavailable_type = null
    max_unavailable      = null
  }
}
# tftest modules=2 resources=2

Stateful MIGs - MIG Config

Stateful MIGs have some limitations, which are documented in the official stateful MIG documentation. Enforcing these requirements is the responsibility of the users of this module.

You can make a disk defined in the instance template stateful for all instances in the MIG through the MIG's stateful policy, using the mig_config attribute of the stateful_config variable. Alternatively, you can configure stateful persistent disks individually per instance via the per_instance_config attribute. A discussion of these scenarios can be found in the docs.

An example using only the configuration at the MIG level can be seen below.

Note that when referencing a stateful disk, you use its device_name, not its disk name.

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  attached_disks = [{
    name        = "repd-1"
    size        = null
    source_type = "attach"
    source      = "regions/${var.region}/disks/repd-test-1"
    options = {
      mode         = "READ_ONLY"
      replica_zone = "${var.region}-c"
      type         = "PERSISTENT"
    }
  }]
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  autoscaler_config = {
    max_replicas                      = 3
    min_replicas                      = 1
    cooldown_period                   = 30
    cpu_utilization_target            = 0.65
    load_balancing_utilization_target = null
    metric                            = null
  }
  stateful_config = {
    per_instance_config = {}
    mig_config = {
      stateful_disks = {
        persistent-disk-1 = {
          delete_rule = "NEVER"
        }
      }
    }
  }
}
# tftest modules=2 resources=3

Stateful MIGs - Instance Config

Here is an example defining the stateful config at the instance level.

Note that you will need to know the instance name in order to use this configuration.

module "cos-nginx" {
  source = "./fabric/modules/cloud-config-container/nginx"
}

module "nginx-template" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  name       = "nginx-template"
  zone       = "europe-west1-b"
  tags       = ["http-server", "ssh"]
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
    nat        = false
    addresses  = null
  }]
  boot_disk = {
    image = "projects/cos-cloud/global/images/family/cos-stable"
    type  = "pd-ssd"
    size  = 10
  }
  attached_disks = [{
    name        = "repd-1"
    size        = null
    source_type = "attach"
    source      = "regions/${var.region}/disks/repd-test-1"
    options = {
      mode         = "READ_ONLY"
      replica_zone = "${var.region}-c"
      type         = "PERSISTENT"
    }
  }]
  create_template  = true
  metadata = {
    user-data = module.cos-nginx.cloud_config
  }
}

module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  location    = "europe-west1-b"
  name        = "mig-test"
  target_size = 3
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  autoscaler_config = {
    max_replicas                      = 3
    min_replicas                      = 1
    cooldown_period                   = 30
    cpu_utilization_target            = 0.65
    load_balancing_utilization_target = null
    metric                            = null
  }
  stateful_config = {
    per_instance_config = {
      # note that this needs to be the name of an existing instance within the Managed Instance Group
      instance-1 = {
        stateful_disks = {
          persistent-disk-1 = {
            source      = "test-disk"
            mode        = "READ_ONLY"
            delete_rule = "NEVER"
          }
        }
        metadata = {
          foo = "bar"
        }
        update_config = {
          minimal_action                   = "NONE"
          most_disruptive_allowed_action   = "REPLACE"
          remove_instance_state_on_destroy = false
        }
      }
    }
    mig_config = {
      stateful_disks = {}
    }
  }
}
# tftest modules=2 resources=4

Variables

name description type required default
default_version Default application version template. Additional versions can be specified via the versions variable. object({…})
location Compute zone, or region if regional is set to true. string
name Managed group name. string
project_id Project id. string
auto_healing_policies Auto-healing policies for this group. object({…}) null
autoscaler_config Optional autoscaler configuration. Only one of 'cpu_utilization_target' 'load_balancing_utilization_target' or 'metric' can be not null. object({…}) null
health_check_config Optional auto-created health check configuration, use the output self-link to set it in the auto healing policy. Refer to examples for usage. object({…}) null
named_ports Named ports. map(number) null
regional Use regional instance group. When set, location should be set to the region. bool false
stateful_config Stateful configuration can be done for individual instances or for all instances in the MIG. The key in per_instance_config is the name of the specific instance. The key of stateful_disks is the 'device_name' field of the resource. Note that device_name is defined at the OS mount level, unlike the disk name. object({…}) null
target_pools Optional list of URLs for target pools to which new instances in the group are added. list(string) []
target_size Group target size, leave null when using an autoscaler. number null
update_policy Update policy. Type can be 'OPPORTUNISTIC' or 'PROACTIVE', minimal action 'REPLACE' or 'RESTART', surge type 'fixed' or 'percent'. object({…}) null
versions Additional application versions, target_type is either 'fixed' or 'percent'. map(object({…})) null
wait_for_instances Wait for all instances to be created/updated before returning. bool null
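For reference, a minimal sketch combining the regional, location and named_ports variables described above, reusing the template from the earlier examples:

```hcl
module "nginx-mig" {
  source      = "./fabric/modules/compute-mig"
  project_id  = "my-project"
  regional    = true
  location    = "europe-west1" # a region, since regional is true
  name        = "mig-test"
  target_size = 2
  default_version = {
    instance_template = module.nginx-template.template.self_link
    name              = "default"
  }
  named_ports = {
    http = 80
  }
}
```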

Outputs

name description sensitive
autoscaler Auto-created autoscaler resource.
group_manager Instance group resource.
health_check Auto-created health-check resource.

TODO

  • [✓] add support for instance groups