Implement GCPMachinePool using MIGs #297

Open
CecileRobertMichon opened this issue May 13, 2020 · 20 comments · May be fixed by #901
Labels: help wanted (Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.), kind/feature (Categorizes issue or PR as related to a new feature.)

Comments

@CecileRobertMichon commented May 13, 2020

/kind feature

Describe the solution you'd like

Implement GCPMachinePool using Managed Instance Groups (MIGs). The GCPMachinePool implementation should follow the same package/group structure as CAPI and be added to the exp package (the experimental feature package). The feature should be gated behind the MachinePool feature flag used in CAPI.

See also: kubernetes-sigs/cluster-api-provider-azure#483 and CAPI Machine Pool Proposal

Anything else you would like to add:

Note: As noted in the comments below, the externally managed autoscaler annotation (kubernetes-sigs/cluster-api#7107) should also be supported/implemented as part of this.
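
For illustration only, here is a minimal sketch of how a CAPI MachinePool could pair with a GCP infrastructure pool once this is implemented. The MachinePool side follows the cluster.x-k8s.io/v1beta1 schema and requires the MachinePool feature gate to be enabled (e.g. EXP_MACHINE_POOL=true with clusterctl); the GCPMachinePool kind and its spec fields below are assumptions, since that API does not exist yet.

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: workload-mp-0
spec:
  clusterName: workload
  replicas: 3
  template:
    spec:
      clusterName: workload
      version: v1.24.1
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfig
          name: workload-mp-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachinePool   # hypothetical kind, to live under the exp package
        name: workload-mp-0
---
# Hypothetical GCPMachinePool backing a Managed Instance Group; field names are a sketch.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachinePool
metadata:
  name: workload-mp-0
spec:
  instanceType: n2-standard-2
  image: projects/my-project/global/images/capi-ubuntu-2004-k8s-v1-24-1   # placeholder image path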

@k8s-ci-robot added the kind/feature label May 13, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 11, 2020
@CecileRobertMichon (Author)

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Aug 11, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 9, 2020
@detiber (Member) commented Dec 1, 2020

/lifecycle frozen
/help

@k8s-ci-robot (Contributor)

@detiber:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/lifecycle frozen
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the lifecycle/frozen and help wanted labels and removed the lifecycle/stale label Dec 1, 2020
@jayesh-srivastava (Member)

Hey, I would like to work on this. Can I assign it to myself?

@jayesh-srivastava (Member)

/assign

@evanfreed (Contributor)

@jayesh-srivastava just curious, are you still working on this? I or a colleague of mine might have some cycles to pick this up if you need any help.

@AverageMarcus (Member)

@CecileRobertMichon could we also add a requirement to this ticket that (if actually merged) the externally-managed autoscaler annotation is also supported/implemented as part of this?
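
For context, a minimal sketch of how a MachinePool could signal that its replicas are managed by an external autoscaler, assuming the annotation key proposed in kubernetes-sigs/cluster-api#7107 (cluster.x-k8s.io/replicas-managed-by); with it set, the GCP provider would be expected to leave the MIG target size alone:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: workload-mp-0
  annotations:
    # Assumed annotation and value; when present, the controllers would skip
    # reconciling replicas and let the external autoscaler resize the MIG.
    cluster.x-k8s.io/replicas-managed-by: "external-autoscaler"
spec:
  clusterName: workload
  # replicas intentionally omitted / ignored while externally managed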

@CecileRobertMichon (Author)

@AverageMarcus added a note in the description

@AverageMarcus (Member)

Perfect! Thank you!

@nsilve commented Sep 8, 2022

I have pinged @jayesh-srivastava via Slack and he told me that he is not able to work on this issue due to personal commitments. @evanfreed and I are going to work on it unless there are any objections.

@evanfreed (Contributor) commented Sep 9, 2022

/assign @nsilve

@evanfreed (Contributor)

/unassign @jayesh-srivastava

@nsilve commented Sep 12, 2022

Azure does not support any kind of templating for instance creation, so all of the configuration has to be included in the VMSS (and therefore in AzureMachinePool). Both AWS and GCP, however, have discrete template resources for ASGs (launch templates) and MIGs (instance templates), so we have the following options:

  1. manage GCP instance templates internally in the GCPMachinePool controller (as CAPA does; see the sketch at the end of this comment)
  2. manage GCP instance templates via a separate GCP Instance Template controller
  3. something else

Right now instance templates are not managed at all: since control plane machines are not created via MIGs, the GCPMachineTemplate configuration is passed directly to GCPMachine. Because no actual GCP instance template is created today, option 2 would need a way to create instance templates only for the workers provisioned via MIGs (creating another instance template for the control plane wouldn't hurt anyone, but it would be an unused resource).

Moreover, based on the GCP documentation, instance templates can be used to create both virtual machine (VM) instances and managed instance groups (MIGs). That means we could probably define a GCPMachineTemplate and reference it directly from a GCPMachinePool (not tested):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: workload
spec:
  template:
    spec:
      image: projects/elastic-esp-dev/global/images/capi-ubuntu-2004-k8s-v1-24-1
      instanceType: n2-standard-2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachinePool
metadata:
  name: workload
spec:
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: GCPMachineTemplate
      name: workload

Any ideas/comments/objections?
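
To make option 1 above a bit more concrete, here is a hedged sketch (all field names hypothetical) of a GCPMachinePool that embeds the instance-template configuration inline, so the controller itself would render a GCP instance template and point the MIG at it, similar to how CAPA manages launch templates for AWSMachinePool:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachinePool
metadata:
  name: workload
spec:
  # Inline instance-template settings (names are a sketch); the controller would
  # create/rotate the corresponding GCP instance template and update the MIG.
  instanceTemplate:
    image: projects/elastic-esp-dev/global/images/capi-ubuntu-2004-k8s-v1-24-1
    instanceType: n2-standard-2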

@nsilve commented Sep 22, 2022

/remove-lifecycle frozen

@k8s-ci-robot removed the lifecycle/frozen label Sep 22, 2022
@nsilve commented Dec 2, 2022

Plans have changed, so I am afraid I cannot proceed with this.

@nsilve commented Dec 2, 2022

/unassign @nsilve

@BrennenMM7

/assign

@BrennenMM7 linked a pull request on Apr 18, 2023 that will close this issue.