
[RFE] Creating mirroring config from running clusters #304

Open
dmesser opened this issue Feb 2, 2022 · 8 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@dmesser
Contributor

dmesser commented Feb 2, 2022

What is desired

The configuration file for oc mirror needs to be assembled by the user. Some users may have running clusters that they simply want to clone. oc mirror should offer a feature that creates a mirroring configuration corresponding to the cluster it is pointed at. The result is a mirror that can be used to run that same cluster disconnected from public registries.

What did you expect to happen?

oc mirror would make use of the oc context that it is running in as a plugin to connect to a particular cluster. It would need sufficient permissions to introspect the cluster and survey the following data points:

  • cluster version running
  • installed CatalogSources / operator catalogs
  • installed operators, including their versions and channels
  • bonus points: determining installed helm releases and their source charts

Duplicate results need to be removed from the above sets.

The output of the survey should be a ready-to-run mirror configuration file that is stored on disk or sent to stdout for reuse.
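A minimal sketch of the dedupe-and-emit step described above, assuming a hypothetical survey-entry type and a hand-rolled YAML layout (the real imageset config schema and oc-mirror's types may differ):

```go
// Hypothetical sketch: deduplicate surveyed operator entries and emit a
// minimal imageset-config-style operators section to stdout. The type and
// catalog/package names below are made up for illustration.
package main

import (
	"fmt"
	"sort"
)

// SurveyEntry is one operator observed on the cluster.
type SurveyEntry struct {
	Catalog, Package, Channel, Version string
}

// dedupe removes duplicate survey entries and returns them in a stable,
// sorted order so the emitted config is deterministic.
func dedupe(entries []SurveyEntry) []SurveyEntry {
	seen := map[SurveyEntry]bool{}
	var out []SurveyEntry
	for _, e := range entries {
		if !seen[e] {
			seen[e] = true
			out = append(out, e)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Package < out[j].Package })
	return out
}

func main() {
	survey := []SurveyEntry{
		{"registry.example.com/catalog:v4.10", "etcd", "stable", "0.9.4"},
		{"registry.example.com/catalog:v4.10", "etcd", "stable", "0.9.4"}, // duplicate, removed
		{"registry.example.com/catalog:v4.10", "argocd", "alpha", "0.2.0"},
	}
	// For simplicity this sketch assumes a single catalog.
	fmt.Println("operators:")
	fmt.Println("- catalog:", survey[0].Catalog)
	fmt.Println("  packages:")
	for _, e := range dedupe(survey) {
		fmt.Printf("  - name: %s\n    channels:\n    - name: %s\n    versions:\n    - %s\n",
			e.Package, e.Channel, e.Version)
	}
}
```

The same dedupe pass would apply to catalog sources and cluster versions collected in the survey.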

@afflom
Contributor

afflom commented Feb 3, 2022

I have some thoughts and questions about this:

This would be very helpful for users transitioning from lab environments to subsequent iterations of their practiced cluster deployments. There is a lot to be said for easing the process of authoring the imageset config. We had originally pushed back on this request in favor of an ansible role or script to retrieve this information, but this might be a good fit for oc-mirror.

@dinhxuanvu If an operator name and its version are dumped from a cluster, is there any correlation within the cluster to determine the catalog that the operator was installed from? How does the operator hub configuration play into this? Or will we have to get the name and version and then use the oc-mirror list workflows to discover this information?

@dmesser I do not see time in the schedule to implement this feature in 4.11. What release are you targeting?

@joehuizenga

I have hacked up a simple PoC in Go that roughly does this. If there is interest, I can look into how to share it; right now it's inside an IBM Git repo.

logic

  • get all CatalogSources running in the cluster

    • run them and use gRPC to get the head of channel from each catalog
    • build a cache: catalog:pkg:channel -> bundle version
  • get all the operators installed in the cluster and loop through them

    • get the Subscription that installed the operator
      • get the information from the Subscription and add/merge an entry into the ISC
  • get all OperandRegistries installed in the cluster and loop through them

    • get most of the subscription info from the OperandRegistry
      • the OperandRegistry does NOT have version info, so use the cache to get the head-of-channel version (at this point in time); that way, if someone did install something from the OperandRegistry, we KNOW which version would (in theory) be installed
      • add/merge an entry into the ISC

Example output (operators section of the generated imageset config):

operators:
- catalog: quay.io/huizenga/cs:heads-only
  packages:
  - channels:
    - name: v3
    name: ibm-licensing-operator-app
    versions:
    - 1.11.0
  - channels:
    - name: v3
    name: ibm-mongodb-operator-app
    versions:
    - 1.9.0
  - channels:
    - name: v3
    name: ibm-cert-manager-operator
    versions:
    - 3.16.0
  - channels:
    - name: v3
    name: ibm-iam-operator
    versions:
    - 3.12.2
  - channels:
    - name: v3
    name: ibm-healthcheck-operator-app
    versions:
    - 3.15.0
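The "add/merge an entry to the ISC" step above could look roughly like this sketch, which groups entries per catalog and merges channels/versions for a package that was already recorded, rather than duplicating it. The type names are illustrative, not oc-mirror's own:

```go
// Hypothetical sketch of merging surveyed operators into an in-memory
// imageset-config structure (catalog -> packages -> channels/versions).
package main

import "fmt"

type Package struct {
	Name     string
	Channels []string
	Versions []string
}

type ISC struct {
	// catalog image -> packages keyed by package name
	Catalogs map[string]map[string]*Package
}

func NewISC() *ISC { return &ISC{Catalogs: map[string]map[string]*Package{}} }

// Merge adds one surveyed operator, merging channels and versions for a
// package that was already recorded from the same catalog.
func (c *ISC) Merge(catalog, pkg, channel, version string) {
	if c.Catalogs[catalog] == nil {
		c.Catalogs[catalog] = map[string]*Package{}
	}
	p := c.Catalogs[catalog][pkg]
	if p == nil {
		p = &Package{Name: pkg}
		c.Catalogs[catalog][pkg] = p
	}
	p.Channels = appendUnique(p.Channels, channel)
	p.Versions = appendUnique(p.Versions, version)
}

// appendUnique appends v only if it is not already present.
func appendUnique(s []string, v string) []string {
	for _, x := range s {
		if x == v {
			return s
		}
	}
	return append(s, v)
}

func main() {
	isc := NewISC()
	isc.Merge("quay.io/example/cs:heads-only", "ibm-iam-operator", "v3", "3.12.2")
	isc.Merge("quay.io/example/cs:heads-only", "ibm-iam-operator", "v3", "3.12.2") // duplicate, merged away
	p := isc.Catalogs["quay.io/example/cs:heads-only"]["ibm-iam-operator"]
	fmt.Println(p.Name, p.Channels, p.Versions) // ibm-iam-operator [v3] [3.12.2]
}
```

A real implementation would then serialize this structure to YAML in the shape shown in the example output above.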

@dinhxuanvu
Member

dinhxuanvu commented Feb 4, 2022

Hi folks,

While I can see the thought behind this, I'm not sure whether oc-mirror generating a config is a practical use case or makes sense given what oc-mirror is primarily for.

From the operator mirror perspective, I'm a bit concerned about the prospect of filling the startingVersion field. Perhaps, in this scenario, we only need the head of the channel and basically place it at startingVersion. Then we would only mirror the latest version for each channel. On the other hand, we could do some graph tracing, find the oldest version, and then use it as the startingVersion. This is where things may go haywire, in the sense that there can be some deviations between the original cluster and the next cluster in terms of operator version availability.

On the question from Alex, finding the installed version is actually quite difficult here without touching other APIs from OLM. oc-mirror only uses the FBC API. It currently doesn't touch the Subscription API, which is what would be needed here to find out what the InstalledCSV on the cluster is. This sounds to me like something oc-mirror shouldn't do. If we are not careful here, we would literally turn oc-mirror into some sort of OLM controller that queries OLM resources on the cluster.

@afflom
Contributor

afflom commented Feb 6, 2022

It sounds like the maintenance of this feature would be a distraction from the development of oc-mirror. If that's the case, I think the resource lookups this feature requires might be best performed by oc and then templated into an imageset-config with something like ansible/jinja.

@dmesser
Contributor Author

dmesser commented Feb 16, 2022

@dinhxuanvu @afflom I agree with your assessment. This seems like something useful but maybe out of scope for the oc mirror command itself. Putting it into a separate plugin would probably keep the code base clean. However, it also invites potential fragmentation and deviation if it happens in an entirely separate project. Obviously the output of this command would need to yield a working imageset config. If the format of that config ever changes or gains new features, there is potential for this tool to become incompatible.

@dinhxuanvu Yes, such a tool would probably need to touch OLM APIs to do its job, but I don't see how that is a problem.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2022
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 16, 2022
@dmesser
Contributor Author

dmesser commented Jun 20, 2022

/lifecycle frozen

@openshift-ci openshift-ci bot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jun 20, 2022