
Failed to verify operator role for addons requiring additional credential requests for sts clusters #970

Open
KevFan opened this issue Dec 14, 2022 · 11 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@KevFan

KevFan commented Dec 14, 2022

When installing the RHOAM addon, it takes several runs of the install addon command to correctly apply the addon onto a ROSA STS cluster.

Below is a sample output (the AWS account ID has been redacted):

➜  ~ rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true -
W: Addon 'managed-api-service' needs access to resources in account '0124567910'
I: Created role 'chfan-sts-y8x4-redhat-rhoam-cloud-resources-operator-sts-credent' with ARN 'arn:aws:iam::0124567910:role/chfan-sts-y8x4-redhat-rhoam-cloud-resources-operator-sts-credent'
E: Failed to add operator role to cluster 'chfan-sts': Failed to verify operator role for cluster '20jnic2pu9seuitl5ta0frmc8evn1tc9'
➜  ~ rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true --s3-access-key-id s3-key --s3-secret-access-key s3-secret
W: Addon 'managed-api-service' needs access to resources in account '0124567910'
E: Failed to add operator role to cluster 'chfan-sts': Failed to verify operator role for cluster '20jnic2pu9seuitl5ta0frmc8evn1tc9'
➜  ~ rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true 
W: Addon 'managed-api-service' needs access to resources in account '0124567910'
I: Created role 'chfan-sts-y8x4-redhat-rhoam-3scale-sts-s3-credentials' with ARN 'arn:aws:iam::0124567910:role/chfan-sts-y8x4-redhat-rhoam-3scale-sts-s3-credentials'
E: Failed to add operator role to cluster 'chfan-sts': Failed to verify operator role for cluster '20jnic2pu9seuitl5ta0frmc8evn1tc9'
➜  ~ rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true 
W: Addon 'managed-api-service' needs access to resources in account '0124567910'
E: Failed to add operator role to cluster 'chfan-sts': Failed to verify operator role for cluster '20jnic2pu9seuitl5ta0frmc8evn1tc9'
➜  ~ rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true -
W: Addon 'managed-api-service' needs access to resources in account '0124567910'
I: Enabling interactive mode
? Billing Model: standard
I: Add-on 'managed-api-service' is now installing. To check the status run 'rosa list addons -c chfan-sts'
I: To install this addOn again in the future, you can run:
   rosa install addon --cluster chfan-sts managed-api-service -y --rosa-cli-required true --cidr-range 10.1.0.0/26 --notification-email chfan@redhat.com --addon-managed-api-service 1 --addon-resource-required true --billing-model standard

It seems there is a delay between the roles for the addon's additional credential requests being created and them becoming available on the cluster API, which causes the operator role verification to fail.

This occurs even on the latest rosa version, 1.2.10 at the time of writing, and is not the best user experience since the error appears to be transient.
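
Because the error is transient, a simple retry wrapper is enough as a stopgap on the user side until the CLI handles this itself. A rough sketch in bash (it assumes rosa exits non-zero when the verification fails; the retry count and sleep interval are arbitrary choices, not CLI behaviour):

    #!/usr/bin/env bash
    # Workaround sketch: re-run 'rosa install addon' until the transient
    # "Failed to verify operator role" error stops and the install is accepted.
    set -euo pipefail

    CLUSTER="chfan-sts"
    MAX_ATTEMPTS=5

    for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
      if rosa install addon --cluster "$CLUSTER" managed-api-service -y \
          --rosa-cli-required true --cidr-range 10.1.0.0/26 \
          --notification-email chfan@redhat.com \
          --addon-managed-api-service 1 --addon-resource-required true \
          --billing-model standard; then
        echo "Addon install accepted on attempt $attempt"
        exit 0
      fi
      echo "Attempt $attempt failed, retrying in 30s..."
      sleep 30
    done

    echo "Giving up after $MAX_ATTEMPTS attempts" >&2
    exit 1

Each failed run still creates any missing operator roles (as seen in the output above), so in practice a later attempt succeeds once all roles are visible to the cluster API.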

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Mar 15, 2023
@laurafitzgerald
Contributor

/remove-lifecycle stale

openshift-ci bot removed the lifecycle/stale label on Apr 5, 2023
@laurafitzgerald
Contributor

Raising this one again as we've faced it again.

@oharan2

oharan2 commented May 30, 2023

I've also faced this issue more than once (version 1.2.22) - any updates?

@KevFan
Author

KevFan commented May 30, 2023

I've also created a JIRA that tracks this - https://issues.redhat.com/browse/SDA-7568 - and it looks like it has been marked for an upcoming sprint.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Aug 29, 2023
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 28, 2023
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Oct 29, 2023
@openshift-ci
Contributor

openshift-ci bot commented Oct 29, 2023

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@KevFan
Author

KevFan commented Oct 31, 2023

/reopen
/remove-lifecycle rotten
/lifecycle frozen

@openshift-ci openshift-ci bot reopened this Oct 31, 2023
@openshift-ci
Contributor

openshift-ci bot commented Oct 31, 2023

@KevFan: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-ci bot added the lifecycle/frozen label and removed the lifecycle/rotten label on Oct 31, 2023