
ErrImagePull for Noobaa CLI v.5.12.4 #1196

Open
djjudas21 opened this issue Aug 10, 2023 · 4 comments


Environment info

  • NooBaa Operator Version: v5.12.4
  • Platform: Kubernetes, MicroK8s v1.26.7

Actual behavior

I'm doing a greenfield installation of the NooBaa Operator via the NooBaa CLI and running into ErrImagePull. I installed NooBaa CLI v5.12.4, but the deployment tries to pull container image v5.12.0, which does not exist on Docker Hub.
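One quick way to confirm whether a given tag actually exists on Docker Hub is to query Docker Hub's public v2 repositories API (a sketch, not part of the original report; the commented `curl` line needs network access):

```shell
# Build the Docker Hub tag-lookup URL for the image the CLI tries to pull.
repo="noobaa/noobaa-operator"
tag="5.12.0"
url="https://hub.docker.com/v2/repositories/$repo/tags/$tag"
echo "$url"
# To actually check, run (HTTP 200 = tag exists, 404 = missing):
#   curl -s -o /dev/null -w '%{http_code}\n' "$url"
```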

Expected behavior

It should pull v5.12.4.

Steps to reproduce

[jonathan@latitude ~]$ brew install noobaa/noobaa/noobaa
Running `brew update --auto-update`...
==> Homebrew collects anonymous analytics.
Read the analytics documentation (and how to opt-out) here:
  https://docs.brew.sh/Analytics
No analytics have been recorded yet (nor will be during this `brew` run).

Installing from the API is now the default behaviour!
You can save space and time by running:
  brew untap homebrew/core
==> Downloading https://formulae.brew.sh/api/formula_tap_migrations.jws.json
####################################################################################################################################################################################################################################### 100.0%
==> Auto-updated Homebrew!
Updated 5 taps (helm/tap, fairwindsops/tap, noobaa/noobaa, homebrew/core and homebrew/cask).
==> Fetching dependencies for noobaa/noobaa/noobaa: go
==> Fetching go
==> Downloading https://ghcr.io/v2/homebrew/core/go/manifests/1.20.7
####################################################################################################################################################################################################################################### 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/go/blobs/sha256:4a2d74187e6fa58781dd0fc4643ee847a3a2b5bffd103f048e532f53fd469d86
####################################################################################################################################################################################################################################### 100.0%
==> Installing noobaa/noobaa/noobaa dependency: go
==> Pouring go--1.20.7.x86_64_linux.bottle.tar.gz
🍺  /home/linuxbrew/.linuxbrew/Cellar/go/1.20.7: 11,997 files, 240.8MB
==> Fetching noobaa/noobaa/noobaa
==> Cloning https://github.com/noobaa/noobaa-operator.git
Cloning into '/home/jonathan/.cache/Homebrew/noobaa--git'...
==> Checking out tag v5.12.4
HEAD is now at ce3a871 Merge pull request #1145 from nimrod-becker/backport_to_5_12
==> Installing noobaa from noobaa/noobaa
==> go mod vendor
==> go generate
==> go build
🍺  /home/linuxbrew/.linuxbrew/Cellar/noobaa/5.12.4: 3 files, 66.0MB, built in 2 minutes 21 seconds
==> Running `brew cleanup noobaa`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
[jonathan@latitude ~]$ kubectl create ns noobaa
namespace/noobaa created
[jonathan@latitude ~]$ kubens noobaa
Context "microk8s" modified.
Active namespace is "noobaa".
[jonathan@latitude ~]$ noobaa install
INFO[0000] CLI version: 5.12.0                          
INFO[0000] noobaa-image: noobaa/noobaa-core:master-20220913 
INFO[0000] operator-image: noobaa/noobaa-operator:5.12.0 
INFO[0000] noobaa-db-image: centos/postgresql-12-centos7 
INFO[0000] Namespace: noobaa                            
INFO[0000]                                              
INFO[0000] CRD Create:                                  
INFO[0000] ✅ Already Exists: CustomResourceDefinition "noobaas.noobaa.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "backingstores.noobaa.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "namespacestores.noobaa.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "bucketclasses.noobaa.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "noobaaaccounts.noobaa.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "objectbucketclaims.objectbucket.io" 
INFO[0000] ✅ Already Exists: CustomResourceDefinition "objectbuckets.objectbucket.io" 
INFO[0000]                                              
INFO[0000] Operator Install:                            
INFO[0000] ✅ Already Exists: Namespace "noobaa"         
INFO[0000] ✅ Created: ServiceAccount "noobaa"           
INFO[0000] ✅ Created: ServiceAccount "noobaa-endpoint"  
INFO[0000] ✅ Created: Role "noobaa"                     
INFO[0000] ✅ Created: Role "noobaa-endpoint"            
INFO[0000] ✅ Created: RoleBinding "noobaa"              
INFO[0000] ✅ Created: RoleBinding "noobaa-endpoint"     
INFO[0000] ✅ Created: ClusterRole "noobaa.noobaa.io"    
INFO[0000] ✅ Created: ClusterRoleBinding "noobaa.noobaa.io" 
INFO[0000] ✅ Created: Deployment "noobaa-operator"      
INFO[0000]                                              
INFO[0000] System Create:                               
INFO[0000] ✅ Already Exists: Namespace "noobaa"         
INFO[0000] ✅ Created: NooBaa "noobaa"                   
INFO[0000]                                              
INFO[0000] NOTE:                                        
INFO[0000]   - This command has finished applying changes to the cluster. 
INFO[0000]   - From now on, it only loops and reads the status, to monitor the operator work. 
INFO[0000]   - You may Ctrl-C at any time to stop the loop and watch it manually. 
INFO[0000]                                              
INFO[0000] System Wait Ready:                           
INFO[0000] ⏳ System Phase is "". Deployment "noobaa-operator" is not ready: ReadyReplicas 0/1 
INFO[0003] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0006] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0009] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0012] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0015] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0018] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0021] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0024] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
INFO[0027] ⏳ System Phase is "". Pod "noobaa-operator-588b8fd64d-tpstb" is not yet ready: Phase="Pending". ContainersNotReady (containers with unready status: [noobaa-operator]). ContainersNotReady (containers with unready status: [noobaa-operator]).  
^C
[jonathan@latitude ~]$ kubectl get po
NAME                               READY   STATUS         RESTARTS   AGE
noobaa-operator-588b8fd64d-sq5gs   0/1     ErrImagePull   0          13s
[jonathan@latitude ~]$ kubectl describe po noobaa-operator-588b8fd64d-sq5gs
Name:                 noobaa-operator-588b8fd64d-sq5gs
Namespace:            noobaa
Priority:             10000
Priority Class Name:  normal-priority
Service Account:      noobaa
Node:                 kube04/192.168.0.55
Start Time:           Thu, 10 Aug 2023 21:02:50 +0100
Labels:               app=noobaa
                      noobaa-operator=deployment
                      pod-template-hash=588b8fd64d
Annotations:          cni.projectcalico.org/containerID: 4656999149f801a91618d43a1dc88408c9da8e7169c4387c19e0a0d53afcbb4e
                      cni.projectcalico.org/podIP: 10.1.102.112/32
                      cni.projectcalico.org/podIPs: 10.1.102.112/32
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.1.102.112
IPs:
  IP:           10.1.102.112
Controlled By:  ReplicaSet/noobaa-operator-588b8fd64d
Containers:
  noobaa-operator:
    Container ID:   
    Image:          noobaa/noobaa-operator:5.12.0
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     250m
      memory:  512Mi
    Requests:
      cpu:     250m
      memory:  512Mi
    Environment:
      OPERATOR_NAME:          noobaa-operator
      POD_NAME:               noobaa-operator-588b8fd64d-sq5gs (v1:metadata.name)
      WATCH_NAMESPACE:        noobaa (v1:metadata.namespace)
      NOOBAA_CLI_DEPLOYMENT:  true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqnwt (ro)
      /var/run/secrets/openshift/serviceaccount from oidc-token (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  oidc-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3600
  kube-api-access-lqnwt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  100s               default-scheduler  Successfully assigned noobaa/noobaa-operator-588b8fd64d-sq5gs to kube04
  Normal   BackOff    21s (x4 over 94s)  kubelet            Back-off pulling image "noobaa/noobaa-operator:5.12.0"
  Warning  Failed     21s (x4 over 94s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    6s (x4 over 99s)   kubelet            Pulling image "noobaa/noobaa-operator:5.12.0"
  Warning  Failed     4s (x4 over 95s)   kubelet            Failed to pull image "noobaa/noobaa-operator:5.12.0": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/noobaa/noobaa-operator:5.12.0": failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:849ad0464b245e2b0ad6295e53efc80dae6b81532fbdfb609070975219969bea: not found
  Warning  Failed     4s (x4 over 95s)   kubelet            Error: ErrImagePull
@djjudas21 (Author)

I worked around by doing:

noobaa install --operator-image='noobaa/noobaa-operator:5.12.4' --noobaa-image='noobaa/noobaa-core:5.12.4'
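An alternative that avoids reinstalling (a sketch, assuming the deployment and container names shown in the `kubectl describe` output above) is to point the existing deployment at the tag that does exist. The snippet only builds and prints the `kubectl set image` command; run the printed command to apply it:

```shell
# Repoint the existing operator deployment at a tag that exists on Docker Hub.
# Names are taken from the describe output above; echoed rather than executed.
ns="noobaa"
tag="5.12.4"
cmd="kubectl -n $ns set image deployment/noobaa-operator noobaa-operator=noobaa/noobaa-operator:$tag"
echo "$cmd"
```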

@liranmauda (Contributor)

Hi @djjudas21
Great workaround 🙂
Homebrew was updated to 5.14.2.
Is this still happening?

@huyleyye commented Feb 21, 2024

> Hi @djjudas21 Great workaround 🙂 homebrew was updated to 5.14.2 Is this still happening?

5.14.4 still has this issue:

Warning Failed 28s kubelet Failed to pull image "gcr.io/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar:v20221117-v0.1.0-22-g0e67387": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar:v20221117-v0.1.0-22-g0e67387": failed to resolve reference "gcr.io/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar:v20221117-v0.1.0-22-g0e67387": failed to do request: Head "https://gcr.io/v2/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar/manifests/v20221117-v0.1.0-22-g0e67387": dial tcp: lookup gcr.io on 1.1.1.1:53: read udp 1.1.1.1:51150->1.1.1.1:53: i/o timeout


OpenShift can install it, but Kubernetes cannot.
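Note that the two failures in this thread are different: the original report is a missing tag (the registry returns `NotFound`), while the later one is a DNS timeout resolving gcr.io from the node. A small sketch (a hypothetical helper, not part of any tool) that tells them apart from the kubelet event message:

```shell
# Hypothetical classifier for the two kinds of pull errors seen in this thread.
classify_pull_error() {
  case "$1" in
    *"i/o timeout"*)          echo "dns-or-network-failure" ;;  # e.g. lookup gcr.io ... i/o timeout
    *"not found"*|*NotFound*) echo "missing-tag" ;;             # tag absent from the registry
    *)                        echo "other" ;;
  esac
}

classify_pull_error 'rpc error: code = NotFound desc = failed to pull and unpack image'
classify_pull_error 'dial tcp: lookup gcr.io on 1.1.1.1:53: read udp: i/o timeout'
```

The first case prints `missing-tag` (fix: use a tag that exists, as in the workaround above); the second prints `dns-or-network-failure` (fix: check the node's DNS/network path to the registry).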
