Pod definition
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    name: test
spec:
  containers:
  - image: 123.456.789.0:9595/test
    name: test
    ports:
    - containerPort: 8443
  imagePullSecrets:
  - name: my-secret
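For context on what happens with this spec: the kubelet decides which registry to contact from the prefix of the image reference, port included. A minimal shell sketch, using the image string from the pod spec above:

```shell
# The registry the kubelet contacts is everything before the first "/"
# in the image reference, port included.
image="123.456.789.0:9595/test"
registry="${image%%/*}"
echo "$registry"   # -> 123.456.789.0:9595
```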
Then I tried to create a pod. The configured pod image is hosted in the Nexus Docker repository. Describing the pod gives the following trace:
Name:           test-pod
Namespace:      default
Node:           ubuntu-child/192.168.91.134
Start Time:     Thu, 16 Feb 2017 12:26:56 +0530
Labels:         name=test
Status:         Pending
IP:             10.44.0.2
Controllers:    <none>
Containers:
  test:
    Container ID:
    Image:          123.456.789.0:9595/test
    Image ID:
    Port:           8443/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vkj94 (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-vkj94:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-vkj94
QoS Class:      BestEffort
Tolerations:    <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
9s 9s 1 {default-scheduler } Normal Scheduled Successfully assigned test-pod to ubuntu-child
7s 7s 1 {kubelet ubuntu-child} spec.containers{test} Normal Pulling pulling image "123.456.789.0:9595/test"
7s 7s 1 {kubelet ubuntu-child} spec.containers{test} Warning Failed Failed to pull image "123.456.789.0:9595/test": Error: image test:latest not found
7s 7s 1 {kubelet ubuntu-child} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "test" with ErrImagePull: "Error: image test:latest not found"
7s 7s 1 {kubelet ubuntu-child} spec.containers{test} Normal BackOff Back-off pulling image "123.456.789.0:9595/test"
7s 7s 1 {kubelet ubuntu-child} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "test" with ImagePullBackOff: "Back-off pulling image \"123.456.789.0:9595/test\""
From both the slave and the master, I can pull from the private repository directly with Docker. The problem occurs only when the kubelet tries to pull the image, even though I added my secret to the pod definition via imagePullSecrets.
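One way to see what the kubelet has to match against is to inspect the secret's stored auth payload. Below is a local reconstruction (an assumption on my part, built from the values used in this report) of the JSON that `kubectl create secret docker-registry` stores; on a live cluster you would instead decode the secret's data field with `kubectl get secret my-secret -o yaml` and `base64 -d` (the exact data key varies with the client version):

```shell
# Hypothetical local reconstruction of the auth payload created by the
# kubectl create secret docker-registry command used in this report.
payload=$(printf '%s' '{"auths":{"123.456.789.0":{"username":"admin","password":"XXXX","email":"test@xyz.com"}}}' | base64 | tr -d '\n')
# Decode it the way you would inspect the real secret:
printf '%s' "$payload" | base64 -d
# Note the auth key is "123.456.789.0" with no :9595 port, while the pod's
# image reference is 123.456.789.0:9595/test.
```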
Is this a BUG REPORT or FEATURE REQUEST?: BUG
Kubernetes version (use kubectl version):
Environment:
Cloud provider or hardware configuration: 2GB RAM/50GB HDD VM
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial
Kernel (e.g. uname -a):
Linux ubuntu 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm, kubectl, docker
Others:NA
What happened: ImagePullBackOff while pulling from a private repository
What you expected to happen: It should pull the image from the private repository.
How to reproduce it (as minimally and precisely as possible):
Then, log in to the registry with Docker:
docker login 123.456.789.0:9595
docker info
Containers: 87
Running: 18
Paused: 0
Stopped: 69
Images: 175
Server Version: 1.12.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 384
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-21-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.937 GiB
Name: ubuntu
ID: FXD7:JQJZ:HO3R:D2NK:RWYL:7DCY:PC2M:43PM:MA7C:QSPN:4RGS:5W6H
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
123.456.789.0:9595
127.0.0.0/8
docker -v
Docker version 1.12.3, build 6b644ec
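For reference, the Insecure Registries entry shown by docker info above comes from the Docker daemon configuration, and every node that pulls the image needs the same setting, not only the machine where docker info was run. A sketch of the setting, assuming Docker >= 1.12 with /etc/docker/daemon.json (the daemon must be restarted after changing it):

```json
{
  "insecure-registries": ["123.456.789.0:9595"]
}
```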
Initialize kubeadm on the master:
kubeadm init --token 123456.1234567890123456 --api-advertise-addresses 192.168.91.133
Create the Kubernetes secret:
kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 --docker-username=admin --docker-password=XXXX --docker-email=test@xyz.com
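One thing worth checking here (an assumption on my part, not confirmed in the report): the --docker-server value is 123.456.789.0, while the pod's image reference uses 123.456.789.0:9595, and pull credentials are looked up by the image's registry prefix, port included. A quick sketch of the comparison:

```shell
image="123.456.789.0:9595/test"   # from the pod spec
secret_server="123.456.789.0"     # from the kubectl create secret command above
if [ "${image%%/*}" = "$secret_server" ]; then
  echo "secret matches image registry"
else
  echo "mismatch: no credentials found for ${image%%/*}"
fi
# A secret that matches the image reference would instead be created with:
#   kubectl create secret docker-registry my-secret --docker-server=123.456.789.0:9595 ...
```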
Create the Weave network:
kubectl apply -f https://git.io/weave-kube
From slave, join the master network
kubeadm join --token=123456.1234567890123456 192.168.91.133