This repository has been archived by the owner on Nov 18, 2022. It is now read-only.

Cannot deploy #25

Open
ray65536 opened this issue Jun 8, 2018 · 12 comments

Comments

@ray65536

ray65536 commented Jun 8, 2018

minikube version: v0.27.0
OS - Ubuntu 16.04

Events:
  Type     Reason                 Age                 From               Message
  ----     ------                 ----                ----               -------
  Normal   Scheduled              58m                 default-scheduler  Successfully assigned redis-server-0 to minikube
  Normal   SuccessfulMountVolume  58m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "opt"
  Normal   SuccessfulMountVolume  58m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "redis-server-volume"
  Normal   SuccessfulMountVolume  58m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-8bkfj"
  Normal   Pulling                57m                 kubelet, minikube  pulling image "redis:3.2"
  Normal   Pulled                 56m                 kubelet, minikube  Successfully pulled image "redis:3.2"
  Normal   Pulled                 55m (x4 over 56m)   kubelet, minikube  Container image "redis:3.2" already present on machine
  Normal   Created                55m (x5 over 56m)   kubelet, minikube  Created container
  Warning  Failed                 55m (x5 over 56m)   kubelet, minikube  Error: failed to start container "redis-server": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/opt/bin/k8s-redis-ha-server\": stat /opt/bin/k8s-redis-ha-server: no such file or directory": unknown
  Warning  BackOff                2m (x232 over 56m)  kubelet, minikube  Back-off restarting failed container
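The error above says the container's entrypoint binary `/opt/bin/k8s-redis-ha-server` is missing from the mounted `opt` volume, which suggests the step that is supposed to copy the binary into that volume never ran or failed. A few diagnostic commands, as a sketch (pod and container names taken from the events above; they require access to the affected cluster):

```shell
# Show the full pod spec and events, including any init containers
kubectl describe pod redis-server-0

# List what actually landed in the shared /opt/bin volume
# (may fail if the container crash-loops too fast to exec into)
kubectl exec redis-server-0 -c redis-server -- ls -l /opt/bin

# Logs of the previous (crashed) container instance
kubectl logs redis-server-0 -c redis-server --previous
```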
@andyjeffries

I get this too, a shame; I'll have to keep looking... I'm connecting to a Kubernetes 1.9 cluster.

@markjacksonfishing

I tried it a few different ways, including bypassing run.sh and moving all of its contents into the Dockerfile. Still no luck :-(

@MPJHorner

Anyone get any progress with this?

@markjacksonfishing

markjacksonfishing commented Jul 25, 2018

I personally moved on from this. I built my own and will be opening up my work to the public on Friday.

@fulvi0

fulvi0 commented Oct 17, 2018

Has anyone had any luck with this issue? @markyjackson-taulia could you share your work? Thanks

@markjacksonfishing

@fulvi0 my work is here: https://github.com/draios/sysdigcloud-kubernetes

@reyou

reyou commented Dec 31, 2018

Check this PR; it fixes this issue:
#23

@FreundB

FreundB commented Jan 17, 2019

System version:

Kubernetes version: v1.13.1 (deployed with Juju on bare metal)
OS - Ubuntu 18.04

What I did:

  • Downloaded "example"-directory
  • Executed "kubectl create -f /example"

This happens:

(screenshot: redis_error)

Did I miss something?

@reyou

reyou commented Jan 17, 2019

Check this: #23

@reyou

reyou commented Jan 17, 2019

Also, while I was experimenting with Redis: a master pod with N slaves, backed by a persistent volume and with Redis persistence enabled, is almost highly available.

Even if the master goes down, Kubernetes launches a replacement in a split second, and the new Redis master reattaches to the persistent storage. The whole thing takes 3-5 seconds, and if you are OK with that, you may not even need the complex custom Sentinel configuration. And while the master is down, you can still read from the slaves.
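A minimal sketch of the setup described above (all names, the image tag, and the storage size are illustrative assumptions, not taken from this repo): a StatefulSet with a single master replica, append-only-file persistence, and a volume claim template so a rescheduled master pod reattaches to its data:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-master
spec:
  serviceName: redis-master        # a matching headless Service is assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: redis-master
  template:
    metadata:
      labels:
        app: redis-master
    spec:
      containers:
      - name: redis
        image: redis:3.2
        # AOF persistence so a replacement pod replays the same dataset
        args: ["redis-server", "--appendonly", "yes"]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Recovery time then depends mostly on how fast the scheduler replaces the pod and how fast the volume reattaches, which is where the 3-5 second figure above comes from.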

@markjacksonfishing

Using a sentinel is a good thing here
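For reference, a minimal Sentinel configuration fragment (master name, hostname, quorum, and timeouts are illustrative values) that monitors one master and triggers failover when it stays unreachable:

```
# sentinel.conf (illustrative values)
sentinel monitor mymaster redis-master 6379 2    # quorum: 2 sentinels must agree
sentinel down-after-milliseconds mymaster 5000   # declare the master down after 5s
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1               # resync one slave at a time
```

Unlike bare pod restarts, Sentinel also handles the case where the old master's data or volume is unavailable, by promoting an up-to-date slave instead.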

@FreundB

FreundB commented Jan 17, 2019

Made my day! It worked right from the start.

Maybe I'll take a deeper look next time.
I want to build multiple Redis clusters in Kubernetes and learn how to use them.
With this solution I can finally start my journey.
I don't know Sentinel yet, but I'll look into how to use it.

Cheers
