Developing with minikube
The following recipes will help you set up and configure a development virtual machine that uses minikube as a target cluster. Two alternatives are suggested:
- a lightweight environment, with minikube running as docker container(s) inside the development VM
- a more isolated environment, with minikube running as a second virtual machine on the same host
- VirtualBox installed on the host (tested with VirtualBox versions 5.1.18-5.1.30 and 5.2.0)
- an existing Linux development VM (tested with Xubuntu 16.04-17.10)
This option creates a minikube cluster, using docker, running on the same machine as the development environment. Requires minikube release 0.19 or later.
- Download and install a compatible kubectl (matching k8s and Istio requirements) and minikube (release 0.19+, to support local, hypervisor-free execution).
# download and install kubectl ...
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# ... and minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.22.3/minikube-linux-amd64 \
&& chmod +x minikube && sudo mv minikube /usr/local/bin/
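The two downloads above can also be driven by variables so the pinned versions are easy to bump later; this is a sketch using the same URLs as above, with only the versions factored out:

```shell
# pin tool versions in one place; bump these two lines to upgrade
KUBECTL_VERSION=v1.7.4      # should match the k8s version passed to minikube start
MINIKUBE_VERSION=v0.22.3

curl -Lo kubectl "https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
curl -Lo minikube "https://storage.googleapis.com/minikube/releases/${MINIKUBE_VERSION}/minikube-linux-amd64" \
  && chmod +x minikube && sudo mv minikube /usr/local/bin/
```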
- Start the minikube cluster with the required options (k8s version, API server extensions, etc.). Note the use of --vm-driver=none.
# start minikube ...
sudo -E minikube start \
--extra-config=apiserver.Admission.PluginNames="Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,GenericAdmissionWebhook,ResourceQuota" \
--kubernetes-version=v1.7.5 --vm-driver=none
# set the kubectl context to minikube (this writes over any previous
# configuration in ~/.kube and ~/.minikube, and leaves the files owned by root:root)
sudo -E minikube update-context
# either use sudo on all kubectl commands, or chown/chgrp the files to your user:
# sudo chown -R $USER $HOME/.kube && sudo chgrp -R $USER $HOME/.kube \
# && sudo chown -R $USER $HOME/.minikube && sudo chgrp -R $USER $HOME/.minikube
# wait for the cluster to become ready/accessible via kubectl
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; \
until sudo kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; done
sudo -E kubectl cluster-info
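The readiness loop above will spin forever if the cluster never comes up. A hedged variant that gives up after a deadline (the 120-second value is an arbitrary choice, not from the original):

```shell
# poll node readiness as above, but give up after 120 seconds
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'
deadline=$(( $(date +%s) + 120 ))
until sudo kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "cluster not Ready after 120s" >&2
    exit 1
  fi
  sleep 1
done
```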
To stop the cluster run sudo -E minikube stop.
As of version 0.23, since minikube uses the host's docker daemon, it may leave "orphaned" containers behind on the host. Future minikube versions may perform correct cleanup on exit. As a workaround, you can terminate all minikube-spawned containers using the following commands (possibly added as aliases to ~/.bashrc):
alias minikube-kill='docker rm $(docker kill $(docker ps -a --filter="name=k8s_" --format="{{.ID}}"))'
alias minikube-stop='docker stop $(docker ps -a --filter="name=k8s_" --format="{{.ID}}")'
The above assumes that all and only containers created by minikube have a name prefixed with k8s_.
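Before running either alias, you can preview which containers match the k8s_ name filter (the table format string here is just one possible layout):

```shell
# list the containers the aliases above would stop or remove
docker ps -a --filter="name=k8s_" --format="table {{.ID}}\t{{.Names}}\t{{.Status}}"
```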
- The development machine will typically already have a virtual network connection to the outside world configured, and requires a second virtual network interface to connect to minikube.
- minikube and kubectl installed on the host (see, for example, instructions here)
- Download and install a compatible kubectl (matches k8s and Istio requirements) and minikube (any recent release, tested with 0.18+).
# download and install kubectl ...
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
&& chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# ... and minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.22.3/minikube-linux-amd64 \
&& chmod +x minikube && sudo mv minikube /usr/local/bin/
Start minikube from the host machine's command line and wait for its initialization to complete:
# start minikube, optionally passing in --vm-driver=virtualbox
minikube start \
--extra-config=apiserver.Admission.PluginNames="Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,GenericAdmissionWebhook,ResourceQuota" \
--kubernetes-version=v1.7.5
Once the minikube VM is running:
- use VirtualBox tools to note the host-only adapter used by minikube. When creating a new VM, minikube typically creates a new host-only adapter instead of reusing an existing one.
- determine the minikube machine's IP address by running minikube ip (we'll refer to this as $minikube-ip later)
Place the two virtual machines on the same host-only network. Configure the development VM with a second network card, attached to the same host-only adapter as the minikube VM - don't change minikube's host-only adapter.
Set the minikube machine's IP address using a static configuration. This prevents the development machine's minikube Kubernetes context from becoming stale on minikube restarts. Replace $minikube-ip with the output of minikube ip from the previous step:
minikube ssh "echo 'pkill udhcpc && ifconfig eth1 $minikube-ip netmask 255.255.255.0 broadcast 192.168.99.255 up' \
| sudo tee /var/lib/boot2docker/bootlocal.sh > /dev/null"
If not using minikube's default CIDR (192.168.99.1/24), be sure to replace the netmask and broadcast address accordingly.
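For a /24 network the replacement values follow directly from the network prefix; a small sketch (the prefix below is minikube's default and should be swapped for yours):

```shell
# derive netmask/broadcast for a /24 host-only network
prefix=192.168.99           # first three octets of your host-only network
netmask=255.255.255.0       # fixed for a /24
broadcast="${prefix}.255"   # all-ones host part
echo "netmask=$netmask broadcast=$broadcast"
```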
To retain the static IP configuration, always start minikube through the command line (minikube start) and not through the VirtualBox management GUI.
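After a restart you can confirm the static address stuck; this assumes eth1 is the host-only interface inside the boot2docker-based minikube VM, as in the ifconfig command above:

```shell
# restart minikube and verify eth1 kept the static address
minikube stop && minikube start
minikube ssh "ifconfig eth1 | grep 'inet addr'"
```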
Run the following on the host to determine the user access token for the minikube cluster:
kubectl describe secrets
Name: default-token-xj982
...
token: eyJhb...<redacted>...
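The secret name (default-token-xj982 above) is randomly generated, so a scripted version has to look it up first. A hedged sketch, assuming the default service account's token is the first secret in the namespace:

```shell
# look up the generated secret name, then decode its token
secret_name=$(kubectl get secrets -o jsonpath='{.items[0].metadata.name}')
minikube_token=$(kubectl get secret "$secret_name" -o jsonpath='{.data.token}' | base64 --decode)
echo "$minikube_token"
```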
Copy the token's value as $minikube-token and then proceed to configure kubectl, in the development machine, to use the minikube cluster:
kubectl config set-cluster minikube --server=https://$minikube-ip:8443 --insecure-skip-tls-verify=true
kubectl config set-credentials minikube --token=$minikube-token
kubectl config set-context minikube --cluster=minikube --user=minikube
kubectl config use-context minikube
Run kubectl get nodes inside the development virtual machine to confirm the configuration has been set correctly.
Visit istio.io to learn how to use Istio.