- Create Ubuntu EC2 instance
- Install awscli
curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
sudo apt install unzip python   # install unzip and python if you don't have them in your system
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Or, if you face any Python issues with v1, install AWS CLI v2 instead:
# awscli v2 installation
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
# Output
You can now run: /usr/local/bin/aws --version
#check installation
/usr/local/bin/aws --version
#or
aws --version
- Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
- Create an IAM user/role with Route53, EC2, IAM and S3 full access. In my case I created a role named k8s-role.
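If you prefer the CLI to the console, creating the role and its instance profile can be sketched roughly like this (the trust-policy file name is an assumption; the four managed policies correspond to the access list above):

```shell
# Trust policy letting EC2 assume the role (file name is an assumption)
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name k8s-role \
  --assume-role-policy-document file://ec2-trust.json

# Attach the four full-access managed policies mentioned above
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess; do
  aws iam attach-role-policy --role-name k8s-role \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done

# An instance profile is what actually gets attached to the EC2 instance
aws iam create-instance-profile --instance-profile-name k8s-role
aws iam add-role-to-instance-profile --instance-profile-name k8s-role --role-name k8s-role
```

From the EC2 console you would then attach the `k8s-role` instance profile to the Ubuntu server (Actions > Security > Modify IAM role).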
- Attach the IAM user/role to the Ubuntu server
Note: If you created an IAM user with programmatic access, provide the access keys via:
aws configure
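`aws configure` prompts interactively; on a headless box the same values can be set non-interactively with `aws configure set` (the key values below are placeholders, not real credentials):

```shell
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX        # placeholder
aws configure set aws_secret_access_key xXxXxXxXxXxXxXxXxXxXxX  # placeholder
aws configure set region ap-south-1
aws configure set output json
```

If you attached the IAM role to the instance instead, skip this step entirely; the CLI picks up credentials from the instance metadata automatically.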
- Install kops on the Ubuntu instance:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
- Create a Route53 private hosted zone (you can create a public hosted zone if you own a registered domain)
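The private hosted zone can also be created from the CLI; a rough sketch (the VPC ID is a placeholder — a private zone must be associated with the VPC the cluster will run in):

```shell
# Create a private hosted zone for mkn400.tk associated with our VPC.
# vpc-0abc123def456 is a placeholder; use your instance's actual VPC ID.
aws route53 create-hosted-zone \
  --name mkn400.tk \
  --caller-reference "k8s-$(date +%s)" \
  --vpc VPCRegion=ap-south-1,VPCId=vpc-0abc123def456 \
  --hosted-zone-config Comment="kops private zone",PrivateZone=true
```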
- Create an S3 bucket for the kops state store
aws s3 mb s3://dev.k8s.mkn400.tk
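The kops documentation recommends enabling versioning on the state-store bucket so you can recover an earlier cluster state if something goes wrong; a one-line sketch:

```shell
# Enable versioning on the kops state-store bucket (recommended by kops docs)
aws s3api put-bucket-versioning \
  --bucket dev.k8s.mkn400.tk \
  --versioning-configuration Status=Enabled
```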
- Expose environment variable:
#set variable for temporary up to current session or current console
export KOPS_STATE_STORE=s3://dev.k8s.mkn400.tk
# or set the variable permanently for all future sessions
echo "export KOPS_STATE_STORE=s3://dev.k8s.mkn400.tk" | sudo tee -a /etc/profile
- Create SSH keys before creating the cluster
ssh-keygen
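`ssh-keygen` prompts for a file path and passphrase; to script it with the default path and an empty passphrase (a sketch — adjust the key type and size to taste):

```shell
mkdir -p "$HOME/.ssh"
# Generate a 4096-bit RSA key pair without prompting, unless one already exists
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/id_rsa" -N "" -q
```

kops will pick up `~/.ssh/id_rsa.pub` by default when creating the cluster.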
- Create kubernetes cluster definitions on S3 bucket
kops create cluster --cloud=aws --zones=ap-south-1a --name=dev.k8s.mkn400.tk --dns-zone=mkn400.tk --dns private
- Create the Kubernetes cluster. By default, kops creates one master node and two worker nodes.
kops update cluster dev.k8s.mkn400.tk --yes
- Validate your cluster
kops validate cluster
- To list nodes
kubectl get nodes
Deploying Nginx container on Kubernetes
- Deploying the nginx container
kubectl run sample-nginx --image=nginx --replicas=2 --port=80
# Note: on kubectl v1.18+, `kubectl run` no longer supports --replicas; use instead:
# kubectl create deployment sample-nginx --image=nginx --replicas=2 --port=80
kubectl get pods
kubectl get deployments
- Expose the deployment as a service. This will create an ELB in front of those 2 containers and allow us to access them publicly:
kubectl expose deployment sample-nginx --port=80 --type=LoadBalancer
kubectl get services -o wide
- Don't forget to delete the cluster after practice, otherwise AWS will keep charging you:
kops delete cluster dev.k8s.mkn400.tk --yes
Troubleshoot Issue:
"Permission denied (publickey)" or "Authentication failed, permission denied"
Solution:
You must first cd into ~/.ssh:
cd ~/.ssh
kops create secret --state s3://k8s.mkn400.tk --name k8s.mkn400.tk sshpublickey admin -i id_rsa.pub
kops update cluster k8s.mkn400.tk --yes
kops rolling-update cluster k8s.mkn400.tk --yes
The commands above will replace the Kubernetes master and nodes; once the output shows Ready, try to reconnect (use the private key, not the .pub file):
ssh -i <ssh-private-key> ubuntu@api-k8s.mkn400.tk
It should work now.