# Install Kubernetes cluster on AWS using kops

1. Create an Ubuntu EC2 instance.

2. Install the AWS CLI:

```shell
curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
sudo apt install unzip python   # the bundled installer needs unzip and python
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
```

Or, if the bundled installer runs into Python issues, install AWS CLI v2 instead:

```shell
# AWS CLI v2 installation
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
# Check the installation
/usr/local/bin/aws --version
# or
aws --version
```
3. Install kubectl:

```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # verify the install
```
4. Create an IAM user/role with Route53, EC2, IAM and S3 full access. In my case I created a role named k8s-role.

5. Attach the IAM role to the Ubuntu server.

Note: if you created an IAM user with programmatic access instead, configure its access keys:

```shell
aws configure
```
6. Install kops on the Ubuntu instance:

```shell
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
```
7. Create a Route53 private hosted zone (you can create a public hosted zone if you own a domain).

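If you prefer the CLI over the console, a private hosted zone can also be created with `aws route53 create-hosted-zone`. This is a sketch; the VPC ID below is a placeholder you must replace with your own:

```shell
# Sketch: create a private hosted zone attached to your VPC.
# VPC_ID and the region are assumptions -- substitute your own values.
aws route53 create-hosted-zone \
  --name dev.k8s.mkn400.tk \
  --vpc VPCRegion=ap-south-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference "k8s-$(date +%s)"
```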
8. Create an S3 bucket to hold the kops state:

```shell
aws s3 mb s3://dev.k8s.mkn400.tk
```
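The kops documentation recommends enabling versioning on the state bucket so that cluster state can be recovered if it is ever corrupted; this is a standard S3 API call:

```shell
# Enable versioning on the kops state bucket (recommended by the kops docs)
aws s3api put-bucket-versioning \
  --bucket dev.k8s.mkn400.tk \
  --versioning-configuration Status=Enabled
```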
9. Expose the environment variable pointing kops at the state bucket:

```shell
# Set the variable temporarily, for the current session/console only
export KOPS_STATE_STORE=s3://dev.k8s.mkn400.tk
```

or

```shell
# Set the variable permanently, for all sessions (writing to /etc/profile needs root)
echo "export KOPS_STATE_STORE=s3://dev.k8s.mkn400.tk" | sudo tee -a /etc/profile
```
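A common convenience (not part of the original steps) is to also export `KOPS_CLUSTER_NAME`; kops reads both variables automatically, so later commands can omit the cluster name:

```shell
# Assumed convenience variables; kops picks both up from the environment
export KOPS_STATE_STORE=s3://dev.k8s.mkn400.tk
export KOPS_CLUSTER_NAME=dev.k8s.mkn400.tk
echo "$KOPS_CLUSTER_NAME"
```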
10. Create SSH keys before creating the cluster:

```shell
ssh-keygen
```
11. Create the Kubernetes cluster definition in the S3 bucket:

```shell
kops create cluster --cloud=aws --zones=ap-south-1a --name=dev.k8s.mkn400.tk --dns-zone=mkn400.tk --dns private
```

⚠️ Be very careful with `--zones`: the availability zones must belong to the region you are working in.
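The defaults can also be overridden at creation time; node count and instance sizes are controlled by flags on `kops create cluster`. A sketch of the same command with explicit sizing (the instance types here are illustrative choices, not from the original):

```shell
# Sketch: same create command with explicit, assumed sizing flags
kops create cluster \
  --cloud=aws \
  --zones=ap-south-1a \
  --name=dev.k8s.mkn400.tk \
  --dns-zone=mkn400.tk \
  --dns private \
  --node-count=2 \
  --node-size=t3.medium \
  --master-size=t3.medium
```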

12. Create the Kubernetes cluster. By default this creates one master node and two worker nodes:

```shell
kops update cluster dev.k8s.mkn400.tk --yes
```

13. Validate your cluster:

```shell
kops validate cluster
```

14. List the nodes:

```shell
kubectl get nodes
```

## Deploying an Nginx container on Kubernetes

1. Deploy the nginx container. Note: recent kubectl versions removed `--replicas` from `kubectl run`; use `kubectl create deployment` instead:

```shell
kubectl create deployment sample-nginx --image=nginx --replicas=2 --port=80
kubectl get pods
kubectl get deployments
```
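If you later need more or fewer replicas, the deployment can be scaled in place; this is a standard kubectl command, not specific to this tutorial:

```shell
# Scale the deployment up to three replicas and watch the pods appear
kubectl scale deployment sample-nginx --replicas=3
kubectl get pods
```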
2. Expose the deployment as a service. This creates an ELB in front of the two pods and lets us access them publicly:

```shell
kubectl expose deployment sample-nginx --port=80 --type=LoadBalancer
kubectl get services -o wide
```
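Once the ELB hostname has been provisioned (this can take a minute or two), you can pull it out of the service and test it; the jsonpath expression is a standard kubectl output option:

```shell
# Grab the ELB hostname from the service and hit it with curl
ELB=$(kubectl get service sample-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s "http://$ELB" | head -n 5
```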
3. Don't forget to delete the cluster after practice, otherwise AWS will keep billing you:

```shell
kops delete cluster dev.k8s.mkn400.tk --yes
```

## Troubleshooting

**Issue:** "Permission denied (publickey)" or "Authentication failed, permission denied"

**Solution:** you must `cd` to `~/.ssh` first, then register the public key as a kops secret and roll the cluster:

```shell
cd ~/.ssh
kops create secret --state s3://k8s.mkn400.tk --name k8s.mkn400.tk sshpublickey admin -i id_rsa.pub
kops update cluster k8s.mkn400.tk --yes
kops rolling-update cluster k8s.mkn400.tk --yes
```

The rolling update replaces the master and worker nodes; wait until the output shows them as Ready, then reconnect using the matching private key (note: `ssh -i` takes the private key, not the `.pub` file):

```shell
ssh -i ~/.ssh/id_rsa ubuntu@api.k8s.mkn400.tk
```

It will work.