oke-prerequisites

These instructions walk you through setting up an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster, along with a Terraform module that automates part of that process, for use with the Oracle Cloud Infrastructure Quick Start examples.

Prerequisites

First off, you'll need to do some pre-deploy setup. That's all detailed here.

Clone the Module

Now, you'll want a local copy of this repo. You can make one with:

git clone https://github.com/oracle/oke-quickstart-prerequisites.git
cd oke-quickstart-prerequisites/terraform
ls

We now need to initialize the directory containing the module. This downloads the OCI provider plugin that the module depends on. You can do this by running:

terraform init

Terraform should report that it has been successfully initialized.

Deploy

Now for the main attraction. Let's make sure the plan looks good:

terraform plan

The output lists the resources Terraform plans to create.

If that looks good, we can go ahead and apply the deployment:

terraform apply

You'll need to enter yes when prompted. The apply should take about five minutes to run. Once complete, Terraform reports the resources it created.

Viewing the Cluster in the Console

We can check out our new cluster in the console by navigating here.

Similarly, the compute instances running the cluster are viewable here.

Set Up the Terminal

To interact with our cluster, we need kubectl on our local machine. Instructions for that are here. I'm a big fan of easy, and I'm on a Mac, so I just ran:

brew install kubectl


We're also probably going to want helm. Once again, brew is our friend. If you're on another platform, take a look here.

brew install kubernetes-helm

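With both installs done, a quick sanity check confirms the tools are on your PATH (a minimal sketch; the loop and its messages are illustrative, not part of the quickstart itself):

```shell
# Check that both CLIs are installed and visible on PATH.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found on PATH"
  fi
done
```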

The terraform apply step wrote out a Kubernetes config file called config. By default, kubectl expects its config file to be at ~/.kube/config, so we can put it there by running:

mkdir -p ~/.kube
mv config ~/.kube/config
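If you already have a kubeconfig from another cluster, moving config straight over it will clobber it. A slightly more careful sketch backs up any existing file first (the backup name config.bak is just an assumption, not something the quickstart produces):

```shell
# Install the generated kubeconfig, keeping a backup of any existing one.
mkdir -p "$HOME/.kube"
if [ -f "$HOME/.kube/config" ]; then
  cp "$HOME/.kube/config" "$HOME/.kube/config.bak"   # back up the current config
fi
if [ -f config ]; then
  mv config "$HOME/.kube/config"                     # install the new config
fi
```

kubectl can also merge several config files listed in the KUBECONFIG environment variable, but a single file is simplest here.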

We can make sure this all worked by running this command to check out the nodes in our cluster:

kubectl get nodes

That should list each worker node along with its status, roles, age, and Kubernetes version.

Make Yourself Admin

You probably want your kubectl set up so that you're a cluster admin. Otherwise your access to your new cluster will be limited. There are some instructions on that here. You'll need to grab your user OCID (possibly from the console, here) and then run a command like:

kubectl create clusterrolebinding myadmin --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq

kubectl confirms that the cluster role binding was created.

Destroy the Deployment

When you no longer need the OKE cluster, you can run this to delete the deployment:

terraform destroy

You'll need to enter yes when prompted. Once complete, Terraform reports that all of the resources were destroyed.
