[001.3] Quick and dirty Kubernetes [12.13.2017]


We now know a bit more about the required moving parts of Kubernetes. Let's walk through a couple of ways to get our hands on a Kubernetes cluster so we can learn to operate it.


Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop, which makes it easy for users looking to try out Kubernetes or develop with it day-to-day.

Minikube's single dependency is a hypervisor, since it runs the single-node cluster inside a virtual machine. Minikube ships with hooks for most of the hypervisors available today. We'll use Oracle's VirtualBox, a free, open-source hypervisor that Minikube supports out of the box.

You can install both virtualbox and minikube on a Mac simply by using Homebrew Cask.

To do that, run:

brew cask install virtualbox
brew cask install minikube
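
You'll also need kubectl, the Kubernetes command-line client, to talk to the cluster. If it isn't already on your machine (an assumption about your setup; any recent kubectl works), one way to get it is through Homebrew:

brew install kubernetes-cli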

With minikube and it’s dependency in place, we can start it by running:

minikube start
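
If Minikube picks up the wrong hypervisor on your machine, you can point it at VirtualBox explicitly. A minimal sketch, assuming a Minikube release from around this time, where the flag is named --vm-driver:

minikube start --vm-driver=virtualbox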

And voilà, we have a single-node Kubernetes cluster on our laptop! We can verify it by listing the cluster's nodes:

kubectl get nodes
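
When you're done experimenting, Minikube can stop or remove the VM entirely. These are standard Minikube subcommands, but double-check them against your installed version:

# check the state of the VM and cluster
minikube status

# stop the VM without deleting the cluster
minikube stop

# delete the VM and the cluster entirely
minikube delete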


Kops

Kops is a tool that helps you manage the lifecycle of production-grade, highly available Kubernetes clusters from the command line. It officially supports Amazon Web Services, with Google Compute Engine in beta and VMware vSphere in alpha at the moment.

You can also install kops on macOS with brew. To do so, simply run:

brew install kops
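
A quick sanity check that the install worked:

kops version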

Kops requires a couple of extra dependencies. In this case, we'll be creating our cluster on AWS, so we need a Route53 hosted zone for the domain we'll be using with our cluster, and an S3 bucket to store our cluster's configuration and state.

We'll use the AWS CLI to create those for us. Run the following commands to create the hosted zone that will serve the kubernetes.lucazz.me subdomain and the S3 bucket that will hold the state:

aws route53 create-hosted-zone --name lucazz.me --caller-reference $(date +%s)
aws s3 mb s3://kubernetes.lucazz.me
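
The kops create command below needs the hosted zone's ID for its --dns-zone flag, and kops recommends turning on versioning for the state bucket. Both are quick AWS CLI calls (the --query expression is just one way to pull the ID out):

# find the hosted zone ID (the part after /hostedzone/) to pass as --dns-zone
aws route53 list-hosted-zones --query "HostedZones[?Name=='lucazz.me.'].Id"

# version the bucket so kops can recover previous cluster state
aws s3api put-bucket-versioning --bucket kubernetes.lucazz.me --versioning-configuration Status=Enabled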

After meeting kops' dependencies, we can export an environment variable called KOPS_STATE_STORE pointing at our recently created S3 bucket and have kops configure and build our cluster like this:

export KOPS_STATE_STORE=s3://kubernetes.lucazz.me

kops create cluster \
        --name=kubernetes.lucazz.me \
        --cloud=aws \
        --dns-zone=ZU5E08U59ODY6 \
        --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1e \
        --master-zones=us-east-1a,us-east-1b,us-east-1c \
        --kubernetes-version=v1.8.4

kops update cluster kubernetes.lucazz.me --yes

After a couple of minutes your cluster will be ready. You can check on its status with the following command:

  kops validate cluster
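
Once validation passes, kops has already written a kubectl context for the cluster. If you ever need to regenerate that kubeconfig, or tear the whole thing down, both are single commands, assuming the same cluster name and KOPS_STATE_STORE as above:

  kops export kubecfg kubernetes.lucazz.me
  kops delete cluster kubernetes.lucazz.me --yes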

Google’s GKE

Google Kubernetes Engine is a managed environment for deploying containerized applications on Google Cloud. It is by far the easiest and most streamlined way to get your hands on a Kubernetes cluster today. All you need is a Google Cloud account and the gcloud CLI installed. One of the major benefits of using Google Kubernetes Engine is that you don't have to manage the master nodes; Google does that for you, and doesn't charge you for them either. You end up paying only for the worker nodes you spin up in that cluster.
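
If kubectl isn't on your machine yet, the Cloud SDK can pull it in as a component (a quick sketch; skip this if you already installed kubectl another way):

gcloud components install kubectl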

To spin up a new Kubernetes cluster on GKE, you’ll need to configure gcloud to use the project you’ve created on your Google Cloud Admin interface. In order to configure gcloud, simply run:

gcloud config set project daily-drip
gcloud config set compute/zone us-west1-a

This will set up gcloud to use your project in the us-west1-a zone. To create your Kubernetes cluster and fetch its access credentials, run the following commands:

gcloud container clusters create daily-drip
gcloud container clusters get-credentials daily-drip

There you have it! Your own Google Kubernetes Engine cluster.

Let's try it out. We'll create a sample deployment and expose it with a Google Cloud load balancer. To do so, run:

  kubectl get nodes
  kubectl run hello-server --image=gcr.io/google-samples/hello-app:1.0 --port 8080
  kubectl expose deployment hello-server --type="LoadBalancer"
  watch kubectl get service hello-server
  curl <external ip>:8080/
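
The load balancer and the cluster both cost money while they're running, so it's worth cleaning up once you've seen hello-app respond. Roughly, assuming the same names as above:

  kubectl delete service hello-server
  kubectl delete deployment hello-server
  gcloud container clusters delete daily-drip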

Kubernetes Distros

There are also a couple of Kubernetes distros: projects and products built on top of Kubernetes. These include, but are not limited to, the following:


OpenShift Origin

OpenShift Origin is an application platform where developers and teams can build, test, deploy, and run their applications. OpenShift Origin also serves as the upstream code base upon which OpenShift Online and OpenShift Container Platform are built.

Canonical Distribution of Kubernetes

This is pure Kubernetes tested across the widest range of clouds with modern metrics and monitoring, brought to you by the people who deliver Ubuntu.


Tectonic

Tectonic is a secure, automated, hybrid enterprise Kubernetes platform. It automates operational tasks and enables platform portability and multi-cluster management. It always tracks the latest upstream open-source release, which helps eliminate vendor lock-in.


Today we saw a couple of ways to spin up a new Kubernetes cluster. Ultimately there are no wrong options here; pick the solution that best fits your needs!