By now, you’ve inevitably seen or heard the word Kubernetes somewhere — but you may not know what it means or what it’s used for. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications, whatever the container runtime: Docker, rkt, or CRI-O. It builds on over a decade of experience running production workloads at Google (on its internal system, Borg). Kubernetes aims to solve the following real-world problems:
- Container Deployment and Orchestration, through declarative management of its objects via configuration files. You simply state in a config file what the world should look like to you, and Kubernetes makes sure that the state of your world matches what’s described there.
- Optimized Compute Resource Utilization, by automatically placing containers based on their resource requirements and other constraints.
- A Self-Healing Feature for your application, by managing container life cycles — replacing, rescheduling, and restarting containers when they fail.
- Easy Deploys: push code into your cluster and easily roll back changes.
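That declarative model is easier to grasp with a concrete manifest. Here's a minimal, purely illustrative sketch (the name, labels, and image are hypothetical, not from our app): you declare a desired state — three replicas of a container — and Kubernetes continuously works to make reality match it:

```yaml
# Illustrative Deployment manifest (names and image are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3          # desired state: three copies, always
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25
```

If one container dies, Kubernetes notices that the actual state (two replicas) no longer matches the declared state (three) and starts a replacement — that's the self-healing behavior from the list above.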
A quick example
We have this Production app. It's a simple monolithic Rails application built for demonstration purposes. So far, all we’ve done is put it into a Docker container. Now, let's run it!
Spinning up a Docker container is really easy. You just say:
docker run -p 8080:3000 dailydrip/production:master
And the app is available at localhost port 8080!
Hmmm…. Why isn’t it working?
Duh! We need Postgres running — preferably in another container — to run our app. Alright, no biggie, let's try doing it like this:
docker run --name postgres -e POSTGRES_USER=runner -p "5432:5432" postgres:9.6.6
docker run --name rails -p 8080:3000 --link=postgres -e PGHOST="postgres" dailydrip/production:master
Now let's attach to the rails container and run our migrations with:
docker exec rails rake db:setup
Great; it worked! However, that’s not quite how we’re going to deploy this in production, right? What about secrets? What about scaling this app? How am I supposed to make sure these are always running?
Enter Kubernetes, our hero! By simply defining how we want things to be in our cluster — and letting Kubernetes make scheduling decisions, monitor container health, and manage which endpoints containers should use to interact with each other — we simplify the way we deploy and manage our application’s lifecycle.
Here’s a quick example of what a Kubernetes deployment looks like, covering only the baseline requirements to get an app running:
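The original manifests from the `kubernetes/` directory aren't reproduced here, but a baseline setup for the rails container might look like the following sketch. The image name comes from the commands above; everything else — labels, ports, and the Service details — is an assumption for illustration (the Service name `production` matches the `minikube service` command below):

```yaml
# Illustrative sketch: a Deployment for the rails container plus a
# Service exposing it. Labels and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
        - name: rails
          image: dailydrip/production:master
          ports:
            - containerPort: 3000
          env:
            - name: PGHOST
              value: postgres      # would resolve to a postgres Service
---
apiVersion: v1
kind: Service
metadata:
  name: production
spec:
  type: NodePort                   # so minikube can hand us a URL
  selector:
    app: production
  ports:
    - port: 3000
      targetPort: 3000
```

A similar Deployment and Service pair would cover the postgres container.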
It's as easy as typing:
kubectl create -f kubernetes/
minikube service production --url
We’ve answered a few obvious questions for you already:
Can I use Kubernetes locally?
- Yes; in the next episode we’ll cover a project called Minikube that lets you run a single-node Kubernetes cluster locally!
What are the benefits of using Kubernetes locally?
- As we’ve just seen, the declarative nature of Kubernetes allows us to worry less and less about HOW to converge to the state we want. We can also try changes to our manifests before sending them to a live cluster.
Where can I set up a Kubernetes cluster?
- Kubernetes is provider-agnostic. You can run your cluster across any of multiple availability zones, regions and providers — even bare metal machines!
Can I use Kubernetes with Docker?
- As we’ll see in later videos, Kubernetes is just a tool that helps you manage containers, whether they’re Docker, rkt, or CRI-O.
Are Heroku or other PaaS providers using Kubernetes?
Do I really need Kubernetes?
- As Kelsey Hightower put it: if you already have a provider-agnostic, binpacked, automated, orchestrated, elastic, continuously deployed system that provides deploy and rollback mechanisms, you don’t. ;-)
Who is Kubernetes for?
As Kelsey Hightower, Brendan Burns, and Joe Beda said in their book Kubernetes: Up and Running: Dive into the Future of Infrastructure, Kubernetes — even though it's not a silver bullet for all your problems — is dedicated to every sysadmin who has woken up at 3 a.m. to restart a process; every developer who pushed code to production only to find that it didn’t run like it did on their laptop; every systems architect who mistakenly pointed a load test at the production service because of a leftover hostname they hadn’t updated.
Too many moving parts?
Do we really need to set this all up to use Kubernetes?
The short answer here is: yes. Kubernetes is built so that these moving parts interact with each other through well-defined contracts, which gives us a pretty solid separation of concerns. Think of it as Unix-style: do one single job, and do it well.
Let’s see what those moving parts are, and what each is responsible for.
Today we took a glance at what Kubernetes is and what it aims to fix in our workflow. We've seen how the kubectl (pronounced "kube c-t-l") command-line client works and gained an overall understanding of the abstractions Kubernetes provides.