[001.4] Operating our cluster

Operating our Kubernetes Cluster & Building a CI pipeline

Now that our cluster is set up and ready, let's deploy our app to it. We'll walk through the basic Kubernetes manifests required to do so.

As you can see, we don't have any pods or custom namespaces running on our cluster yet. A namespace is an abstraction for a virtual cluster running inside our physical cluster. Let's create one for our production app.

But first, let’s check our namespace manifest content. As you can see, it's a pretty simple manifest: it has its own metadata with a set of labels to apply to this namespace.
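For reference, here's a minimal sketch of what kubernetes/namespace_production.yml might look like; the label key and value are placeholders, not necessarily the ones in the real file:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production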

Labels are how we tag our Kubernetes resources and objects so we can reference them later on. To create our namespace, let's simply run:

kubectl create -f kubernetes/namespace_production.yml

We now have our production namespace.

In order for our app to work properly, it has to connect to GitHub and a Postgres database. Those parameters are injected into our containers as environment variables through Kubernetes secrets. Let's write them out in files and import them into our cluster. To import our secrets, let's simply run:

kubectl create --namespace=production secret generic production-secrets --from-file=kubernetes/secrets
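With --from-file pointed at a directory, each file in that directory becomes a key in the secret, and the file's contents become that key's value. A hypothetical layout of kubernetes/secrets (the file names and their purpose are assumptions, not the real files):

kubernetes/secrets/
  GITHUB_TOKEN         # token our app uses to talk to GitHub
  POSTGRES_USER        # database user
  POSTGRES_PASSWORD    # database password

You can confirm which keys were imported, without printing their values, with kubectl --namespace=production describe secret production-secrets.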

Good; as you can see, those secrets were created in our production namespace and are not accessible from the default or any other namespace in our cluster.

Now let's prep our infrastructure to run our app. In order to persist our Postgres data outside of the cluster in a way Kubernetes can keep track of, we'll create a volume and use it as the data volume for the Postgres container. To create a volume in Google Compute Engine, simply run:

gcloud compute disks create --size 200GB postgres-volume

You can see the recently created disk with:

gcloud compute disks list

With the Postgres volume in place, let's create the Postgres container deployment. Here's what a common deployment that leverages a cloud volume looks like. The first thing we can see is the kind of manifest this is: a deployment. Here we set the number of pods we want Kubernetes to run for this deployment. The Postgres container will use the Postgres 9.6.6 Docker image and expose port 5432. Next we inject into that Postgres container all the environment variables it needs to provision itself the first time it boots up. And last but not least, we bind the Google Cloud volume we've just created to the container and mount it at the container's data directory; referencing the disk by the name we gave it earlier is how Kubernetes figures out that this is the volume we've just created. To create our Postgres deployment, we simply run:

kubectl --namespace=production create -f kubernetes/deployment_postgres.yml
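For reference, here's a minimal sketch of a deployment along these lines; the labels, secret keys, and mount path are assumptions based on the description above, not the actual contents of kubernetes/deployment_postgres.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: production
  labels:
    tier: database
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: database
  template:
    metadata:
      labels:
        tier: database
    spec:
      containers:
        - name: postgres
          image: postgres:9.6.6
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER              # assumed secret key names
              valueFrom:
                secretKeyRef:
                  name: production-secrets
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: production-secrets
                  key: POSTGRES_PASSWORD
          volumeMounts:
            - name: postgres-volume
              mountPath: /var/lib/postgresql/data   # Postgres' default data directory
      volumes:
        - name: postgres-volume
          gcePersistentDisk:
            pdName: postgres-volume   # the disk we created with gcloud above
            fsType: ext4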

Let's see that deployment in action:

kubectl --namespace=production get pods

As we can see here, the deployment is still not complete: the kubelet has created the pod, but the Postgres container is not ready yet. We can describe the deployment to get a peek under the hood with:

kubectl --namespace=production describe deployment postgres

The Available condition is not met yet because there aren't enough healthy replicas running. Let's give it a little more time to provision itself.
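If you'd rather block until the rollout finishes instead of checking repeatedly, kubectl can wait for it:

kubectl --namespace=production rollout status deployment postgres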

Great! It has been successfully deployed!

Now let's expose Postgres with a service. Postgres is an internal service that only gets traffic from our app's pods, so we don't need to expose its port to the internet.

Here's what a common internal service looks like. The first thing we can see is the kind of manifest this is: a service. Again, we tag our service with labels, name it, and specify which port it will expose. Then we apply a selector: this service covers all pods that match the label “tier equals database.” To create our service, simply run:

kubectl --namespace=production create -f kubernetes/service_postgres.yml
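Here's a minimal sketch of what such an internal service might look like, assuming the file name kubernetes/service_postgres.yml used above; the service name is also a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: production
  labels:
    tier: database
spec:
  ports:
    - port: 5432
  selector:
    tier: database

With no type set, the service defaults to ClusterIP, which is exactly what we want for an internal-only service.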

When we query our cluster for our services with kubectl --namespace=production get services, we can now see our Postgres service and verify that it has a cluster IP assigned to it. Alright! Everything checks out with our Postgres deployment.

Let's move on to deploying our app. Here's what our app's deployment looks like. Again, the first thing we notice is the manifest kind: it's a deployment, followed by some metadata; within that metadata, we have our labels. Here we set the number of pods we want our deployment to have at any given time; in this case, two. We name and tag those pods when they're spun up, and we set the specification of our pods: the containers they'll run, their names, which Docker image to use, and the environment variables and secrets to inject into those containers. To create our app's deployment, simply run:

kubectl --namespace=production create -f kubernetes/deployment_rails.yml
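For reference, a sketch of a deployment along these lines; the image, container port, and labels are placeholders, not the actual contents of kubernetes/deployment_rails.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails
  namespace: production
  labels:
    tier: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
        - name: rails
          image: our-registry/our-rails-app:latest   # hypothetical image name
          ports:
            - containerPort: 3000                    # Rails' default port, assumed
          envFrom:
            - secretRef:
                name: production-secrets             # inject every key as an env var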

And there you have it: two copies of our pod running on our cluster. Now let's expose that deployment through a Google Cloud load balancer. This service is a little different from the one we just created; let's take a look. Again, we have our manifest kind set to service, but there's a new field here, the type field, which we set to LoadBalancer. We then set it to listen on the HTTP port and forward all inbound traffic to the port exposed by our rails pods. To create our app's service, simply run:

kubectl --namespace=production create -f kubernetes/service_rails.yml
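And a sketch of the load balancer service just described, assuming the file name kubernetes/service_rails.yml used above and the same placeholder labels and port as the deployment sketch:

apiVersion: v1
kind: Service
metadata:
  name: rails
  namespace: production
spec:
  type: LoadBalancer
  ports:
    - port: 80          # listen on HTTP
      targetPort: 3000  # forward to the port exposed by the rails pods (assumed)
  selector:
    tier: frontend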

When the service type is set to LoadBalancer and we're running Kubernetes on a supported cloud provider, like Google Cloud or Amazon Web Services, the Kubernetes cloud controller takes over from there and issues the required API calls to provision our load balancer based on the configuration in that manifest. It might take a while for the load balancer to be ready; until then, the EXTERNAL-IP field will list pending as its value. Let's get only our load balancer's IP now.

You can parse the output of your kubectl commands with shell tools such as awk, grep, or sed, just like we're doing here: we fetch the output of our kubectl command, grep for the line that has production on it, and cut out only the fourth column.

Now that we have our load balancer ready, let's curl it to check out our app. This time we're embedding the same command into a curl, so we can fetch that IP address with it.
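Concretely, the two commands look something like this, assuming, as the grep implies, that the load balancer's line in the output contains the word production; the external IP is the fourth column of the get services output:

kubectl --namespace=production get services | grep production | awk '{ print $4 }'
curl http://$(kubectl --namespace=production get services | grep production | awk '{ print $4 }')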

Whoops! We're getting a 500 error from the rails container. Why is that?

Oh, right, we have to run our database migrations first!

Kubernetes has a different kind of object to address one-off tasks like running a rails migration: the Job. Let's look at a job manifest:

Again, right at the top of our manifest, we have our kind set to job. We add some metadata about our job.

And we specify our job's template: which containers should be spun up, with which image; we can even override which command should run inside that container. In this case, we'll be using the same container image our app runs on, only overriding the container's command to run our rake tasks, and of course injecting the same environment variables as earlier so our rails job can run properly. Pay attention to the last line of this manifest, where we set the restart policy to never; this is why this container will not be restarted once it terminates. To create a job, you might already have guessed that we run:

kubectl --namespace=production create -f kubernetes/job_migrations.yml
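A sketch of a migration job along these lines; the job name, image, and rake task are assumptions based on the description above, not the actual kubernetes/job_migrations.yml:

apiVersion: batch/v1
kind: Job
metadata:
  name: migrations
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: migrations
          image: our-registry/our-rails-app:latest   # same placeholder image as the app
          command: ["rake", "db:migrate"]            # override the container's command
          envFrom:
            - secretRef:
                name: production-secrets             # same env vars as the app
      restartPolicy: Never                           # don't restart the container once it terminates

Once it's created, kubectl --namespace=production describe job migrations (using the name from this sketch) is one way to see the running, succeeded, and failed counts mentioned below.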

Now let's give it a minute to run. We can query our job's description for the pod statuses to figure out whether or not our job has finished. As you can see, we have zero jobs running, one successful run, and zero failed runs. Let's try to hit that load balancer one more time. Great! It looks like our app is running properly now. Let's open it in a browser!

And there you have it: Our rails app running on Kubernetes!

Summary

Today we deployed our app to a Kubernetes cluster and walked through all of the manifests required to do so.
