Kubernetes: A comprehensive review

In this series, we've looked at how Kubernetes can help you deploy your application reliably, scalably, and securely on a cluster. We've learned that there are multiple ways to deploy Kubernetes across a variety of cloud providers. We've also learned how to integrate Kubernetes into a CI/CD pipeline so your changes get deployed upstream as quickly as possible. We hope this helped you get a better grasp of how Kubernetes works and why it is so popular among the development and operations communities.

A week in review

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation[1]. In practice, that means your workload, whether it is a single container running a Golang app or tens to hundreds of microservices, can declare how it is to be deployed across your cluster: how many copies of the app to run, what the upgrade strategy should look like, and so on. You can also bin-pack your cluster to maximize resource usage, extracting a better return on the investment made in either cloud resources or bare metal.
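
To make the declarative part concrete, here is a minimal sketch of what such a manifest could look like. The application name, image, and replica count are purely illustrative, not something this series prescribed:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-api                  # hypothetical application name
    spec:
      replicas: 3                      # how many copies of the app to run
      strategy:
        type: RollingUpdate            # the upgrade strategy
        rollingUpdate:
          maxUnavailable: 1            # at most one replica down during an upgrade
          maxSurge: 1                  # at most one extra replica spun up during an upgrade
      selector:
        matchLabels:
          app: hello-api
      template:
        metadata:
          labels:
            app: hello-api
        spec:
          containers:
          - name: hello-api
            image: registry.example.com/hello-api:1.0.0   # illustrative image
            resources:
              requests:
                cpu: 100m              # resource requests like these are what make bin-packing possible
                memory: 128Mi

Given a file like this, the cluster keeps converging towards whatever the manifest declares; changing the replica count or the image tag is just an edit to the file.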

Kubernetes is not a drop-in replacement for an all-inclusive PaaS. It does not provide middleware, data-processing frameworks, databases, or cluster storage out of the box. Which of those to use, and how to deploy them, is up to you, even though you can run all of these solutions on Kubernetes.

What Kubernetes does provide is a portable, extensible, self-healing platform on which to deploy your application. Because everything is declarative, you don't have to converge the state of the world to what's defined in the manifest yourself; that work is handled by the control plane, with the kube-scheduler, for instance, deciding where each Pod should run. All of that is fronted by an API, so it can be wired into any tool capable of talking to a REST API.
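
As a quick illustration of that API-first design, you can proxy the API server to your local machine and query it with nothing more than curl (the namespace below is just an example):

    kubectl proxy --port=8001 &
    # list the Pods in the default namespace through the plain REST API
    curl http://127.0.0.1:8001/api/v1/namespaces/default/pods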

Speaking of tools, there's kubectl, the main way in which you interact with your cluster. Think of it as the SSH of your cluster. With kubectl you can fetch logs and create, delete, update, and scale workloads in your cluster.
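
A few illustrative commands, reusing the hypothetical hello-api Deployment from the manifest above:

    kubectl apply -f deployment.yaml                    # create or update the workload from a manifest
    kubectl logs deployment/hello-api                   # fetch logs from one of its Pods
    kubectl scale deployment hello-api --replicas=5     # scale the workload out to five replicas
    kubectl delete deployment hello-api                 # and tear it down again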

In the vast majority of scenarios where Kubernetes is present, it sits at the end of a delivery pipeline, where the pipeline's artifacts get deployed into the cluster. That is where it adds the most value to your team! Sure, you can make changes to your cluster by issuing kubectl commands from your local machine, but that's not ideal.
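
As a sketch of what that last pipeline stage could look like, assuming the hypothetical hello-api Deployment and an image tagged with the commit being built, a CI job might run something along these lines once the image has been pushed:

    # roll the Deployment forward to the image produced by this pipeline run
    kubectl set image deployment/hello-api hello-api=registry.example.com/hello-api:${GIT_SHA}
    # wait until the rollout finishes (or fails), so the pipeline reports the real outcome
    kubectl rollout status deployment/hello-api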

Future weeks

There are still a few subjects we've yet to touch on: monitoring, securing your cluster, packaging your application and its dependencies (Helm charts), increasing your workload's resilience by federating multiple clusters together, and extending your cluster's capabilities with plugins.

As part of the Cloud Native Computing Foundation, Kubernetes sits alongside a whole suite of other projects that you can benefit from by integrating them with it. From container runtimes to service meshes, tracing, and monitoring, you should definitely check them out and give them a try.

By abstracting away your cluster's inner workings and the orchestration of its workloads, Kubernetes also enables you to start thinking outside the box. Function-as-a-Service models, for instance, give you highly scalable, event-driven software that takes very little hassle to deploy, and they all run on top of the Kubernetes platform.

We'll be bringing those to you over the next couple of weeks. Stay tuned!

[1] https://en.wikipedia.org/wiki/Kubernetes