Software management has changed dramatically over the last couple of years. New techniques allow you to deploy your software to production and scale it up and down on the fly. In the old days, we dealt with plenty of problems when managing a service. Take dependencies, for example: a change in one component could break another that depended on it, and with complex systems, it took a lot of care to avoid issues. We had to deploy apps manually to different servers, or even ship whole VMs running the applications. Add scaling on top of that, and things got really complicated.
This is where containers came in and things got interesting. The idea was not new: take the application and put it in a box. But the user experience was a real game changer: you run one command and you have a container running an isolated process on your machine.
Containers are very easy to build and use. Their images can be moved between servers without the need to keep hosts consistent. We no longer need "pet" hosts; we can treat hosts as "cattle," used only to run containers. This way, we can have the same minimal environment everywhere, yet each application in a container can have its own dependencies installed just for it.
How to Run Containers at Scale
Do you need to log in to every host and spin up new containers using configuration management tools? Of course not, there is a better approach. Container orchestration platforms are built to run your images, keep your applications healthy, and spin containers up again when anything goes wrong. Whether a host becomes inaccessible, an application crashes its health checks, or you need to roll out a new version, the container orchestration platform takes care of the heavy lifting. There are several out there, like Docker Swarm and Mesos. But the star of the moment is Kubernetes, which we'll take a closer look at in this blog series.
Kubernetes Container Orchestration
Kubernetes is one of the most active projects on GitHub, with almost 50 thousand stars at the moment. Its community is huge and very active, so it's easy to get help if you have problems. All of the major cloud providers, like Google (GKE), Amazon (EKS), Azure (AKS), and DigitalOcean, offer hosted versions of Kubernetes. But if you are brave enough, or simply cannot use a hosted solution and would like to set it up on-premises, you can. Kubernetes is open source, licensed under Apache 2.0, and it has good documentation. There are distributions that help you with the task, such as K3s. Setting up a production-ready cluster is not an easy task, as it requires a good understanding of Linux networking and general operating system knowledge. If you want to try it on your local machine, there are projects that let you do it almost with one click, like Minikube or MicroK8s. Kubernetes was created by Google, and it has Google's scaling knowledge built into its core. It should come as no surprise that its capabilities in this area are exceptional.
Kubernetes takes care of networking, load balancing, deployment, rollout, scaling up and down, and everything else that comes to mind when thinking about the application lifecycle. It's a really well-written piece of software. Its internal architecture allows it to be extended with plugins, so if there is something we want to add, we can do it fairly easily.
In Kubernetes, we define everything with YAML files. Similar to how a Dockerfile is a recipe for a Docker image, the YAML files are recipes for services, deployments, and load balancers. They contain information about required ports, environment variables, mounted volumes, container images, versions, rollout strategies, and much more. With Kubernetes, instead of preparing long documentation on how to run your application, you can create the whole recipe to set up your application in minutes, which frees you to focus on your software instead of managing the infrastructure.
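As a sketch of what such a recipe can look like, here is a minimal, hypothetical example that pairs a Deployment with a Service. The names, image, port, and environment variable below are placeholders, not a real application:

```yaml
# Hypothetical Deployment: runs three replicas of a placeholder image
# and rolls out new versions gradually via a rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 3                  # scale up or down by changing this number
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate        # replace pods a few at a time on rollout
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0.0   # placeholder image and version
          ports:
            - containerPort: 8080
          env:
            - name: LOG_LEVEL        # example environment variable
              value: "info"
---
# Service: load-balances traffic across the Deployment's pods.
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```

Applied with `kubectl apply -f`, a recipe like this asks Kubernetes to keep three replicas running, restart them if they crash, and route traffic to whichever pods are healthy, all from one short file.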
Kubernetes adoption among IT departments will grow over time. It changes how we manage our software. It gives us flexibility. It helps create a DevOps culture where developers and administrators work together, which is a huge win for both sides. It helps promote good practices like CI/CD. With the help of Kubernetes, everyone can use a world-class engine to orchestrate their infrastructure while focusing on the software instead of managing the deployment details by hand. And that's definitely a game changer.
We'll explore more about how LogSense and Kubernetes work together in upcoming posts on the blog. If you're eager to get started now, we'd love to show you how.