
Kubernetes vs. Logs: Application Monitoring Challenges

Apr 5, 2019 | Marcin Stozek

As we saw in the previous post, Kubernetes helps us run our applications across multiple nodes in a standardized, declarative way. While we don't need to think about where our applications physically run, we still want insight into how they behave. Generally speaking, we can gain that insight via the three pillars of observability: logs, metrics, and traces. For the purposes of this post and series, let's focus on logs and the associated log monitoring.

In the Kubernetes world, logging comes with its own challenges. Because everything needs to be automated and we can no longer fetch logs from a node by hand, you need to be aware of three points:

  1. You don't know in advance where your application will run, so you need an automated integration with every running pod
  2. If your application's pod has been deleted for some reason, its logs are gone with it
  3. You can have thousands of Kubernetes nodes, and sometimes you don't have access to them; you also don't always have access to the Kubernetes API to read the logs with the kubectl logs command


So, how can you monitor logs in Kubernetes effectively?

Kubernetes by default lets you fetch logs with the kubectl logs command. But those logs are stored locally on the node and disappear if the pod is evicted or the node crashes. There are several solutions to this problem.

Kubernetes: Logging Libraries

The first, and perhaps most obvious, option is to use logging libraries. With their help, you can send logs directly from the application to your log aggregator. While this can work, it creates its own problems: like every other dependency, the libraries need to be updated from time to time, and they differ between languages and frameworks, so we often cannot reuse them when we switch technologies.

Kubernetes: Sidecar Containers

Another solution is to use sidecar containers. They are useful if your application cannot write to the standard output and standard error streams. A sidecar runs in the same pod as your application and shares a volume containing the log files. This way it can read the logs and forward them somewhere else.

There are two types:

(1) A streaming sidecar container forwards logs to its own stdout stream. This way we get the logs on the stdout of the sidecar container and can later read them with the kubectl logs command (a sketch of this pattern follows below). The downside is that our logs get doubled: one copy lives in the original log file and another in the sidecar's log file.

(2) The other type is a sidecar container with a logging agent. The difference is that instead of streaming logs to its own stdout, it forwards them to an external service. While this doesn't double our logs, it also means we cannot use the kubectl logs command to see them, which can be really helpful when something goes wrong. It's always nice to have a backup plan.
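
To make the streaming pattern concrete, here is a minimal sketch of a pod with a streaming sidecar, assuming an application that can only write its logs to a file. The names, images, and log path are illustrative, not taken from any particular setup: the app container appends to a file on a shared emptyDir volume, and the sidecar tails that file to its own stdout.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-streaming-sidecar
spec:
  containers:
  - name: app
    image: busybox                  # stand-in for an app that only logs to a file
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /var/log/app/app.log; sleep 5; done
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-streamer              # the streaming sidecar
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared scratch volume holding the log file

With a pod like this, kubectl logs app-with-streaming-sidecar -c log-streamer shows the application's log lines even though the application itself never writes to stdout.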


Kubernetes: Node-Level Logging Agent

The third solution is to use a node-level logging agent, and it is the best option. The concept is simple: on every node in the cluster, we run a pod that can read the logs of every container running on that node and forward them to the log aggregation service of our choice. For this solution to work, every container should log to stdout and stderr (if some containers cannot do that, we can use the streaming sidecar approach to make it happen). This approach gives us the best of both worlds: our logs stay accessible with the kubectl logs command, and they are also forwarded to the log aggregation service. There are ready-made implementations of this approach that we can use straight away; one of them is the Fluentd Kubernetes DaemonSet.
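
As a rough illustration, here is a heavily trimmed sketch of such an agent deployed as a DaemonSet, loosely modeled on the Fluentd Kubernetes DaemonSet project. The image tag is an assumption, and the real manifests also configure a ServiceAccount, RBAC rules, and the output destination via environment variables, all omitted here:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # also collect logs on master nodes; the taint key varies by Kubernetes version
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch   # illustrative tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true                      # container logs live under /var/log/containers
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Because a DaemonSet schedules exactly one pod per node, adding a node to the cluster automatically brings up a log collector for it; no per-application integration is needed.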


Understanding Logs in Kubernetes

Now that all of your logs flow into the log aggregation service, you should be able to see all of your applications in one place. However, not all services are created equal. With so much data available, merely seeing the information isn't enough; you need to understand it. For the data to become meaningful, the service not only needs to parse the logs to extract metadata such as pod names or container images, it also needs to let us search and reason across the many different types of logging. Generally speaking, logs are only as valuable as the information you extract from them. While log lines may look more or less the same, each just an array of characters, the real value comes from being able to see the patterns and parameters within them. This is a critical piece of log monitoring and log management in Kubernetes.

At LogSense, we help you with logs from your Kubernetes cluster by using a node-level logging agent. Sending logs from your cluster to LogSense is as easy as applying a YAML file; from there, we do the rest. In our next post, we will take a deeper look at how fast, accurate log parsing in Kubernetes can help you better manage your infrastructure. For now, if you want a closer look, let's schedule a quick demo or get you started on a free trial so you can try it for yourself.


Topics: Kubernetes