

How to Optimize Your Kubernetes Cluster with Log Monitoring

May 10, 2019 | Marcin Stozek

Kubernetes is one of the great success stories of 2018. Large companies use it for production application deployments, DevOps production cycles take it for granted, and it has become the clear leader in orchestrating containerized workloads. That growth in popularity, especially for enterprise-scale services, has created new challenges.

Federation is the single biggest Kubernetes open source sub-project, and one that attempts to solve some of those challenges. One of its main goals is to define the APIs and API groups needed to federate any given Kubernetes resource. The first iteration failed because of several design decisions that didn't work out, but Federation V2 offers more flexibility and enhanced capabilities for multi-cluster operations. Federation opens up new possibilities for large-scale, high-availability services, but it also makes orchestration more complex. That complexity makes accurate monitoring in Kubernetes even more essential for catching problems quickly and making sure sufficient resources are available.
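
For a sense of what the Federation V2 (KubeFed) API shape looks like in practice, here is a minimal sketch that reads a FederatedDeployment's cluster placement through the Kubernetes Python client's CustomObjectsApi. The API group and version (types.kubefed.io/v1beta1), the resource plural, and the object name are assumptions for illustration and may differ across KubeFed releases.

    # Sketch: inspect which clusters a KubeFed FederatedDeployment is placed on.
    # The group/version and the object name below are assumptions, not guarantees.
    from kubernetes import client, config

    config.load_kube_config()                  # use the current kubeconfig context
    api = client.CustomObjectsApi()

    fed = api.get_namespaced_custom_object(
        group="types.kubefed.io", version="v1beta1",
        namespace="default", plural="federateddeployments",
        name="my-app",                         # hypothetical object name
    )

    clusters = fed.get("spec", {}).get("placement", {}).get("clusters", [])
    print("my-app is placed on:", [c.get("name") for c in clusters])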

Kubernetes at a Massive Scale

Multi-cloud services provide greater redundancy than single-cloud ones, and therefore greater reliability. The scope of these deployments forces greater reliance on automation, including updates, backups, and configuration adjustments across Kubernetes clusters. A lack of automation increases the risk of outdated code, inconsistent operation, lost data, and reduced efficiency.

Functions as a service (FaaS), also known as serverless, is an increasingly popular way of deploying large-scale services. Knative is a new set of components for running serverless workloads on Kubernetes. According to the Knative website, it provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center. Since each function instance is fully independent, the architecture works well for large workloads.

These large-scale deployments require more automation and, even more important, more intelligence. Simple number-crunching isn't enough to optimize orchestration across such large systems; a large and growing amount of data now comes in from disparate sources. Turning that information into optimal performance and actionable intelligence requires a more sophisticated approach.

The Importance of Monitoring

Monitoring has always been an essential part of Kubernetes. Information needs to be gathered and analyzed both for the cluster as a whole and for individual pods. If a pod becomes unavailable or overloaded for any reason, the cluster services need to make quick adjustments. The difference between the current state of an application and its desired state needs constant monitoring so that adjustments can be made in real time to optimize performance.
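
To make the current-versus-desired-state idea concrete, here is a minimal sketch using the official Kubernetes Python client that compares a Deployment's desired replica count with the number of replicas actually ready. The deployment name and namespace are placeholders.

    # Sketch: compare a Deployment's desired state with its current state.
    from kubernetes import client, config

    config.load_kube_config()                  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    dep = apps.read_namespaced_deployment(name="my-app", namespace="default")  # placeholder names
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0

    if ready < desired:
        # In a real controller, this gap is what triggers corrective action or an alert.
        print(f"my-app is degraded: {ready}/{desired} replicas ready")
    else:
        print(f"my-app is healthy: {ready}/{desired} replicas ready")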

Auto-scaling makes use of monitoring data to determine the optimal number of instances. It can incorporate information from pods and objects, as well as external information such as network traffic. This monitoring also provides valuable feedback to developers and system managers: they can spot and address bottlenecks, or identify areas where the code needs optimization. Admins can figure out where a change in configuration, such as allocating more memory or increasing the cache size, will result in better performance. With so much information available, it's often hard to cut through the noise and make sense of the data. This is where a log management service like LogSense can help.
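
At its core, the Horizontal Pod Autoscaler derives the instance count from the ratio of an observed metric to its target. The sketch below reproduces that basic rule; the CPU numbers and replica bounds are made up for illustration.

    import math

    def desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
        """Core autoscaling rule: desired = ceil(current * metric / target),
        clamped to the configured minimum and maximum replica counts."""
        desired = math.ceil(current_replicas * current_metric / target_metric)
        return max(min_replicas, min(max_replicas, desired))

    # Example: 4 pods averaging 180m CPU against a 100m target -> scale up to 8.
    print(desired_replicas(4, current_metric=180, target_metric=100))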

AI and Machine Learning in Kubernetes

Taking all the available data into account requires advanced techniques, and artificial intelligence and machine learning will play an increasingly important role in cluster management. This will happen at two levels. First, AI will be built directly into Kubernetes cluster services: new features will identify patterns of behavior and anticipate demand, keeping clusters running optimally rather than forcing them to catch up after the monitoring system notices they've slowed down. Second, since many metrics are specific to particular environments, developers will create domain-specific add-ons that add intelligence for particular use cases.
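
As a toy illustration of the kind of learned baseline such intelligence could build on, the sketch below flags unusual spikes in a per-minute event count using a rolling z-score. A production system would use far richer models; the series and threshold here are invented for the example.

    # Toy sketch: flag minutes whose event count deviates sharply from the recent baseline.
    from statistics import mean, stdev

    def find_anomalies(counts, window=30, threshold=3.0):
        """Return indices whose value is more than `threshold` standard deviations
        away from the mean of the preceding `window` values."""
        anomalies = []
        for i in range(window, len(counts)):
            history = counts[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    # Example: a steady baseline with one sudden burst at minute 35.
    series = [100, 102, 98, 101, 99] * 8
    series[35] = 400
    print(find_anomalies(series))  # -> [35]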

It's a safe bet that 2019 will be the year machine learning becomes central to Kubernetes operations. It will be critical to monitoring performance accurately and allocating pods optimally. With large multi-cloud deployments, the amount and variety of data are too great for conventional methods to analyze. ML-based analytics will show DevOps teams exactly where the most important issues are.

This is where a solution like LogSense comes into play. With LogSense, development and DevOps teams are able to:

  • Simplify monitoring across multiple instances and clusters, giving control and power back to teams working on building the applications
  • Parse the data to get information such as pod names or container images (a sketch of that parsing follows this list)
  • Search and reason through the various types of data in order to identify patterns and parameters automatically
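
For a sense of what that parsing involves, independent of any particular tool, the sketch below recovers the pod, namespace, and container name from the kubelet's standard log-file naming convention under /var/log/containers. The filename in the example is made up.

    # Sketch: parse /var/log/containers/<pod>_<namespace>_<container>-<container_id>.log names.
    import re

    LOG_NAME = re.compile(
        r"^(?P<pod>[^_]+)_(?P<namespace>[^_]+)_(?P<container>.+)-(?P<container_id>[0-9a-f]{64})\.log$"
    )

    def parse_container_log_name(filename):
        match = LOG_NAME.match(filename)
        return match.groupdict() if match else {}

    # Made-up example filename:
    print(parse_container_log_name("payments-api-5c7d9_prod_app-" + "a" * 64 + ".log"))
    # -> {'pod': 'payments-api-5c7d9', 'namespace': 'prod', 'container': 'app', 'container_id': 'aaa...'}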

Security in Kubernetes

The downside of becoming popular is becoming a target. As Kubernetes has increasingly become a de facto standard, criminals have looked for weaknesses, both in the software and in the way it's used. Configuration errors can open up weaknesses, such as accidentally exposing test software to public access. Administrators will have to be very careful not to make such mistakes on instances that are reachable from the Internet. Making good use of logging and monitoring will help to detect unauthorized access and abnormal traffic, especially as the amount of data continues to grow. Once again, AI and ML will become increasingly important.
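
As a simple, tool-agnostic illustration of that kind of detection, the sketch below counts rejected (401/403) requests per source address from parsed access-log records and flags sources that exceed a threshold. The record fields and the threshold are assumptions for the example.

    # Sketch: flag source addresses with an unusual number of rejected requests.
    from collections import Counter

    def suspicious_sources(records, threshold=50):
        """records: iterable of dicts such as {"source_ip": "10.0.0.5", "status": 403}."""
        failures = Counter(
            r["source_ip"] for r in records if r.get("status") in (401, 403)
        )
        return {ip: count for ip, count in failures.items() if count >= threshold}

    # Synthetic example: one address repeatedly hitting endpoints it isn't allowed to reach.
    records = [{"source_ip": "10.0.0.5", "status": 403}] * 60 + \
              [{"source_ip": "10.0.0.9", "status": 200}] * 500
    print(suspicious_sources(records))  # -> {'10.0.0.5': 60}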

What Comes Next

Kubernetes is taking on the big and important tasks of container orchestration and management. Artificial intelligence and machine learning will become central to its operation. These advances will help DevOps teams to make better use of their resources in the most complex deployments.

With LogSense, you can get value from that data almost instantly. You can send your data in any format – including unstructured data – and we will create patterns that can be used to help optimize performance and security in your Kubernetes environment. With each new application introduced to your Kubernetes cluster, LogSense applies this intelligence automatically. Want to learn more? We’d love to show you.

 

Topics: Kubernetes
