LogSense Blog

See everything, even before it happens.

DevOps and Log Management in 2019

May 28, 2019 10:30:33 AM | Marcin Stozek


DevOps has risen to unquestioned dominance in the software development world. It's the best way to keep up with demands for rapid change without introducing downtime or buggy releases. DevOps combines development and operations into a single team in order to make frequent and smooth advances.

What is DevOps?

The most important feature of DevOps is a production and release cycle supported by automation. At each step, scripted processes check for errors, and the cycle can't advance until any identified problems are fixed or overridden. Coding glitches don't accidentally slip through, and all supporting code stays up to date.

DevOps often entails continuous integration and continuous delivery (CI/CD). While "continuous" is an exaggeration, the aim is to bring every change through the development cycle as soon as it's ready, rather than accumulating changes for a specific release. Automated processes check the code changes and handle building and deployment, so the risk of breaking a release is minimal. In theory, DevOps offers a way to speed up production with minimal hits to quality.
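The gated cycle described above can be sketched in a few lines of Python. This is purely illustrative: the stage names and checks are invented, and real pipelines run in CI systems rather than a script, but the shape is the same, each stage is an automated check, and the pipeline stops at the first failure.

```python
# Minimal sketch of a gated release cycle: each stage is an automated
# check, and the change can't advance past a failing stage.
def lint(change):
    # Toy check: block changes that still contain a TODO marker.
    return "TODO" not in change

def unit_tests(change):
    # Stand-in for a real test suite run.
    return True

def deploy(change):
    print(f"deploying: {change}")
    return True

PIPELINE = [("lint", lint), ("unit tests", unit_tests), ("deploy", deploy)]

def run_pipeline(change):
    for name, stage in PIPELINE:
        if not stage(change):
            print(f"blocked at {name}: fix or override before advancing")
            return False
    return True

run_pipeline("fix login redirect")
```

In a real setup each stage would be a job in a CI/CD tool, but the control flow is exactly this: no stage runs until the previous one passes.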

Containerization has smoothed the cycle as well. A container holds a full environment for a software release, and with appropriate configuration parameters the same container can serve for development, testing, and release. Container orchestration tools like Kubernetes make containers fit right into the DevOps cycle.

Logging with DevOps

This new paradigm aims to catch problems early, so that released code is reliable. Monitoring and logging are essential to achieving that reliability. Not every problem shows up in the user interface. Some bugs waste resources, degrade performance, or open security holes without being obvious. Others affect the user but are hard to trace to their cause without more information.

At the same time, the proliferation of instances and containers makes log management much more of a challenge. The amount of raw data can swell into the tens of gigabytes. Finding the valuable information amid so many lines exceeds human abilities.

Problems with Manual Log Analysis

Going through logs with search tools such as grep and awk can be painful; at the scale a DevOps environment generates logs, it becomes virtually impossible. Privacy and security concerns also get in the way: administrators may not want developers to see raw log data, since it could include personal information from forms or details about servers. And statistical analysis of logs requires more than finding relevant lines; it involves heavy number-crunching and comparison with previous log data.
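To make the limitation concrete, here is a minimal Python sketch of the grep-style approach (the log lines and service names are invented for illustration). It works fine on four lines, but scanning millions of lines across dozens of instances this way, by eye, quickly becomes unmanageable.

```python
import re

# A tiny, invented sample; real DevOps logs run to gigabytes per day.
log_lines = [
    "2019-05-28 10:30:01 INFO  payment-svc request ok in 42ms",
    "2019-05-28 10:30:02 ERROR payment-svc timeout connecting to db",
    "2019-05-28 10:30:03 WARN  auth-svc retrying token refresh",
    "2019-05-28 10:30:04 ERROR auth-svc missing parameter 'scope'",
]

# The grep-style approach: test every line against one pattern.
error_pattern = re.compile(r"\bERROR\b")
errors = [line for line in log_lines if error_pattern.search(line)]

for line in errors:
    print(line)
```

This finds the matching lines, but it answers only the question you already knew to ask; it says nothing about trends, frequencies, or which errors are new since the last release.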

Today's teams tend to adopt microservices, breaking software into smaller pieces for easier evolution and management. But that flexibility comes with the hidden price of a distributed infrastructure: each piece typically has its own logs, and not all of them share the same format, which makes it challenging to bring them together for a big-picture view.
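A short sketch of what "bringing them together" involves, with two invented formats from two hypothetical services: one writes access-log lines, the other structured JSON. Before any cross-service analysis can happen, each line has to be normalized onto a common shape.

```python
import json
import re

# Two invented log formats from two hypothetical microservices.
access_style = '127.0.0.1 - - [28/May/2019:10:30:05] "GET /cart HTTP/1.1" 500'
json_style = ('{"ts": "2019-05-28T10:30:06", "level": "error", '
              '"svc": "billing", "msg": "charge failed"}')

def normalize(line):
    """Map heterogeneous log lines onto one common record shape."""
    if line.lstrip().startswith("{"):
        record = json.loads(line)
        return {"service": record["svc"],
                "severity": record["level"].upper(),
                "message": record["msg"]}
    match = re.search(r'"(\w+) (\S+) [^"]*" (\d{3})', line)
    if match:
        status = int(match.group(3))
        return {"service": "frontend",
                "severity": "ERROR" if status >= 500 else "INFO",
                "message": f"{match.group(1)} {match.group(2)} -> {status}"}
    return {"service": "unknown", "severity": "INFO", "message": line}

for raw in (access_style, json_style):
    print(normalize(raw))
```

Two formats need two parsers; with dozens of services, maintaining this by hand is exactly the burden that automated log management tools take over.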

The Need for Automated Tools

Automation is what makes the DevOps cycle work, and it has to cover log analysis to do a thorough job of catching issues. Each instance has its own logs, and multiple instances may be in the pipeline. Without sophisticated software assistance, there is no practical way to get insights from so much data.

Automated scripts can analyze logs to identify several types of problems. Non-fatal errors may not cause any obvious problems for the user; they can include invalid data that gets discarded, exceptions that are caught and handled gracefully, reconnection attempts, or missing parameters. These issues can degrade performance, consume memory, and cause problems later on, all of which can quickly eat up time, budgets, or other resources.
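One simple way such a script can surface non-fatal issues is to group similar lines by a rough signature and count them, so that repeated warnings stand out even when no single occurrence looks alarming. The log lines below are invented for illustration.

```python
import re
from collections import Counter

# Invented non-fatal issues that never surface in the UI.
log_lines = [
    "WARN  retrying connection to cache (attempt 2)",
    "WARN  retrying connection to cache (attempt 3)",
    "ERROR invalid record discarded: bad date field",
    "WARN  handled exception in report renderer",
    "ERROR invalid record discarded: bad date field",
]

def signature(line):
    # Collapse numbers so e.g. retry attempts group into one bucket.
    return re.sub(r"\d+", "N", line)

counts = Counter(signature(line) for line in log_lines)
for sig, n in counts.most_common():
    print(n, sig)
```

The most frequent signatures rise to the top, which is often enough to flag a leaking retry loop or a steady stream of discarded records before they become an outage.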

Errors that affect the user often need more than a stack trace to identify the cause. Logs show what was happening before the error message or crash appeared, and a thorough analysis of them is the quickest way to pinpoint the fault that caused the problem.

A new release can also affect performance. Logs are a good way to find out where the time to perform a task, or the resources it requires, has changed. Usage patterns may shift, straining parts of the code that didn't matter much before. Log analysis helps uncover these issues, and statistical analysis of logs identifies trends that can be used to speed up production or avoid future problems.
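The simplest form of that statistical analysis is comparing a metric extracted from logs before and after a release. The latency numbers and the 20% alert threshold below are invented for the sketch; a real pipeline would extract them from log timestamps and tune the threshold to the service.

```python
from statistics import mean

# Hypothetical request latencies (ms) pulled from logs,
# before and after a release.
before = [40, 42, 39, 41, 43, 40]
after = [55, 58, 52, 60, 57, 54]

baseline = mean(before)
current = mean(after)
change = (current - baseline) / baseline * 100

print(f"mean latency: {baseline:.1f}ms -> {current:.1f}ms ({change:+.0f}%)")
if change > 20:  # arbitrary alert threshold for this sketch
    print("regression: new release is noticeably slower")
```

Run continuously over each release, this kind of comparison turns raw log lines into a trend line, which is what lets a team catch a creeping slowdown before users do.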

Adjusting to a New World

Adopting DevOps processes can come at a price: it creates a greater need for log analysis than ever before. Unit and user interface testing won't catch every problem in each release cycle. DevOps teams need log management, and they need better tools to stay on top of it.

LogSense can help you to ingest and analyze large and diverse sets of log data fast. With LogSense, you'll quickly get the insights you need to make your release cycles smooth and error-free. We'd love to show you in a quick demo, or start your free trial today to see for yourself.


Topics: DevOps
