Deep Learning World 2019 is next week at Caesars Palace in Vegas. If you’re planning to be there, here are three things you should know:
- We’d love to connect live! If you’re around for a quick get-together or chat, ping us here or at press(at)logsense.com.
- Mark your calendars for Wednesday, June 19 at 3:30pm. Our co-founder and VP of Research & Development, Przemek Maciolek, is giving a talk on how real-time, automated pattern discovery can impact overall network security and performance. The talk, titled "Elevating Deep Learning for Network Security and Performance with Real-Time Pattern Discovery in Logs," promises to be informative, interactive, and entertaining.
- Take a look! We took a few minutes to talk with Przemek about his talk and the current state of deep learning, and he shared some great insights to help prepare for your week at Deep Learning World.
Q&A with Przemek Maciolek, co-founder & VP R&D at LogSense:
You've worked with ML/DL for a long time - what's the most interesting change or advancement you've seen?
I find it amazing how far we’ve come – and how much there still is to be discovered. I believe the greatest advancement is the democratization of ML/DL. It has never been easier to start working with large amounts of data and get results quickly. There are great tools that let us express a DL network in just a few lines of code. The community is thriving: researchers share their results in notebooks, which makes it easy for anyone to pick up their work and keep going with the data. Progress is happening daily, and it’s truly incredible to be part of it. The landscape is very different from the one I encountered when I started, say, 15 years ago.
What is real-time or automated pattern discovery and why is that important?
Automated pattern discovery allows the computer system to "understand" the logs – literally, all of them. Logging was invented to make logs human readable, so that we could look through them and see what's going on at any point in time. In this format, an administrator can eventually figure out what a log means, but it's much harder for a machine. The standard approach is to create a parser that matches human-defined patterns against the logs and extracts meaningful content from them (e.g. status messages, process names, variables, etc.). That approach is obviously time consuming, and it leaves out a long tail of logs that never get dedicated parsers.
At LogSense, we devised a way to do that part automagically. Through the application of Natural Language Processing and statistics, we're able to do automated pattern discovery. Our technology and approach have two great benefits: (1) users can work with their logs right away, including charts on parameters and the like, and (2) ML can now reason about logs in terms of previously seen patterns and parameters, which lets us build an anomaly detection model with even more use cases.
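To make the idea of pattern discovery concrete, here is a deliberately simplified sketch: group log lines of the same shape and mark positions whose token varies across the group as a parameter placeholder. The real LogSense technology is patent-pending and not described here; this toy version (the `discover_patterns` function and the `<*>` placeholder are our own illustrative inventions) only shows how templates and parameters can fall out of the raw text.

```python
from collections import defaultdict

def discover_patterns(logs):
    """Toy pattern discovery: bucket logs by token count, then mark
    token positions that vary across the bucket as parameters <*>."""
    groups = defaultdict(list)
    for line in logs:
        tokens = line.split()
        groups[len(tokens)].append(tokens)

    patterns = {}
    for length, rows in groups.items():
        template = []
        for pos in range(length):
            values = {row[pos] for row in rows}
            # A position with a single observed value is constant text;
            # anything else is treated as an extracted parameter.
            template.append(rows[0][pos] if len(values) == 1 else "<*>")
        patterns[length] = " ".join(template)
    return patterns

logs = [
    "user alice logged in from 10.0.0.1",
    "user bob logged in from 10.0.0.7",
    "user carol logged in from 192.168.1.4",
]
print(discover_patterns(logs))
# all three lines collapse to the template "user <*> logged in from <*>"
```

Once logs are reduced to templates plus extracted parameters like this, downstream ML no longer sees free-form text but a compact, structured representation.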
When you talk about inputs for DL and separating from the noise, what does that mean and why is this an issue today?
In the age of microservices handling thousands of transactions per second, we frequently deal with huge volumes of logs coming from different sources. It's not really viable for a human operator to look at all of that data at once, and it's safe to assume that many of the logs are not helpful, i.e. they might be considered noise. The traditional way to handle that problem is to define rules – filters or custom-built dashboards – that focus on data known to carry valuable information (such as error stack traces, API calls with error responses, etc.). This approach still misses problems the operator isn't aware of, even ones hidden in plain sight. There's simply too much data to handle the load manually.
Applying DL on top of meaningful logs (which is what pattern discovery provides) allows us to deliver high-quality anomaly detection. It's this approach that uncovers the gaps and provides full visibility into the important things going on in the computer system at any given time.
What can attendees expect to walk away with -- that they may not be expecting based on the talk abstract alone?
Deep Learning has had great success in areas such as NLP and Computer Vision. Many other areas are still waiting for a similar success story. With logs, we gain new possibilities by taking a fresh look at the problem. By feeding log data into the model using a novel way of analyzing the information, we are able to get a more complete picture. Just detecting an anomaly, while valuable, is not enough: explaining what the anomaly is about is just as important, and using autoencoders allows us to achieve that goal.
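The explanation idea behind autoencoders can be sketched with a linear stand-in: a principal-component projection acts as a linear "autoencoder" (encode to a low-dimensional code, decode back), and the per-feature reconstruction error points at which signal deviated. This is only an illustration of the reconstruction-error principle, not the LogSense model; the data, feature meanings, and `reconstruction_error` helper are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "normal" behavior: 200 observations of 4 correlated per-pattern
# log rates (imagine requests, responses, auth events, and errors).
base = rng.normal(100, 5, size=(200, 1))
normal = np.hstack([base, base * 2, base * 0.5, base * 0.1]) \
    + rng.normal(0, 1, (200, 4))

mean = normal.mean(axis=0)
X = normal - mean
# The top principal component captures the shared "everything moves
# together" direction; it serves as a 1-dimensional linear code.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:1]

def reconstruction_error(x):
    centered = x - mean
    code = centered @ components.T   # encode
    recon = code @ components        # decode
    return (centered - recon) ** 2   # squared error per feature

# An anomalous observation: the error rate (feature 3) spikes alone
# while the other rates stay at their usual levels.
anomaly = np.array([100.0, 200.0, 50.0, 40.0])
err = reconstruction_error(anomaly)
print("most anomalous feature:", int(err.argmax()))  # -> 3
```

The point of the sketch: the model reconstructs the anomalous observation poorly exactly along the feature that broke the learned correlations, so the error profile itself explains what the anomaly is about.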
What's the story behind LogSense -- why should people in IT or DevOps take a closer look at LogSense?
LogSense was started by a group of professionals who were unhappy with the state of the art -- or the status quo -- in monitoring systems. We believe that advanced log parsing opens new doors to helping organizations improve security and performance - particularly when you apply automation like we've done with our patent-pending LogSense technology. With LogSense, you gain effortless access to the data in your logs and can combine it with traces and metrics to get the full picture, especially when coupled with Deep Learning-based anomaly detection.
It's an exciting time to be in the space, and we're poised to see even greater advancements in the coming months and years. For now, I'm looking forward to being at Deep Learning World to meet with customers, network with community peers, and learn more about the newer ML/DL technologies.
Not heading to Vegas, but want to see the LogSense technology in action? Sign up for a free trial today!