Log files record what is happening in a digital environment, but their ambiguous structure obscures much of their deeper value. A special kind of AI called AIOps can perform log analysis to unlock these insights. Let’s explore how.
The Language of Logs
Log files are essentially records that an IT system keeps of its own activities. Think of a log file as a diary. The content and structure of logs are largely under the control of the system or application developer. Enterprises and their vendors try to impose some discipline on log structure and content. Despite this, logs show little rhyme or reason from one system to another, at least on the surface. Uniformity of any kind may be hard to discern, even within the boundaries of a single system.
Yet when one looks at a log, it is possible to pick out information such as the names of message sources, time stamps, and indicators of actions being taken. Often these are written in language that bears a passing resemblance to English. Logs read as if they began life as sentences whose words were somehow jumbled along the way.
The presence of human-readable words in log files is at the heart of log management platforms like Splunk and Elastic. The previous generation of SIM-oriented log management platforms employed algorithms that searched, in an unsupervised or semi-supervised manner, for repeating alphanumeric string patterns, with no regard for human readability. The search algorithms that drive platforms like Splunk and the ELK stack are still based on token and template matching, but they exploit the fact that logs contain word-like substrings. They still care nothing about what the words mean, but they concede that humans search for specific types of words that make sense to them. The usefulness of these algorithms is what separated these technologies from legacy approaches.
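To make this concrete, here is a minimal sketch of the kind of token-and-template matching described above. The masking rules and example log lines are illustrative assumptions, not any vendor's actual algorithm: variable tokens are replaced with a wildcard, while the word-like substrings that humans search for are kept.

```python
import re

# Tokens treated as "variable": plain numbers, hex ids, and name+digit hosts.
# This lexicon of patterns is an assumption for the sake of the sketch.
VARIABLE = re.compile(r"\d+(\.\d+)*|0x[0-9a-fA-F]+|[A-Za-z]+\d+")

def template_of(log_line: str) -> str:
    """Mask variable tokens with <*>, keeping word-like substrings intact."""
    return " ".join(
        "<*>" if VARIABLE.fullmatch(tok) else tok
        for tok in log_line.split()
    )

logs = [
    "db01 connection refused on port 5432",
    "db02 connection refused on port 5433",
]
templates = {template_of(line) for line in logs}
# Both lines collapse to a single template: "<*> connection refused on port <*>"
```

Two superficially different log lines reduce to one template, which is precisely what lets a search engine group them without understanding what any of the words mean.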
The Purpose of Logs
Systems and applications generate logs so that records of critical events can be maintained. While these records may be used for many different reasons, the important point is that they are meant to record events. Now, an “event” is nothing more than a significant happening. An event comprises three important elements:
- a change which occurred
- a time when it happened
- an object or collection of objects that either effect the change or suffer it
Any declarative sentence contains information that directly associates with each of these three elements:
- a change which occurred = VERBS
- a time when it happened = TENSE
- objects related to the change = NOUNS
…and by extension:
- the source and destination of the change = PREPOSITIONS (to/from)
Whenever one tries to represent events, no matter the context or rationale, the resulting data is analogous to these elements. It is therefore no accident that logs look like jumbled sentences and contain human-readable substrings.
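The mapping can be illustrated on a single toy log line. The line's format and its field names (`host=`, `action=`, `service=`) are hypothetical, chosen only to show how the grammatical elements surface in a log:

```python
import re

# A hypothetical log line; the field layout is an assumption for illustration.
log = "2024-01-07T10:42:01Z host=web03 action=restarted service=nginx"

event = {
    "time":  re.search(r"^\S+", log).group(),             # when it happened (tense)
    "nouns": re.findall(r"(?:host|service)=(\S+)", log),  # objects involved
    "verb":  re.search(r"action=(\S+)", log).group(1),    # the change that occurred
}
# event == {"time": "2024-01-07T10:42:01Z",
#           "nouns": ["web03", "nginx"], "verb": "restarted"}
```

Even this trivial extraction recovers the three elements of an event: a change, a time, and the objects involved.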
Human Limitations in Log Analysis
Herein lies the problem.
Logs are a rich source of information about what is happening in digital environments. However, the nouns, verbs, prepositions, and so on that describe events are all jumbled up, and their relative positions change from log to log and from system to system. The information they contain can only be fully exploited if a human being is involved, because only humans can recognize the events that the logs represent.
This is why log management databases have become so popular, of course. They make it possible for humans to interpret logs by allowing users to search on strings that mean nothing to the search algorithms involved, but mean everything to their users. So humans are critical to the success of log management systems.
But IT systems generate millions of logs. For users to have any chance of finding the logs relevant to their concerns, they must have a good idea of what to search for from the outset. The challenge is that most system issues are unanticipated: events take place that no one has planned for. Given the dynamic, distributed, modular, and ephemeral nature of modern IT environments, many events haven’t even been imagined.
Although humans are critical to extracting information from log management systems, most of the potentially available information remains trapped in unread logs. This is due to the sheer number of logs, their varying structure, and the limits of flesh-and-blood users. Any potential insight remains locked away.
How can this problem be resolved? At a high level, the obvious answer is AI.
Artificial Intelligence for Log Analysis
AI brings to bear the sequential execution of algorithms for data selection, pattern discovery, and inferencing, among others. This is exactly what is needed to unlock the value of log files. In an automated way, AI can discover the information content of any large log file, communicate it to the human user, and support whatever actions are taken in response.
But what specific form should these algorithms take? One clue may be taken from an old idea in linguistics, proposed by Noam Chomsky in the 1960s. The structure of the sentences that speakers consider grammatical might not be fundamental. Rather, it may be the outcome of operations and systematic changes applied to an underlying “deep structure”.
For example, the sentence “I am being praised by my manager” is best regarded as the result of modifying an underlying sentence: “my manager is praising me”. The latter active form is the deep structure, while the former passive form is the surface structure. After Chomsky proposed this theory, many argued that interpretation pertained only to the deep structure: to know what a sentence means, you first go from surface structure to deep structure, and then associate the elements of that deep structure with meanings in various ways. This idea became known as Semantic Syntax.
Moogsoft’s Approach to Log Analysis
To deal with the jumble of natural language-like substrings in log files, we at Moogsoft take a hybrid approach to using AIOps for log analysis. It can be described best as a combination of deep neural network processing and semantic syntax.
Step 1: Running through a large training set of logs isolates the substrings that correspond to nouns, tenses, verbs, and prepositional phrases. While this mark-up might initially prove labor intensive, the conventions implicitly used by system and application developers are in fact quite limited. System names vary very little, as do the ways in which time stamps are recorded. There is, however, significant variation in the ways that actions are represented.
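A rough sketch of what this isolation step might look like, assuming a tiny hand-built verb lexicon (which in practice would be learned from the training set) and a fixed time-stamp pattern:

```python
import re

# Toy lexicon and pattern; both are assumptions standing in for learned models.
VERBS = {"started", "stopped", "failed", "refused", "restarted"}
TIME_RE = re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

def tag(log_line: str) -> list[tuple[str, str]]:
    """Label each substring of a log line as a time stamp, verb, or noun."""
    tags = []
    for tok in log_line.split():
        if TIME_RE.fullmatch(tok):
            tags.append((tok, "TIME"))
        elif tok.lower() in VERBS:
            tags.append((tok, "VERB"))
        else:
            tags.append((tok, "NOUN"))  # default: treat as an object name
    return tags

tag("2024-01-07T10:42:01 nginx restarted")
# → [("2024-01-07T10:42:01", "TIME"), ("nginx", "NOUN"), ("restarted", "VERB")]
```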
Step 2: After the initial isolation is accomplished, a neural network algorithm is let loose on the results to stabilize the features of each element type. So far, so good. But we are still dealing with a jumble: no IT system would have an easy time figuring out what events the logs indicate. Here is where semantic syntax and deep-structure thinking come in.
The jumbled strings are treated as surface structure, so the real trick is to work backwards from the surface structure to the deep structure. Only once a jumbled string has been converted to its underlying deep-structure equivalent can the event being represented be determined. Determining the rules for converting surface structure to deep structure itself requires a dose of automated learning. But given the highly constrained form that a deep structure can take (essentially “Noun, Noun, …, Verb, Tense”), that effort has proven reasonably inexpensive.
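In the simplest case, the conversion amounts to reordering tagged tokens into the canonical form. The tagging scheme below is a hypothetical one, and real conversion rules are learned rather than written by hand, but the sketch shows the essential idea: two differently jumbled surface forms of the same event normalize to the same deep structure.

```python
def to_deep_structure(tagged):
    """Reorder (token, TAG) pairs into canonical form: nouns, verb, tense."""
    nouns = [tok for tok, tag in tagged if tag == "NOUN"]
    verb  = next(tok for tok, tag in tagged if tag == "VERB")
    tense = next(tok for tok, tag in tagged if tag == "TIME")
    return tuple(nouns) + (verb, tense)

# Two jumbled surface forms of the same event:
a = to_deep_structure([("restarted", "VERB"), ("10:42:01", "TIME"), ("nginx", "NOUN")])
b = to_deep_structure([("nginx", "NOUN"), ("restarted", "VERB"), ("10:42:01", "TIME")])
# a == b == ("nginx", "restarted", "10:42:01")
```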
Step 3: With logs now converted to a deep structure form that clearly indicates the events they are meant to express, the log analysis typically carried out by an AIOps platform can begin. AI algorithms can read and work with them without significant human intervention precisely because the logs have been converted into a format that, by virtue of its structure alone, has revealed the meaning of the various elements.
Many logs indicate the same event as other logs, and many indicate events that never occurred at all. Deep-structure log files must therefore be cleansed of noise and duplicates. This is an important point: the conversion to deep-structure format, i.e. the unveiling of the event represented by the log, needs to take place before the event itself is examined. This layer of AI algorithms is applied before data set selection.
Final Steps: Once the log file is cleansed, so that each of its entries represents an actual, unique event, distinct from all other events represented in the file, the real value can be revealed. Events can be correlated and causally analysed, results communicated, and remedial actions performed.
In Summary: Logs are an important source of information regarding events taking place within a digital environment. However, their volume and the ambiguities of their structure mean that something like human intelligence is required to take advantage of that source, and those same volumes and ambiguities make it just about impossible for humans to work with log files directly. AI must be brought to bear to realize the promise of log files. What’s required is AIOps, a very special kind of AI that starts by converting logs back into declarative sentences by recovering their deep structure.
Want to learn more about how Moogsoft uses log data? Explore our newest offering, Moogsoft Express.
About the author
Will studied math and philosophy at university, has been involved in the IT industry for over 30 years, and for most of his professional life has focused on AI and IT operations management technology and practices. As an analyst at Gartner he was widely credited as the first to define the AIOps market, and he has recently joined Moogsoft as CTO, EMEA and VP of Product Strategy. In his spare time, he dabbles in ancient languages.