What is AI and Why Should You Care
Will Cappelli | December 12, 2018

Learn why AI has become one of the major topics of conversation at the intersection of IT and business today.

The last five years have seen renewed interest in AI on the part of enterprises. Unfortunately, this interest is often ill-defined and has been driven more by attempts to emulate the success of web giants like Facebook and Google (who loudly market their use of AI) than by any clear idea of what AI is and how it might benefit a business. So what precisely is AI? And does its deployment positively impact a business?

AI as Old as Computer Science

AI is just about as old as computer science itself. Alan Turing invented the modern computer by showing that any calculation one could think of could be carried out by stringing together basic operations that did not require any kind of mental process to perform. The strings themselves could likewise be described without appealing to any underlying mental process. Shortly after he accomplished that feat, he wrote a paper suggesting that all mental processes could be simulated by ‘mind-free’ combinations of such ‘mind-free’ basic operations. These simulations came to be known as ‘Artificial Intelligence.’

At first, AI was seen as a bit of a philosopher’s party game: if AI was possible, then the mind could be reduced to a computer built out of neurons, or what have you. But as researchers became more aware of the complexity of the brain and its processes, the idea of AI began to take a more practical turn. While the process of building up complex computations from mind-free combinations of basic mind-free operations proved very successful when the computations dealt with static domains built out of repeating collections of similar components, programmers found it very difficult to build up complex computations that could handle complexly structured, dynamic domains. The brain, however, was seen to be just the opposite. Mental processes tended to come unstuck and grow error-prone when neurons tried to deal with large, repetitive, static structures while, at the same time, coping extremely well with complex, dynamic environments. Indeed, it appeared as if evolutionary processes had shaped the computations our brains naturally performed to specialise in the complex and the dynamic.

AI, the Brain, and the Business

With that realization, a new take on AI emerged. Rather than trying to reduce the brain to a computer (which might be an interesting academic exercise but otherwise not very useful in day-to-day life), researchers began attempts to reverse engineer the mental processes that actually take place within the brain. Rather than trying to build computational processes capable of coping with complex, dynamic environments from scratch, they turned to nature and biology to gather clues and design principles to help them build algorithms that could deal with dynamic complexity. This led to the early work on neural networks in the 1960s and eventually to the work on deep networks (neural networks arranged in layers) in the mid 1980s.

A key result of studying the brain was the recognition that mental processes, i.e., algorithms that could handle dynamic complexity, could be subdivided into three fundamental types. First, there were the algorithms that selected focal points of interest in the environment. What required attention? What problems needed to be solved? Second, there were the algorithms that studied those points of interest, those problems, and sought to extract patterns from the data that constituted them. Third, and finally, there were the algorithms that tried to infer new patterns and new problems from the patterns discovered by the second batch of algorithms and, insofar as new problems were discovered, reinitiated the whole three-step cycle. Work then commenced on the re-engineering of these processes, and it was not long before the world outside of academia began to take notice.
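The three-step cycle described above can be sketched as a simple loop. This is purely an illustration of the structure (attention, then pattern discovery, then inference feeding back into attention); all function names, thresholds, and data shapes here are hypothetical, not any vendor's actual algorithms.

```python
from collections import Counter

def select_focus(events):
    """Step 1: attention -- select the points of interest worth studying.
    Here 'interesting' is simply any event whose value exceeds a threshold."""
    return [e for e in events if e["value"] > 10]

def discover_patterns(focus):
    """Step 2: pattern discovery -- extract regularities from the selected
    events. Here we just count how often each source appears."""
    return Counter(e["source"] for e in focus)

def infer_new_problems(patterns):
    """Step 3: inference -- derive new problems from the discovered patterns.
    A source seen repeatedly becomes a new problem to feed back into step 1."""
    return [src for src, count in patterns.items() if count >= 2]

def cognitive_cycle(events):
    """Run one pass of the three-step cycle; its output would seed the next."""
    focus = select_focus(events)
    patterns = discover_patterns(focus)
    return infer_new_problems(patterns)

events = [
    {"source": "db", "value": 15},
    {"source": "db", "value": 12},
    {"source": "web", "value": 3},
    {"source": "cache", "value": 11},
]
print(cognitive_cycle(events))  # ['db']
```

In a real system each stage would be a statistical or machine-learned model rather than a threshold and a counter, but the feedback-loop shape is the point: inference produces new problems, which re-enter the cycle as new focal points.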

AI Enters the Market (Three Times!)

The first type of process to commercialise was the type associated with inference. Led by departments of defense in the Western world, this wave of commercialisation led to the AI boom of the mid to late 1980s. Expert systems were all the rage and most AI software took the form of automated inference engines although the final years of the decade welcomed the appearance of the earliest industrial neural networks. With the end of the Cold War, defense investment and defense customers disappeared and, almost overnight, the first wave of AI commercialisation collapsed.

It is interesting to note that the IP created by companies like Symbolics, Lisp Machines, Thinking Machines, and Aion did not fall into the abyss but rather provided the initial fuel that got the distributed systems management software market going. Many of today’s service desk platforms and event management systems are based on the expert system environments built during the late 1980s.

The second type of process to commercialise was the type associated with pattern discovery. Twenty years after the collapse of the first AI market boom, the IT world saw three developments occur which brought about conditions favorable to a second wave of AI commercialisation.

First, businesses had become increasingly dependent on IT – not only to support back office or research functions – but also to support, and indeed, to become the essence of customer/partner/supplier facing business processes. In short, the economy was digitalising.

Second, precisely in order to support this digitalisation, IT systems had become exceedingly complex. They had become more modular, i.e., built out of more and more largely autonomous components. They had become more distributed, i.e., the components which worked together to deliver any given IT service or application were dispersed across the globe and even into orbit around the earth. They had become more dynamic, i.e., the relationships among the different components, both logical and topological, changed with ever greater frequency, often reconfiguring during the course of a single end user or customer interaction with the system. Finally, IT systems became more ephemeral, i.e., the components constituting these systems became, on average, shorter and shorter lived, the length of their life spans sometimes measured in microseconds.

Third, Moore’s Law did its work and computations which were physically and economically infeasible earlier – most notably, large and multi-layered neural network, Bayesian network, and kernel-based pattern discovery algorithms – became eminently doable at relatively low costs.

The combination of digital business driven necessity, the difficulty of managing complex modern IT systems and the technological and economic plausibility of harnessing AI in its inference and pattern discovery incarnations to deal with that complexity explains why AI has become one of the major topics of conversation at the intersection of IT and business today.

What Comes Next?

Of course, things have not stopped evolving. Digital business has gone into overdrive. Research conducted by analyst firms like Gartner and Forrester, and validated by our own observations at Moogsoft, shows that the percentage of business events that are, at their base, IT system state changes has grown from approximately 20% in the early 2000s to something approaching 70-80% today. The unrelenting demand for system agility continues to deepen complexity, and the task of the IT operations professional who needs to ensure the continuous turning of the wheels of digital commerce becomes exponentially more difficult year by year. However helpful AI has been until now, enterprises are already demanding a new wave of commercialization of technologies that not only discover patterns and automate inferences but can guide enterprises through the vast streams of data they are continuously required to observe, extracting from those streams the data items that genuinely signal the emergence of problems to be solved and incidents to be avoided. This applies across business processes, but it is particularly true for those processes that directly deal with the management of the IT systems upon which digital business, in its very essence, depends.

Moogsoft is, to my knowledge, the only vendor that has proven its ability to deliver all three types of AI in a coherent logical workflow to just this end and, in future blogs, I will explore what Moogsoft does in each of these three areas in greater detail.

Moogsoft is the AI-driven observability leader that provides intelligent monitoring solutions for smart DevOps. Moogsoft delivers the most advanced cloud-native, self-service platform for software engineers, developers and operators to instantly see everything, know what’s wrong and fix things faster.

About the author


Will Cappelli

Will studied math and philosophy at university, has been involved in the IT industry for over 30 years, and for most of his professional life has focused on both AI and IT operations management technology and practices. As an analyst at Gartner, he was widely credited with being the first to define the AIOps market, before joining Moogsoft as Field CTO. In his spare time, he dabbles in ancient languages.
