The End of the IT Monolith
Richard Whitehead | June 7, 2016

Why the Enterprise should shift towards Composable Monitoring


I’ve always loved Unix/Linux, for lots of reasons, but mainly because of the insanely powerful library of commands that can be strung together to do, well, almost anything.

grep graze `ls -1t moogdemo* |head -1` |awk 'BEGIN {FS="/|\\?"};{print $6}' |sort |uniq -c |sort -n |tail -1

That clumsy (but quick) example, by the way, tells me today's most popular "Graze" RESTful request on my demo copy of Incident.MOOG (see the Moogsoft AIOps and Graze API documentation to learn more).

Very simply, executed in the Apache Tomcat logs directory, it orders the access logs by date (ls), picks the most recent (head), extracts the name of a command from each matching line (awk), sorts the names alphabetically (sort), collapses duplicates with a count of how often each occurs (uniq), orders them by frequency (the second sort), and finally extracts the most frequently executed command (the closing tail).
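Broken out with comments, the same pipeline reads like this (the log file names come from my demo environment, and I've escaped the ? so awk treats it as a literal character rather than a regex quantifier):

```shell
# A commented version of the one-liner above. Note that the Tomcat date field
# [07/Jun/2016:...] contains slashes, which is why the command name lands in
# field 6 when the line is split on "/" or "?".
latest=$(ls -1t moogdemo* | head -1)      # newest access log by mtime
grep graze "$latest" |                    # keep only Graze API requests
  awk 'BEGIN {FS="/|\\?"} {print $6}' |   # field 6 is the command name
  sort |                                  # group identical commands
  uniq -c |                               # count each command
  sort -n |                               # order by frequency, ascending
  tail -1                                 # the most frequent, with its count
```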

This ethos of a large number of small commands, each of which does one function very well, is at the core of the success of the operating system.

In some ways, we're seeing a couple of very similar phenomena today: the rapid growth of microservices, and a trend recently highlighted by 451 Research's Donnie Berkholz, The Latest Disruption in IT Management: Composable Monitoring. Let me explain why this is so significant.

For years, the discipline of monitoring has been dominated by monolithic solutions, each striving to be the single, integrated monitoring platform. As monitoring needs grew and diversified, the platforms became ever more monolithic as the lines of code piled up (if you were lucky). In many cases, these "monolithic" solutions were assembled through years of acquisition and consolidation, with innovation taking a back seat to the urgent need to refactor code in sometimes futile attempts to integrate the pieces seamlessly.

The result is familiar to many: multiple high-cost element management systems, packed with competing features that will never be used.

At this point, the temptation is to develop the functionality yourself. After all, you have the skills, and you'll get exactly what you need, right? Well, that's certainly true for the select few organizations with the surfeit of resources required, but not for the vast majority, who fully understand the opportunity cost of internally developing tools that are not core to the business, and who appreciate how this tempting expedient rapidly becomes a maintenance millstone.

Sounds dismal, but not entirely.

Open Source to the Rescue

Developers love developing. If the tools at their disposal don’t do exactly what they want, their immediate reaction is to modify them or, in many cases, start from scratch. As someone quipped at Monitorama PDX15, “‘let’s use an existing tool,’ said no developer ever.”

Developers also love to share, and many thousands of these tools are now available as open-source projects. In fact, the last time I searched, there were in excess of 28,000 projects on GitHub alone matching the keyword "monitoring."

In many cases, these projects follow the mantra of the Unix tools of yore: they do one or two things very, very well; and they focus on the developer’s primary need. This means there’s a plethora of lightweight solutions out there, each one adapted to solve a very specific problem, and nothing more. This is truly the age of composable monitoring.

Some solutions have emerged as functional leaders (it's easy to see on GitHub how much traction a project has), and are rapidly becoming de facto forms of instrumentation. collectd is a great example.

Of course there is a downside: Even some of the more mature and well-supported projects have a tiny fraction of the functionality you would expect from an equivalent enterprise solution, which leaves the implementer with three possible courses of action…

  1. Put in a feature request. It may or may not be implemented by the community, and now you risk losing the elegance of composability: if everyone's feature request got added, it would no longer be a lightweight component.
  2. Fork and develop. This is potentially less damaging to composability than option 1, but it has become exponentially more expensive: now you're developing, testing, publishing, and maintaining an open-source project yourself. And weren't we trying to avoid development?
  3. Buy the supported OSS version from a commercial entity. This is certainly an option for some, but you're now trading hidden costs for real costs, just to get some extra features. For the majority, it represents the worst of both worlds.

Is there an alternative? You bet.

Open Architecture, the "Force Multiplier" for Composable Monitoring

Open source is most effective when it is used in its "core" form, with minimal modification: there is no code to maintain, and there is a considerable expertise base to draw on. But how do you get the broader functionality that an enterprise typically needs?

One way to achieve this is to have the OSS monitoring components feed into a flexible aggregation layer, one adaptable enough to accommodate the diversity of sources, but providing all the core functionality that is essential to an enterprise solution.
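As a rough sketch of what that hand-off looks like (the endpoint, port, and JSON schema below are purely illustrative assumptions, not a real Moogsoft API), an OSS collector's raw reading gets shaped into a normalized event and passed to the aggregation layer over HTTP:

```shell
# Hypothetical sketch: shape a raw reading into a normalized JSON event.
# The event schema and endpoint are illustrative assumptions, not a real API.
host=$(hostname)
load=$(awk '{print $1}' /proc/loadavg)    # one-minute load average (Linux)
payload="{\"source\": \"$host\", \"metric\": \"load.1min\", \"value\": $load}"
echo "$payload"

# ...and POST it to the (hypothetical) aggregation layer:
# curl -s -X POST http://aggregator.example.com:8888/events \
#      -H 'Content-Type: application/json' -d "$payload"
```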

This has two profound benefits. First, it enables the specific OSS solutions to be deployed in their raw form, without custom development, avoiding the common OSS issues of maintainability and support. Second, it provides the enterprise "stamp of approval."

Remember that monitoring architecture? Here's an alternative view, still with the same coverage (more modern and efficient, of course), but look what it's done to the cost structure…

Then there's the composable part. Because the OSS solutions are vanilla, the overhead of deploying them is much lower. You can run multiple solutions in parallel (because of the abstraction layer) and change solutions as the environment changes (for the same reason).
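To make the swappability concrete (the collector names and event shape here are illustrative): because every collector is mapped to one normalized form before the aggregation layer sees it, replacing one collector with another changes nothing downstream.

```shell
# Hypothetical: two different OSS collectors, one normalized event shape.
# The aggregation layer only ever sees the normalized form, so either
# collector can be swapped out without touching anything downstream.
normalize() {   # usage: normalize <source> <metric> <value>
  printf '{"source": "%s", "metric": "%s", "value": %s}\n' "$1" "$2" "$3"
}
normalize "collectd" "cpu.idle" "97.2"
normalize "telegraf" "cpu.idle" "96.8"
```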


