Have You Herd? | Episode 3: Observability from Bare Metal to Cloud
Minami (Coirin) Rojas | August 19, 2021

A summary of our third Moogsoft engineering Twitch Stream chatting about all things DevOps

Hello humans! In this latest episode of Have You Herd?, I was joined by Thom Duran, Aashreya Shankar, BJ Maldonado, Jacob Laverty, Joe Nye, and Richard Whitehead from the Moogsoft Engineering team. We broke down the journey of observability from the old age of bare metal to where we hypothesize it’s going in the future!

For a great history of the evolution of hosting, check out this article, and read on below for a high-level summary of our full conversation!

The Age of Bare Metal

Imagine a world with hundreds of racks bolted to the floor, security in the form of physical cages, and the air around you shifting from hot to cold depending on which direction the servers faced.

Richard and Thom take us back to the stone age — whoops, the early 2000s — and talk through a time when monitoring meant bringing your own hardware and was a surefire way to get your steps in for the day.

The overriding concerns of bare-metal monitoring were limited resources and the trade-offs those limits forced. For example, if you wanted to monitor your network, you had to have physical access to the network, which meant wandering around looking for an available port to plug your monitoring device into.

Beyond those physical constraints, monitoring faced even tighter limits. Back when the team started monitoring UNIX systems, the big question was how much of the machine's total 32 megabytes of memory a monitoring agent would take up. Ah, the age of megabytes.

While bare metal servers are still very real, our interaction with them has become more and more virtualized as technology continues to evolve. Interestingly enough, monitoring's central constraint evolved along the way, too: from a finite amount of resources to the cost of those resources.

VMs to K8s to Normalizing Monitoring

First comes bare metal, then come VMs, and next comes Kubernetes in a baby carriage? Technology and playground rhymes do not pair well. Noted.

Let’s time travel forward into the world of virtualization, around 2008 when VMware exploded onto the scene. Monitoring became more common, but constraints still took precedence. In this world, deploying a monitoring solution was still an afterthought, with an emphasis that if deployed, it must do no harm. In other words, if you’re monitoring the network, it shouldn’t have any impact on the core purpose of the network — and if it’s going to add any traffic, the bigger question is whether we could afford it.

Now, in the world of microservices and Kubernetes, the cost-balance conversation shifts even more toward technical scale and human time. On one hand, well-architected microservices can help bump up your availability numbers because not all components of the product are being used at the same time. On the other hand, if you're running microservices and independently scaling every service, the cognitive capacity it takes to keep track of everything grows with each service you introduce. Add monitoring, observability pipelines, and incident response on top, and the challenge becomes overwhelming. This is the constraint of today: the trade-off between scaling fast and dynamically versus, when there is a problem, how long it takes your engineer to actually get to that problem.
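To make "independently scaling every service" concrete: in Kubernetes, each microservice typically gets its own HorizontalPodAutoscaler with its own thresholds. A minimal sketch (the `checkout` service name and the numbers here are hypothetical, just to illustrate the shape):

```yaml
# Hypothetical example: one autoscaler for one microservice ("checkout").
# Every other service would carry its own copy with its own tuning —
# which is exactly where the cognitive overhead starts to pile up.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Multiply that by dozens of services — each with its own scaling policy, dashboards, and alerts — and the human-time side of the trade-off becomes tangible.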

Especially as monitoring and observability become must-haves, the balancing act is between the availability and uptime you provide to your end customers and the cost of the architecture, observability pipelines, and — arguably the most valuable piece — your engineers’ time.

What’s interesting is that as we walk through the history of technology, the debate over constraints — whether capacity, cost, or effort — always remains the same. To go full circle: we have, in essence, transitioned from running laps among physical servers looking for blinking red lights to parsing mountains of data across dozens of monitoring tools to find the context we need.

What’s Next? Edge Computing & AI

As we look to the future, what do we think is next? Edge computing got the votes from the team: keeping data close while using distributed compute resources at global scale.

In the world of observability? AI will reign supreme. Even now, AI is starting to bridge the gap where the lack of data standardization in ever more complex systems makes correlating events and producing context-rich incidents difficult. Ideally, AI could be used to proactively identify future issues before incidents happen — but if predictive text is any indication of where we stand, we may have some ways to go!

We’d love to hear what you think is next! Email us at haveyouherd@moogsoft.com and join us for the next one! MOO!

Moogsoft is the AI-driven observability leader that provides intelligent monitoring solutions for smart DevOps. Moogsoft delivers the most advanced cloud-native, self-service platform for software engineers, developers and operators to instantly see everything, know what’s wrong and fix things faster.

About the author


Minami (Coirin) Rojas

Minami leads Digital Growth at Moogsoft. She's a proven leader with experience optimizing B2B SaaS revenue at scale through web and self-service online acquisition channels. Outside of the business world: amateur photographer, travel lover, and mini-dachshund dog mom.

