An Analysis of the Atlassian Acquisition of OpsGenie
Zack Zilakakis | September 18, 2018

A closer look into what the latest Atlassian acquisition means for IT leaders

The more things change, the more they remain the same. While monitoring vendors are expanding their tool offerings (often through acquisition), organizations still struggle to analyze the events traversing their IT infrastructure, even though that analysis can yield significant insight into the performance of the business itself.

The Atlassian acquisition of OpsGenie still centers on a rules-based approach to event correlation, one that does not scale across the infrastructure to reduce mean time to acknowledge (MTTA) and mean time to resolve (MTTR). Jira and OpsGenie will operate much as they did before the acquisition, with bi-directional integration and a focus on DevOps. Here's why.

Tighter Integrations in the Future at the Expense of Meeting Digital Business Initiatives Now

There will undoubtedly be tighter integration, with additional features and functionality; however, IT leaders must decide if they want to wait for these DevOps-centric features that are not focusing on their digital business initiatives. Waiting a year, at best, for acquisition benefits will impact IT operations groups’ ability to maintain the agility and speed demanded by businesses. IT Operations must identify platforms that can help address these problems today.

The DevOps adoption model has long been driven by early users within an organization — perhaps even a single team — who see benefits and evangelize them to their colleagues and peers. This model works well, but it has one limitation: what works in a small team or environment cannot always scale up to the whole organization. Rules-based approaches to data classification and correlation are particularly subject to this failure mode, as a few simple rules may appear to "prove the concept," leaving only "minor details and edge cases" to be cleared up by further rule development.

Unfortunately, an evil variant of Pareto's law applies here: you spend the first 20% of the time on the first 80% of the issues, and the remaining 80% of the time on the last 20%. What is worse, the ground is shifting under you. A short test measured in weeks may seem to indicate predictable results, but most organizations are now dealing with at least parts of their infrastructure that change multiple times per day. This rate of change is fundamentally incompatible with rules-based approaches, but the incompatibility may not emerge at the scale of a single team.
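To make that failure mode concrete, here is a deliberately naive sketch of a rules-based event classifier. It is not any vendor's actual implementation; every pattern, category, and alert string is invented for illustration:

```python
# Naive rules-based event classification (illustrative only).
# Each new alert shape that appears in production demands another
# hand-written rule, so the rule list grows with every infrastructure change.
import re

RULES = [
    # (compiled pattern, category) -- every tuple here was written by hand
    (re.compile(r"disk.*(full|9[0-9]%)"), "storage"),
    (re.compile(r"connection (refused|timed out)"), "network"),
    (re.compile(r"OOMKilled|out of memory"), "memory"),
]

def classify(event_text: str) -> str:
    """Return the first matching category, or 'unclassified'."""
    for pattern, category in RULES:
        if pattern.search(event_text):
            return category
    return "unclassified"

print(classify("pod web-7f9 OOMKilled"))          # -> memory
print(classify("TLS handshake failure on :443"))  # -> unclassified
```

The second event falls through because nobody has written its rule yet. In an environment that changes several times a day, "unclassified" is the steady state, and the rule backlog never converges.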

Another facet of the problem is that monitoring data is no longer used only for IT operations. New stakeholders outside traditional IT are increasingly demanding access to this valuable information to better inform their decision-making. Most IT operations organizations still run in silos, independent even of closely related functions such as development and security, which impairs their ability to manage and monitor applications and infrastructure in a holistic manner. This matters as IT departments focus on aligning and delivering data-driven services to the business.

This Keeps IT Focused on “Infrastructure Thinking” Instead of “Digital Business Thinking”

IT is becoming much more central to business operations, and as organizations digitize their business processes, analyzing the information that is traversing the IT infrastructure can yield significant insight regarding the performance of the business itself. The demands on the IT infrastructure will require significant improvements in operations as organizations continue to evolve into digital enterprises over the next several years.

Many IT departments currently lack the skills, processes, and tools to lead this change. The implication is that operations are not prepared to meet the needs of the digital business. With an abundance of choices (such as managed service providers and SaaS), IT organizations are challenged to maintain relevance.

Comprehensive visibility will become more important in these modern environments, and it is one area where internal IT teams can offer unique differentiation over external parties. Operations teams are already consumed with running and maintaining the existing IT infrastructure, and digital business calls for an even greater focus on monitoring that environment. To deliver, they must understand and prepare for internal and external pressures while embracing new methods and emerging technologies to drive business outcomes.

As digital business and cloud computing become increasingly prevalent, enterprises must shift from infrastructure thinking to platform thinking regarding their business models, delivery mechanisms, talent, and leadership. An AIOps solution becomes vital for agile deployments, continually changing infrastructure, and monitoring. Emerging technologies such as AI and machine learning allow organizations to optimize their operations and align IT with business priorities. Beyond that, AIOps is also having a significant impact on IT monitoring strategies. Organizations are starting to enhance, and sometimes replace, application performance monitoring (APM) and network monitoring tools with AIOps. In modern environments, where applications are distributed, written in multiple languages and span infrastructure from on-premises to cloud environments, it is critical to identify the cause of an incident or event quickly.
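For contrast, the kind of correlation an AIOps approach automates can be hinted at with a toy example that groups alerts by textual similarity rather than by hand-written rules. The similarity measure, threshold, and alert strings below are all invented for illustration; production platforms use far richer features (timing, topology, learned models):

```python
# Toy correlation without hand-written rules: greedily cluster alerts
# whose token sets overlap strongly (Jaccard similarity). This only
# sketches the idea that grouping can be derived from the data itself.

def tokens(alert: str) -> set:
    return set(alert.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def correlate(alerts, threshold=0.5):
    """Assign each alert to the first cluster it resembles, else start a new one."""
    clusters = []  # list of (representative token set, member alerts)
    for alert in alerts:
        t = tokens(alert)
        for rep, members in clusters:
            if jaccard(t, rep) >= threshold:
                members.append(alert)
                break
        else:
            clusters.append((t, [alert]))
    return [members for _, members in clusters]

alerts = [
    "high latency on service checkout",
    "high latency on service payments",
    "disk full on host db-01",
]
for group in correlate(alerts):
    print(group)
```

The two latency alerts group together and the disk alert stands alone, with no rule authored for either pattern. The point is not this particular algorithm but the shift: the grouping adapts to new alert shapes instead of waiting for a human to encode them.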

Conclusion: Operational Silos Will Persist

The OpsGenie acquisition is a symptom of operational myopia within the DevOps community. There is a tendency to work in self-contained silos, even long after continuous integration and deployment have become the norm. The risk is that a detailed perspective on one aspect of the infrastructure does not deliver the kinds of insights that require a good grasp of the whole infrastructure on which applications run.

Consequently, early adopters risk underestimating the complexity of the signals coming from that infrastructure about application behavior, and believing they can get by with tools like OpsGenie. In other words, rules-based technology can work well for limited initial deployments but will inevitably show its limitations at enterprise scale.

The more agile approach enabled by AIOps has a much better chance of delivering the results that business users expect in today's highly dynamic, fast-changing, and heavily automated infrastructure models, and of continuing to do so as those models evolve, without becoming yet another source of manual "keep the lights on" work for operations teams.


Additional reporting by Niklas Eklov // Strategic Architect @ Moogsoft

Moogsoft is the AI-driven observability leader that provides intelligent monitoring solutions for smart DevOps. Moogsoft delivers the most advanced cloud-native, self-service platform for software engineers, developers and operators to instantly see everything, know what’s wrong and fix things faster.