Tool Rationalization: Why AI Tools Would Better Serve IT
Thursday, August 23, 2018
A common request when investigating new tools is to replace an existing tool. Sometimes it’s the right response, sometimes not.
At least once every couple of weeks, I find myself in conversation about whether or how Moogsoft AIOps can replace one or more of the tools somebody is already using. I think this is the wrong approach, but I am more interested in what it reveals about the state of IT departments.
The bottom line is that most companies have lots of tools. The reason for this proliferation is that each team has its own needs and, over time, increasing specialization leads to each team procuring its own particular tool. The server admins have a different tool from the network admins, DBAs are doing their own thing somewhere else, and application owners have their own particular view of the world.
So far, so good: everyone is getting what they need. The problems start when something goes wrong and it’s not immediately obvious which domain it falls into. That’s when the finger-pointing starts, and equally that’s where a lot of time and energy gets wasted while people try to figure out whether it is their problem or not. At this point, the various specialist tools flip from being assets to becoming hindrances, as it’s very difficult to reconcile the different views of the world that they offer.
The Right AI Tools for the Job
At this point, someone is liable to say something many of us have heard repeatedly in IT: “Wouldn’t it be great if we could just buy one tool that would do all of this for us?”
Of course, there is a second driver behind tool rationalization: pure cost. Each of those specialist tools has a subscription that needs to be renewed, or, for older tools, a purchase price and ongoing maintenance payments. Either way, if you can turn one or two of them off, that’s a direct saving.
The problem with both of these drivers is that they rarely play out as hoped. While generalist tools do exist, they seldom live up to their promises: either they used to specialize in one area and have broadened their remit, often via acquisitions, or they are too lightweight to provide much value anywhere. Because of these limitations, the One Tool to Rule Them All does not actually shorten diagnostic cycles, and teams will fight to justify exceptions that let them keep their own particular tools. This in turn means that the projected savings do not materialize, leaving everyone unhappy.
Even if we assume equivalent tools, they are not entirely fungible; people build up skills in one particular toolset, processes are influenced by the particular toolchain in use, and other tools might be selected based on their ability to integrate with that toolchain. On top of that, one tool may boast special capabilities that suit the application stack, which make other tools a worse fit.
All of this means that tool rationalization is rarely a good starting point. Rather, a better approach is focusing on building good connective tissue between those specialized tools that are already providing value to their users. A fluid and flexible overlay can unlock additional value from past investments, delivering a return without the disruption of changing tools.
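As a rough sketch of what that connective tissue might look like, consider normalizing events from two specialist tools into one shared shape that an overlay can reason over. The payload fields and adapter names below are hypothetical illustrations, not any actual product’s API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # which specialist tool raised the event
    host: str
    severity: str  # normalized: "critical" | "warning" | "info"
    message: str

# Hypothetical adapters: each translates one tool's payload into the
# shared Event shape, without replacing the tool itself.
def from_network_monitor(payload: dict) -> Event:
    return Event(
        source="network",
        host=payload["device"],
        severity={1: "critical", 2: "warning"}.get(payload["level"], "info"),
        message=payload["alarm_text"],
    )

def from_app_monitor(payload: dict) -> Event:
    return Event(
        source="application",
        host=payload["hostname"],
        severity=payload["priority"].lower(),
        message=payload["description"],
    )

# With one normalized stream, the overlay can group related events,
# e.g. by host, so different teams see the same incident in the same terms.
def group_by_host(events: list[Event]) -> dict[str, list[Event]]:
    grouped: dict[str, list[Event]] = {}
    for e in events:
        grouped.setdefault(e.host, []).append(e)
    return grouped
```

The point of the sketch is that the specialist tools stay in place; only a thin translation layer is added on top of them.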
This approach also allows each domain-specific tool to be evaluated on its own merits, with less weight given to extraneous functionality, since communication with other tools is handled by the connective overlay. The result is better satisfaction among the users of each tool, who might otherwise be stuck with a “jack of all trades, master of none.”
Note, however, that not just any overlay will do. IT evolves, both as a discipline and as a set of technologies. This constant evolution means that older approaches, even very successful ones, may no longer be fit for purpose. The new discipline of AIOps represents a connective layer between monitoring, service desk, and automation components. A couple of decades ago, we would have called this a manager of managers, or MoM.
There were a number of different MoMs, and indeed many are still on the market today, but their common characteristic was a reliance on static models, rules, and filters. This reliance made them brittle in the face of change, although that was not an issue as long as the rate of change in the environment remained below a certain level.
Today, the rate of change is already much higher than it has ever been before, but it is a safe bet that it will never be this low again. Constant change is the only constant in IT, which means that static models cannot be updated fast enough to keep up. AIOps is the response to that realization, replacing the static models with dynamic algorithms to enable busy IT operations teams to keep up with their constantly evolving environment.
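To make the contrast concrete, here is a deliberately simplified illustration (not Moogsoft’s actual algorithm) of why static models break under change: a hand-written rule that only recognizes hosts matching a pattern fixed at deployment time, next to a dynamic approach that groups alerts by arrival time with no prior model of the environment:

```python
import re
from datetime import datetime, timedelta

# Static, model-based approach: a rule written against today's naming
# convention. It silently stops matching when the convention changes.
STATIC_RULE = re.compile(r"^web-\d+$")

def matches_static_model(host: str) -> bool:
    return bool(STATIC_RULE.match(host))

# Dynamic alternative: cluster alerts that arrive close together in time,
# with no assumptions about what host names should look like.
def cluster_by_time(alerts: list[tuple[datetime, str]],
                    window: timedelta = timedelta(minutes=5)) -> list[list[str]]:
    clusters: list[list[str]] = []
    last_ts = None
    for ts, host in sorted(alerts):
        if last_ts is None or ts - last_ts > window:
            clusters.append([])  # gap exceeded the window: start a new cluster
        clusters[-1].append(host)
        last_ts = ts
    return clusters
```

A host renamed under a new scheme falls straight through the static rule, while the time-based grouping keeps working because it never encoded the old naming convention in the first place. Real AIOps algorithms are far more sophisticated, but the brittleness-versus-adaptability trade-off is the same.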
Of course, existing MoMs are probably wired deeply into many other systems and processes, so this is not the sort of replacement that can be completed over a weekend; it’s a gradual process, with verification at each step.
Whether you are looking at replacing an existing MoM, or budgeting for a newer tool that is still model-based, consider instead putting that budget towards AIOps — but without displacing existing specialist tools that are working well for their users. There is value in each of those, and even more in joining them up and making that value more widely accessible.
Moogsoft is a pioneer and leading provider of AIOps solutions that help IT teams work faster and smarter. With patented AI analyzing billions of events daily across the world’s most complex IT environments, the Moogsoft AIOps platform helps the world’s top enterprises avoid outages, automate service assurance, and accelerate digital transformation initiatives.