Falling neck-deep into the money pit of maintenance can leave ITOps leaders longing for a more modern, automated world.
Pity anyone charged with managing enterprise IT Operations! They’re responsible for a complex technology stack, potentially reaching back through many generations of architectures all the way to mainframes. Each layer is built on the layers below, and they are all intertwined in complex and non-obvious ways. The people who originally built the systems towards the bottom of the stack have probably long since moved on or retired.
All of that complexity is not kept around just for sentimentality. Those legacy systems are working, impressively delivering against their original design goals. Unravelling the complex web of relationships and dependencies involved is risky, with the non-trivial danger of disrupting the various critical business processes which run on top of those systems. Most of these applications only get turned off when the business need either goes away or changes dramatically enough to warrant a fresh start.
Conflicting with this drive for predictability and keeping the lights on, however, is the request from on high for digital transformation and responsive delivery of ever more new services. This demand to hone IT’s competitive edge, sometimes directly opposed to the push for system stability, is driven by two kinds of pressure: technological and business. Meeting that demand is rarely easy.
The Real Obstacles to Innovation
Technology change-over has always been a reality. Any system deployed in production is already obsolete, falling behind the leading edge of development. This is not a bug, but rather a feature, implemented by design; the leading edge is also known as the bleeding edge, and for good reason. Most sensible Ops people would much rather have other people find the bugs in 1.0 software before relying on it for anything their jobs depend on. However, waves of technological revolution are coming closer and closer together, and with ever greater operational impact. CPU generations used to arrive roughly every 18 months, defining a steady upward curve of progress, and in operational terms each new CPU type was much like the previous one.
In contrast, the moves from physical compute to virtual, to containerised, and now to serverless application architectures, each demanded radical changes to operational models. The sort of artisanal, one-to-one, hands-on-keyboard maintenance of individual servers had to be abandoned in favour of first defined-state and then fully immutable models of infrastructure. In this new reality, modifying the live configuration of a production system is a sign that something is very wrong.
Some naive commentary has assumed that IT Operations are reluctant to accept or adopt the new models of IT infrastructure. Nothing could be farther from the truth. ITOps people stay up to date with developments in their field, and may well be running small-scale evaluation or pilot projects already. After all, if you pursue a career in IT, it’s safe to assume you love the latest and greatest technology. The main factor holding those pilot projects back from developing into full-blown production adoption is the sheer amount of technology debt that has accumulated over the years and decades. When you’re drowning in debt, it’s hard to see the light.
Toiling Away to Pay Your Tech Debt
“Technology debt,” “technological debt,” or “technical debt” is of course a financial analogy, but the analogy holds true in more ways than one. The principal of a tech debt is whatever architectural choices were made to support a business need at a particular point in time. The problem with debt is not the debt itself; it’s the interest. The interest on technical debt is the maintenance cost of what was built. As with financial debt, this interest starts small but compounds over time. At the beginning, skills and parts are readily available, third-party software is updated frequently, and all is well. After a few years, though, some structural components begin to become obsolete, practitioners move on to newer frameworks, and even hardware components may cease production and be harder to come by.
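The compounding described above can be made concrete with a quick sketch. This is purely illustrative: the growth rate and starting cost are invented numbers, not data from any real system.

```python
# Illustrative only: model the maintenance "interest" on a tech-debt
# "principal" as a yearly cost that compounds as skills and parts grow
# scarce. All figures are hypothetical.

def maintenance_cost(initial_cost: float, growth_rate: float, years: int) -> float:
    """Yearly maintenance cost after `years`, compounding at `growth_rate`."""
    return initial_cost * (1 + growth_rate) ** years

# A system that cost 10,000/year to maintain at launch, with costs
# growing 15% a year:
for year in (0, 5, 10):
    print(f"Year {year:2d}: {maintenance_cost(10_000, 0.15, year):,.0f}")
```

As with financial compounding, the early years look harmless; it is the out-years that quietly double and redouble the bill.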
Over time, if the debt is not serviced, the interest just continues to grow, to the point that many enterprise IT Operations departments are now completely paralysed: not by fear of something new, but because all of their considerable resources, energies, and skills are bent toward servicing that debt, leaving nothing over for major new projects. Tedious toil becomes the order of the day, while genuinely strategic, creative engineering work remains merely a set of concepts sketched on a whiteboard, if there was even time for that.
Strategically, IT teams need to break out of this box, and do themselves a favour by reframing the whole concept of technology debt as technology investments. The problem with debt is that by the time the interest payments come due, they are all too often disconnected from the reason the debt was originally incurred. Framing technology adoption as investments, however, makes that link explicit and permanent. An investment is made with specific goals and certain expectations of returns, which aren’t easily forgotten even when new investment opportunities come along. Also, while walking away from debt is generally frowned upon, there are accepted mechanisms for cutting non-performing investments, including restructuring one’s portfolio (or system architecture) with new investments that easily supplant the old ones by outperforming them. In any investment strategy, booking a loss, and taking the hit, is usually better than continually carrying a loss. Past a certain point, the habitual non-performers simply have to go.
Investments Are Always a Calculated Risk
If you’re drowning in debt, of course, investments may be the last thing on your mind. But that’s why the mindset reframe needs to be total: instead of paying down the ever-rising interest on legacy purchases, you’re actually just sinking more and more money into a continually plummeting stock, so to speak. And what smart investor does that? “Debt” gives the impression that you owe somebody something: gratitude, say, to the sage senior engineers who architected the basis of your ultra-complex, fickle, and now largely modular, software-defined infrastructure in the days of yore. With that mindset in place, it makes sense to keep struggling to patch, reprogram, and barely hold together your legacy inheritance. But it also leaves critical IT hearts and minds toiling in the back office as tape-jockey support players, performing tasks better suited to machines, disengaged and disconnected from larger business goals, strategies, visions, and creative possibilities. (And when some of them inevitably get fed up with it all, the company suffers a brain drain.)
Still, when all of those support efforts seem absolutely critical to keeping the lights on, lest the whole house of cards collapse, how can one justify making new investments that would almost certainly disrupt it all?
Simply by deeply, honestly reckoning with the fact that IT purchases were always only iterative, made-sense-at-the-time investments in technological means to obtain certain business ends. If the ends change over time, the means need to follow suit. If the means are no longer delivering the intended ends, the means need to evolve.
Investments are always a calculated risk. If one really adds up the cost of staffing a team of maintenance workers, the cost of patching and repairing legacy architectures, and the opportunity cost of preventing some of an enterprise’s brightest and most creative employees from engaging in more future-focused, proactive, genuinely innovative pursuits, is it worth it? Maybe daring to invest in building out a secondary, fully virtualized, highly automated, AIOps-enhanced, dynamically expandable data center would actually cost less in the long run, and thus be a smarter investment overall. Once completed, you flip the switch, transfer over, and begin anew, this time knowing full well that you’re only investing in an ever-scalable future, not paying the debts of your ancestors.
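That “adds up the cost” exercise can itself be sketched as a back-of-envelope comparison: cumulative legacy maintenance that compounds each year, versus a one-off migration cost plus a flat run rate for the replacement. Every number here is invented for illustration; real cases need real figures.

```python
# Hypothetical comparison over a planning horizon. All figures invented.

def total_legacy_cost(base: float, growth: float, years: int) -> float:
    """Cumulative maintenance spend over `years`, compounding at `growth`."""
    return sum(base * (1 + growth) ** y for y in range(years))

def total_replacement_cost(migration: float, run: float, years: int) -> float:
    """One-off migration cost plus a flat yearly run cost."""
    return migration + run * years

horizon = 8
legacy = total_legacy_cost(base=500_000, growth=0.12, years=horizon)
replacement = total_replacement_cost(migration=1_500_000, run=200_000, years=horizon)
print(f"Legacy over {horizon} years:      {legacy:,.0f}")
print(f"Replacement over {horizon} years: {replacement:,.0f}")
```

With these (invented) inputs the compounding legacy bill overtakes the migration well before the horizon, which is exactly the point of booking the loss early.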
It’s a nice vision, isn’t it? But undertaking big and bold projects like that, which are becoming increasingly necessary to compete with the pace of digital transformations underway, requires time, energy, and strategic thinking. And all of those factors will remain in short supply as long as IT leaders believe themselves destined to toil away on maintenance, rather than embracing the ever-present possibility of advocating for the chance to be truly innovative engineers.
About the author
Dominic Wellington is the Director of Strategic Architecture at Moogsoft. He has been involved in IT operations for a number of years, working in fields as diverse as SecOps, cloud computing, and data center automation.