According to a new report from Diamanti, 44% of the surveyed IT leaders say they plan to switch from virtual machines (VMs) to containers. Another recent survey suggests that by 2020 83% of enterprise workloads will be cloud-based. These are just the latest indicators of the imminent future of IT, in case it hasn’t already made its full debut at a corporate back office near you.
It’s a new agile, cloudy, code-based, containerized world, rife with reams of virtual processes for IT operators to monitor and maintain. And it’s overflowing with more noisy data than merely mortal minds can handle.
The question is: in this new world, will your existing monitoring tools be overwhelmed by microservices mayhem? Will the shift to containers limit the purview of your legacy incident management systems? Or will powerful new technologies come to the rescue just in time?
The ability to see the contents of containers and track the data being generated by individual microservices as well as microservice clusters is critical, and AIOps’ machine-learning capabilities will form a vital component of the containerized infrastructures of tomorrow.
1. The Problem: Containing the Data Lake
Over the next couple of years, IT Operations will be facing a growing tidal wave of data. It’s estimated that by 2020 the Internet of Things (IoT) will account for around 30 billion devices. The most prosaic form of IoT, the smartphone, is just the tip of that iceberg. With networked and AI-enabled devices taking up residence in more areas of our lives every day, including some failure-is-not-an-option devices like connected cars, IT operators are going to have their work cut out for them. It’s one thing to receive user complaints about the latest version of Candy Crush crashing on phones; it would be quite another to receive a stream of alerts about Teslas crashing on highways.
But IoT isn’t the only driving force behind this era of Big Data Gone Wild. In many ways it’s just a symptom of the growing complexity of every aspect of IT, especially in major enterprises, which now rely on hundreds of distinct applications to conduct business each day. Expanding existing IT infrastructure through virtual machines has been an incredibly helpful response to the problem, but over the past few years many DevOps teams have been hankering for even more dynamically flexible ways to contain, monitor, and maintain their data-generating and data-hogging apps.
Enter: containers. Able to support traditional top-down control networks, containers are also portable enough to support the less centralized, edge-based systems that will make up large portions of future IT architectures, particularly for IoT.
It’s unlikely that any enterprise will be able to keep up with the next 10-20 years of exponential data growth without cloud-based containers playing a significant role, especially in parsing vast enterprise data lakes into bite-sized, portable, self-contained chunks of usable information, so it’s a topic everyone in IT should strive to understand. You may already be using containers: many large enterprises have started to use them in at least a trial capacity, spending an average of $100,000 on container adoption this year, and 59% of IT leaders in the Diamanti survey say they have either already deployed containers in a production environment or plan to do so.
Still, many large IT departments remain reluctant to invest heavily in container-based systems. Well-informed caution, after all, is part of every successful CTO’s repertoire. Luckily, the modular and easily replicable nature of containers makes the move from VM-based systems to swarms of self-managing containers in the cloud a relatively safe transition, if done well.
2. The Solution: Containers + Kubernetes
Docker is the foremost platform for creating containers, and Kubernetes, an open-source orchestration system whose name means “helmsman” or “pilot” in ancient Greek, manages and controls their deployment. The two are typically used together to deploy containerized microservices (or microapplications). Docker is a collection of open-source tools that helps you build isolated Linux containers that can carry a single microservice (e.g., just the client database, or just the customer interface of an e-commerce site) or an entire application, more efficiently than current VM-based methods. Kubernetes (which works with Docker as well as a number of other container runtimes) then manages the deployment and running of those containers, keeping them in step with each other no matter where they run.
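To make that concrete, here is a minimal sketch of a Kubernetes Deployment manifest for a hypothetical microservice image built with Docker. The names used (shop-frontend, registry.example.com) are illustrative placeholders, not references to any real system.

    # Minimal sketch: a Kubernetes Deployment for a hypothetical
    # Docker-built microservice image called "shop-frontend".
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: shop-frontend
    spec:
      replicas: 3                    # Kubernetes keeps three identical copies running
      selector:
        matchLabels:
          app: shop-frontend
      template:
        metadata:
          labels:
            app: shop-frontend
        spec:
          containers:
            - name: shop-frontend
              image: registry.example.com/shop-frontend:1.0   # hypothetical image
              ports:
                - containerPort: 8080

Applying a manifest like this (for example, with kubectl apply -f shop-frontend.yaml) hands the desired state to Kubernetes, which schedules the containers onto available machines and restarts or replaces them if they fail.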
But what makes containers better than VMs alone, you may ask? Well, for starters, container tools such as Docker and Kubernetes make fast, secure deployment of code easier than ever. Because containers run identically on every supported platform, they’re also a good way to ready your organization for tomorrow’s edge-computing revolution: you can run them on-premises, in the cloud, or locally (in-device) as needed, meeting tomorrow’s needs as well as today’s. It doesn’t matter whether you run 5, 50, or 500 copies of a particular container in a swarm or cluster (terms for a set of identical containers running as a group); every one will be identical, regardless of what it’s running on. Docker and Kubernetes keep containers in step with each other throughout their lifecycle, leaving IT operators free to worry about other things.
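Whether that group contains 5 copies or 500, a Kubernetes Service can give it a single, stable address, so clients never need to know how many replicas exist or where they run. The sketch below assumes the hypothetical shop-frontend labels from the previous example.

    # Minimal sketch: a Service that load-balances across every replica
    # of the hypothetical shop-frontend Deployment shown earlier.
    apiVersion: v1
    kind: Service
    metadata:
      name: shop-frontend
    spec:
      selector:
        app: shop-frontend       # matches the label on every replica
      ports:
        - port: 80               # stable port clients connect to
          targetPort: 8080       # port each container actually listens on

Scaling then becomes a one-line change: running kubectl scale deployment shop-frontend --replicas=50, or editing the replicas field, tells Kubernetes the new desired count, and the Service keeps routing traffic across whatever copies exist.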
What’s more, switching from VMs to containers saves money. Even a slow, partial deployment of containers within your organization will start to yield real savings as they take on more and more work. Ancillary benefits, such as the time and compute saved by spinning up lightweight containers instead of larger, old-style VMs, will also begin to show up in your bottom line.
Because Docker and Kubernetes are open source, and because consolidating physical servers and virtual machines yields real savings, deploying containers generally pays for itself. The sooner you deploy, the sooner those savings materialize.
3. The Result: It’s Time to Evolve Your Incident Management System
Whether you currently rely mostly on cloud-based or on-premises solutions, the ability to deploy seamlessly to the cloud (or even to other locations) is going to be an essential part of keeping up with the rollout of AI and other emerging technologies. Giving your staff the chance to start working with containers now will ensure they’re ready for the IT changes to come.
Persistent storage options are just starting to become available in the Kubernetes infrastructure, making the deployment of stateful applications a realistic option for enterprises adopting containers (a minimal sketch follows below). If your organization is currently making the transition to the cloud, this is a great time to also switch to a more container-based architecture. However, without up-to-date monitoring systems capable of overseeing huge data streams, it will be impossible to maintain the steady uptime and efficiency that enterprise systems require, and legacy monitoring tools may lack the ability to penetrate and make sense of the data arising within and between containers.
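As a rough illustration of those persistent storage options, the sketch below uses a Kubernetes StatefulSet that requests a dedicated persistent volume for each replica. The image, names, and storage size are hypothetical, and the storage classes actually available will depend on your cluster.

    # Minimal sketch: a StatefulSet for a hypothetical database container,
    # with a persistent volume claimed for each replica.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: orders-db
    spec:
      serviceName: orders-db
      replicas: 1
      selector:
        matchLabels:
          app: orders-db
      template:
        metadata:
          labels:
            app: orders-db
        spec:
          containers:
            - name: orders-db
              image: registry.example.com/orders-db:1.0   # hypothetical database image
              volumeMounts:
                - name: data
                  mountPath: /var/lib/data                # where the container keeps its state
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi                             # each replica gets its own volume

Every stateful workload like this adds yet another stream of storage, application, and orchestration data that your monitoring systems have to keep track of.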
This is where AIOps comes into play. Visibility into the contents of containers, and into the data generated both by individual microservices and by whole microservice clusters, is critical, and AIOps’ machine-learning capabilities will form a vital component of the containerized infrastructures of tomorrow.
Indeed, AIOps can already be used in concert with Kubernetes to correlate container-swarm orchestration data with system alerts and logs, enabling IT operators to identify root causes of problems within a single application with pinpoint accuracy—even when its various components are containerized and the containers are hosted, for example, in separate public clouds.
Relying on traditional incident management systems to cope with ever-higher data volumes and container-based architectures will make it increasingly difficult to manage alerts and anticipate future problems. But why would you want to keep using your old system anyway? To take full advantage of the benefits containers can bring, it’s essential to have a practical way to monitor everything and resolve issues quickly, without wasting thousands of personnel hours manually hunting for the sources of errors.
By switching to an AIOps platform at the same time as you embark on container adoption, you’ll be able to take advantage of everything a more streamlined, well-organized data flow can offer, while gaining unprecedented flexibility in how you deploy your current services, maintain them, and develop new ones.
Or you could opt to be left behind, drowning in both your data lake and VM fees. But I wouldn’t recommend it.