Building and running microservices at large organizations is becoming more achievable with the availability of supporting tools and best practices.
As Sam Ghods of Box has explained, you might package your code in a Docker image, back it with a Postgres database, connect a persistent remote disk, address scaling with an auto-scaling group, put a load balancer in front of that group, and finally introduce service discovery so all of the components can find each other. That is good enough to run a microservice on a single cloud infrastructure. Realistically, though, a modern enterprise wants to run microservices across a public cloud, a private OpenStack cloud, and even customers’ data centers. The challenge is writing a single schema that works across all of these environments.
To address this challenge, organizations are turning to Kubernetes, a leading container orchestration engine designed to run on a wide range of infrastructures and applications. Kubernetes automates the deployment, scaling, and management of containerized applications. Working with container runtimes such as Docker, it coordinates workloads across large clusters of hosts.
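The "single schema" idea can be made concrete: a Kubernetes Deployment describes the desired state of a service in one declarative document that applies unchanged to any conformant cluster, whatever cloud it runs on. A minimal sketch, modeled here as a plain Python dict; the `payments` service name, image, and replica count are hypothetical:

```python
# Sketch: a Kubernetes Deployment manifest as a single declarative schema.
# The service name, image URL, and replica count are hypothetical examples.

def make_deployment(name, image, replicas):
    """Build a Deployment manifest (as a dict) for a stateless service."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 8080}],
                        }
                    ]
                },
            },
        },
    }

manifest = make_deployment("payments", "registry.example.com/payments:1.4.2", 50)
```

The same manifest could then be applied with `kubectl apply -f` to a cluster on AWS, OpenStack, or bare metal, which is precisely what makes it portable across providers.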
DevOps Teams Lack Context Across Microservices
DevOps teams benefit from Kubernetes because it lets them easily view the schema of their microservices and investigate the health of those microservices. For example, teams can use Kubernetes to see that a particular service is running across a set of 50 containers and requires specific network, compute, and storage resources. To identify and address issues, however, teams need rich context across microservices.
When viewing the health of a service, you can essentially use Kubernetes to ask three questions:
- Is this service alive?
- Is it doing something?
- What is the schema?
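The first two questions map directly onto Kubernetes liveness and readiness probes, while the third is answered by the object schema itself (e.g. via `kubectl get` or `kubectl describe`). A minimal sketch of a container spec carrying both probes, again as a plain dict; the endpoint paths, port, and timings are assumptions:

```python
# Sketch: a container spec answering "is it alive?" (livenessProbe) and
# "is it doing something useful?" (readinessProbe).
# The image, paths, port, and intervals are hypothetical.

container = {
    "name": "payments",
    "image": "registry.example.com/payments:1.4.2",
    "livenessProbe": {
        # Kubernetes restarts the container if this check keeps failing.
        "httpGet": {"path": "/healthz", "port": 8080},
        "periodSeconds": 10,
        "failureThreshold": 3,
    },
    "readinessProbe": {
        # Kubernetes stops routing traffic to the pod while this check fails.
        "httpGet": {"path": "/ready", "port": 8080},
        "periodSeconds": 5,
    },
}
```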
Troubleshooting service issues, however, typically requires context beyond those three questions. This is why DevOps teams are adopting AIOps platforms to streamline troubleshooting across microservices.
AIOps Provides Actionable Insight Across Microservices
AIOps platforms ingest data from across a microservices architecture and apply algorithms to surface anomalous behavior and relationships between components in real time. They can take orchestration data from tools like Kubernetes and correlate it with events and log messages from the underlying infrastructure.
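One simple way to picture that correlation: bucket timestamped events from Kubernetes and from the infrastructure into short time windows, and surface the windows where both sources fire together. A toy sketch, not a real AIOps API; the event shapes, messages, and window size are assumptions:

```python
from collections import defaultdict

def correlate(events, window_secs=60):
    """Group (timestamp, source, message) events into fixed time windows
    and return the windows containing events from more than one source."""
    buckets = defaultdict(list)
    for ts, source, message in events:
        buckets[ts // window_secs].append((source, message))
    return {
        window: evts
        for window, evts in buckets.items()
        if len({source for source, _ in evts}) > 1
    }

# Hypothetical event stream: an orchestration event and an infrastructure
# event land in the same window, while a later scaling event stands alone.
events = [
    (100, "kubernetes", "container payments-7d4 rescheduled to node-2"),
    (110, "infra", "node-1: disk I/O errors"),
    (500, "kubernetes", "deployment payments scaled to 50 replicas"),
]
hits = correlate(events)  # only the window holding both sources is returned
```

A real platform would of course use learned topology and statistical models rather than fixed windows, but the principle is the same: events become actionable when cross-source context places them together.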
As an example, suppose Kubernetes moves a container from one piece of hardware to another, briefly interrupting the application for end users. Because the issue is transient, it is especially difficult to troubleshoot. An AIOps platform can tell operators that the container was moved at a specific time because of a specific hardware issue, that the move likely caused the service impact, and that, being transient, it is nothing to worry about.
As another example, suppose containers are running in multiple environments (e.g., AWS and Azure). The application is experiencing performance problems, but operators do not know where the issue originates. With an AIOps platform, Ops can use the orchestration schema from Kubernetes to determine that one of the components running in AWS is at fault, while the components in Azure are healthy.
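That kind of localization can be pictured as tagging each component's anomalies with its provider, using the orchestration schema as the map, and comparing counts per provider. A toy sketch; the component names, providers, and anomaly messages are hypothetical:

```python
from collections import Counter

def anomalies_by_provider(anomalies, schema):
    """Count anomalies per cloud provider, using the orchestration schema
    to map each component to the environment where it runs."""
    return Counter(schema[component] for component, _ in anomalies)

# Hypothetical schema: where each component of the service runs.
schema = {"api": "aws", "worker": "aws", "cache": "azure"}

# Hypothetical anomalies reported against components.
anomalies = [
    ("api", "p99 latency spike"),
    ("worker", "error rate above 5%"),
]

counts = anomalies_by_provider(anomalies, schema)
# Both anomalies land on AWS components; Azure shows none, so the
# investigation can be narrowed to the AWS side of the deployment.
```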
In summary, AIOps platforms can be used to make sense of orchestration data from tools like Kubernetes to help identify where anomalous activity may be occurring across a microservices architecture.
About the author
Sahil Khanna is a Sr. Product Marketing Manager at Moogsoft, where he focuses on the emergence of Algorithmic IT Operations. In his free time, Sahil enjoys banging on drums and participating in high-stakes bets.