Artificial Intelligence has been around for longer than you may think, but it has undoubtedly progressed by leaps and bounds over the past decade. According to Harvard Business Review, more than 51% of executives already deploying AI in business operations believe that AI is most useful for improving back-office functions, such as IT.
The promise is real. But it’s important that we temper the hype and look at AI and machine-learning (ML) tools as a means, not an end. No matter how smart the technology becomes, these tools are by no means fully autonomous.
This holds true well beyond the application of AI to IT operations (AIOps). In a recent article for Forbes, J.J. Kardwell, CEO of the AI-based sales and marketing platform EverString, warns against one of the biggest oversights in contemporary AI implementations: letting algorithms run without adequate data cleansing, training, and ongoing performance monitoring in place.
“In the race to take advantage of machine learning and AI, efforts are weighted heavily on creation and deployment, with little focus on developing control systems to detect and correct mistakes,” writes Kardwell. “AI systems in the majority of enterprises aren’t subject to sufficient or ongoing scrutiny. Flaws easily metastasize and may never be detected.”
You may be familiar with some of the high-profile mistakes resulting from public experiments with AI. Kardwell cites some of the most notorious examples, including Google’s horrific results with AI image recognition and Microsoft’s foul-mouthed, bigoted Twitter bot. But what these examples show us is the nightmare scenario of untempered, unsupervised AI left to run rampant. The reality is that a blended approach to AI mitigates these risks while improving the results, especially when it comes to IT.
A Primer on Supervised vs. Unsupervised AI Approaches
To get the best result, you have to choose the most appropriate AI or machine-learning algorithm for your specific problem. The right algorithm will typically depend on the use case at hand as well as the volume, quality, and structure of your data.
Broadly speaking, the two basic forms of AI/ML can be defined as either supervised or unsupervised. With supervised ML, you essentially have a sense of your potential X and Y variables in advance, with both sides of the equation neatly labeled, and you employ algorithms to chart the relationships between them. This is often used, for instance, in basic forms of root-cause analysis, where deep-learning algorithms are able to connect known issues to their likely causes. Unlike simpler, rule-based decision trees, these algorithms can sometimes unearth novel or unexpected relationships, but the range of possibilities is also constrained.
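To make the supervised case concrete, here is a minimal sketch of labeled root-cause classification: a toy nearest-neighbor model maps alert features (the X) to known root causes (the Y). The feature names, labels, and data are all invented for illustration; a real AIOps system would use far richer signals and a trained model.

```python
# Toy supervised learning: labeled examples pair alert features (X)
# with known root causes (Y), and a 1-nearest-neighbor model charts
# the relationship between them. All data here is invented.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(training, features):
    """Return the label of the closest labeled training example."""
    best = min(training, key=lambda ex: euclidean(ex[0], features))
    return best[1]

# (cpu_load, error_rate, latency_ms) -> known root cause
training = [
    ((0.95, 0.01, 40), "cpu_saturation"),
    ((0.30, 0.25, 45), "bad_deploy"),
    ((0.35, 0.02, 900), "network_issue"),
]

# A new alert lands closest to the cpu_saturation example.
print(predict_1nn(training, (0.90, 0.03, 42)))
```

Because both sides of the equation are labeled in advance, the model can only ever answer with a cause it has already seen, which is exactly the constraint described above.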
With unsupervised ML, however, you typically only know the value of X and the basic contours of what your algorithm can do. At least initially, the Y remains a mystery for your algorithm to plunge forth and reveal, with often surprising results. If supervised ML is akin to teaching a student calculus, step by step, then unsupervised ML might be comparable to handing Isaac Newton a pen and stack of paper and seeing what he comes up with.
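The unsupervised case can be sketched just as simply. In this illustrative example, a bare-bones k-means clustering pass is given response times with no labels at all, and the two performance regimes (the Y) emerge on their own. The data and cluster count are invented for illustration; real clustering would use a hardened library implementation.

```python
# Toy unsupervised learning: no labels (no Y) are provided; k-means
# groups observations by similarity and lets structure emerge.

def kmeans(points, k=2, iters=20):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Response times (ms): two regimes, but nothing tells the algorithm that.
times = [41, 44, 39, 880, 910, 43, 895]
fast, slow = kmeans(times)
print(sorted(fast), sorted(slow))
```

Nothing in the input says "healthy" or "degraded"; the separation between the ~40 ms and ~900 ms regimes is the surprise the algorithm plunges forth and reveals.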
The Best of Both Worlds
Though unsupervised ML can be an incredibly effective tool, a human-in-the-loop approach (variously called reinforcement learning, semi-supervised learning, or guided learning) combines the benefits of supervised and unsupervised learning while offsetting the weaknesses of using either approach alone. In fact, Kardwell notes that implementing a human-in-the-loop (HITL) approach to AI enabled his company to amplify their data-analysis efforts by “more than 100,000 times” what they’d be able to achieve using human minds alone, while also ensuring that performance-drift issues didn’t infect their AI’s output.
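A hypothetical sketch of what human-in-the-loop can look like in practice: the system auto-accepts predictions it is confident about and escalates uncertain ones to a human reviewer, so people supervise exactly where the model is weakest. The `model_confidence` and `ask_human` functions and all data here are stand-ins, not a real API or Kardwell's actual implementation.

```python
# Hypothetical human-in-the-loop triage: confident predictions are
# accepted automatically; uncertain ones are routed to a human.

def model_confidence(item):
    # Stand-in for a trained model's confidence score on one item.
    return item["score"]

def ask_human(item):
    # Stand-in for a human reviewer; here it just reads a stored answer.
    return item["human_label"]

def triage(items, threshold=0.9):
    """Auto-accept confident predictions; escalate the rest to a human."""
    accepted, escalated = [], []
    for item in items:
        if model_confidence(item) >= threshold:
            accepted.append((item["id"], item["model_label"]))
        else:
            escalated.append((item["id"], ask_human(item)))
    return accepted, escalated

items = [
    {"id": 1, "score": 0.97, "model_label": "ok", "human_label": "ok"},
    {"id": 2, "score": 0.55, "model_label": "ok", "human_label": "anomaly"},
]
auto, reviewed = triage(items)
print(auto, reviewed)  # item 1 is auto-accepted; item 2 is escalated
```

Note the design choice: the threshold is the control knob Kardwell's argument calls for, letting teams trade automation volume against the amount of human scrutiny applied to the model's output.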
It makes sense, then, that IT should value and make full use of both supervised and unsupervised machine learning. If you aren’t supervising the inputs, you need at least to monitor the outputs to ensure a result that is genuinely intelligent, adaptive, and above all reliable.
About the author: Matt Harper
Matthew Harper is VP of Corporate Marketing at Moogsoft. Previously, Matt held senior leadership roles at Glassdoor, Sony (PlayStation), and EQ Magazine.