
Using Predictive Insights to Maximize Your Chances of Success

23 Dec 2014

(first published on IBM Service Management 360 site)

Early in the development of our IBM SmartCloud Analytics – Predictive Insights product, while proudly demonstrating its capabilities to a group of potential users, our enthusiastic demo was brought to an abrupt halt. Someone in the room sarcastically observed that they were so overloaded with actual problems, the last thing they had time to deal with was our predicted, hypothetical anomalies! There we were with this powerful new technology that promised to help operators detect and prevent outages before they occurred, and they believed themselves too busy to take a look or act on this critical information. This certainly gave us pause.

There was no doubt they were working flat out, addressing a never-ending stream of daily issues. We wanted to help. However, their initial perception was that our system was simply one more thing to be added to the barely manageable set of events they already had to deal with.

In reality, “too busy” is just another way of saying “We don’t see the value yet” or “We remain unconvinced.” This interaction exposed some fundamental considerations that also apply to anyone evaluating similar ITOA technology. I have seen these underlying concerns surface at many such trials, and they should be borne in mind by both the vendors and the potential users of such ITOA-targeted analytics.

Don’t add to the existing operator burden

We’re supposed to be helping, right? But if all we do is provide yet more events, or new volumes of information in isolation, how is that helpful? Ideally, the analytic function should provide better insights, which in turn should lead to a reduction or elimination of the original problems.

The output must be believable

In the initial phases of deployment, there is naturally plenty of skepticism. Users may be eager to exploit the promise of these technologies, but “show me” is the usual attitude. The capabilities underpinning these products typically rest on advanced analytic techniques and algorithms that are accessible to only a minority of the potential user community. Anything that looks doubtful will sow uncertainty in users’ minds, and even where there are statistical or other justifications, these generally will not be appreciated.

Conversely, output that stands up to real-world scrutiny will inevitably improve users’ confidence. One way to help here is to avoid dragging users into the deep analytics weeds, and instead give them output that is simple and understandable. For example, “This web server load is increasing in a manner that is likely to lead to a failure—you should investigate” is straightforward and something that could be investigated and tested.

The output must be actionable

Even when users are convinced of the technical validity of the output, we often encounter situations where they indicate they don’t care about, or won’t act on, the offered insights. Sometimes this is because the users understand that a particular insight is not that important in the big scheme of things. However, not all users are able to make such assessments, so anything you can do to help them determine whether and what actions should be taken in response to the output will be well received.

The way to integrate into the existing processes must be clear

These analytic systems must fit within an existing ecosystem with well-defined processes. If you start from the assumption that the existing systems must change to accommodate these new tools from day one, you will be disappointed. The right approach may be to evolve those processes over time, but you’ll only get the opportunity to do that once the potential has been clearly shown. Assuming you can demonstrate the potential value, you must also show how it will be integrated into the existing ecosystem, what the touch points are, and who the users will be.

Give it time

Some of these new ITOA analytic systems need time to “settle in” before they provide the best value. During this period they learn from the environment and the local operational preferences. Usually this cannot be achieved in one or two weeks; the actual timeframe depends on the technology in question and, in some cases, on the available repositories of history to mine. Keep in mind that during the early phases there may well be additional up-front costs. Eventually, the value will emerge and the opportunities for rationalization and for retiring legacy functionality will become apparent.

The promise of these technologies is great. To maximize your chances of successfully assessing and deploying them, make sure before you begin that you have a solid sense of what you will be looking for in each of the above categories, and, as a vendor, ensure that you have appropriate answers to the related questions.
