Information Services Group

 2017 Annual Review

Automation

Automation, Autonomy and the Messy In-Between

“Autonomy gives people a sense something is in control, and we have a tendency to overestimate technology’s capabilities.” – Nidhi Kalra

 

We’ve made incredible gains when it comes to autonomous vehicles, but today the market finds itself in a messy interim period. We’re moving beyond automation (like cruise control) but have not yet made it to full autonomy, which will allow a car to operate independently of the human inside. This “assisted driving” mode has led to a number of incidents in which distracted drivers have overestimated the capabilities of their vehicle.

 

Though the words sound similar, the gap between automation and autonomy is more of a chasm. Automation generally means "a process performed without human assistance," while autonomy implies "satisfactory performance under significant uncertainties in the environment and the ability to compensate for system failures without external intervention" [our emphasis].

 

The former is about executing a pre-defined task; the latter is about mimicking the way humans use judgment in an uncertain environment. In the driving example, it’s the difference between your car parking itself while you supervise and telling your car to take you from Houston to Dallas while you watch Game of Thrones.

 

That’s a really big difference.

 

In some ways, it feels like we’re about to enter the same confusing middle ground in the enterprise – especially in IT. While the consequences are not life or death as they are with self-driving cars, the hype around autonomic IT systems is building to a crescendo: systems that can predict failure, run your operations and learn the way you work. And while it’s certain these systems will eventually be able to do this kind of work independently of humans, we’re still a long way away.

 

As of today, autonomic systems still require a great deal of human involvement – either in up-front knowledge engineering, or in human-in-the-loop feedback as models learn. The knowledge engineering approach is great at following a standard set of rules that humans hand code (for example, if x, do y), while the learning approach is great at finding patterns (for example, x usually occurs with y).
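
To make the distinction concrete, here is a minimal, hypothetical Python sketch of the two approaches: a hand-coded "if x, do y" rule on one side, and a simple co-occurrence count standing in for pattern learning on the other. The alert fields, thresholds and event names are invented for illustration; in both cases a person still has to write the rule or review the patterns before anything is acted on.

```python
# A minimal, hypothetical sketch of the two approaches described above.
# The alert fields, thresholds and event names are invented for illustration.

from collections import Counter
from itertools import combinations

# --- Knowledge engineering: a human hand-codes the rule ("if x, do y") ---
def rule_based_action(alert):
    """Return a remediation step for a known, pre-defined condition."""
    if alert["metric"] == "disk_used_pct" and alert["value"] > 90:
        return "purge_temp_files"      # rule written and maintained by a person
    if alert["metric"] == "service_state" and alert["value"] == "down":
        return "restart_service"
    return None                        # anything unanticipated falls back to a human

# --- Learning approach: the system finds patterns ("x usually occurs with y") ---
def co_occurring_events(incident_windows, min_support=3):
    """Count pairs of events that frequently show up in the same incident window."""
    pair_counts = Counter(
        frozenset(pair)
        for window in incident_windows
        for pair in combinations(sorted(window), 2)
    )
    return {pair for pair, count in pair_counts.items() if count >= min_support}

if __name__ == "__main__":
    print(rule_based_action({"metric": "disk_used_pct", "value": 95}))
    windows = [
        {"db_latency_high", "queue_backlog"},
        {"db_latency_high", "queue_backlog", "cpu_spike"},
        {"db_latency_high", "queue_backlog"},
        {"cpu_spike"},
    ]
    # A person still has to review these patterns and decide what to do with them.
    print(co_occurring_events(windows))
```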

 

But both approaches still require humans: people have to keep the rules up to date and provide feedback to the models to ensure they keep performing (and don’t go rogue). And we are still miles away from these systems being able to perform satisfactorily while the environment is changing. Think about how much an autonomic system must be aware of to pull this off – it must know its own capabilities, know what it does and does not have access to, and be able to work around problems when it encounters them – all while remaining transparent to users of the system.

 

To be clear, we’re not trying to downplay what’s happening in this space – there is incredible technology doing incredible things. Our concern is more with the words we use to describe it. For example, Level 1 incident resolution is getting automated very quickly, with service providers and enterprises reaching 60 to 80 percent end-to-end automation in this area. But we’d still argue this is automation, not autonomy, since many of these automations would break down if humans were not engaged to help the system deal with changes in the environment.

 

Software robots are working their way into the data center as well. They are managed from a platform that lets them execute their work without a human explicitly kicking them off, starting either from a rule a human coded or from a correlation the system has identified. But the dividing line remains the same: the ability to perform this work in the face of significant uncertainty, and to do so without human intervention.
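
As a rough illustration of that triggering model – not any particular vendor’s platform – here is a hypothetical Python sketch in which a bot is launched either by a rule a person coded or by a correlation the platform has learned, and anything the dispatcher does not recognize is escalated to a human. The event shape, bot names and trigger table are invented for this example.

```python
# A hypothetical sketch of bots triggered by coded rules or learned correlations,
# with escalation to a person for anything unfamiliar. Event fields and bot names
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    source: str    # e.g. "disk-monitor"
    kind: str      # e.g. "disk_full"
    severity: int  # 1 (info) .. 5 (critical)

# Hand-coded rule: a human decided that disk_full events launch the cleanup bot.
def disk_rule(event):
    return "cleanup_bot" if event.kind == "disk_full" else None

# Learned trigger: the platform has observed that queue backlogs usually precede
# database latency problems, so it launches the scale-out bot pre-emptively.
LEARNED_TRIGGERS = {"queue_backlog": "scale_out_bot"}

def learned_rule(event):
    return LEARNED_TRIGGERS.get(event.kind)

def dispatch(event, triggers):
    for trigger in triggers:
        bot = trigger(event)
        if bot:
            return f"launch {bot}"          # no human kicked this off...
    return "escalate to human operator"     # ...but the unfamiliar still needs a person

if __name__ == "__main__":
    triggers = [disk_rule, learned_rule]
    print(dispatch(Event("disk-monitor", "disk_full", 4), triggers))
    print(dispatch(Event("mq-monitor", "queue_backlog", 3), triggers))
    print(dispatch(Event("app-monitor", "never_seen_before", 5), triggers))
```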

 

That capability, in our opinion, is still a ways off.

 

Will we get to a point where these systems can correlate events, identify patterns, predict failures and take action in our environments independently of humans? Absolutely, but it is going to be a while. Our recommendation is to focus your efforts on automation rather than autonomy. Keep people in the loop to define the rules and provide feedback on the learning models. That will make those people more productive and help them – and your company – bridge the messy in-between.

 

© 2018 ISG - All Rights Reserved