In research done earlier this year, we looked at changing patterns of IT service management (ITSM) adoption across a population of 270 respondents in North America and Europe. One of the standout themes that emerged from our findings was the need for the service desk to become a more automated and analytically empowered center of [...]
IT administrators love to write scripts – at least, the most talented ones do. Scripting provides a powerful way to automate simple and repeatable tasks. However, as with most powerful tools, there is a strong temptation to overuse it. When faced with a project deadline, a high-pressure failure event, or even just the need to simplify day-to-day tasks, administrators can unintentionally create scripts so complex that they actually put the business at risk. I must confess that over my two decades as an IT administrator and engineer, I’ve written a lot of scripts…a LOT of scripts…and learned a lot of important lessons. Scripting was never intended to replace application programming. Its purpose is to provide a quick and easy way to perform simple and repeatable tasks. It is not uncommon, however, for scripts to start simple but balloon over time into complex code that is virtually unintelligible even to its author.
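To make the distinction concrete, here is a minimal sketch of the kind of simple, repeatable task that scripting is well suited for – scanning log lines for errors. The function name and log format are hypothetical, chosen only for illustration; the point is that a good script stays this small and readable.

```python
def count_errors(lines):
    """Return the number of log lines flagged as errors.

    Deliberately simple: one job, no configuration, no surprises.
    """
    return sum(1 for line in lines if "ERROR" in line)


# Hypothetical log excerpt for demonstration.
sample_log = [
    "2024-01-01 INFO service started",
    "2024-01-01 ERROR disk quota exceeded",
    "2024-01-02 ERROR connection refused",
]

print(count_errors(sample_log))  # prints 2
```

The moment a script like this grows branches for special cases, retries, and environment detection, it is drifting toward the unintelligible-to-its-author territory described above – a signal that the task may belong in a proper application or automation platform instead.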
There is a reason orchestras have a single conductor. Can you imagine the cacophony that would result if a horn section performed out of sync with a string section? Or if the percussion section played a faster beat than the woodwinds? Yet in IT management, it’s all too common for organizations to have separate automation platforms conducting individual software elements. In fact, this is often a cause of increased IT complexity, resulting in degraded performance and reliability. For instance, SAP’s popular customer relationship management (CRM) software includes a built-in job scheduler – the Computing Center Management System (CCMS) – with limited capabilities specifically designed to support its own platform (such as analyzing and distributing client workloads). But this is an independent tool requiring administration and monitoring tasks separate from any other automated solutions. An average IT organization may need to manage dozens of similar management platforms, each with its own unique interface and operating parameters.
Chances are, in an average day, you are not accomplishing as many tasks as you would like … and neither are your colleagues or your employees. What is mystifying about that statement is that today’s workforce seems to be putting in more hours and more effort than ever before, even as adoption of IT devices and applications designed to improve user productivity keeps rising. In fact, this has been a key driver for organizations to enable workforce mobility – providing flexibility to access business IT resources (applications, data, email, and other services) from any device, at any location, at any time, in order to improve overall business performance. But even the most accomplished business professionals must admit there are days when little gets done despite herculean efforts.
With its roots in mainframe job scheduling, workload automation is often seen as a relic in today's age of cloud, Big Data, mobile management, and DevOps. Do we even still need workload automation as a separate discipline, or should we simply roll the management of batch jobs into other automation disciplines, such as IT process automation? Is the market for workload automation software stagnating, or is there still potential for growth?
In part 1 of this four-part series, we examined the grand vision of the software-defined datacenter (SDD). In this second post, we take a look at the core components of the SDD (see Figure 1) and provide a brief evaluation of how mature these components currently are.