The theory (and a little bit of practice)
Automation is heavily used in datacenters, and now in utility computing, to drive down TCO. However, it is rather hard to figure out what to automate in order to get the maximum benefit out of it. Automation that seems obvious often has a low return on investment.
Hopefully, we can use (or abuse) Amdahl's law here. It gives the maximum expected improvement to an overall system when only part of the system is improved. Interpreted simply, Amdahl's law says: focus on improving the things that make the biggest difference overall.
Pic: Amdahl's law — overall improvement = 1 / ((1 − P) + P/S)
If we adapt this law to TCO reduction, 1 is the cost of running the system (datacenter, service, etc.) for a discrete period of time. Let P be the fraction of that cost addressed by automation, and S the factor by which automation reduces it. The new cost is the cost of the unimproved fraction, which is (1 − P), plus the cost of the improved fraction. The improved part's cost is its former cost divided by the automation factor, making it P/S. The overall improvement is computed by dividing the old cost by the new cost, 1 / ((1 − P) + P/S), which is what the above formula does.
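As a minimal sketch, the cost version of the formula can be written as a small function (the name and the example numbers are illustrative, not from any particular tool):

```python
def amdahl_improvement(p, s):
    """Overall cost-reduction factor when a fraction p of the total cost
    is improved (e.g. automated) by a factor s.

    Old cost is normalized to 1; new cost is (1 - p) + p / s.
    """
    return 1.0 / ((1.0 - p) + p / s)

# Example: automating half of the cost (p = 0.5) twice as efficiently (s = 2)
# gives a new cost of 0.5 + 0.25 = 0.75, i.e. an overall improvement of ~1.33x.
print(amdahl_improvement(0.5, 2))
```

Note that the improvement is bounded by 1 / (1 − p) no matter how large s gets, which is the crux of the argument that follows.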
Applied to datacenters, cloud, or IT operations, this logic suggests that organizations should start with the automation that makes the biggest impact, particularly on IT staff productivity.
However, if we look at the reality, the OPEX cost for servers and the datacenter represents a very small part of the overall cost. According to a Google paper, it varies between 7% and 9% of the overall cost. Which means that, if we still follow Amdahl's law, automation can provide only a very limited impact on the overall cost, while maximising server utilisation guarantees a better return on investment (not to mention being smart with hardware acquisition).
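To make the ceiling concrete, here is a quick illustrative check (the 9% figure is taken from the paragraph above; the function is the same hypothetical sketch of the formula, not production code):

```python
def amdahl_improvement(p, s):
    """Overall cost-reduction factor: fraction p improved by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

p = 0.09  # OPEX share of total cost, the upper end of the 7-9% range

# Even if automation made operations free (s -> infinity), the overall
# improvement is capped at 1 / (1 - p), i.e. at most ~9% total savings.
ceiling = 1.0 / (1.0 - p)
print(ceiling)  # ~1.099

# A more realistic 10x automation gain yields even less than that.
print(amdahl_improvement(p, 10))
```

So even a heroic automation effort on a 9% cost slice moves the total bill by single-digit percentages, which is why the larger slices (utilisation, hardware acquisition) dominate the ROI.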
But then again, no saving is too small.