Friday, October 31, 2008

Contention ratios for the cloud?

In clouds / utility computing / gravity-challenged particles, capacity is shared among a large number of users. This allows idle capacity to be rapidly redistributed to different users depending on demand. However, if a significant fraction of users ramps up simultaneously, utility providers will be hard pressed to deliver the capacity their customers have a right to expect.

The fact is, no matter what the hype tries to convince you of, there is only finite capacity behind each cloud. The likes of Amazon still need to deploy physical servers in order to provide their services.
These physical and economic limitations are here to stay, and cloud providers will need to deal with them.

Contention ratios are already used extensively by ISPs, and my guess is that with the democratization of utility computing we will start to see cloud providers using SLAs that look a lot like the ones ISPs use. And you will end up paying a premium for the guarantee that you will not have one.
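To make the idea concrete, here is a minimal sketch of the arithmetic behind a contention ratio. The function names and the 20:1 figure are illustrative assumptions on my part, not any provider's actual SLA math:

```python
# Illustrative sketch: a contention ratio is the capacity sold to customers
# divided by the physical capacity actually deployed behind it.

def contention_ratio(sold_units: float, physical_units: float) -> float:
    """Oversubscription ratio, e.g. 20.0 for an ISP-style 20:1 plan."""
    if physical_units <= 0:
        raise ValueError("physical capacity must be positive")
    return sold_units / physical_units

def capacity_per_user_under_load(physical_units: float, active_users: int) -> float:
    """Capacity each user actually gets if `active_users` all ramp up at once."""
    return physical_units / active_users

# 1000 customers each sold 1 "unit" of capacity, backed by 50 real units:
ratio = contention_ratio(1000, 50)                    # 20:1
worst_case = capacity_per_user_under_load(50, 1000)   # 0.05 units per user
```

The worst case is exactly the scenario above: everyone ramping up simultaneously, each customer getting a twentieth of what they were sold.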

1 comment :

  1. Awesome observation.

I agree that there will be forms of resource contention, and a premium placed on guaranteeing resources when needed.

    My company (Cassatt) anticipated this when we built a policy engine to broker resources in an "internal compute cloud" (another name for utility computing). Essentially, you can create attributes for low/medium/high priority applications, and allow the policy engine to either (a) steal-from-Peter-to-pay-Paul (pull resources away from a low-priority app for a high-priority one), or (b) "reach out" to other external (cloud) resources.

    Net-net is that we'll start seeing SLAs that are truly tied to business metrics and application priorities. Changes are afoot for IT Operations :)

    ReplyDelete
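The steal-from-Peter-to-pay-Paul brokering described in the comment above could be sketched roughly as follows. All names and the priority scheme here are my own hypothetical illustration, not Cassatt's actual engine or API:

```python
# Hypothetical sketch of priority-based resource brokering: a high-priority
# app reclaims servers from lower-priority apps until its demand is met.
from dataclasses import dataclass

PRIORITY = {"low": 0, "medium": 1, "high": 2}

@dataclass
class App:
    name: str
    priority: str
    servers: int

def broker(apps: list, requester: App, needed: int) -> int:
    """Pull servers from lower-priority apps; return how many were reclaimed.

    A real engine would fall back to external (cloud) resources
    for any shortfall -- option (b) in the comment.
    """
    reclaimed = 0
    # Take from the lowest-priority victims first.
    for victim in sorted(apps, key=lambda a: PRIORITY[a.priority]):
        if victim is requester or PRIORITY[victim.priority] >= PRIORITY[requester.priority]:
            continue
        take = min(victim.servers, needed - reclaimed)
        victim.servers -= take
        reclaimed += take
        if reclaimed == needed:
            break
    requester.servers += reclaimed
    return reclaimed

apps = [App("batch-reports", "low", 4), App("crm", "medium", 3), App("checkout", "high", 2)]
got = broker(apps, apps[2], 5)  # "checkout" demands 5 more servers
# -> takes all 4 from "batch-reports", then 1 from "crm"
```

Tying reclamation order to application priority is what lets the SLA be expressed in business terms rather than raw machine counts.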