Tuesday, January 26, 2010

The dark side of Green I.T.




Quite often a Green I.T. strategy (if you are not familiar with the concept, check "Green I.T. for Dummies" by HP) is a key aspect of a company's sustainability efforts. However, while these efforts look positive at the micro level of the economy and the environment, they tend to have a negative impact at the macro level if they are not taken far enough. To make the problem more palpable, I will use the all-time favourite: virtualization.

Virtualization allows several physical systems to be consolidated, as virtual machines, onto a single, more powerful server. The original hardware can then be unplugged, reducing power and cooling consumption.


The benefit of virtualization seems rather obvious when you consider that a server produces around 4 tons of carbon dioxide (CO2) annually. Virtualization therefore reduces the carbon footprint and the TCO per service provided. But does virtualization constitute a sustainable strategy?
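Before getting to that question, it is worth seeing why the micro-level case looks so compelling. A minimal back-of-the-envelope sketch, assuming a fleet of 100 servers and a consolidation ratio of 10 virtual machines per host (both numbers invented for illustration, the 4 t CO2/year figure comes from the paragraph above):

    # Back-of-the-envelope consolidation saving (fleet size and ratio are assumptions).
    CO2_PER_SERVER = 4          # tons of CO2 per physical server per year (figure above)
    physical_servers = 100      # servers before virtualization (assumed)
    consolidation_ratio = 10    # assumed: 10 virtual machines per physical host

    hosts_after = physical_servers // consolidation_ratio
    saved = (physical_servers - hosts_after) * CO2_PER_SERVER
    print(f"{physical_servers} servers -> {hosts_after} hosts: ~{saved} t CO2/year avoided")
    # 100 servers -> 10 hosts: ~360 t CO2/year avoided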


Sadly, most of the time, companies forget that the impact of Green I.T. efforts goes beyond data center, organizational and business efficiency. If they looked at the figures, they would quickly realise that by reducing their ecological footprint per service or product, they reinforce the negative trends affecting our environment at a global scale.


Why? Simple: economics. The key word here is "lower": lower TCO, lower cost to provide a service, lower production cost, etc. Green I.T. is touted as good for consumers, businesses, the economy and the environment because it makes services more efficient and, by extension, less expensive. When something becomes less expensive, your potential customer base automatically widens beyond the one you had before. Most of the time this drives up demand for your products, which in turn requires you to expand your operations to cope (good for business). The problem arises from the size of the additional demand. As the figure below shows, the income distribution roughly follows an inverse exponential curve.





 

So every time you lower the price of a product, the potential demand increases exponentially. This means that if a company wants to stay carbon neutral while coping with the increased demand, it has to make exponentially growing efforts to "green" its operations. At that point the law of diminishing returns kicks in (see figure below): the ever-increasing cost of the greening effort starts to hurt the company's profit margins, which leads to its discontinuation.
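To make the mechanism concrete, here is a minimal numeric sketch, assuming an exponential income distribution, a fixed CO2 cost per customer served and an abatement cost that rises with every extra ton; all figures are invented for illustration, not real market data:

    import math

    # All figures below are made-up assumptions for illustration only.
    MEAN_INCOME = 30_000.0      # mean of an assumed exponential income distribution
    POPULATION = 1_000_000      # size of the addressable market
    CO2_PER_CUSTOMER = 0.1      # tons of CO2 emitted per customer served, per year
    BASE_ABATEMENT = 5.0        # cost of abating the first ton of CO2
    ABATEMENT_GROWTH = 1.00005  # each extra ton is slightly harder (diminishing returns)

    def demand(price):
        """Customers who can afford the service (spend at most 1% of their income on it)."""
        required_income = price * 100
        return POPULATION * math.exp(-required_income / MEAN_INCOME)

    def carbon_neutral_cost(tons):
        """Cost of abating `tons` of CO2 when each additional ton costs a bit more."""
        return BASE_ABATEMENT * (ABATEMENT_GROWTH ** tons - 1) / (ABATEMENT_GROWTH - 1)

    for price in (300, 200, 100, 50):
        customers = demand(price)
        tons = customers * CO2_PER_CUSTOMER
        print(f"price {price:>3}: {customers:>9,.0f} customers, {tons:>7,.0f} t CO2, "
              f"cost to stay carbon neutral: {carbon_neutral_cost(tons):>13,.0f}")

The exact numbers are meaningless; the point is the shape of the curve: as the price drops, demand, and with it the cost of offsetting the extra emissions, grows far faster than the savings per unit.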











To be sustainable, a company has to understand that it first needs to become efficient and then transition towards being sufficient. However, companies quite often stop at being efficient, claim to be sustainable, and dismiss any further effort as unnecessary and counter-productive. By doing so, they are "sustainable washing" themselves.

Worse, these limited Green I.T. improvements at the micro level (the company) increase the rate of degradation at the macro level (the planet), because the reward for an efficiency gain is a substantial surplus: enough to boost the economic prosperity of the company and, most significantly, enough to allow further investment leading to yet more surplus.

Not to mention that all of this is exacerbated by I.T. becoming a utility (read: the cloud).


The efficiency strategy pursued through Green I.T. is therefore often destructive in practice. To become truly sustainable, companies and society will need to accept that economic growth is no longer possible.

 

Monday, January 25, 2010

Network performance within the cloud, a hidden enemy

A lot of people have talked about the latency issues of hosting services in the cloud. A recent Amazon latency hiccup revealed a deeper problem, one that seems to be rarely discussed. While most people focus on the network access used to consume services from the cloud, I realise that there is a big unknown concerning network performance inside the cloud.

Cloud providers don't disclose the real infrastructure underlying their cloud offers. As a result, cloud customers are completely left in the dark regarding the network linking their different instances, leaving them with the false, warm feeling that they sit on top of their own flat network.

What does this mean?
  • You have no idea of the network or I/O performance of your instance. Your virtual interface shares one or more physical (sometimes trunked) interfaces with the other tenants collocated on the same physical server, and they compete with you for a share of the network pipe.
  • You have no idea of the network performance between multiple instances within the same cloud:
    • First, your instances can be located in different branches of the infrastructure, which means more network gear between them.
    • Then, virtualized network gear can be thrown into the mix, adding virtual switches and routers with sub-optimal performance (remember, they are software) in exchange for greater flexibility.
    • Finally, the traffic generated by all the tenants makes it very difficult (and expensive) to guarantee QoS throughout the infrastructure. Not to mention that capacity planning, measurement and management become extremely difficult, because it is impossible to predict the (often asymmetric) bandwidth consumption of each instance. This is one reason cloud providers dream of hugely dense, multi-terabit, wire-speed L2 switching fabrics.
As a consequence, there is generally no published service level for throughput and latency within the cloud. When oversubscription hits you, you rarely see it coming. Maybe the cloud will end up like home broadband: advertised as "unlimited" but subject to contention ratios.
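Since nothing is published, about the only way to get a feel for it is to measure between your own instances. Below is a minimal, illustrative sketch; the peer address, port and message sizes are placeholders, and nothing here relies on a provider API:

    import socket
    import time

    # Placeholder address of a second instance you control; replace with a real one.
    PEER_HOST = "10.0.0.42"
    PEER_PORT = 5001

    def run_echo_server(port=PEER_PORT):
        """Run this on the remote instance: it echoes back whatever it receives."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)
        conn.close()

    def measure(host=PEER_HOST, port=PEER_PORT, rounds=100, chunk=65536, chunks=16):
        """Rough estimate of round-trip latency and echoed throughput between instances."""
        s = socket.create_connection((host, port))

        # Latency: average round trip of a 1-byte message.
        start = time.perf_counter()
        for _ in range(rounds):
            s.sendall(b"x")
            s.recv(1)
        rtt_ms = (time.perf_counter() - start) / rounds * 1000

        # Throughput: echo 1 MB through in 64 KB chunks (a conservative lower bound,
        # since each chunk waits for its echo before the next one is sent).
        payload = b"\0" * chunk
        start = time.perf_counter()
        for _ in range(chunks):
            s.sendall(payload)
            received = 0
            while received < chunk:
                received += len(s.recv(65536))
        elapsed = time.perf_counter() - start
        mbit_s = chunk * chunks * 8 / elapsed / 1e6
        print(f"avg RTT: {rtt_ms:.2f} ms, echoed throughput: {mbit_s:.1f} Mbit/s")

    if __name__ == "__main__":
        measure()  # start run_echo_server() on the peer instance first

Run the echo server on one instance and the measurement on another, and repeat at different times of day: contention from other tenants will make the numbers drift.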

All this makes it extremely difficult to deploy, and guarantee the performance of, services that rely on low-latency and/or high-bandwidth architectures, such as high-performance computing, web and database clusters, storage access, seismic analysis, large-scale data analytics, financial services and algorithmic trading platforms.

I can think of some solutions to these problems, but that will be for another post.

Sunday, January 24, 2010

The 5 Reasons You're Failing at Innovating

  1. You don't really want to engage directly with customers, employees, etc. You just want them to hear how innovative you are. For example, you end up following trends rather than creating them, yet you are quick to claim ownership.
  2. You perceive innovation as a threat to your historical revenue streams, yet you still pay it lip service out of fear.
  3. You claim to have a strong in-house innovation culture, yet you make somebody else pay for it (public funding) or absorb (and dissolve) it through acquisition.
  4. You get locked into thinking only about ideas at the higher end of the innovation continuum (the next big thing); as a result your innovation programme is unbalanced and misses the small, quick wins that are faster and easier to obtain (and vice versa).
  5. Your risk-management strategy for innovation confuses risk aversion with educated risk taking. Your fear of failure drives a strategy that minimises uncertainty, but it ossifies you to the point of inaction.

After "follow the moon", avoid the law

After the "follow the moon" green trend in cloud computing, companies are finally catching on to the potential of cloud computing for avoiding constraining legal and tax systems: like this company from India.

Companies will soon realise that they can leverage cloud computing the same way they use offshore accounts and tax havens. Not only will they be able to dodge, sorry, "optimize" taxes, they will also be able to avoid restrictive legal systems by moving data, or the processing of data, to a less restrictive location.
Currently, governments force companies to keep their data on the country's soil in order to control its use through legal means. However, with the cloud it is really easy to move compute loads to where the legal and regulatory environment is more favourable, while leaving the data where it is.

How fast will the law catch up? Not fast enough. And the problem gets worse when you consider that legal systems would need to be harmonised at a planetary scale to be of any use. I already foresee countries creating data- and processing-haven laws in order to attract cloud providers and their customers, the same way Ireland slashed its corporate tax to attract companies onto its soil.

By allowing data to be accessed, stored and processed in a fast, seamless and transparent way, the cloud is creating a legal void that companies will exploit in order to maximise their profits and minimise their exposure. How long will we have to wait for cloud providers to advertise a new feature: legal cloud location zones? It has already been done:
According to Microsoft, the geolocation feature is also necessary for legal reasons since many Azure users apparently have “requirements on where they can place their code and data and where they cannot.” 

I think there is a burgeoning market for IT consultants and lawyers in legal IT optimization strategy.

Monday, January 18, 2010

IT trading systems and the cloud, take one

Investment banks, insurance companies and hedge funds run HPC applications to keep their financial services running smoothly. More specifically, algorithmic trading requires a huge amount of processing power as well as fast network capabilities (speed is money).

I recently came across this company: Marketcetera. I think we will soon see the emergence of PaaS, SaaS and IaaS companies specifically dedicated to algorithmic trading and financial computing.

The cost of creating, testing and deploying such a trading platform, as well as of testing new trading algorithms, is rather prohibitive nowadays. With the advent of cloud computing, it becomes possible that in the near future companies will start offering, at a reasonable (and even dynamic) price, resources specifically dedicated to and collocated for financial market operations.


However, financial trading applications have requirements that are very different from the classical web workloads run on the cloud. I will just expose some of the ones affecting IaaS and PaaS:
  • IaaS, where performance is key:
    • Network:
      • Speed and location: Financial trading services have high-bandwidth and low-latency requirements. To satisfy such stringent requirements, IaaS providers will have to be as close as possible to the stock exchange. They will need cloud locations that are physically close to, or better, collocated with the exchange systems (a la EC2 zones). Except that in this case the actual location will matter less than the physical proximity to a given exchange (especially for unfair high-frequency trading).
      • Virtual networking:
    • Hardware (FPGA, GPGPU, ASIC): Hardware acceleration can easily boost the performance of some operations by an order of magnitude. It has been used successfully by financial institutions for things such as XML processing, network routing, algorithmic trading, etc. In the race to be the fastest, such hardware can give a significant edge. However, future financial cloud providers will need to find a way to expose it easily. Creating a pool of hardware resources accessible through I/O virtualization seems a potential solution (using HyperTransport, QuickPath Interconnect or PCIe over Ethernet).
    • Virtualization vs bare metal: Virtualization always comes at a price: you lose performance and I/O speed. But, contrary to popular belief, a cloud infrastructure does not preclude the use of non-virtualized resources. Providers can easily offer both virtual and dedicated hardware resources within the same cloud in order to satisfy the various demands of their customers.
  • PaaS, or how to balance performance, flexibility and accessibility:
    • Language: C and C++ still represent a huge portion of the core of financial application frameworks; you can even see bits of assembly. However, the current PaaS languages of predilection are .NET, Java, Python or Ruby on Rails. These are often slower but much easier (and easier to secure) to use for building a cloud platform's computing engine. One way around this problem would be to design a custom language for trading algorithms, or to allow easy integration of external processing components, which leads to the next aspect.
    • Messaging and order routing: A similar problem arises for messaging. Today's cloud messaging APIs are based on public REST, XML and SOAP standards, while current trading platforms prefer fast but proprietary (hence expensive) ESBs or messaging platforms. This makes it rather expensive to integrate with external components. Maybe, if the PaaS vendor is also an IaaS (the servers) and a SaaS (the ESB) provider, the integration can be done in a cost-effective way. (A small illustration of the text-versus-binary encoding overhead follows this list.)
  • Security, where the challenges are multiple:
    • VM security
    • Network
    • Data (secure storage)
    • Traceability
    • Audit
    • Authentication
    • Non-repudiation
    • Integration with third parties and customer-owned solutions
    • Etc.
  • Reliability and high-availability requirements are an order of magnitude higher than for typical cloud apps.
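To illustrate the messaging point above, here is a small, illustrative comparison of a text-based order message (what a REST/XML-style API typically carries) against a compact fixed-layout binary encoding closer to what proprietary trading platforms use. The order fields and layout are invented for the example:

    import json
    import struct
    import time

    # A made-up order message, used purely for illustration.
    order = {"symbol": "ACME", "side": 1, "qty": 500, "price": 101.25, "order_id": 123456789}

    # Text encoding, roughly what a REST/JSON-style API would send.
    def encode_text(o):
        return json.dumps(o).encode()

    # Compact fixed-layout binary encoding, closer to proprietary trading protocols:
    # 8-byte symbol, unsigned byte side, unsigned int qty, double price, unsigned long id.
    BINARY_LAYOUT = struct.Struct("!8sBIdQ")

    def encode_binary(o):
        return BINARY_LAYOUT.pack(o["symbol"].encode().ljust(8), o["side"],
                                  o["qty"], o["price"], o["order_id"])

    def benchmark(encoder, rounds=100_000):
        start = time.perf_counter()
        for _ in range(rounds):
            msg = encoder(order)
        elapsed = time.perf_counter() - start
        return len(msg), elapsed / rounds * 1e6  # message size in bytes, µs per encode

    for name, encoder in (("JSON/text", encode_text), ("binary", encode_binary)):
        size, usec = benchmark(encoder)
        print(f"{name:>9}: {size:>3} bytes, {usec:.2f} µs per encode")

Beyond the per-message CPU cost, the size difference also translates directly into bandwidth and latency on the wire, which is why trading platforms are reluctant to trade their binary protocols for REST or SOAP.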

This list is not exhaustive and I definitely know I have missed some aspects, but I will try to delve deeper into the problems raised by trading in the cloud. And while providing a cloud for customers who want to run financial apps is more difficult than the "traditional cloud offer", the benefits can be worth the effort:
  • Sharing the cost of exchange access points.
  • Lowering the cost of cheating, sorry, entering the high-frequency game.
  • Eliminating the cost of creating a trading platform.
  • Lowering the TCO of the trading platform.
  • Lowering the cost of testing and validating trading algorithms.
  • Etc.

Thursday, January 07, 2010

Wake-up call: the cloud is not green.

In 2008 I mentioned multiple times the issues with virtualization and cloud computing, and the false impression that they help save the planet.

Almost a year and a half later, some people are starting to realize that standardization and automation do not reduce the energy impact of IT; they accelerate the consumption of IT products by making them more easily accessible. As a consequence, this acceleration drives energy demand up.

Micro vs macro, efficiency vs sufficiency: quite often people are too quick to jump to conclusions without real facts to support them.

Wednesday, January 06, 2010

Smart Grids, Smart Meters, and the complexity of the power grid market

Voltalis sells an energy management device that curbs energy consumption. It can cut the energy bill by up to 10% by switching appliances on and off, with the aim of reducing demand at peak time. This allows energy producers to avoid firing up expensive generation to cope with peaks, and it reduces customers' bills by lowering their consumption when the price is at its highest.

A couple of months ago I saw a very interesting piece of news about this smart grid startup:


The judgement created quite a turmoil, but many of the people complaining did not understand the mechanics of the electricity grid and its market very well, and jumped to the easy conclusion that the French regulator and EDF were trying to eliminate a green competitor. The reality is a little more complex...



Voltalis gets (very well) paid when an energy provider asks it to reduce the consumption of some of its customers in order to ease pressure on the grid. However, when Voltalis does the same without being asked, the law requires the company responsible for the reduction either to pay compensation or to consume the energy that was not consumed (hard to do).


Why is the law like that? Simple: you cannot just fire up or shut down a nuclear power plant on a whim. Increasing or decreasing energy production is a slow and expensive process. As a result, energy providers try to predict and match consumption in order to limit waste. By reducing consumption unannounced, Voltalis disturbs the system and in fact makes it less efficient in a certain way (the electricity is produced but not consumed).
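A toy balance calculation makes the point; all the figures below are invented for illustration. The producer schedules generation against its demand forecast, so an unannounced curtailment does not reduce what is generated, it merely turns it into a surplus that has to be dumped or compensated:

    # Toy illustration of unannounced demand curtailment (all figures invented).
    forecast_demand_mw = [900, 950, 1100, 1200, 1150, 1000]  # what the producer planned for
    curtailment_mw     = [  0,   0,   80,  120,  100,    0]  # load shed without warning

    scheduled_generation = list(forecast_demand_mw)           # generation follows the forecast
    actual_demand = [d - c for d, c in zip(forecast_demand_mw, curtailment_mw)]
    surplus = [g - d for g, d in zip(scheduled_generation, actual_demand)]

    print("hourly surplus (MW):", surplus)
    print("energy generated but never consumed:", sum(surplus), "MWh")
    # Had the producer known in advance, it could have lowered the schedule instead of
    # producing power that nobody consumes (or paying compensation for it).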

Solving this issue would require Voltalis and the other actors to share information in order to optimise the efficiency of the whole system. However, I suspect that the reactivity of the Voltalis system is an order of magnitude faster than what energy providers can achieve with their production systems. As long as the system is not fully integrated with the regulation of energy production, it is basically just a way to exploit flaws in the system, coated with a cloak of sustainability (yes, the customer consumes less, but the energy is still produced and wasted).