A blog about life, Engineering, Business, Research, and everything else (especially everything else)
Showing posts with label datacenter.
Thursday, November 08, 2018
[Links of the Day] 08/11/2018 : Large scale study of datacenter network reliability, What to measure in production, Failure Mode and Effects Analysis
- A Large Scale Study of Data Center Network Reliability : the authors study reliability within and between Facebook datacenters. One of the key findings is that the growing complexity, heterogeneity, and interconnectedness of datacenters increases the rate of occurrence of unwanted behaviours. This also seems to be a key potential limiting factor for world-spanning infrastructure undergoing rapid organic growth.
- Understanding Production: What can you measure? : what you need to monitor and measure in production. A very good summary of many blog posts out there.
- Failure Mode and Effects Analysis (FMEA) : once you reach a certain production scale and more stringent requirements kick in (unless you were unlucky enough to have them from the get-go), you might want to run a failure modes and effects analysis (FMEA): a step-by-step approach for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service. While it was mainly designed to address shortcomings in the manufacturing industry, it is still extremely useful for IT system analysis, especially when you want to prepare for the rollout of a chaos-monkey-like system (a minimal scoring sketch follows this list).
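For a feel of the mechanics, here is a minimal sketch of the classic FMEA scoring: each failure mode gets severity, occurrence, and detection scores, and their product (the Risk Priority Number) ranks what to tackle first. The failure modes and scores below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (near-certain)
    detection: int   # 1 (always caught) .. 10 (practically undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the standard FMEA ranking metric.
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for an IT system, scored by the team.
modes = [
    FailureMode("primary DB loses quorum", severity=9, occurrence=3, detection=4),
    FailureMode("stale cache served after deploy", severity=5, occurrence=6, detection=7),
    FailureMode("TLS certificate expires unnoticed", severity=7, occurrence=4, detection=8),
]

# Highest RPN first: that's what you harden before unleashing the chaos monkey.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.name}")
```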
Labels: analysis, chaos, datacenter, links of the day, measure, monitoring, network, production, reliability
Thursday, May 11, 2017
[Links of the Day] 11/05/2017 : Google - Push on green , Tensor flow in Datacenter , TCP congestion protocol
- In-Datacenter Performance Analysis of a Tensor Processing Unit : how custom deep learning hardware behaves in a real datacenter, and the implications and gains associated with the use of such a custom solution.
- Push on Green : great article on Google's rollout policy and process. A lot of common sense, plus some less common but equally important advice. This is a great read for anybody involved in software delivery, especially if you are aiming for an efficient CI/CD system.
- BBR : Google's congestion control protocol for maximising bandwidth usage. It's a new TCP congestion control algorithm that fights bufferbloat at the TCP level. Since the majority of internet traffic is TCP, wide adoption would bring a big improvement. Note that TCP congestion control only affects outgoing packets (see the snippet below).
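Because the sender picks the algorithm, you can experiment per socket without touching system defaults. A minimal sketch, assuming a Linux kernel with BBR available (4.9+) and Python 3.6+:

```python
import socket

# Opt a single socket into BBR instead of the system-wide default.
# TCP_CONGESTION is Linux-only; the tcp_bbr module must be available.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")

# Read back which congestion control algorithm the socket actually uses.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
```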
Labels: congestion protocol, continuous delivery, continuous integration, datacenter, google, hardware, links of the day, network, tcp, tensorflow
Monday, September 19, 2016
[Links of the day] 19/09/2016 : #AI bias, Incremental consistency , Customizable datacenter
- Stuck in a Pattern : as predictive policing tools are widely adopted by corporations and public organisations, there is little transparency as to how these systems are configured. It seems that the current crop of software, as designed and deployed, may reinforce discrimination and inequality under a veil of marketing publicizing "intelligent" solutions.
- Incremental consistency guarantees : the authors propose a system that, instead of providing a single "hard" consistent answer to a query, provides multiple replies with incrementally stronger consistency guarantees, albeit at incrementally higher latency cost. This allows systems to make decisions based on their consistency requirements as well as their performance needs. It is interesting because it would let some applications act on "consistent enough" information while revising their decision if needed once a higher-consistency response arrives (an illustrative sketch follows this list).
- Customizable Computing at Datacenter Scale : NAS 16 keynote; it seems that HPC and exascale systems are slowly converging toward a hybrid model with heterogeneous resources: FPGA, GPGPU, CPU, etc.
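To make the incremental-consistency idea concrete, here is a hypothetical client-side sketch (not the paper's actual API; names, levels, and latencies are invented): a read returns a stream of progressively stronger answers, and the caller acts early but can revise.

```python
import time
from typing import Iterator, Tuple

# Illustrative consistency ladder, weakest and fastest first.
LEVELS = [("eventual", 0.001), ("read-my-writes", 0.01), ("linearizable", 0.1)]

def incremental_read(key: str) -> Iterator[Tuple[str, str]]:
    """Yield (consistency_level, value) pairs, each stronger and slower."""
    for level, delay in LEVELS:
        time.sleep(delay)  # stand-in for the extra replica coordination cost
        yield level, f"value-of-{key}@{level}"

decision = None
for level, value in incremental_read("cart:42"):
    if decision is None:
        decision = value   # act immediately on the cheap, possibly stale answer
    elif value != decision:
        decision = value   # revise once a stronger guarantee disagrees
    print(f"{level:15s} -> {value}")
```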
Labels: ai, consistency, datacenter, fpga, gpgpu, links of the day, machine learning
Tuesday, August 23, 2016
[Links of the day] 23/08/2016 : Adapting In memory database architecture for Storage class memory and Datacenter network congestion management
- The implications of Storage Class Memory for in-memory database architecture :
- SOFORT : the authors propose to modify the traditional in-memory database architecture in order to optimise its operation for upcoming storage class memory hardware. The idea is quite simple: get rid of the log mechanism and persist all data to NVM, except for the indexes, which need to be maintained in RAM for performance. SCM drastically eliminates a lot of boilerplate architectural functionality by delivering fast, byte-addressable persistent storage. However, developers now need to be aware of the transactional model imposed by this new class of persistent memory (a toy sketch follows this list). [Slides]
- Instant Recovery for Main-Memory Databases : this paper builds on top of SOFORT and looks at leveraging NVDIMM or SCM to speed up crash recovery. The idea is not only to speed up normal operation but also to eliminate the recovery cost after an application crash. [Slides]
- Note that both these papers have an author working for SAP, so my guess is that we will start to see new dedicated features in SAP HANA supporting SCM.
- Flowtune : it seems we are slowly going to see a return of the ATM model in datacenter networking fabrics. In this paper the authors propose to combine a form of MPLS with a centralized allocator for resource management and congestion avoidance. Basically, the system identifies connection (called flowlet) establishment and termination; using current and past information, it derives an optimal path and resource allocation, minimizing interference and congestion over the lifetime of the flowlet. Looks like SDN is finally enabling a simplified and more robust ATM model within, and probably across, datacenters.
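A toy sketch of the division of labour SOFORT argues for: data made durable directly (an append-only, fsync'ed file stands in for byte-addressable SCM, so this is only an analogy), no write-ahead log, and a volatile in-RAM index rebuilt by a sequential scan on recovery. All names here are invented for illustration.

```python
import os

DATA_FILE = "table.scm"  # stand-in for a persistent-memory region

class NoLogStore:
    """Toy store: records are durable on write; the index lives only in RAM."""

    def __init__(self) -> None:
        self.index: dict[str, int] = {}   # volatile, rebuilt on every start
        self.f = open(DATA_FILE, "ab+")
        self._recover()

    def _recover(self) -> None:
        # Recovery is a single sequential scan rebuilding the volatile index;
        # there is no log to replay because the data itself is already durable.
        self.f.seek(0)
        offset = 0
        for line in self.f:
            key = line.decode().split("=", 1)[0]
            self.index[key] = offset      # last write wins
            offset += len(line)

    def put(self, key: str, value: str) -> None:
        self.f.seek(0, os.SEEK_END)
        self.index[key] = self.f.tell()
        self.f.write(f"{key}={value}\n".encode())
        self.f.flush()
        os.fsync(self.f.fileno())  # stand-in for a cache-line flush + fence on SCM

    def get(self, key: str) -> str:
        self.f.seek(self.index[key])
        return self.f.readline().decode().rstrip("\n").split("=", 1)[1]
```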
Labels: datacenter, In memory Database, links of the day, network fabric, nvdimm, nvm, SAP HANA, sdn, storage class memory
Monday, July 11, 2016
[Links of the day] 11/07/2016: SSD failures, BCC , NUMA deep dives
- SSD Failures in Datacenters : best student paper. SSDs fail: what, when, and why?
- BCC : I have been trying to trace a nasty RCU stall bug (which turned out to be just the symptom of another problem) and BCC was really useful in this ordeal. It is quickly turning into the Linux Swiss Army knife of debugging. BPF is an amazing piece of software (a minimal example follows this list).
- NUMA Deep Dive Series : start of a series of posts looking into the history of NUMA and modern NUMA architectures.
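If you have never touched it, the canonical bcc hello-world shows how little code a kernel probe takes (run as root, with the bcc Python bindings installed; syscall symbol names vary across kernel versions, hence the helper):

```python
from bcc import BPF

# Tiny BPF program: fire on every clone() syscall and write to the trace pipe.
prog = """
int hello(void *ctx) {
    bpf_trace_printk("clone() called\\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # stream trace output until Ctrl-C
```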
Labels: datacenter, kernel, links of the day, linux, numa, SSD
Tuesday, June 28, 2016
[Links of the day] 28/06/2016 : Heterogeneous Data-centers (FPGA - GPU) and User-mode Ethernet verbs
- Heterogeneous datacenters : datacenters evolve slowly, and we are starting to see the emergence of specialized, dedicated hardware to squeeze the maximum efficiency per watt consumed. As general-purpose CPUs start to hit their limits, users turn, like the HPC community before them, to FPGAs or GPUs to break away from the current power wall.
- A quantitative analysis on microarchitectures of modern CPU-FPGA platforms : well, if you want to venture into the heterogeneous datacenter, you had better read this paper on the different platforms available out there.
- User Mode Ethernet Verbs : probably the only serious contender to Intel DPDK. User-mode Ethernet verbs expose the Verbs API, allowing user-mode applications/ULPs direct access to offload capabilities in the form of raw Ethernet QPs. Basically, you can send and receive raw Ethernet packets using the RDMA verbs API and leverage the offload engine to accelerate operations. [video]
Labels: cpu, datacenter, fpga, links of the day, network fabric
Tuesday, March 01, 2016
[Links of the day] 01/03/2016 : DSSD , Datacenter design [book] and latent faults [paper]
- DSSD : EMC released into the wild the DSSD product (acquired last year). Quite a beast: all flash, 10M IOPS, 100μs latency, 100GB/s bandwidth, 144TB in 5U. It uses a PCIe fabric to connect the storage to the compute nodes; however, I expect them to move soon to an InfiniBand / Omni-Path fabric, based on the talks they recently gave.
- Datacenter Design and Management : book that surveys datacenter research from a computer architect's perspective, addressing challenges in applications, design, management, server simulation, and system simulation.
- Unsupervised Latent Faults Detection in Data Centers : talk and paper looking at automatically enabling early detection and handling of performance problems, or latent faults. These faults "fly under the radar" of existing detection systems because they are not acute enough, or were not anticipated by maintenance engineers.
[image: Rolex Deep Sea Sea Dweller (DSSD)]
Labels: book, datacenter, design, fault tolerance, flash, links of the day, network fabric, nvme, paper, pcie
Thursday, December 10, 2015
Links of the day 10/12/2015: Networks, Crowds, and Markets book, end to end encrypted DB, low level Datacenter devops
- Networks, Crowds, and Markets : book on the different scientific perspectives on, and approaches to, understanding networks and behavior. Drawing on ideas from economics, sociology, computing and information science, and applied mathematics, it describes the emerging field of study growing at the interface of all these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.
- ZeroDB : an end-to-end encrypted database protocol.
- Vapor : can't believe a company actually named themselves Vapor... anyway. While their hardware is only semi-interesting, what really piqued my interest is their Open Data Center Runtime Environment. It aims to expose and manage low-level DC capabilities such as power and temperature. I am not sure devops will really want to go that low, but it might be a nice addition to orchestration frameworks for meta-optimization.
Labels: book, database, datacenter, devops, encryption, links of the day, management, power
Thursday, August 20, 2015
Links of the day 20/08/2015 : #Facebook DC network, #cloud at #Berkley go BOOM and Eve
- Inside the Social Network’s (Datacenter) Network : a very insightful and deep look into Facebook's datacenter networking. What's interesting to see is that there is non-negligible inter-datacenter communication going on there.
- BOOM : Berkeley Orders Of Magnitude project, Enabling the construction of data-rich systems at unprecedented scales using a minimum of software.
- Eve : version 0: a relational database, a new programming language, an IDE, and a UI editor, all built from scratch to fit "our goals for a better programming foundation". An ambitious project; let's see how it develops.
Labels: berkley, cloud, datacenter, facebook, links of the day, network
Tuesday, August 04, 2015
Links of the day 04 - 08 - 2015 : #AWS vs #AZURE , #Trends of Memory and #Datacenter technology
- Microsoft Azure vs Amazon Web Services (AWS) Services & Feature Mapping : very comprehensive. Too large even to take a whack at summarizing. Essential if you need to compare the two.
- Stanford Memory Trends : a set of tables compiling various memory characteristics for RRAM, PCM, and CBRAM. These tables include most of the work reported at IEDM, Symp. VLSI, and ISSCC from 2001 to today.
- New Technologies and Architectures for Efficient Data Center : as always, power consumption dominates the report. The interesting bit is the emergence of NVM technology, but the report seems to lack information regarding its impact on power consumption. [slides]
Labels: amazon, aws, azure, datacenter, links of the day, memory, Microsoft
Thursday, June 18, 2015
Links of the day 18/05/2015: NVM biblio , Author of Pegasus and Heracles PhD dissertation, Container for #apple #OSX
- Pegasus and Heracles : PhD defense of David Lo presenting his results on datacenter orchestration systems. He developed the Pegasus and Heracles platforms for Google, which resulted in a significant power consumption reduction as well as an increase in resource usage efficiency. My guess is that Google is probably running 13x to 15x more efficiently than a classic datacenter, and the gap is growing. I'm not really sure the whole container craze is actually improving the efficiency side, as fragmentation often means more complex and difficult orchestration. Ah, almost forgot: the PhD defense slides are here.
- NVMDB : comprehensive survey of the (at last count) 340 non-volatile memory technology papers published between 2000 and 2014 at the International Solid-State Circuits Conference (ISSCC), the Symposia on VLSI Technology and Circuits (VLSI Technology, VLSI Circuits), and the International Electron Devices Meeting (IEDM). The resulting data set provides a clear picture of how these memory technologies have evolved over time. [Online Biblio]
- xhyve : Mac fanatics rejoice; you thought you had avoided the container craze so far, but fear no more, here comes a lightweight virtualization solution for OS X in all its glory. CoreOS is already talking about supporting it.
Labels: apple, container, datacenter, google, links of the day, load balancing, nvm, orchestration
Wednesday, December 17, 2014
Today's links 17/12/2014: HP's machine, #datacenter, #bigdata, stream computing, #bitcoin dev guide
- Tigon : open-source, real-time, low-latency, high-throughput stream processing framework from Cask Data, Inc. and AT&T. Interesting to see how companies are now all releasing open source components; open source projects have become just another economic weapon in the corporate arsenal.
- Bitcoin Developer Guide : detailed information about the Bitcoin protocol and related specifications
- The Machine: HP datacenter scale computer based on memristor technology
- The HP Memristor Solution for Computing Big Data : Stanley Williams lecture on HP "machine"
Labels: bigdata, bitcoin, datacenter, hp, memristor, Stream Processing
Monday, December 01, 2014
Today's links 1/12/2014: functional programming patterns, virtual supercomputer on demand, and VMware software-defined datacenter architecture.
- Functional Programming Patterns : a nice look into design patterns for functional programming
- Virtual Supercomputer : Massive Solutions announced a beta version of their computational service platform. It provides secure Internet access to high-performance parallel cluster resources and applications on demand.
- VMware Software-Defined Data Center : reference architecture providing an overview of the solution and the logical architecture as well as results of the tested physical implementation
Wednesday, September 24, 2014
Today's links 24/09/2014 : #Storage, #Cloud , #Datacenter, #Watson , #IBM
- Hardware and its effect on Software : Predicting the future of server and data center technology by Per Brashers - Slides ( XLDB’13 )
- Watson Developers Cloud: Build a new generation of apps on top of Watson technology and framework - Red Hat Summit 2014
Labels: cloud, datacenter, ibm, links of the day, storage, watson
Tuesday, November 19, 2013
Openstack consumption model : DiY vs Enterprise
Here are some personal comments on OpenStack adoption in the industry and on the consortium's statistics.

OpenStack statistics:
While the OpenStack consortium is putting great effort into diffusing its statistics to market / broaden its adoption (you can see the latest batch here: http://www.openstack.org/blog/2013/11/openstack-user-survey-statistics-november-2013/ ), you have to take them with a grain of salt.
One thing that is not immediately visible, though you can guess it, is the consumption model of OpenStack. OpenStack cloud deployments can be broadly classified into two categories: do it yourself (DiY), and (semi) contracted out (aka enterprise cloud).
DiY vs Enterprise cloud consumption model
My own home-brew definitions (partially adapted/stolen from some of Simon Wardley's blog posts; check out his great blog):
- DiY : complete redesign of an infrastructure architecture, most of the time as close as possible to a true public cloud architecture (or an exact copy, in the case of public cloud providers), and/or often built to fit very specific needs. These solutions aim for the lowest possible cost by reusing as many of the open source tools out there as possible, combined with custom solutions.
- Enterprise : "It's like cloud but the virtual infrastructure has higher levels of resilience at a high cost when compared to using cheap components in a distributed fashion. The core reason behind Enterprise Cloud is often to avoid any transitional costs of redesigning architecture i.e. it's principally about reducing disruption risks (including previous investment and political capital) by making the switch to a utility provider relatively painless." I.e. they want to be able to run "legacy" applications alongside new ones without having to throw away a lot of their existing investment (hardware, software, skillset, HR, etc.). These enterprise cloud solutions are sometimes delivered/consumed as (heavily) customized, packaged OpenStack solutions from specific vendors: HP, Mirantis, etc. These solutions are slowly making their way into the heavily regulated company segment (where compliance issues require on-premise deployment).
On the other hand, the enterprise clouds tend to be hybrid: a mix of open source and "enterprise" solutions, sometimes sprinkled with a dose of consulting. End consumers of these clouds are sometimes not even aware that the solution they are buying is based on OpenStack (e.g. in PaaS / managed services scenarios). For example, Swisscom's cloud managed services (offering SAP ERP, BW, etc.) rely on Piston Cloud, which relies on OpenStack.
Often some key components are bought separately for their enterprise features (or performance, or support), e.g. storage: Coraid (pNFS) / Inktank (Ceph); networking: Arista.
The OpenStack user base is skewed:
Now, why is this distinction relevant to the OpenStack statistics? Cost savings, open technology, and avoiding vendor lock-in are cited as the main drivers, which should hint at which of the two models is predominantly represented among the users surveyed: DiY.
As a result, the stats are quite skewed toward home-grown deployments where most of the cloud stack is built almost exclusively from open source solutions. This is why Ceph, Ubuntu, and other open source solutions heavily dominate the stats while NOT dominating cloud spending (this is reinforced by the fact that most of the surveyed users run a 1-50 node cloud, ~60%+ of the total, and only 20% of the clouds are used for production purposes). I would love to see a Venn diagram of the different dimensions; if anybody has access to the raw survey data, I would be happy to build the diagram for them.
For example, looking at the cloud storage of some enterprise OpenStack deployments in companies I came across, you would see all the usual suspects (EMC, NetApp, HP, etc.). However, they would sometimes be relegated to a secondary role while a certain number of newcomers took front stage. These newcomers obviously leverage the emerging-market stage / first-mover advantage to grab market share: Inktank (commercial Ceph version), Coraid (pNFS, popular as it allows a natural transition from classical NFS setups), and finally the wild card, Intel, who is trying to push its Lustre solution (from the Whamcloud acquisition) into cloud storage. Note: I am not too aware of how GlusterFS is faring.
OpenStack, a complex and rapidly evolving environment:
As you can see, the OpenStack ecosystem is heavily dominated by companies using the DiY model. This is further fuelled by the current state of OpenStack, which is more a toolkit that you need to (heavily) customize to build your own cloud than a self-contained solution like CloudStack, Eucalyptus, Flexiant, etc. It makes me feel like OpenStack is the Debian equivalent for cloud.
However, its adoption is growing heavily in the corporate world (and, as a result, so is the fragmentation risk): Oracle has its own home-brew version (but contributes nothing back and keeps it completely closed), SAP is starting some effort, Intel is a big proponent (see IDF 13), and big managed services players are using it as their basic building block (T-Systems, Swisscom, Huawei, IBM, etc.).
A tough ecosystem for startups and established companies alike:
A complex ecosystem, fragmentation, and a heavy reliance on custom solutions make it tricky for companies to position themselves within this environment. Aim too low and they might be completely ignored due to the heavy DiY proportion, leading to a long struggle (or death) while hoping the crossing of the desert ends soon. Aim too high and they end up fighting a multitude of OpenStack clones as well as other cloud solutions.
There is no right decision here; maybe a coherent layered approach across the OpenStack ecosystem would enable the creation of a consistent revenue stream while limiting the race-to-the-bottom competition (competing on price). I will probably expand more on this concept in a follow-up blog post.
PS: as I was writing this post I came across the following Gartner blog post, which echoes some of my thoughts on the OpenStack ecosystem: http://blogs.gartner.com/alessandro-perilli/what-i-saw-at-the-openstack-summit/ . Again, this is to be taken with a grain of salt, as Gartner has a long history of being a big VMware supporter.
Labels: business model, cash, cloud, consumption model, datacenter, DiY, ecosystem, enterprise, eucalyptus, flexiant, money, moneytization, openstack, SAP, Swisscom, ubuntu
Saturday, August 03, 2013
Hecatonchire Version 0.2 Released!
Version 0.2 of Hecatonchire has been released.
What's New:
- Write-Invalidate coherency model added for those who want to use Heca natively in their applications as distributed shared memory (more on that in a subsequent post)
- Significant improvement in page transfer performance, as well as a number of bugs squashed
- Specific optimisations for KVM
- Scale-out memory mirroring
- Hybrid post-copy live migration
- Moved to Linux kernel 3.9 stable
- Moved to qemu-kvm 1.4 stable
- Added test / proof-of-concept tools (specifically for the new coherency model)
- Improved documentation
We are now focusing on stabilizing the code and on robustness (we aim to make the code production-ready by 0.4). We are also starting significant work to integrate Hecatonchire so that it can be transparently leveraged via a cloud stack, more specifically OpenStack.
You can download it here : http://hecatonchire.com/#download.html
You can see the install doc here: https://github.com/hecatonchire/heca-misc/tree/heca-0.2/docs
And finally the changelog there : http://hecatonchire.com/#changelog-0.2.html
Or you can just pull the Master branch on github: https://github.com/hecatonchire
Stay tuned for more in depth blog post on Hecatonchire.
Labels: cloud, data center, datacenter, distributed, distributed computing, hecatonchire, In memory Database, infiniband, iwarp, kernel, kvm, rdma, SoftIwarp, virtualization
Thursday, August 01, 2013
Slide Deck - Project Hecatonchire - The Lego Cloud : Status, Vision, Roadmap update 08/2013
This slide deck provides the vision, status, and roadmap update of the Hecatonchire project.
Website: http://hecatonchire.com/
Git: https://github.com/hecatonchire
Labels: cloud, datacenter, distributed, hecatonchire, In memory Database, memory, rdma, SAP, SAP HANA
Wednesday, July 30, 2008
Virtualization: Energy Efficiency vs Energy Sustainability
Virtualization is touted as a key solution for optimizing datacenter efficiency. It allows clients to consolidate work onto fewer computers, increasing utilization, which can significantly reduce energy and maintenance bills and simplify their infrastructure.
It permits evolving from computer systems using 5 to 12 percent of their capacity toward consolidated systems running at 80% (the remaining 20%: hypervisor + margin for manoeuvre).
Ok, now let's have a look at what this means in numbers.
Before (source: "EPA report on server and data center energy efficiency"):
[figure: power breakdown before consolidation]
After:
[figure: power breakdown after consolidation]
It's a whopping increase from 1.4 watts to 11.2 watts: an 800% efficiency increase!
OK, now if you look at the big picture, it's only an increase from 1.4% to 11.2%, a really small increase of 9.8 points. It's still good, but not the silver bullet the marketing drones want us to believe.
But the picture gets a little darker if we start digging a little bit. Let's get some facts first; from "Managing energy and server resources in hosting centers", we learn:
- When servers are idle, they draw 60% of their peak power consumption
- Power consumption is roughly linear with load
Ok, this means that if we move from an average 10% utilization ratio to 90% (I roughly added 10% to the 80% for hypervisor-related load), we end up drawing 96% of peak power consumption (0.60 + 0.40 × 0.9 = 0.96).
So far so good. Let's be bold and assume we achieve an 8:1 consolidation ratio (the reality is more around 3:1) and that we are still running the same hardware (while most of the time newer, more powerful, and more resource-hungry hardware is deployed).
At the datacenter level it means we removed 7 servers out of 8, an 87.5% reduction, which translates into an approximate reduction of server power consumption of 81.25%, not to mention that we can also significantly reduce power distribution, cooling, and lighting consumption.
However, if you are a datacenter manager you don't want all this empty space (and those unused servers) to go to waste (remember, datacenter space is expensive). So we simply re-commission those servers for other tasks, lease them, etc.
What do we end up with? A 36% increase in the servers' power consumption, plus the associated rest: roughly a 35% overall power consumption increase. BUT! We increased the efficiency.
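The consolidation arithmetic above is easy to check from the two facts quoted (60% of peak when idle, linear in load). A minimal sketch reproducing the 96% and 81.25% figures, using the 8:1 ratio and utilization numbers assumed in the text:

```python
def power_fraction(load: float) -> float:
    """Fraction of peak power drawn at a given load (0..1):
    60% baseline when idle, rising linearly to 100% at full load."""
    return 0.60 + 0.40 * load

# Before: 8 servers idling along at 10% utilization each.
before = 8 * power_fraction(0.10)   # 8 * 0.64 = 5.12 "peak units"

# After: one consolidated server at ~90% load (80% work + ~10% hypervisor).
after = 1 * power_fraction(0.90)    # 0.96 -> 96% of peak

print(f"consolidated server draws {power_fraction(0.90):.0%} of peak")
print(f"server power reduction: {1 - after / before:.2%}")  # 81.25%
```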
Now, the big question: where is the sustainability aspect in all that?
Let's have a look at the definition of sustainability:
"the quality of a state or process that allows it to be maintained indefinitely; the principles of sustainability integrate three closely interlinked elements—the environment, the economy, and the social system—into a system that can be maintained in a healthy state indefinitely."
Virtualization sustainability check:
- Economy: check... more bang for the buck, revenue up, shareholders happy
- Social: kind of check... we provide more services, customers happy
- Environment: ...mm, nope, cannot find it; we are actually consuming more energy.
Result: Fail
Ok, now some will argue that you can turn off unused servers using smart automation systems to reduce the environmental impact. Right... why would a company reduce its servers' ROI?
On top of that, here comes the saviour buzzword, the cloud: unused servers will be pushed into a cloud infrastructure so they can be used dynamically for other tasks... no more waste.
Labels: datacenter, greenIT, greenwashing, sustainability, virtualization
Tuesday, July 29, 2008
Green IT: Sustainability, strategy and hype
Recently, I have gradually become more and more involved in "Green IT" topics and related projects (the exact term is: "to be caught up in the system"). I realized that the current strategy applied by companies is mainly capability driven; to put it in simple words, it means doing more with the same amount of resources, or fewer (think virtualization).
The main metric for those efforts is TCO reduction; the actual environmental impact is just a side effect, used for "greenwashing" the company strategy.
On top of that, I have yet to see actual numbers for the positive environmental impact of such a strategy. Companies are claiming millions in savings; however, where are those millions going? They are reinvested in non-sustainability-related efforts.
The following graph depicts my opinion of the actual sustainable IT strategy taken by many companies. Currently, going green means reducing TCO. But since most of the money saved is reinvested in non-environmental projects, there is actually no real effort toward environmental sustainability.
[graph: TCO-driven "green" IT strategy over time]
However, these "cheap and green" TCO solutions will soon be exhausted, and companies will actually need to spend money to maintain their environmentally friendly masquerade. This will in turn reverse the trend: TCO will rise because they want to be "green", and finally (hopefully) this will generate a positive ecological impact.
The strategy will switch from being capability driven to business driven. How fast will this change occur? It will depend on various factors:
- Awareness: Public Opinion increasing
- Economic: consumer demand increasing; corporate customer demand increasing; shareholder demand increasing
- Social and environmental: consensus growing on environmental impact
- Political: Government Laws / Regulations increasing
Let's be a little bit pessimistic and add the hype cycle curve to the picture (ok, I'm not fully objective here with the curve placement, but it's for the sake of the demonstration):
[graph: green IT strategy with hype cycle overlaid]
What I want to demonstrate is the risk the current hype creates for environmental strategy within companies, and more particularly for the critical section: the switch from a capability-driven to a business-driven strategy. It will coincide with the "Trough of Disillusionment". At this stage the technology fails to meet expectations and quickly becomes unfashionable. Consequently, the press usually abandons the topic and the technology, and when the press loses interest, so do board members and shareholders. This abandonment will be accelerated by the rising costs of maintaining the environmentally friendly mask.
As a consequence, there is a high probability that companies will never cross over toward providing actually sustainable environmental solutions. The only thing that will force them will probably be public and political pressure due to ecological issues (not to mention catastrophic ecological events).
I hope I'm wrong, but a company is not a person; it has no social or ethical responsibility per se. And look at the results from Robert Hare, a University of British Columbia psychology professor and FBI consultant, who used diagnostic criteria from the DSM-IV to analyse the "personality" of the corporate "person": in his findings he compares the profile of the modern, profit-driven corporation to that of a clinically diagnosed psychopath.
What a wonderful world...
Labels: cycle, datacenter, green, hype, IT, strategy, sustainability
Monday, July 28, 2008
Moving from green IT to green beret IT
I came across this link and realised that it might not be easy for these guys every day. So I decided to adapt the famous song "The Ballad of the Green Berets" by Staff Sergeant Barry Sadler.
Fighting admin from the sky
Fearless men who code and die
Men who email just what they say
The brave men of the GreenIT Beret
Green wings upon their chest
These are men, ICT's best
One hundred men will patch today
But only three win the GreenIT Beret
Trained to live off datacenter's land
Trained in ups, broadband
Men who code by night and day
Uptime peak from the GreenIT Berets
Green wings upon their chest
These are men, ICT's best
One hundred men will patch today
But only three win the GreenIT Beret
Back at home a young wife waits
Her GreenIT Beret has met his fate
His servers died from being slashdotted
Leaving her his last request
Put green wings on my son's chest
Make him one of ICT's best
He'll be an admin they'll test one day
Have him win the GreenIT Beret.
Green wings upon their chest
These are men, ICT's best
One hundred men will patch today
But only three win the GreenIT Beret
Labels: afghanistan, data center, datacenter, green, greenIT, sustainability