- An Opinionated Guide to ML Research : ML Research guide for newbies and a good refresher for seasoned veterans.
- On Drafting an Engineering Strategy : a short post describing how to draft an engineering strategy when you start in a new role and need to lay out the next 10-18 month roadmap.
- Contrastive Self-Supervised Learning : Contrastive self-supervised learning techniques are an interesting class of methods that build representations by learning to encode what makes two things similar or different.
A blog about life, Engineering, Business, Research, and everything else (especially everything else)
Tuesday, March 17, 2020
[Links of the Day] 17/03/2020 : Machine Learning research Guide, Engineering Strategy, Contrastive Self Supervised Learning Techniques
Labels: contrastive, deep learning, engineering, links of the day, machine learning, strategy
Tuesday, June 05, 2018
Microsoft aims at undercutting AWS's strategic advantage with its GitHub acquisition
Microsoft acquired the GitHub code-sharing platform. This is a brilliant move. It allows Microsoft to offset some of the insane advantage that AWS gained over the last couple of years via its Innovate, Leverage, Commoditise (ILC) strategy.
[Figure: ILC model by Simon Wardley]
ILC relies on the following mechanism: the larger the ecosystem, the higher the economies of scale, the more users, the more products built on top of it, and the more data gathered. AWS continuously mines this data trove to identify patterns and uses it to determine which features to build and commoditise next. The end goal is to offer more industrialised components that make the entire AWS offering even more attractive. It's a virtuous circle, even if AWS sometimes cannibalises existing customers' products and market share along the way. Effectively, AWS's customers are its R&D department, feeding information back into the ecosystem.
As a result, AWS methodically eats away at the stack by standardising and industrialising components built on top of its existing offer. This further stabilises the ecosystem and lets AWS tap into higher levels of the IT value chain. AWS can thus reach more people while organically growing its offering at blazing speed and with minimal risk, because the startups building on top of it are the ones taking the risks.
How does Microsoft's acquisition play into this? Microsoft, with its Azure platform, is executing a play similar to the one AWS is delivering. However, Microsoft has a massive gap to bridge to catch up with AWS, and the difference is widening at incredible speed as economies of scale confer an exponential advantage. AWS has a significant head start in the ILC game, which gives it a massive data-collection advantage over its competitors. However, Microsoft can hope to bridge that gap by directly undercutting AWS and instantly tapping into the information pipeline coming from GitHub. By doing so, Microsoft can combine the information coming from its Azure platform with GitHub's, providing invaluable insight that combines actual component usage with developer interest and use. Moreover, this also offers valuable insight into AWS and other cloud platforms, as the majority of projects (open source or not) deployed onto them are hosted on GitHub.
[Figure: Cloud Wardley map with GitHub's position]
I quickly drew the Wardley map above to demonstrate how smart the GitHub acquisition is. You can clearly see how the code-sharing platform enables Microsoft to undercut AWS's strategic advantage by gaining ecosystem information straight from the developers and the platforms above. As Ballmer once yelled: Developers, developers, developers!
Tuesday, July 18, 2017
[Links of the Day] 18/07/2017 : Banking API, how Cooperation strategy evolve, Distributing file system images
- Teller : an API for your bank account that already supports a couple of UK banks. This is nice; however, I wonder how banks will react. Also, the EU is forcing banks to open their APIs, but the UK is leaving the EU, and there is a chance that the UK banking system will seize the opportunity to create its own independent banking API, creating more barriers to entry for fintech startups. Anyway, this is rather cool, but I am still disappointed that most banks do not expose an API to access your own data.
- How Cooperation Evolves : the authors looked at evolution as a thermodynamic process and show how cooperation strategies evolve and how they can be manipulated.
- Casync : a tool for distributing file system images; really cool if you have to update images often and want a solution that is cheap in terms of both traffic and storage.
Labels: API, bank, cooperation, distribution, evolution, fintech, links of the day, strategy, tool
Thursday, June 15, 2017
[Links of the Day] 15/06/2017 : Corporate Wargaming, ScatterText , Forecasting and BigData
- Scattertext : a nice, easy-to-use tool for finding distinguishing terms in corpora and presenting them in an attractive, interactive scatter plot.
- Forecasting in the light of Big Data : the authors look at how forecasting is changing as new tools and data-collection capabilities emerge. As often, they suggest that the best approach is to combine modelling and quantitative analysis in order to obtain the best forecasting strategy. However, I am afraid this would require a new framework, as well as training data scientists to leverage these two opposing methodologies correctly.
- Competitive Wargaming and Simulations for Business Forecasting & Analytics : a slide deck providing good insight into how wargaming can help shape a company's decision process and strategy. However, like any tool, it is as much about the preparation and how you leverage the outcome as about the game itself.
Labels: bigdata, forecasting, links of the day, natural language, strategy, wargaming
Tuesday, December 06, 2016
Another tale of Execs bottom-up Blindness : SAP, Oracle, [Insert Software Giant here] vs AWS
After watching this year's AWS re:Invent show, I can't help but have a strange feeling of déjà vu. AWS managed to deliver exciting new products and solutions that took the industry by storm. Greengrass literally takes last year's prediction of mine and makes it a reality. What's even scarier is that, with Greengrass, AWS achieved the feat of unifying #IoT and #DevOps under a common platform.
But my feeling of déjà vu didn't come from the Greengrass announcement. It came from the Step Functions announcement, and it felt like a textbook repeat of what happened to Detroit's Big Three when Toyota took over the US car market: another case of bottom-up blindness.
Step Functions is the natural next step in the evolution of the AWS product portfolio. It nicely complements the serverless Lambda product and lets you organise your serverless logic flow in a transparent manner.
But what is more important is the implication of such a release. Step Functions allows you to create and coordinate complex workflows, which is just a step away from having a full-blown ERP. It is actually even better than an ERP, as Step Functions lets you coordinate any kind of distributed application, defining business-process workflows that seamlessly blend business logic and application logic.
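To make that concrete, here is a minimal sketch (my own illustration, not anything from the original post or AWS documentation) of what such a coordinated workflow looks like: a hypothetical two-step order-processing state machine, registered and started with boto3. The Lambda ARNs, account id, and IAM role are placeholders, and error handling is omitted for brevity.

```python
# Minimal sketch of a Step Functions workflow coordinating two Lambda steps.
# All ARNs and the account id below are hypothetical placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

# A two-step business workflow expressed in Amazon States Language:
# validate an order with one Lambda function, then bill it with another.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-order",
            "Next": "BillOrder",
        },
        "BillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:bill-order",
            "End": True,
        },
    },
}

machine = sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)

# Each execution carries its own business payload.
sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"orderId": "42", "amount": 99.95}),
)
```

The point is not the specific API calls but the shape of the thing: the business process lives in the state machine definition, while the application logic lives in the Lambda functions it orchestrates.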
Twenty-four hours before the release of Step Functions, I was mentioning on Twitter that AWS just needed a good process-workflow service to open up the ERP business.
@avideitcher Yep, but they are working their way up , DB / BI -> need job / task / process workflow , then you add CRM / ERP— Benoit Hudzia (@blopeur) December 1, 2016
One day later, Step Functions emerged, and it won't be long before we see ERP-like functionality hosted on Step Functions / Lambda. This release sounds like a swan song for the likes of SAP or Oracle. They had plenty of warnings, but like many in other industries, they buried their heads in the sand.
With Step Functions, AWS now offers all the building blocks to develop a full-blown ERP without the hassle of taking care of all the nitty-gritty details (scaling, resilience, deployment, etc.). To name some of the main ones:
- Database : Redshift / Aurora / DynamoDB
- Data Ingestion : Firehose
- BI : Quicksight
- Business process : Step + lambda
- Mobile : AWS Mobile
SAP, especially, should have taken the hint when it announced the availability of its in-memory database on AWS in 2012 and AWS announced Redshift a couple of days later.
SAP, Oracle, and many others are repeating the same errors that other industry giants fell for:
Failure to master small products :
They are addicted to the revenue they extract from the fat margins of the top 20% of customers (think Nestlé, Coca-Cola, Caterpillar, etc.). These customers deploy massive ERP systems, while smaller customers tend to be frowned upon because the margins extracted from them are not high enough. Sadly, as with the automotive industry, history repeats itself. Detroit's Big Three lost to Toyota because they didn't mind losing the small-car market: margins were small enough that they deemed Toyota could have the small-car market share as long as they retained the higher end. By doing so, however, Toyota gained a foothold and worked its way up the food chain while they lost market share. AWS is doing the same thing. It started with infrastructure and is now on their doorstep, eyeing their crown jewels. By losing mastery of the smaller products, they lose the knowledge necessary to deliver products for the companies that will become the giants of tomorrow.
Failure to embrace cloud :
SAP sold its hosting operations to T-Systems in 2009. It literally sold off all the expertise that would have helped it transition to offering a solid cloud solution. It has been left in the dust by the competition and is fooling itself if it thinks it can catch up. Ellison, on the other side, is trying to fool his shareholders by promising to catch up with AWS. However, at the current rate of Oracle's infrastructure investment, it will reach AWS's current infrastructure size in FIVE YEARS!
Failure to simplify their stack :
Anybody who has used an SAP or Oracle system knows how painful it is to deploy the simplest web service, let alone an ERP system. Moreover, it is almost impossible to learn about these systems unless you work either for a company that uses them or for the companies that produce them.
Failure to learn :
These software giants traditionally didn't run their own systems. And to be honest, hardware and operational costs were often a fraction of the overall licence cost. Because of that, it was easy to tell the customer to throw more hardware at the software problem. However, everything changes when you start to offer your solution as a service. SAP experienced this the hard way with its SAP ByDesign solution (discontinued in 2013 and revived a year later). Rumour was that the company was spending 7 euros to run the system for every euro it was getting from its customers.
However, they didn't learn from their mistakes or change their approach to building, delivering, and running their systems. Look at S/4HANA: even today you cannot run it on anything other than the humongous X1 instance. And this lack of learning seems widespread among the software giants. Surprisingly, we are almost five years after the announcement of HANA availability on AWS, and I have yet to see a customer running production HANA on AWS.
actually they don't us it for prod @jplsegers if you read the case study its only test/dev ... pic.twitter.com/m3hT7bU7au— Benoit Hudzia (@blopeur) November 16, 2016
Because of these failures born of bottom-up blindness, these companies easily fall prey to the Tower of Hanoi fallacy, and as a result they cannot:
- transition to a new value chain
- acquire new technical skills/knowledge
- expand to new market and business model
- compete with ecosystem (cloud) natives
- manage revenue stream self cannibalization
Traditional software giants have feet of clay, and AWS has already chipped its way up to the knee without them noticing.
Wednesday, August 24, 2016
[Links of the Day] 24/08/2016 : Berkeley data science textbook, IaaS pricing trends, Wargaming conference
- IaaS Pricing Patterns and Trends : Interesting to see that Google is really aggressive on its pricing.
- Computational and Inferential Thinking : textbook for UC Berkeley's Foundations of Data Science class.
- Connections 2016 : report from the Connections 2016 conference on wargaming. There is definitely more than a thing or two corporations can learn about leveraging wargames to improve and test various strategies and to understand competitors' behavior. Sadly, the slide decks are not available yet. I wish there were also video recordings of the event.
Labels: data science, IaaS, links of the day, pricing, strategy, wargaming
Wednesday, May 11, 2016
[Links of the day] 11/05/2016: teaching with wargaming , EU FRAND OSS threat, Teams Coordination tradeoff
- Teaching strategy with wargames : fantastic article on the teaching philosophy behind wargaming in the US military (navy and marines).
- FRAND : FRAND stands for Fair, Reasonable, And Non-Discriminatory licensing terms. This is the standard proposed by the EU for the future digital single market. This standard, if implemented, would become a massive roadblock for the use, creation, and consumption of open source software because it would not allow royalty-free licences to be used. However, it's not too late, and EU citizens can comment on the proposal here.
- Coordination amongst teams trade-offs : team coordination is like designing a distributed system: compromises have to be made.
Labels: EU, license, links of the day, management, open source, strategy, team, wargaming
Tuesday, May 10, 2016
Corporate Strategy Conformism
One of the frustrating aspects of corporate behavior is the tendency of a large portion of the enterprise population to choose the most common rather than the most profitable strategy. The natural assumption is that market interaction, among humans and between corporate entities, is driven by the desire to achieve straightforward payoff maximisation. It is not just an assumption; it is often a contractual obligation that management first and foremost consider the interests of shareholders in its business actions.
As a result, conformist strategic behavior among a corporation's executives seems contrary to the requirements of the strategic decision-making process. Yet this behavior is widespread in the corporate world. Every year we see a new fad arrive and spread like wildfire: bimodal from Gartner, "we need a platform", etc.
This generated quite a lot of frustration for me as I tried to understand why such behavior was commonplace. I wasn't fully satisfied by the herding-instinct or widespread-incompetence justifications that are so often put forward. They just didn't add up: in a competitive system, under-performing strategies should have been eradicated long ago by evolutionary pressure, and yet the conformist strategy persisted.
Behavioral conformity :
Recently, I came across a series of game theory papers [ 1 - 2 - 3 - 4 ] that provide the beginning of an answer. This research indicates that spatial selection for cooperation is enhanced if an appropriate fraction of the population chooses the most common rather than the most profitable strategy within its interaction range.
One of the premises of this research is that humans are social animals, driven not solely by the desire to maximise fitness but also by the aim to socialise and identify with a group of like-minded individuals. The main idea is that some of the individuals participating in a competitive game adopt not the most profitable strategy but the one that is most common in the group, or within their interaction range. Another interesting side effect of this behavior is that both the individual and the group benefit from the homogeneity of the strategic approach, which can explain why the behavior persists in the face of more individualistic corporations.
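To illustrate the mechanism, here is a small toy sketch of my own (not a reproduction of the cited papers' models; every parameter value is an arbitrary assumption): agents on a ring play a prisoner's dilemma with their neighbours, and a fraction of updates are conformist (copy the most common strategy in range) rather than payoff-driven (copy the best performer).

```python
# Toy spatial prisoner's dilemma with a mix of conformist and payoff-driven
# strategy updates. Parameters are arbitrary, chosen purely for illustration.
import random

N = 200                  # number of agents on the ring
RADIUS = 2               # interaction range (neighbours on each side)
ROUNDS = 200
CONFORMIST_SHARE = 0.3   # fraction of updates driven by conformity
R, S, T, P = 3, 0, 5, 1  # classic prisoner's dilemma payoffs

def neighbours(i):
    return [(i + d) % N for d in range(-RADIUS, RADIUS + 1) if d != 0]

def payoff(me, other):
    if me and other:
        return R        # both cooperate
    if me and not other:
        return S        # I cooperate, they defect
    if not me and other:
        return T        # I defect, they cooperate
    return P            # both defect

strategies = [random.random() < 0.5 for _ in range(N)]  # True = cooperate

for _ in range(ROUNDS):
    # Accumulated payoff of each agent against its neighbourhood.
    scores = [sum(payoff(strategies[i], strategies[j]) for j in neighbours(i))
              for i in range(N)]
    new = strategies[:]
    for i in range(N):
        hood = neighbours(i)
        if random.random() < CONFORMIST_SHARE:
            # Conformist update: adopt the most common strategy in range.
            coop = sum(strategies[j] for j in hood)
            new[i] = coop * 2 > len(hood)
        else:
            # Payoff-driven update: imitate the best-scoring neighbour.
            best = max(hood, key=lambda j: scores[j])
            if scores[best] > scores[i]:
                new[i] = strategies[best]
    strategies = new

print("final cooperation level:", sum(strategies) / N)
```

Raising CONFORMIST_SHARE tends to freeze the ring into large homogeneous blocks of identical strategies, which is the clustering effect discussed further down.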
Since executives in a corporation are still human (until proven otherwise), we can assume these behaviors transpose, to a certain extent, to their strategic decision-making process, with conformity acting as an alternative to straightforward payoff maximisation in cooperation strategies. This begins to provide a valid explanation for the copycat behavior of management. Naturally, not all participants are conformists, but they all have a tendency to become so, to a certain degree.
Within a network of enterprises doing business and competing against each other, you end up with participants exhibiting various degrees of conformist tendency, and this distribution and diversity ultimately shapes the ecosystem landscape. However, the amount of information available for strategic decision-making greatly influences the conformity tendency, and this is often abused by consulting and analyst companies.
Conformity by Information overload :
It has been demonstrated that the conformist tendency to choose a particular strategy is reinforced as the amount of available information increases. That is why consulting and analyst companies are able to hook so many corporations with a similar pitch. When overwhelmed with information, humans tend to turn to a third party to make the decision for them.
This approach is well known and applied in retail. For example, as soon as you step into a mobile phone shop, you are assailed by a multitude of phones to choose from. As you start to feel disoriented by this tsunami of choice, the "helpful" shop assistant quickly steps in to advise you. At this stage, customers are more than happy to be led to the "ideal" phone they need.
In a similar scenario in the corporate world, management often relinquishes its strategic decision power after being bombarded with information, an onslaught orchestrated by the same businesses that will then sell them the strategy they "need".
Luckily for the advisors and the advised corporations, conformist behavior delivers some non-negligible benefits.
Benefits of conformism :
What is interesting is that the research [ 5 - 6 ] shows that players adopt the strategy that is most common within their interaction range regardless of the expected payoff. While you would expect this to be detrimental, it has been shown that participants adopting the conformist approach coordinate their behavior in a way that minimises individual risk and ensures their payoff will not be much lower than average.
Moreover, the effect of conformism in game theory is similar to the one we witness in the business world. It fosters the emergence of large homogeneous clusters competing with each other in the same ecosystem. Participants in these groups benefit from cooperation by virtue of network reciprocity, and through this network reciprocity effect, they benefit from a minimisation of the risk of invasion by defectors.
As a result, while you never get the chance to greatly outperform the market, you still provide a better-than-average return while benefiting from group protection. These kinds of results are generally regarded as positive by company boards, and hence conformist behavior is rarely questioned and usually encouraged (though not always consciously).
Leveraging conformism through situational awareness
The great thing about conformist behavior is that once you have learned to spot it within the business ecosystem, you can leverage it to your advantage. By drawing a map of a business's ecosystem, you can identify its participants and their strategic approaches, and spot homogeneous clusters competing in the population.
Armed with this information, you can identify weak clusters to disrupt. Weak clusters are often characterised by the presence of conformist leaders, corporations with high collective influence in the network. If leaders conform, they lose the ability to capitalise on their central position within the network, forfeiting their capacity to search for a more successful strategy. This means the businesses within the cluster will suffer from a form of sclerosis and won't be able to coordinate and/or formulate a counter-strategy when challenged. As a result, individual corporations in the cluster become more vulnerable as the network effect dwindles.
Another way to leverage conformism to your advantage is to spot companies that copy a neighbour's strategy when the neighbour's business belongs to a different evolutionary stage. Imagine that Company A sells utility components and uses a platform strategy, while Company B uses Company A's product but sells custom components. If Company B decides to mimic Company A's platform play, you then have the opportunity to out-compete Company B by industrialising Company B's components.
Conformism or Anti-conformism ?
The conformist approach can be a valid choice, as it creates non-negligible benefits. However, this needs to be an informed choice, not a contrived one. Corporations need to understand the state of the playing field and decide whether leveraging the cluster effect will benefit or hinder them. Moreover, by understanding when network cooperation effects work against them, companies can tailor their own strategy to take advantage of the conformist tendencies of their surroundings. In the end, it boils down to understanding the surrounding corporate environment and knowing when to blend in or not.
Labels: conformism, corporate, game theory, management, situational awareness, sociology, strategy
Tuesday, February 16, 2016
Is Amazon using Lumberyard to replicate its Video business model in Gaming?
Amazon recently launched its own game engine: Lumberyard. This should not come as a surprise given the stream of high-profile investments Amazon has made in the field over the past couple of years: Twitch.tv, or licensing Crytek's engine (which forms the basis of Lumberyard), to name a few. Moreover, I will not expand on Amazon's underlying strategic play, as Simon Wardley already did a brilliant job explaining it here and there.
A lot of the discussion analyzing Amazon's move has centred on the long-term strategic play in the AR/VR field. However, in the short term, Amazon might be aiming to accelerate the shrinkage of the value chain while moving away from the traditional gaming-industry business model toward a service-based approach and, ultimately, a complementary business model.
Historically, work-for-hire and royalty-advance practices generated significant upfront fixed costs in the video game development business model, which made publishers the de facto main financial operators. Publishers typically mitigated these financial risks via portfolio management, which exacerbated the reliance on franchise games (86% of the market).
With the switch to digital distribution platforms and the explosion of mobile gaming, physical logistics needs drastically decreased while the barriers to entry vanished. This commoditization trend effectively shrank the value chain significantly, as shown in the diagram below.
Moreover, technology evolution enabled an increased variety of revenue models:
- Subscription : Subscribers pay periodically to get access to the game (ex: World of Warcraft)
- Utility : metering usage, i.e. a pay as you go approach. This model is widely used among MMOs in China.
- Advertisement : sometimes used in combination with other models in order to enhance revenue. Pure advertisement models are mainly found on mobile.
- Micro-transaction model : dominates Eastern markets
- Licensed : the historical revenue model
- Free to play : a combination of other revenue models, e.g. advertisement + micro-transactions.
There are two other business models that are still nascent in the gaming industry: service and complementary. And this is where, I believe, Amazon has been aiming all along with its gaming push.
If we look at the value chain above, Amazon's plan seems extremely straightforward. By facilitating production systems via "free" access to Lumberyard, Amazon eases the emergence of new gaming studios. This open platform, with an efficient underlying support system (AWS) and great customer exposure (Twitch.tv), will drive the commoditization of content creators and, by transitivity, of content itself. This approach literally cuts the grass from under the feet of traditional gaming corporations that relied on high barriers to entry (game engine licensing, distribution networks, backends, etc.).
Looking beyond the purely technological aspect, we can quickly theorise that Amazon might be aiming to pivot the gaming revenue model completely. Amazon could push for a Netflix-like service model. However, there is a greater chance that it will follow the same approach it used for Amazon Video: Amazon could start offering video game access (downloads via an app store, Steam-style, first; streaming later) free as a complement for Amazon Prime customers, with Prime serving as an incentive and creating more lucrative cross-sell and up-sell opportunities. The gaming service attracts customers to the Amazon store, where they can purchase the content that is not available for free, as well as other products from Amazon. Moreover, the overall business-model effect would be further reinforced through the Twitch.tv broadcast platform.
Obviously, to support and accelerate this model, Amazon will need to start producing its own games. It needs to offer an attractive gaming experience that cannot be easily replicated while co-opting the rest of the industry at the same time.
One of the key elements regarding the pace of change will be the commoditization of the hardware platform and the co-optation of existing ones. If Amazon is able to broker a deal with Microsoft or Sony (the latter is more likely because they already run their services on AWS), it would gain a foothold in the gamer market. Moreover, by co-opting the "hardcore" PC market, the casual TV market (Fire TV), and mobile, Amazon should be able to squeeze out the competition. Even if the console makers put up a fight, Amazon would be able to set any market gains in concrete by enrolling top game studios and capturing gaming franchises.
Last but not least, the value of console hardware is dropping fast while the value of console software is increasing and already exceeds it. A similar relationship holds for handheld devices, with an even greater gap. Amazon just has to wait for the gap to reach a critical point and then wipe out the nascent video-game streaming industry by leveraging its existing expertise from VDI (WorkSpaces). All of this would be a textbook replay of the Amazon Video strategy.
Gaming is about to enter a new era. While the AR/VR future is exciting, a gaming business-model war is looming that will hit well before these technologies reach maturity.
Labels: amazon, aws, business, business intelligence, business model, gaming, lumberyard, mapping, strategy, value chain
Wednesday, February 10, 2016
The Tower of Hanoi Fallacy
The drive to pursue upward motion in the value chain is commendable in the eyes of shareholders. However, it often results in a costly failure or, worse, in becoming a zombie company (Yahoo...). Most of the time, such approaches fail because corporations do not understand and satisfy the core requirements driving the Innovate - Leverage - Commoditise strategy through a real ecosystem play. As a result, companies end up deploying brittle strategies with very weak underlying gameplay, which under-perform in a highly competitive market.
One of the main reasons behind the failure of such strategies is the company's lack of understanding of its customers' ecosystem. Software vendors, for example, have the false sense that they understand the use of their competencies (database systems or ERP) beyond the customer horizon. They regularly misunderstand the higher levels of the stack valued by customers. Salesforce customers do not care whether the underlying DB is from Oracle or is SAP HANA. In AWS's case, nobody cares whether it runs Xen, KVM, or VMware. This is perfectly illustrated in the cloud value chain map below: the only important part is the last layer exposed to the customer itself. Anything under it is just the submerged part of the cloud iceberg.
Additionally, companies do not only fall prey to the Stack Fallacy. Sometimes, rather than (just) trying to move up the stack, corporations also try to move laterally in a bid to absorb an adjacent stack. Large software vendors (e.g. Oracle and SAP) are quite attracted to this Tower of Hanoi play as they look to force their way into a parallel stack with alternative business models. Recently, these very large software corporations embraced such a strategy by trying to compete directly against IaaS and PaaS solutions rather than co-opting them.
These large companies tend to have a seemingly good awareness of their competencies, based on the resources and knowledge gathered via their marketing departments, sales teams, R&D departments, etc. This awareness allows them to maximise revenue from the software value chain (mapped below; an interpretation and trimmed-down version of the software value chain by Pussep et al.).
However, these companies have built an enormous revenue base on this type of value chain, and transitioning to a different consumption model generates a struggle rooted in the financial implications of moving from a licensing to a SaaS model, combined with the self-cannibalization of their existing revenue streams. Basically, they cannot (out)innovate the competition in the market they are transferring into because of internal financial tension.
Moreover, while they have a good understanding of their own value chain, they often completely underestimate the new one they need to adopt. The assumption is that it is just another Tower of Hanoi play.
Rather than co-opting existing platforms by building on top of the cloud value chain as a customer, they suddenly need to expand their capabilities in order to internalize the new requirements. You can see in the diagram below the daunting challenge they are facing, not to mention that they still need to maintain their old revenue streams in order to fund the transition. This bipolarity is reminiscent of Gartner's bimodal strategy, which is often considered harmful.
As a result, without situational awareness, it is easy to fall prey to the Tower of Hanoi fallacy, as it is near impossible to:
- transition to a new value chain
- acquire new technical skills/knowledge
- expand into new markets and business models
- compete with ecosystem natives
- manage revenue stream self cannibalization.
You might ask what type of play these behemoths (SAP, Oracle) should adopt. Here is a very succinct possible strategy, based on Wardley's IBM vs AWS play.
- Target a proxy / platform play by leveraging the inherent inertia of their customers. Hint: the lifespan of an SAP ERP solution averages 15 years.
- Provide an AWS/Azure/Google cloud proxy service for customers and third-party ISVs. Their customers are looking for stable, robust, and reassuring solutions for the backbone of their enterprise. This provides, first, a safe path for extending "legacy" on-premise solutions with external cloud services (even, and especially, not their own), and second, a migration path to full-blown cloud deployment.
- Play the platform card, but without recreating their own cloud IaaS. SAP and Oracle already have something in that domain, which should help with the transition as long as egos don't get in the way. Then encourage the rest of their ecosystem to join while focusing on operational excellence. To be honest, they have such a massive ecosystem that it is surprising they haven't tried to push this further already. Leveraging Cloud Foundry is another option, though it might be too late for that.
- Monitor the growing ecosystem in order to spot successful emerging services. Acquire them rather than copying them, as in the beginning it is all about ecosystem expansion rather than commoditization. This will help create confidence in the solutions and attract newcomers with the potential promise of future acquisition.
- Offer a form of cloud insurance market solution as a long-term play. They can leverage it to expand their data-mining capabilities while satisfying customers' risk-averse needs.
Labels: cloud, enterprise, erp, IaaS, mapping, oracle, SaaS, SAP, stack fallacy, strategy, value chain, wardley
Wednesday, February 03, 2016
AzureStack : Beyond hybrid cloud play, capturing the future "near shoring" cloud market
Microsoft released Azure Stack last week. This solution, built on top of the forthcoming Windows Server 2016, enables customers to deploy a private cloud with hybrid capabilities. The product extends beyond the limited subset of Azure features offered by Azure Pack, as Microsoft promises a full 1:1 code and feature match with the Azure cloud.
Microsoft argues that the real value of this offer stems from the combination of scale, automation, and app development capabilities that evolved out of the Azure platform, and that Microsoft is bringing these to the enterprise through Azure Stack.
In reality, this offer is a private-cloud Trojan horse for the public cloud. From this venture, Microsoft will be able to gather immense value by capturing workload information via its APIs while sitting adjacent to customer data. Not only does it allow Microsoft to tailor Azure to ease cloud service adoption, it also allows it to gather invaluable information about potential services that could be internalized within its own cloud solution via an ecosystem play. To a certain extent, it permits Microsoft to outplay Amazon by reaching directly into customers' premises without having to front the CAPEX.
To be honest, I think Microsoft Azure private cloud is Sauron like genius - https://t.co/UnFD0W468E - for 2009. Problem is, it's 2016.— swardley (@swardley) February 2, 2016
Some may argue that Microsoft is late in deploying this strategy, and I tend to agree with that analysis. But I would also argue that Microsoft might be trying to jump an evolutionary step in the history of computing. One possibility is that Microsoft is looking to capture the future market of cloud near-shoring via Azure Stack.
What is cloud near-shoring, you might ask? It is the opposite of cloud bursting: it allows a company to move critical workloads and/or data closer to the end user, in this case literally within the company itself, while retaining the majority of its operations in a public cloud.
You have to remember that we are still bound by the laws of physics. We cannot transfer information faster than the speed of light, and as a result we have physical restrictions when it comes to bandwidth and latency. Moreover, new workloads are emerging that will require closely geo-replicated assets, which will bootstrap the need for such cloud usage patterns. Immersive VR is one example; another is real-time business analytics combined with machine intelligence. In the future, companies might host 90% or more of their workloads and data in the cloud while running part of the services closer to where they are needed.
The possibilities around this concept are quite vast: you could offer on-premise or near-premise near-shoring solutions via a form of CDN for cloud workloads (a "CWDN"), or Microsoft could resell your unused private-cloud resources to local users on your behalf, for example.
As you can see, the stated intent of enabling a private-to-public cloud transition via an on-premise cloud might just be a deceptive late move enabling Microsoft to capture the next cloud market use case.
Wednesday, January 27, 2016
Brittle vs Ductile Strategy
Companies and startups often pursue a path of "brittle strategy", and in its execution it can be translated, in layman's terms, into something like this:
Heard about the guy who fell off a skyscraper? On his way down past each floor, he kept saying to reassure himself: “So far so good... so far so good... so far so good.” How you fall doesn't matter. It's how you land!- Movie : La Haine (1995)
Brittle strategy :
A brittle strategy is based on a number of conditions and assumptions which, once violated, make it collapse almost instantly or fail badly in some way. That does not mean a brittle strategy is weak: the conditions may well hold in many cases, and the payoff from using such a strategy tends to be higher. The danger, however, is that it provides a false sense of security in which everything seems to work perfectly well until it suddenly collapses, catastrophically and in a flash, like a house of cards. Employing such an approach enforces a binary resolution: your strategy will break rather than bend, simply because there is no plan B.
From observation, the strategy landscape of medium to large corporations is often dominated by brittle "control" strategies as opposed to robust, ductile ones. Both approaches have their strengths and their applicability to winning the corporate competition game.
The key to most brittle strategies, especially control ones, is to learn every opponent's options precisely and allocate the minimum resources to neutralizing them while, in the meantime, accumulating a decisive advantage at the critical time and place. For larger corporations, this approach is often driven by the tendency to "feed the beast" within the company, that is, to allocate resources to the most successful and productive department, core product, and so on. While this seems to make sense, the perverse effect is that it becomes quite hard to shift resources in order to handle market evolution correctly. As a result, the company gets blindsided by a smaller player, which in turn uses a similar brittle strategy to take over the market.
The startup and small-company ecosystem often opts for brittle strategies out of necessity, due to economic constraints and ecosystem limitations: because they do not have the financial firepower to compete with larger players over a long stretch of time, they need to approach things from a different angle. These entities are forced to select an approach that lets them abuse the inertia and risk-averse behavior of larger corporations. They count on the tendency of larger enterprises to avoid brittle strategies designed to counter other brittle strategies, since such counter-strategies often fail within the bigger market ecosystem, being near-guaranteed to fail against more generic ones. Hence, small and nimble companies try to leverage the opportunity to gain enough market share before the competition is able to react.
Ductile strategy :
The counterpart of the brittle strategy is the ductile strategy. This type of strategy is designed to have fewer critical points of failure, while allowing survival even if some of the assumptions are violated. This does not mean the strategy is generally stronger, as the payoff is often lower than with a brittle one; it is just perceived as safer at the outset.
This type of approach will fail slowly under attack while making alarming noises. To use an analogy, it is similar to a suspension bridge built with stranded cables: when such a bridge is on the brink of collapse, it makes loud noises, allowing people to escape danger. A company can leverage similar warning signs, if the correct tools and processes are put in place, to correct and adapt in time, mitigating and avoiding catastrophic failure.
To a certain extent, the pivot strategy for startups offers a robust option for testing the viability of different hypotheses about the product, business model, and engine of growth. It basically allows the company to iterate quickly over brittle strategies until a successful one is discovered. Once found, the company can spring out and try to take over the market using this asymmetric approach.
For a bigger structure, the PST model combined with mapping provides an excellent starting point, as long as you have engineered within your company the correct monitoring system to understand where you stand at any time. Effectively, you need to build a layered strategic approach via core, strategic, and venture efforts, combined with constant monitoring of your surroundings. This allows you to take risks with calculated exposure. With a correct understanding of your situation (situational awareness), you will be able to mitigate threats and react quickly via built-in agility.
However, we cannot rely solely on techniques that allow your strategy to take risks while failing gracefully; we need techniques that do so without significant added cost. The cost differential between stranded and solid cables in a bridge is small, and likewise the operational cost difference between a ductile and a brittle strategy should be low. This topic is beyond the scope of this blog post, but I will endeavour to expand on it in a subsequent post.
Ductile vs Brittle :
The defining question between the two types of strategy is rather simple: which approach will guarantee a greater chance of success? From a market point of view, this question often turns into: is there a brittle strategy that defeats the robust strategy?
By estimating the probability of success a brittle strategy has against each of the other strategies in use, weighted by how often each strategy is used by each competitor, you can determine its overall chances of success.
Doing this analysis is a question of understanding the overall market meta-competition. There will be brittle strategies that are optimal at defeating other brittle strategies but fail against robust ones. The robust strategy, meanwhile, will succeed against certain brittle categories but be wiped out by others. Worse still, you have the recipe for a degenerate competitive ecosystem if any one strategy is too good or counter-strategies are too weak overall.
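As a back-of-the-envelope illustration of that weighting (the numbers below are entirely made up), the expected chance of success of a strategy is simply its win probability against each opposing strategy, weighted by how prevalent that opposing strategy is in the market:

```python
# Toy illustration of the weighting described above, with made-up numbers:
# P(my strategy beats an opposing strategy) weighted by how often that
# opposing strategy appears among competitors.
win_prob = {
    # my strategy -> {opposing strategy: probability of beating it}
    "brittle": {"brittle": 0.65, "ductile": 0.30},
    "ductile": {"brittle": 0.55, "ductile": 0.50},
}
# Assumed share of competitors currently playing each strategy.
market_mix = {"brittle": 0.7, "ductile": 0.3}

for mine, row in win_prob.items():
    expected = sum(row[theirs] * share for theirs, share in market_mix.items())
    print(f"{mine}: expected chance of success = {expected:.2f}")
```

The interesting part is that the answer depends entirely on the market mix: shift the shares and the "best" strategy flips, which is exactly the meta-competition problem.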
Identifying the right strategy is an extremely difficult exercise. Companies do not openly expose their strategies, and often they do not have a clear one in the first place. As a result, if there is a perception that the brittle strategy defeats the ductile one, the brittle approach ends up dominating the landscape. Strategy consulting companies often rely on this perception to sell the "prêt-à-porter" strategy of the season.
Furthermore, ductile strategies are often dismissed: not only do they require a certain amount of discipline, but the effort required to make them succeed can be daunting. They require a real-time understanding of the external and internal environment. They rely on the deployment of a fractal organisation that enables fast and risky moves while maintaining a robust back end. And finally, they require the capability and the stomach to take risks beyond maintaining the status quo. As a result, the brittle strategy often ends up more attractive because of its simplicity, all the more so as it benefits from an unconscious bias.
The Brittle strategy bias:
Brittle strategies have problems "in the real world". They are often derailed by unforeseen events. The problem is that we react and try to fix things going forward based on previous experience, but the next thing is always a little different. Economists and businessmen have names for the strategy of assuming the best and bailing out if the worst happens, like "picking up pennies in front of steamrollers" and "capital decimation partners".
It is a very profitable strategy for those who are lucky and for whom the "bad outcome" does not happen. Indeed, a number of "successful" companies have survived the competitive market using these strategies, and because the (hi)story is only told from the winners' side, we inadvertently overlook those that didn't succeed. This in turn means a lot of executives fall for the siren song of survivorship bias, dragging more and more corporations into similar strategies alongside them.
In the end, this whole lot ends up suffering from a more generalised Red Queen effect, whereby they spend a large amount of effort standing still (or copying their neighbours' approach). This is why, when a new successful startup emerges, you see a plethora of similar companies claiming to apply a similar business model. At the moment it's all about "Uber for X" and its variants. If they are lucky, they will end up mildly successful. But most of them will fail, as the larger corporations have already been exposed to, and probably bought into, the hype of the approach.
Labels: brittle, ductile, enterprise, startup, strategy
Wednesday, January 06, 2016
Links of the day 06/01/2016: art of ware, data analytic platform, #blockchain in 2015 #fintech
- Qminer : a data analytics platform for processing large-scale real-time streams containing structured and unstructured data. Really cool if you want to provide real-time classification or sentiment analysis within your product. [github]
- The art of ware : a great reinterpretation of Sun Tzu's classic for creating and marketing IT products. A classic and a must-read.
- Blockchain 2015 : slide deck presenting an analysis of blockchain in the financial services landscape.
Labels: analytic, blockchain, fintech, links of the day, strategy
Monday, November 23, 2015
Canonical Land and Grab Strategy to capture the private cloud market.
Canonical recently released its Autopilot product. Autopilot fits within the bring-your-own-hardware (BYOHW) market segment for private cloud. The product lets you deploy and manage a full OpenStack cloud, and Canonical makes heavy use of its Juju, MAAS, and other in-house (but open-sourced) software. You can get a glimpse of how the product works in this blog post, which also conveys the complexity associated with deploying and orchestrating an OpenStack cloud. However, what is even more interesting is the pricing strategy, which offers an insight into how Canonical plans to "land & grab" the private cloud market.
Land :
For anything under 10 servers, it's basically free (but with no support). Note that you need at least 5 servers for a deployment and 6 for HA, so you have a little wiggle room there, but not much. With this strategy, Canonical specifically targets the biggest user segment of OpenStack. Remember that most deployments (78%) are under 100 nodes, with 36% under 10. Moreover, according to the survey, most of these small deployments are DIY and use off-the-shelf OpenStack solutions (source or distro). We can deduce that there is very little sales and profit potential in this segment, as these users rarely have the budget; most of the time they rely on in-house expertise and community support.
However, it is quite smart to try to hook these users by providing an easy-to-deploy, robust, and maintainable solution. As the user base is large, there is significant market potential for some of them to upgrade once they reach a certain size, their needs change, and the benefits of the paid solution arise. Obviously, the objective is to use the land-and-grab approach with the lowest barrier to entry possible (as in free) in order to "land" the biggest OpenStack user segment out there.
Grab:
If we now look at the pricing model, we notice that Canonical offers three types of payment plan: per VM/hour ($0.003/hour), per server/year ($1,000/year), or per availability zone. If you run less than the equivalent of 39 VMs full time for a year per server (see chart below), you are better off going for the pay-as-you-go VM model. This might be a smart choice if you tend to run big VMs, such as those for big data applications. However, if you run a horde of small VMs, you might quickly pay far more than with the per-server pricing plan, especially if you cram 1,000+ VMs per rack.
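A quick back-of-the-envelope check of that 39-VM break-even figure, assuming the per-VM rate is billed for VMs running 24/7 all year:

```python
# Back-of-the-envelope check of the ~39-VM break-even point mentioned above,
# assuming the per-VM rate is charged for VMs running 24/7 all year.
VM_HOURLY_RATE = 0.003        # $ per VM per hour
SERVER_YEARLY_RATE = 1000.0   # $ per server per year
HOURS_PER_YEAR = 24 * 365

cost_per_vm_year = VM_HOURLY_RATE * HOURS_PER_YEAR          # ~$26.28
break_even_vms = SERVER_YEARLY_RATE / cost_per_vm_year      # ~38 VMs

print(f"one VM, full time, for a year: ${cost_per_vm_year:.2f}")
print(f"break-even VMs per server:     {break_even_vms:.1f}")
# Below roughly 38-39 full-time VMs per server, the pay-as-you-go plan is
# cheaper; above that, the per-server plan wins.
```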
The nice thing about this tiered approach is that it lets you test the water with the pay-as-you-go model and then switch to a more "classic" model if you discover that you will cross the 39-VMs-per-server ratio. Naturally, heavy users will prefer the per-server model, but the dynamic one will be more easily adopted by a crowd already accustomed to AWS's cloud pricing model.
Moreover, even without a highly dynamic instance count, the per-VM/hour model should remain the preferred payment option for the majority of small to medium OpenStack deployments. Based on the survey, the vast majority (74%) of deployments have fewer than 1,000 VMs. If we factor in that 78% of deployments have fewer than 100 nodes, we can safely assume that the average number of VMs per node is below 39. As you can see, this pricing model is a very smart way of targeting the OpenStack ecosystem by reducing the friction of transitioning from the free to the paying model.
Retain
Obviously, once they cross the 75-server line, customers might want to switch to the availability zone pricing plan. However, based on the OpenStack statistics, there are very few deployments with more than 100 nodes out there. As a result, this pricing tier is probably there to show a natural upgrade path in pricing and support for potential customers. The objective is to demonstrate that Canonical can service the complete spectrum of OpenStack customers. While I do not yet know how well Canonical fares on the integration/consulting side against Mirantis or other integrators out there, I suspect it wants to show a full end-to-end pricing model and solution in order to be taken seriously by the market. Moreover, it helps reassure customers that Canonical can accompany them as their cloud needs grow.
Wednesday, November 11, 2015
Links of the day 11/11/2015 : Networking for storage, Google ML and Platform design toolkit
- Low Latency Networking for Storage : overview of Intel's networking stack and libraries for storage, from the excellent Tech Field Day events.
- TensorFlow : Google open-sources "some" of its machine learning tools.
- Platform Design Toolkit : an interesting toolkit and canvas focusing on how to create a platform to access and leverage the potential of an ecosystem, which is increasingly recognised as the most powerful way to achieve market success.
Labels: ecosystem, links of the day, machine learning, networking, platform, storage, strategy
Wednesday, September 09, 2015
No, "you weren't ahead of time", you just were riding the wrong diffusion curve
"Launched ahead of their time": a claim a lot of startups (and indeed more established companies) use to explain their product's failure. In some rare cases a product truly is ahead of its time: there is no market for it at all and no supporting component within the supply chain to make it commercially and economically viable. But in most cases, these claims boil down to a lack of traction for the offering.
In this blog post, I will focus on the "prematurely interrupted" hockey-stick growth curve that some companies experience and the misunderstanding surrounding it. It looks and feels like exponential growth, but the ride terminates far earlier than the market research predicted. Incomprehension, surprise, and denial are common when sales flatline, because customer feedback was great. As a consequence, companies use the "ahead of their time" excuse to explain their failure. However, the truth is that the market for the product they built simply dried up.
Often these companies misunderstood the reality of the diffusion-of-innovations curve presented below. As successive groups of consumers adopt the new technology (shown in blue), its market share (yellow) eventually reaches saturation. The common misinterpretation is that technology adoption implies the same product consumption across every consumer group.
In this graph, each phase of adoption is represented by a different customer group that requires a tailored product in order to adopt it. While the concept, and to a certain extent the technology, is similar across consumer groups, the actual product may vary drastically in shape and form. As a result, the technology, product, and consumption model evolve with each phase, at different paces. In the graphic below, I have overlaid the actual diffusion curve of each sub-group on top of the diffusion-of-innovation curve to make this clearer. Note that this concept is derived from Wardley's mapping technique, which ties diffusion and evolution together within a single map.
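As a rough illustration of that overlay (my own sketch, not the post's original figure: the segment sizes follow the classic Rogers percentages, but the timings and paces are arbitrary assumptions), the overall adoption curve can be modelled as the sum of one logistic curve per adopter sub-group:

```python
# Overall adoption modelled as the sum of per-segment logistic diffusion
# curves, each with its own timing and pace. All parameters are illustrative.
import math

def logistic(t, midpoint, steepness, size):
    """Cumulative adoption of one sub-group at time t."""
    return size / (1.0 + math.exp(-steepness * (t - midpoint)))

# (market share, midpoint of adoption, steepness) per sub-group.
segments = {
    "innovators":     (0.025, 2.0, 2.0),
    "early adopters": (0.135, 4.0, 1.5),
    "early majority": (0.340, 7.0, 1.0),
    "late majority":  (0.340, 10.0, 1.0),
    "laggards":       (0.160, 13.0, 0.8),
}

for t in range(0, 16, 3):
    total = sum(logistic(t, mid, k, size) for size, mid, k in segments.values())
    print(f"t={t:2d}  overall market share ~ {total:5.1%}")
```

The overall curve looks smooth, but a product that only ever fits one segment saturates at that segment's size, which is the "prematurely interrupted" hockey stick described above.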
As you can see, each customer type represents an independent sub-market with its own characteristics and inertia. It can be extremely easy to become trapped within a customer sub-ecosystem. Companies often validate their products within such a subspace and show impressive stats along a number of dimensions, such as high engagement, viral coefficient, or long-term retention. However, what is important to understand is how big the customer market you validate your product in actually is, and whether it belongs to a bigger ecosystem. Without this information, a company can quickly end up trapped in a local maximum. Companies then get boxed into a single line of creative design thinking, making tiny incremental improvements but never looking beyond that one solution. They become addicted to the positive reinforcement created by their customer feedback, which prevents them from looking beyond that one solution toward an innovative solution along different creative lines of thinking. That's how you end up with HipChat vs Slack: the only real difference between the two is the packaging of the technology, and it allows one to thrive along a bigger diffusion curve while the other seems stuck.
As mentioned, the technology evolves over time and with each diffusion wave, quite often from genesis to custom-built to product and finally to utility. However, there are many chasms to cross, as a multitude of competing versions are created, evolve, and die. Crossing from one stage to another requires understanding not only the technological requirements of the new consumption model for the diffusion curve, but also the economic imperatives associated with it, as shown in the graphic below. The reality is that the market fabric is a fractal tissue made of a multitude of diffusion curves. You have the actual technology evolution, as shown in the graph below, and for each of these curves you have similar sub-curves representing the various adoption rates. These sub-curves are then subdivided and overlapped with smaller ones created by each company's products and services competing within the space.
This complex fabric creates a difficult environment for determining the correct strategy to apply. Identifying the current state of the ecosystem, its direction, and when to adapt is a daunting task with a multitude of variables to take into consideration (which I might take a stab at in a future post). The lucky, or the visionary, who spot the trend early enough may then attempt to sell early or pivot their strategy. Pivoting is a rather difficult operation to execute correctly, or even at the right time: too early or too late and you can lose the momentum of the current diffusion wave while the next one has not yet picked up. In that case, your capacity to wait it out depends ruthlessly on your burn rate. Many companies fail at that stage simply because of bad timing.
To conclude, when a product, company, or startup claims to have failed in its endeavours because it was "ahead of its time", this is often a misconception. In reality, and unfortunately in the majority of cases, they simply did not understand the ecosystem they evolved in and got stuck in a local maximum. For some, it turned into a kiss of death; for others, a curse of zombification.
Monday, August 17, 2015
Links of the day 17/08/2015 : #NVMe & #RDMA , #Strategy , Cryptography in hostile environment
- NVMe over RDMA fabric : interesting bit, PMC Sierra and Mellanox unveiled NVMe over RDMA fabric as well as peer-direct technology for NVM storage. This opens up a world of possibilities where you could combine GPU, NVM(e), and RDMA without CPU involvement, literally offloading all the storage operations.
- Strategy Scenario and the use of mapping : excellent series of posts by Simon Wardley showing how leveraging his mapping technique allows CEOs and CIOs to navigate tortuous strategic decisions. The analysis of the scenario can be found here.
- The network is hostile : TL;DR: we don't encrypt enough and early enough
Labels: cryptography, links of the day, Mellanox, network, nvme, rdma, strategy
Wednesday, July 23, 2014
Links of the day 23 - 07 - 2014
Today: a really cool programming language, code optimization, AWS profitability, an algorithms book, and, last but not least, strategy.
- Commuter : an automated scalability testing tool that hunts down unnecessary sharing in your code. It tries to identify shared cache lines that can severely limit your software's ability to scale. [github]
- No more hiding cloud revenue any longer for Amazon : for years, Amazon Web Services hid its revenue in the shadow of Amazon.com’s quarterly earnings statements. However, due to AWS growth and US accounting regulations, it might finally need to disclose its profitability in more detail.
- Algoxy : Book of Elementary Algorithms and Data structures
- Two very nice blog posts from Simon Wardley - Playing chess with companies and Notes on organisation - a very good introduction on how to use mapping to define and understand a strategy.
- Escher : a language for programming in metaphors. This is a really cool programming language with a lot of potential, but probably also too far-reaching and complex for your average developer.
Labels: code, escher, links of the day, optimization, programming languages, strategy
Sunday, January 24, 2010
After "follow the moon", avoid the law
After the "follow the moon" green trend in cloud computing, companies are finally catching up to the potential of cloud computing to avoid constraining legal and tax systems: like this company from India.
Companies will soon realise that they can leverage cloud computing the same way they use offshore accounts and tax havens. Not only will they be able to dodge "optimize" taxes, but they will also be able to avoid limiting legal systems by moving the data, or the processing of the data, to a less restrictive location.
Currently, governments force companies to keep their data on the country's soil in order to control its use through legal means. However, with the cloud it is really easy to move compute loads to where the legal and regulatory environment is more favourable, while leaving the data where it is.
How fast will the law catch up? Not fast enough. And the problem gets worse when you consider that legal systems would need to be harmonised at a planetary scale to be of any use. I already foresee countries creating data and processing tax-haven laws in order to attract cloud providers and their customers, the same way Ireland slashed its corporate tax to attract companies onto its soil.
By allowing companies to access, store and process data in a fast, seamless and transparent way, the cloud is creating a legal void that companies will exploit in order to maximise their profits and minimise their exposure. How long will we have to wait for cloud providers to advertise their new feature: legal cloud location zones? It has already been done:
According to Microsoft, the geolocation feature is also necessary for legal reasons since many Azure users apparently have “requirements on where they can place their code and data and where they cannot.”
I think there is a burgeoning market for IT consultants and lawyers in legal IT optimization strategy.
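As a minimal sketch of what such "legal location zones" look like in practice today, region pinning is already a first-class knob in cloud SDKs. The example below uses boto3's standard S3 API; the bucket name and the choice of eu-west-1 are purely illustrative, and it assumes AWS credentials are already configured.

```python
# A minimal sketch of pinning where data lives by choosing the provider region.
# Bucket name and region are hypothetical; requires configured AWS credentials.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="example-legally-pinned-bucket",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```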
Tuesday, July 29, 2008
Green IT: Sustainability, strategy and hype
Recently, I have gradually become more and more involved in "Green IT" topics and related projects (the exact term is: "to be caught up in the system"). I realised that the current strategy applied by companies is mainly capability driven; to put it in simple words, it means doing more with the same amount of resources, or less (think virtualisation).
The main metric for these efforts is TCO reduction; the actual environmental impact is just a side effect, used for "green washing" the company strategy.
On top of that, I have yet to see actual numbers for the positive environmental impact of such a strategy. Companies are claiming millions in savings, but where are those millions going? They are reinvested in efforts that have nothing to do with sustainability.
The following graph depicts my opinion of the actual sustainable IT strategy taken by many companies. Currently, going green means reducing TCO. But since most of the money saved is reinvested in projects unrelated to the environment, there is actually no real effort toward environmental sustainability.

However, these “cheap and green” TCO solutions will soon be exhausted, and companies will actually need to spend money to maintain their environmentally friendly masquerade. This will in turn reverse the trend: TCO will rise because they want to be “green”, and finally (hopefully) this will generate a positive ecological impact.
The strategy will switch from being capability driven to business driven. How fast this change occurs will depend on various factors:
- Awareness: public opinion increasing
- Economic: consumer demand increasing; corporate customer demand increasing; shareholder demand increasing
- Social and environmental: consensus growing on environmental impact
- Political: government laws and regulations increasing
Let’s be a little bit pessimistic and add the hype cycle curve to the picture (OK, I’m not fully objective here with the curve placement, but it’s for the sake of the demonstration).

What I want to demonstrate is the risk that the current hype creates for environmental strategy within companies, and more particularly for the critical section: switching from a capability-driven to a business-driven strategy. It will coincide with the "Trough of Disillusionment". At this stage, the technologies fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology, and when the press loses interest, so do board members and shareholders. This abandonment will be accelerated by the rising costs of maintaining the environmentally friendly mask.
As a consequence, there is a high probability that companies will never cross over toward providing actual sustainable environmental solutions. The only thing that will force them will probably be public and political pressure due to ecological issues (not to mention catastrophic ecological events).
I hope I’m wrong, but a company is not a person; it has no social or ethical responsibility per se. And consider the results from Robert Hare, a University of British Columbia psychology professor and FBI consultant, who used diagnostic criteria from the DSM-IV to analyse the "personality" of the corporate "person". In his findings, he compares the profile of the modern, profit-driven corporation to that of a clinically diagnosed psychopath.
What a wonderful world...
Labels: cycle, datacenter, green, hype, IT, strategy, sustainability