Friday, December 23, 2016

[Links of the Day] 23/12/2016 : Microsoft Configurable cloud (with fpga), Open Pilot OSS driving software, Deep learning is all about rigor

  • Microsoft's Production Configurable Cloud : built-in custom NICs + FPGAs for a highly configurable and dynamic network stack in Microsoft's datacenters. The work is really impressive. It demonstrates how pervasive FPGAs and custom hardware will be in the datacenter of the future. 
  • Open Pilot : open source driving agent providing Adaptive Cruise Control (ACC) and Lane Keeping Assist System (LKAS) for Hondas and Acuras. This is a really interesting solution, and I wonder how fast other companies will start to leverage it or open source their own solutions in order to accelerate adoption. However, without extremely strong verification and proof systems (formal methods), it will be extremely hard (and probably illegal) to deploy such software at this stage. 
  • Nuts and Bolts of Building Deep Learning : Andrew Ng reiterated at NIPS 2016 that there is no secret AI equation that will let you escape your machine learning woes. All you need is some rigor. [video]

Wednesday, December 21, 2016

[Links of the day] 21/12/2016 : Bigdata Systems comparison, State of EU tech & NuttX RTOS

Monday, December 19, 2016

[Links of the Day] 19/12/2016 : Cloud storage consistency models, heterogeneous memory management and atomic consistency for storage class memory

  • Consistency Models for Cloud Storage Services : a must-read for anybody relying on any form of cloud storage. It is imperative to understand the consistency model of these services in order to avoid bad surprises. Sadly, a lot of cloud storage offerings lack official documentation on the subject, or the documentation is really fuzzy and offers no proof. 
  • Soft2LM : heterogeneous memory management. Basically, it optimises memory allocation and migration between tiers in order to minimise power consumption while maximising performance. 
  • Free atomic consistency in storage class memory with software based write-aside persistence : interesting article on a software stack that aims to deliver atomic consistency for SCM in write-aside scenarios. I am not sure how often the write-aside pattern is used, though. 
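To see why the consistency model matters, here is a deliberately simplified toy model (purely illustrative, not tied to any real cloud service) of the stale read an eventually consistent store can hand you right after a successful write:

```python
# Toy model of eventual consistency: writes are acknowledged by a primary
# replica and only reach the secondary replica when replication runs.
# A read served by the secondary in between returns stale data.
class EventuallyConsistentStore:
    def __init__(self):
        self.primary = {}
        self.secondary = {}

    def write(self, key, value):
        self.primary[key] = value            # acknowledged immediately

    def read_secondary(self, key):
        return self.secondary.get(key)       # may lag behind the primary

    def sync(self):
        self.secondary = dict(self.primary)  # replication catches up

store = EventuallyConsistentStore()
store.write("profile", "v2")
stale = store.read_secondary("profile")      # stale: replication has not run
store.sync()
fresh = store.read_secondary("profile")      # replicas have converged
```

A service documented as strongly consistent would never expose the `stale` read above; one documented as eventually consistent is allowed to, which is exactly the "bad surprise" the paper warns about.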

Tuesday, December 06, 2016

Another tale of Execs bottom-up Blindness : SAP, Oracle, [Insert Software Giant here] vs AWS

After watching this year's AWS re:Invent show, I can't help but have this strange feeling of “déjà vu”. AWS managed to deliver exciting new products and solutions that took the industry by storm. GreenGrass literally takes my prediction from last year and makes it reality. What’s even scarier is that, with GreenGrass, AWS achieves the feat of unifying #IoT and #DevOps under a common platform. 

But my feeling of déjà vu didn’t come from the GreenGrass announcement. It came from the Step Functions announcement. And it felt like a textbook repeat of what happened to Detroit's big three when Toyota took over the US car market: another case of bottom-up blindness.

Step Functions is the natural next step in the evolution of the AWS product portfolio. It nicely complements the serverless Lambda product and allows you to organise your serverless logic flow in a transparent manner. 

But what is more important is the implication of such a release. Step Functions allows you to create and coordinate complex workflows, which is just a step away from having a full-blown ERP. It is actually even better than an ERP, as Step Functions lets you coordinate any kind of distributed application, allowing you to define business process workflows that blend business logic and application logic seamlessly. 
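To make that blend of business logic and application logic concrete, here is a sketch of the kind of workflow definition Step Functions consumes, written in Amazon States Language (the JSON format the service uses). The state names and Lambda ARNs below are made up for illustration, not taken from any real deployment:

```python
import json

# Hypothetical two-step order-processing workflow in Amazon States Language.
# Each "Task" state hands the work to a Lambda function; "Next"/"End" wire
# the business process together. All names and ARNs here are invented.
definition = {
    "Comment": "Toy business-process workflow: approve then fulfil an order",
    "StartAt": "ApproveOrder",
    "States": {
        "ApproveOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:approve",
            "Next": "FulfilOrder"
        },
        "FulfilOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfil",
            "End": True
        }
    }
}

# Serialise to the JSON document you would hand to Step Functions.
asl_json = json.dumps(definition, indent=2)
```

The ERP angle is that each Task can be any piece of application code, so an "approval" step can encode an arbitrary business rule while the service handles sequencing, retries, and state for you.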

24 hours before the release of Step Functions, I was mentioning on Twitter that AWS just needed a good process workflow service to open up the ERP business. 

One day later, Step Functions emerged, and it won’t be long before we see emergent ERP-like functionality hosted on Step Functions / Lambda. This release sounds like a swan song for the likes of SAP or Oracle. They had plenty of warnings, but like many giants in other industries, they buried their heads in the sand. 

With Step Functions, AWS now offers all the building blocks to develop a full-blown ERP without the hassle of taking care of all the nitty-gritty details (scaling, resilience, deployment, etc.). To name some of the main ones: 
  • Database : Redshift / Aurora / DynamoDB 
  • Data Ingestion : Firehose
  • BI : Quicksight
  • Business process : Step + lambda 
  • Mobile : AWS Mobile
SAP, especially, should have taken a hint when they announced the availability of their in-memory database on AWS in 2012 and AWS announced the Redshift DB a couple of days later. 
SAP, Oracle, and many others are repeating the same error that other industry giants fell for:

Failure to master small products:
They are addicted to the revenue they extract from the fat margins of the top 20% of customers (think Nestlé, Coca-Cola, Caterpillar, etc.). These customers deploy massive ERP systems, while smaller customers tend to be frowned upon because the margins extracted from them are not high enough. Sadly, as with the automotive industry, history repeats itself. Detroit's big three lost to Toyota because they didn’t mind losing the small-car market. Margins were small enough that they deemed Toyota could have the small-car market share as long as they retained the higher end. However, by conceding it, they let Toyota gain a foothold and work its way up the food chain while they lost market share. AWS is doing the same thing. It started with infrastructure and is now on their doorstep, eyeing their crown jewels. By losing small-product mastery, the incumbents lose the knowledge necessary to deliver products for the companies that will become the giants of tomorrow. 

Failure to embrace the cloud:

SAP sold their hosting operation to T-Systems in 2009. They literally sold off all the expertise that would have helped them transition to offering solid cloud solutions. They have been left in the dust by the competition and are fooling themselves if they think they can catch up. Ellison, on the other side, is trying to fool his shareholders by promising to catch up with AWS. However, at the current rate of their infrastructure investment, they will reach AWS's current infrastructure size in FIVE YEARS!! 

Failure to simplify their stack:

Anybody who has used an SAP or Oracle system knows how painful it is to deploy the simplest web service, let alone an ERP system. Moreover, it is almost impossible to learn about these systems unless you work for a company that uses them or for the companies that produce them.

Failure to learn:

These software giants traditionally didn’t run their own systems. And to be honest, hardware and operational costs were often a fraction of the overall license cost. Because of that, it was easy to tell the customer to throw more hardware at the software problem. However, everything changes when you start to offer your solution as a service. SAP experienced this the hard way with their SAP ByDesign solution (discontinued in 2013 and revived a year later). Rumor was that the company was spending 7 euros to run the system for every euro it was getting from its customers.
However, they didn’t learn from their mistakes or change their approach to building, delivering, and running their systems. Look at S/4HANA: even today you cannot run it on anything other than the humongous X1 instance. And this lack of learning seems to be widespread among the software giants. Surprisingly, we are almost 5 years after the announcement of HANA availability on AWS, and I have yet to see a customer running a production HANA on AWS. 

Because of these failures born of bottom-up blindness, these companies easily fall prey to the Tower of Hanoi fallacy, and as a result they cannot:
  • transition to a new value chain
  • acquire new technical skills/knowledge
  • expand to new market and business model
  • compete with ecosystem (cloud) natives
  • manage self-cannibalization of their revenue streams

Traditional software giants have feet of clay, and AWS has already chipped its way up to the knee without them noticing.

Friday, December 02, 2016

[Links of the Day] 02/12/2016 : AWS best practice, 1k+ RISC-V with Shared memory, Verification of distributed systems

  • AWS Well-Architected Framework : AWS document outlining high-level cloud best practices. Not really in-depth technical solutions, but it provides good guidelines for organisations. 
  • Towards Thousand-Core RISC-V Shared Memory Systems : MIT is advocating leveraging its TARDIS cache coherence protocol to scale the RISC-V architecture to 1k+ cores. The interesting part is that they are proposing a shared memory system using a 3D mesh, and that RISC-V and TARDIS seem oddly compatible architecture-wise. Now we need to see if the cache technology can deliver on its promise: 1k+ cores is a hell of a lot of coherence to maintain. 
  • The Verification of a Distributed System : talk by Caitie McCaffrey where she presents strategies to prove system correctness. This is rather important, as too often companies build distributed systems and swear that they satisfy some part of the CAP theorem. But too often they crumble, especially if @aphyr decides to take an interest in them (or, even better, gets paid to do so).
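In the spirit of those verification strategies, here is a minimal sketch of checking an invariant with randomized testing instead of trusting a CAP claim. Nothing below comes from the talk itself; the grow-only counter and its merge function are a standard illustrative example:

```python
import random

# A grow-only counter (G-counter) is a per-node vector of counts; replicas
# converge by merging with an element-wise max. For convergence to hold,
# merge must be commutative, so we hammer that property with random inputs
# rather than assuming it.
def merge(a, b):
    """Element-wise max merge of two G-counter state vectors."""
    keys = set(a) | set(b)
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in keys}

def random_counter(rng, nodes=("n1", "n2", "n3")):
    """Generate a random G-counter state for the given node ids."""
    return {n: rng.randint(0, 10) for n in nodes}

rng = random.Random(42)  # fixed seed so the check is reproducible
for _ in range(1000):
    a, b = random_counter(rng), random_counter(rng)
    assert merge(a, b) == merge(b, a), "merge must be commutative"
```

This is a toy, but it captures the talk's point: state the property your system claims, then actively try to falsify it before production (or @aphyr) does.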