- FPGA extinction level event : article looking at the evolution of the FPGA vendor market. It seems that if Xilinx gets acquired, 80% of the FPGA market will have vanished (Altera was recently acquired by Intel). This has far-reaching implications for the market, as consolidation occurs and the focus shifts toward datacenter solutions to the detriment of the rest of the market.
- Filo : consolidated consensus as a cloud service. Really interesting paper looking at the possibility of offering a consensus system as a service within the cloud. This would greatly help anybody out there relying on their ZooKeeper / Consul / etc. and allow them to focus even more on business logic.
- OsCon : slides of the excellent OSCON are up. Lots of Docker-related stuff, but if you look past it there are also some gems, such as the Netflix SSH Bastion talk or the Build to Lead talk.
Monday, May 30, 2016
Wednesday, May 25, 2016
- Urika-GX : Cray just released its analytics solution, which leverages the Aries network fabric, Lustre, Mesos and the usual suspects Hadoop / Spark (+ some Cray-specific stuff). Looks like a neatly packaged solution for companies that really need near real-time insights.
- CCIX : ARM, Qualcomm, AMD, Xilinx, Huawei, IBM, and Mellanox all agree on the CCIX interconnect definition. CCIX aims at delivering a cache-coherent interconnect for accelerators. The nice feature is that it targets inline accelerators as well as endpoints without the need to explicitly manage coherency. It means that we will soon see device-to-device communication without CPU interaction while maintaining data coherence within the same system. This is a long-overdue feature, as software and CPUs are starting to become the bottleneck in analytics and high-speed networking fabrics (NFV). Imagine shifting data from your RDMA card to a GPGPU and back to your NVM without CPU involvement, and finally having the CPU fetch just the information it needs for display or final transformation based on user requests...
- Awesome Webhooks : Loads of API webhooks out there.
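For the uninitiated, a webhook is just an HTTP endpoint that a service POSTs events to. A minimal receiver sketch (the port, the 204 acknowledgement and the `type` payload field are arbitrary illustration choices, not tied to any service in the list):

```python
# Minimal webhook receiver sketch; payload field names are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(body: bytes) -> str:
    """Extract the event type from a webhook payload (field name assumed)."""
    event = json.loads(body or b"{}")
    return event.get("type", "unknown")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly Content-Length bytes of the sender's JSON payload.
        length = int(self.headers.get("Content-Length", 0))
        print("received:", parse_event(self.rfile.read(length)))
        self.send_response(204)  # acknowledge with an empty body
        self.end_headers()

if __name__ == "__main__":
    # Listen on all interfaces, port 8080 (arbitrary choice for the sketch).
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The important production detail the sketch skips is verifying the sender's signature header before trusting the payload.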
Tuesday, May 24, 2016
[Links of the day] 24/05/2016: Hybrid Memory Cube performance, storage history and computing's energy problem
- Performance Exploration of the Hybrid Memory Cube : Thesis evaluating the performance challenge of Hybrid Memory Cube (HMC). HMC is an emerging main memory technology that leverages advances in 3D fabrication techniques to create a memory device with several DRAM dies stacked on top of a CMOS logic layer.
- Computing’s Energy Problem : Mark Horowitz's 2014 ISSCC presentation on the energy challenge in computing and how applications need to become more energy-aware.
- Storage History : great presentation of storage history, from the 1956 4.4 MB RAMAC to modern-day storage systems, with some great anecdotes thrown in the middle.
Monday, May 23, 2016
[Links of the day] 23/05/2016: computational law, EU data protection regulation & 2M packets/s on AWS
- CodeX : Stanford R&D lab of computational law — the branch of legal informatics concerned with the automation and mechanization of legal analysis. I really think that we are getting to the point where every single IP and asset will be handled/represented by an AI, and where the actual ownership of assets will change dynamically at the speed of computation, Accelerando-style. [Youtube Videos]
- 2 million packets per second on AWS : leveraging SR-IOV tech on AWS to process packets at high speed.
- EU General Data Protection Regulation : a great document dissecting the upcoming European General Data Protection Regulation directive. This directive covers a very wide range of legislation, from the right to be forgotten and data protection to compliance.
Friday, May 20, 2016
Thought for today: when I reflect on certain microservice architectures, I feel like an investment banker-type is pitching me a new form of mortgage-backed securities (MBS). Like MBS, not all microservices are identical; they are organised in layers, each with a different level of priority in the technical debt repayment stream.
As a result, one ends up with a heterogeneous architecture with different levels of risk and reward that is constantly evolving while maintaining its structure via contracts (APIs). Effectively, the microservice approach is a form of architecture securitization: pooling the various types of technical debt generated by small individual microservices.
Subprime software risk and rise
The securitization of microservice debt has the advantage of providing more resources for development efforts at a time when we have a developer shortage and inflation is undermining traditional sources of development funding. However, microservice-backed securities can also lead to an inexorable rise of the subprime software industry while creating hidden, systemic risks.
The granularity of securitized microservice assets can mitigate the technical credit risk of individual developer groups. Unlike the technical debt of a monolithic architecture project, securitized technical debt is non-stationary, with changes in volatility that are time- and structure-dependent. If the development and integration process (devops) is properly structured, and the pooled microservices perform as expected, the overall technical debt risk of all layers of the microservice structured debt improves. However, if improperly structured, the affected microservice layers may experience dramatic quality deterioration, which can ultimately cause the overall project to fail.
The main issue with this form of software project securitization is that it limits the project, program manager and architect’s ability to monitor risk, and that further reliance on a nebula of securitized microservices via cloud APIs may be particularly prone to high spikes in technical debt accumulation. With proper monitoring and constant effort, this risk can be efficiently mitigated, as the great benefit of the microservice approach is that the corrective effort is contained within the domain of the particular failing microservice.
Open sourcing : the ultimate securitization tool
For corporations, the total face value of a large project decreases over time because, like mortgages, the technical debt of a solution is not paid back as a single payment, but rather paid along with the interest in each periodic release. The microservice-based approach is smooth and provides a continuous repayment plan for the project debt, greatly reducing the debt-cliff risk. However, it also fragments the visibility of the overall debt risk.
When the cost of maintaining these microservices, or even whole software stacks, becomes higher than their face value, companies try to repackage and reuse them in order to collateralize the technical debt obligations. Similar to MBS, the lower-priority (in terms of business value) and higher-interest services form the bottom layer of the microservice architecture stack. They are the service bus, database, key/value store, etc. that make the overall application work without directly delivering business value.
When even internal reuse is not able to cushion the cost of these software layers, corporations reach for the ultimate securitization tool: they open source the project, which allows for the refinancing of the underlying technical debt and redistributes it through the capital structure via the influx of new developer resources provided by the open source community.
In return, the open source community (and other companies) benefits from the principal and interest of the development efforts. However, like any financial product, open source efforts need to be carefully shepherded and promoted in order to yield the optimal benefits from the operation.
All in all, microservice architectures are a great model for bringing greater fluidity to the software development ecosystem. However, like any tool, users need to be wary of the unforeseen consequences if they lose track of the key metrics they wanted to optimise in the first place.
Wednesday, May 18, 2016
- Statistical Compression Cache Designs : cache memories play a critical role in bridging the latency, bandwidth, and energy gaps between cores and off-chip memory. Compressed caches save space and can offer a good trade-off between space and cache hit ratio.
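To get an intuition for the statistical angle, here is a toy sketch of frequent-value compression: the most common word values across cache lines get a short dictionary index, everything else is stored verbatim. The line contents, dictionary size and one-byte tag encoding are made up for illustration; the actual designs in the thesis use Huffman-style variable-length codes.

```python
# Toy frequent-value cache-line compression, purely illustrative.
from collections import Counter

WORD = 4  # bytes per 32-bit word

def build_dictionary(lines, size=255):
    """Pick the most frequent word values across sampled cache lines."""
    counts = Counter(w for line in lines for w in line)
    return [w for w, _ in counts.most_common(size)]

def compressed_size(line, dictionary):
    """1 tag byte per word, plus 1 index byte on a hit or 4 raw bytes."""
    total = 0
    for w in line:
        total += 1 + (1 if w in dictionary else WORD)
    return total

lines = [[0, 0, 0, 7], [0, 1, 0, 0], [0, 0, 2, 0]]  # mostly zero-valued words
d = build_dictionary(lines, size=1)       # tiny dictionary: just the value 0
print(compressed_size(lines[0], d))       # 11 bytes vs 16 uncompressed
```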
- Yet Another Compressed Cache : INRIA's take on cache compression.
- Phase-change memory : IBM demonstrates reliable storage of 3 bits of data per cell in PCM. It is still a long way from competing against 3D XPoint, but it could become a valid alternative in future storage solutions.
Tuesday, May 17, 2016
- Database Systems Lectures : Carnegie Mellon University lectures on database systems. They give a really good overview of the state of the art in the field.
- Intelligence without representation & Intelligence Without Reason : seminal 1991 papers by Rodney A. Brooks from the MIT Artificial Intelligence Lab. In these, the author argues that intelligent behavior can be generated without explicit manipulable internal representations, and also without explicit reasoning systems.
- Noisy Neighbor analysis : a look at the effect of deploying heavy workloads onto modern storage systems and the collateral effect on overall performance for all the participants in the cluster.
Monday, May 16, 2016
- Transitively Deadlock-Free Routing Algorithms : interesting routing solution for the BULL (now Atos) BXI fabric for HPC systems. A 4-level rearrangeably non-blocking fat-tree which supports up to 64,800 nodes, 11,160 switches and 194,400 inter-switch links. Problem: this represents 50GB of routing tables, and errors occur (often). Obviously, recomputing the routing tables for each fault is not an option. The authors propose a process using offline/online recomputation with a non-blocking routing table update process. Interestingly enough, the proposed solution looks a lot like the online Linux kernel patching system. [slides]
- Architect's Clue Bucket : big slide deck by Ruth Malan looking at software architecture and how to use clues to deliver great products. The author looks at the types of clues (design principles, heuristics, tips, hints...), how to organise them (mapping the clue landscape) and, finally, where and how to look for clues.
- 1977 Cloud : insightful paper describing what would be today's modern cloud solution. Sometimes I think that the 70s were caught in a time warp caused by hardware lag. Software tech advanced way faster than hardware tech, and we just spent the past 30-40 years waiting for it to catch up. Sadly, we forgot (and reinvented the wheel many times) while waiting.
Friday, May 13, 2016
- nvmesh : pure software product using a shared-nothing architecture that leverages NVMe SSDs, SR-IOV and RDMA. Performance numbers are interesting: 4M read and 2.8M write 4k IOPS, 16GB/s throughput and super low latency, with 90µs/25µs for read and write from client to server. What is really interesting is the dual mode of operation: shared-nothing with direct storage access for really fast access, or a centralized one which offers more redundancy and serviceability features at the cost of lower (but still fast) performance. [video]
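As a quick sanity check, the quoted IOPS and throughput figures are internally consistent:

```python
# Sanity check: 4M random reads of 4 KiB each should line up with the
# quoted 16 GB/s throughput figure.
iops = 4_000_000
io_size = 4 * 1024              # 4 KiB in bytes
throughput = iops * io_size     # bytes per second
print(throughput / 1e9)         # about 16.4 GB/s, consistent with the claim
```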
- Fine-grained Metadata Journaling on NVM : the authors propose to move away from the limitations of block-based journaling to a fine-grained approach more suitable for NVM storage. They propose an inode-based transaction and journaling approach, with each inode representing 256 bytes. The solution seems cache friendly; however, it begs the question: why do we need to go through the CPU at all? With DAX and other systems, it should be more efficient to completely bypass it. [slides]
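A back-of-the-envelope sketch of the motivation: committing one dirty 256-byte inode through a block-based journal writes a whole 4 KiB block, while a fine-grained journal writes only the inode record (sizes from the paper; the "journal" below is just a byte counter):

```python
# Write amplification of block-based vs inode-granular journaling.
BLOCK_GRANULARITY = 4096   # classic block-based journaling
INODE_GRANULARITY = 256    # one inode record, per the paper

def journal_bytes(dirty_inodes, granularity):
    """Bytes written to journal N independent dirty inodes."""
    return dirty_inodes * granularity

n = 1000
ratio = journal_bytes(n, BLOCK_GRANULARITY) // journal_bytes(n, INODE_GRANULARITY)
print(ratio)  # 16x more journal traffic for the block-based approach
```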
- Fast and Failure-Consistent Updates of Application Data in Non-Volatile Main Memory File System : being crash-consistent is the number 1 requirement for any storage solution, and current file systems optimized for NVM don't seem to be good enough. The authors propose an alternative file system specifically tailored for consistency and high performance by moving away from FS-level consistency and targeting an application-level consistency solution. Naturally, this puts a greater burden on the application layer. Then again, researchers really need to move away from the classical FS solutions and deliver a new paradigm. [slides]
Thursday, May 12, 2016
- Lustre + Omnipath : the HPC filesystem of choice meets the Intel Omni-Path fabric. Intel was poised to release such a crossover as it continues to push into the HPC domain and rack infrastructure domination. Remember that Intel acquired Whamcloud (Lustre) a while back.
- Storage Media Overview : historic perspectives on storage solutions. Interesting snippet of information: all storage media revenue decreased from 2014 to 2015 except for NAND. However, NAND revenue increased by 30% in 2014 but only 3% in 2015, hinting at a plateau of the tech as it enters a commoditization phase with lower margins. [Video]
- Bridges : supercomputer being built at the Pittsburgh Supercomputing Center (PSC). They have a really cool virtual tour.
Wednesday, May 11, 2016
[Links of the day] 11/05/2016: teaching with wargaming, EU FRAND OSS threat, team coordination tradeoffs
- Teaching strategy with wargames : fantastic article on the teaching philosophy behind wargaming in the US Army (Navy and Marines).
- FRAND : FRAND stands for Fair, Reasonable, And Non-Discriminatory licensing terms. This is the standard proposed by the EU to be used for the future digital single market. This standard, if implemented, will become a massive roadblock for the use, creation and consumption of open source software, because it will not allow royalty-free licenses to be used. However, it's not too late, and EU citizens can comment on the proposal here.
- Coordination amongst teams trade-offs : team coordination is like designing a distributed system; compromises have to be made.
Tuesday, May 10, 2016
One of the frustrating aspects of corporate behavior is the tendency for a large portion of the enterprise population to choose the most common rather than the most profitable strategy. The natural assumption is that market interaction, amongst humans and between corporate entities, is driven by the desire to achieve straightforward payoff maximisation. It is not just an assumption; it is often a contractual obligation that management should first and foremost consider the interests of shareholders in its business actions.
As a result, conformist strategic behavior within a corporation's executive seems contrary to the requirements of the strategic decision-making process. Yet these behaviors seem to be widespread within the corporate world. Every year, we see a new fad coming and spreading like wildfire: bi-modal from Gartner, "we need a platform", etc.
This generated quite a lot of frustration for me as I was trying to understand why such behavior was commonplace. I wasn't fully satisfied by the herding-instinct or widespread-incompetence justifications that were so often put forward. They just didn't add up, because in a competitive system, under-performing strategies should have been eradicated a long time ago due to evolutionary constraints; yet instead, the conformist strategy persisted.
Behavioral conformity :
Recently I came across a series of game theory papers [ 1 - 2 - 3 - 4 ] that provide the beginning of an answer. This research indicates that spatial selection for cooperation is enhanced if an appropriate fraction of the population chooses the most common rather than the most profitable strategy within its interaction range.
One of the premises of this research is that humans are social animals that are not solely driven by the desire to maximise fitness, but also aim to socialize and identify themselves within a group of like-minded individuals. The main idea is that some of the individuals participating in a competitive game do not adopt the strategy that is the most profitable but, instead, the strategy that is the most common in the group or within their interaction range. Another interesting side effect of this behavior is that both the individual and the group benefit from the homogeneity of the strategic approach, which can explain why this behavior persists in the face of more individualistic corporations.
As executives in a corporation are still humans (until proven otherwise), we can assume that these behaviors are transposed, to a certain extent, to their strategic decision-making process. By incorporating conformity as an alternative to straightforward payoff maximisation in cooperation strategy, we can begin to have a valid explanation for the copycat behavior of management. Naturally, not all participants are conformists, but they all have a tendency to become such, to a certain degree.
Within a network of enterprises doing business and competing against each other, you end up having different participants with various degrees of conformist tendency, and this distribution and diversity ultimately has an impact on the ecosystem landscape. However, the amount of information available for strategic decision-making greatly influences the conformity tendency, and this is often abused by consulting and analyst companies.
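The conformist update rule from the cited papers can be sketched in a few lines: some agents imitate the most common strategy among their neighbours, while the rest imitate the best-paying one. The ring topology, payoffs and conformist fraction below are arbitrary illustration values, not the papers' actual model:

```python
# Toy sketch of conformist vs payoff-maximising strategy updates.
import random

def step(strategies, conformist, payoff):
    """One synchronous update round on a ring of agents."""
    n = len(strategies)
    nxt = []
    for i in range(n):
        neigh = [strategies[(i - 1) % n], strategies[(i + 1) % n]]
        if conformist[i]:
            # adopt the majority strategy in the neighbourhood (self included)
            local = neigh + [strategies[i]]
            nxt.append(max(set(local), key=local.count))
        else:
            # adopt the neighbouring strategy with the highest payoff
            nxt.append(max(neigh, key=payoff.get))
    return nxt

rng = random.Random(0)
strategies = [rng.choice("AB") for _ in range(20)]
conformist = [rng.random() < 0.5 for _ in range(20)]  # half the agents conform
payoff = {"A": 1.0, "B": 1.5}  # B is strictly better, yet A-clusters can persist
for _ in range(10):
    strategies = step(strategies, conformist, payoff)
print("".join(strategies))
```

Run it a few times with different seeds: homogeneous clusters of the worse-paying strategy can survive purely because conformists stabilise them.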
Conformity by Information overload :
It has been demonstrated that conformist tendencies in choosing a particular strategy are reinforced by an increase in available information. And that is why consulting and analyst companies are able to hook in so many corporations with a similar pitch: when overwhelmed with information, humans tend to turn to a third party to make the decision for them.
This approach is well known and applied in retail. For example, as soon as you step into a mobile phone shop, you are assailed by a multitude of phones to choose from. As the customer starts to feel disoriented by this tsunami of choice, the "helpful" shop assistant quickly steps in to advise you. At this stage, customers are more than happy to be led to the "ideal" phone they need.
Following a similar scenario in the corporate world, management often relinquishes its strategic decision power after being bombarded with information, this onslaught being orchestrated by the same businesses that will sell them the strategy they "need".
Luckily for the advisors and the advised corporations, conformist behavior delivers some non-negligible benefits.
Benefits of conformism :
What is interesting is that the research [ 5 - 6 ] shows that players adopt the strategy that is most common within their interaction range regardless of the expected payoff. While you would expect this to be detrimental, it has been shown that participants adopting the conformist approach coordinate their behavior in a way that minimizes individual risk and ensures that their payoff will not be much lower than average.
Moreover, the effect of conformism in game theory is similar to the one we witness in the business world. This behavior fosters the emergence of large homogeneous clusters competing with each other in the same ecosystem. The participants in these groups benefit from the cooperation by virtue of network reciprocity, and from this network reciprocity effect they benefit from a minimization of the risk of invasion by defectors.
As a result, while you do not get the chance to greatly outperform the market, you still provide a better-than-average return while benefiting from group protection. These types of results are generally regarded as positive by company boards, and hence conformist behavior is rarely questioned but usually encouraged (though not always consciously).
Leveraging conformism situational awareness
The great thing about conformist behavior is that, once you have learned to spot it within the business ecosystem, you can leverage it to your advantage. By drawing a map of a business ecosystem, you can identify its participants and their strategic approaches, and spot the homogeneous clusters competing in the population.
Armed with this information, you can identify weak clusters to disrupt. Weak clusters are often characterized by the presence of conformist leaders. Leaders are corporations with high collective influence in the network. If leaders conform, they lose the capability to capitalise on their central position within the network by forfeiting their capacity to search for a more successful strategy. This means that the businesses within the cluster will suffer from a form of sclerosis and won't be able to coordinate and/or formulate a counter-strategy when challenged. As a result, individual corporations in the cluster are more vulnerable as the network effect dwindles.
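As a toy illustration of spotting such leaders, one can rank the nodes of an ecosystem map by a crude influence proxy (degree times the neighbours' total degree). The network and the metric below are invented for illustration; actual collective-influence measures are more involved:

```python
# Rank "leaders" in a toy business network by a crude influence proxy.
network = {                      # adjacency list of a made-up ecosystem
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}

def influence(node):
    """Degree times the sum of neighbour degrees (illustrative proxy)."""
    degree = len(network[node])
    return degree * sum(len(network[n]) for n in network[node])

leaders = sorted(network, key=influence, reverse=True)
print(leaders[0])  # the most central node: if it conforms, its cluster is weak
```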
Another way to leverage conformism to your advantage is to spot companies that copy a neighboring strategy when the neighbor's business belongs to a different evolutionary stage. Imagine that Company A sells utility components and uses a platform strategy. Company B uses Company A's product but sells custom components. If Company B decides to mimic Company A's platform play, you then have the opportunity to out-compete Company B by industrializing Company B's components.
Conformism or Anti-conformism ?
The conformist approach can be a valid choice, as it creates non-negligible benefits. However, this needs to be an informed choice, not a contrived one. Corporations need to understand the state of the playing field and decide whether leveraging the cluster effect might benefit or hinder them. Moreover, by understanding how network cooperation effects can be detrimental to them, companies can tailor their own strategy to take advantage of the conformist tendencies of their surroundings. In the end, it boils down to understanding the surrounding corporate environment and knowing when to blend in or not.
Monday, May 09, 2016
[Links of the day] 09/05/2016: OSS biometric framework, deep learning framework comparative study & Dropbox Magic Pocket
- OpenBR : open source biometric framework. I can't wait for the first community-driven mass recognition system to come out. No more secrets...
- Inside the Magic Pocket : really good case study of the architecture behind the storage system designed to replace AWS S3 after Dropbox moved out of AWS. [HN discussion]
- Comparative Study of Deep Learning Software Frameworks : version 3 of this extensive study of deep learning frameworks. What is interesting is that while TensorFlow is deemed extremely versatile, it seriously lags behind the other frameworks performance-wise.
Thursday, May 05, 2016
- 9front : excellent book on Plan 9 and 9front; the first chapter is a must-read for anybody interested in the field of distributed systems and OS.
- Kinetic : OVH starts deploying Ethernet-connected drives in beta.
- Go best practices : well, the title says it all.
Our module "SATA2IP" is ready ! Forget the SATA. Another way to consider the storage: 1 HDD = 1 ip and let's scale ! pic.twitter.com/PqPn9rNq7F— Octave Klaba / Oles (@olesovhcom) May 3, 2016
Wednesday, May 04, 2016
- OpenCoArray : Fortran is not dead, and the work on coarrays with accelerators demonstrates it.
- Openserver summit :
- PCIe 4.0 : some really nice improvements with the upcoming standard in terms of performance and especially RAS. However, there is no MR-IOV capability yet; this is sorely missing to make PCIe a true contender at the rack-scale fabric level.
- Azure SmartNIC : Microsoft uses FPGA-based SmartNICs to shorten the update cycle of their Azure cloud fabric. It's a really impressive solution.
- Persistent Memory over Fabrics : Mellanox is pushing for an RDMA-based persistent memory solution, probably trying to corner the market quickly as the 3D XPoint and Omni-Path solutions from Intel are just around the corner. However, what caught my attention is slide 14: the HGST PCM Remote Access demo. What is really interesting is that HGST is probably one step away from merging NVM and an RDMA fabric into a single package. With that, they would be able to compete directly with DSSD at lower cost (following the Eth Drive model).
Tuesday, May 03, 2016
Linux Storage, Filesystem, and Memory-Management Summit : loads of really good talks; here is a selection:
- VMs as containers : current efforts focus on solving 2 main problems: 1. total VM memory consumption is higher than that of the application running in it; 2. storage access: a lot of the storage work focuses on moving the storage stack back to the host (providing DAX or FUSE). However, all these aspects require careful design in order to avoid compromising the security and isolation features of virtual machines.
- Bulk memory-allocation APIs : What do we want? We want loads of memory, fast. When do we want it? N...O...W... :) [slides]
- Persistent memory as remote storage : a look into leveraging RDMA for remote persistent storage access. A really good discussion about the possibility of moving from PULL to PUSH mode for remote access; however, this would require a lot of changes and additions to the RDMA stack, probably too much for it to be a viable option in the short term. Another aspect of the discussion related to the durability guarantees of remote storage protocols. It is interesting to see that there is a consensus regarding the need for an API to hide the different durability behavior variations of the fabric / protocol / HW. This is sorely missing, and it is why storage solutions often trap you down a certain path and cannot evolve to adopt new tech, fabrics and hardware.
Monday, May 02, 2016
- Storage Transition : NVM Express and PCI Express in the Client and Data Center
- Intel Non-Volatile Memory Inside : a look into 3D NAND and next-gen SSDs
- Modern Storage Architectures : the implications of new fabric and NVM tech for modern storage architectures
- No more secrets : recreate the famous "decrypting text" effect as seen in the 1992 movie Sneakers
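For reference, the core of that effect fits in a few lines; a minimal sketch (the glyph set and frame scheme are my own simplification, not the linked project's implementation):

```python
# Minimal sketch of the Sneakers "decrypting text" effect: characters
# lock in left to right while the rest of the line shows random glyphs.
import random
import string

GLYPHS = string.ascii_uppercase + string.digits

def decrypt_frames(message, seed=None):
    """Yield successive animation frames ending on the plain message."""
    rng = random.Random(seed)
    for locked in range(len(message) + 1):
        noise = "".join(rng.choice(GLYPHS) for _ in message[locked:])
        yield message[:locked] + noise

for frame in decrypt_frames("NO MORE SECRETS"):
    print(frame)  # the last frame is the message itself
```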