- Business Models : a long list of business models with examples. The descriptions are short and self-explanatory. A great short read.
- Indexing Billions of Text Vectors : when you need to search text vectors fast, k-nearest-neighbour search comes to the rescue.
- 2016-2019 Progress Report - Advancing Artificial Intelligence : US National Science and Technology Council report on Artificial Intelligence. It seems that AI has crept onto the radar of the legislative and executive branches. Hopefully they will understand that AI R&D has become pervasive across all sectors of industry, and that without continuous investment the US will quickly fall behind in this arms race. [slidedeck]
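The linked article covers approximate indexes for billions of vectors; as a point of reference, the exact brute-force k-nearest-neighbour search they accelerate can be sketched in a few lines of pure Python. The function and variable names here are illustrative, not taken from the article:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def knn(index_vectors, query, k=3):
    """Brute-force k-nearest-neighbour search: rank every vector by cosine similarity."""
    ranked = sorted(range(len(index_vectors)),
                    key=lambda i: cosine_similarity(index_vectors[i], query),
                    reverse=True)
    return ranked[:k]

# Toy 2-d stand-ins for text embeddings; the query points almost the same way as vector 1.
vectors = [(1.0, 0.0), (0.6, 0.8), (0.0, 1.0)]
print(knn(vectors, (0.5, 0.9), k=2))  # [1, 2]
```

This exhaustive scan is O(n) per query, which is exactly why billion-scale collections need the approximate index structures the article describes.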
A blog about life, Engineering, Business, Research, and everything else (especially everything else)
Tuesday, December 17, 2019
[Links of the Day] 17/12/2019 : Business Models throughout history, Indexing Billions Vectors, US Progress report on #AI
Monday, March 28, 2016
[Links of the day] 28/03/2016: Hierarchy of engagement, Latency measurement, TAO consistency at Facebook
- The Hierarchy of Engagement : another excellent Greylock Partners slide deck, on how to leverage the hierarchy of engagement to fuel the growth of your company. The proposed hierarchy model has three levels: 1) growing engaged users, 2) retaining users, and 3) self-perpetuating.
- Measuring and Understanding Consistency at Facebook : a paper summary of TAO, Facebook's datastore. The interesting part is its hierarchical consistency model: synchronous cache consistency combined with an asynchronous cache and DB/storage invalidation model.
- How NOT to Measure Latency : an in-depth overview of latency and response-time characterization, including proven methodologies for measuring, reporting, and investigating latencies, and an overview of some common pitfalls encountered (far too often) in the field.
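One of the talk's core points is that averages hide the tail. A minimal sketch of why percentiles matter, using an invented workload (the nearest-rank percentile method here is one common convention, not the talk's own tooling):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values at or below it."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[rank]

# Ten requests: nine served in 1 ms, one stalled for a full second.
# Note: a load generator that waits for the stalled response before issuing
# the next request would under-sample the stall (coordinated omission).
latencies_ms = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1000]
print(sum(latencies_ms) / len(latencies_ms))  # 100.9 — the mean looks "moderate"
print(percentile(latencies_ms, 50))           # 1 — the median hides the stall entirely
print(percentile(latencies_ms, 99))           # 1000 — the tail exposes it
```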
Labels: business, business model, consistency, db, Distributed systems, engagement, facebook, growth, latency, links of the day
Tuesday, February 16, 2016
Is Amazon using Lumberyard to replicate its Video business model in Gaming?
Amazon recently launched its own gaming engine: Lumberyard. This should not come as a surprise given the stream of high-profile investments it has made in the field over the past couple of years: acquiring Twitch.tv and licensing the CryEngine from Crytek (which forms the basis of Lumberyard), to name a few. Moreover, I will not expand on Amazon's underlying strategic play, as Simon Wardley has already done a brilliant job explaining it here and there.
A lot of the discussion analyzing Amazon's move has centered on the long-term strategic play in the AR/VR field. However, in the short term, Amazon might be aiming to accelerate the shrinkage of the value chain while moving away from the traditional gaming-industry business model toward a service-based approach and, ultimately, a complementary business model.
Historically, work-for-hire and royalty-advance practices generated significant upfront fixed costs in the video game development business model, which made publishers the de facto main financial operators. Publishers typically mitigated these financial risks via portfolio management, which exacerbated the reliance on franchise games (86% of the market).
With the switch to digital distribution platforms and the explosion of mobile gaming, physical logistics needs drastically decreased while the barrier to entry vanished. This commoditization trend effectively shrank the value chain significantly, as shown in the diagram below.
Moreover, technological evolution enabled an increased variety of revenue models:
- Subscription : subscribers pay periodically for access to the game (e.g. World of Warcraft).
- Utility : metered usage, i.e. a pay-as-you-go approach. This model is widely used among MMOs in China.
- Advertisement : sometimes used in combination with other models to enhance revenue. Pure advertisement models are mainly found on mobile.
- Micro-transaction : dominates Eastern markets.
- Licensed : the historical revenue model.
- Free to play : a combination of other revenue models, e.g. advertisement + micro-transactions.
There are two other business models that are still nascent in the gaming industry: service and complementary. And this is where, I believe, Amazon has been aiming all along with its gaming push.
If we look at the value chain above, Amazon's plan seems extremely straightforward. By facilitating production via "free" access to Lumberyard, Amazon eases the emergence of gaming studios. This open platform, with an efficient underlying support system (AWS) and great customer exposure (Twitch.tv), will drive the commoditization of content creators and, by transitivity, of content itself. This approach cuts the grass from under the feet of traditional gaming corporations that relied on high barriers to entry (game engine licensing, distribution networks, backends, etc.).
Looking beyond the pure technological aspect, we can quickly theorize that Amazon might be aiming to pivot the gaming revenue model completely. Amazon could push for a Netflix-like service model. However, there is a greater chance that it will follow the same approach it used for Amazon Video: offering video game access (downloading via an app store, Steam-style, at first; streaming later) free as a complement to Amazon Prime. Prime serves as an incentive and creates opportunities for more lucrative cross-sell and up-sell. The gaming service attracts customers to the Amazon store, where they can purchase content that is not available for free, as well as other products from Amazon. Moreover, the overall business-model effect would be further reinforced through the Twitch.tv broadcast platform.
Obviously, to support and accelerate this model, Amazon will need to start producing its own games. It needs to offer an attractive gaming experience that cannot be easily replicated, while co-opting the rest of the industry at the same time.
One of the key elements determining the pace of change will be the commoditization of the hardware platform and the co-optation of existing ones. If Amazon is able to broker a deal with Microsoft or Sony (the latter is more likely, as Sony already runs its services on AWS), it would gain a foothold in the console gamer market. Meanwhile, by co-opting the "hardcore" PC market, casual TV gaming (Fire TV), and mobile, Amazon should be able to squeeze out the competition. Even if the consoles put up a fight, Amazon would be able to lock in any market gains by enrolling top game studios and capturing gaming franchises.
Last but not least, the value of console hardware is dropping fast while console software value is increasing and already exceeds it. A similar relationship holds for handheld devices, with an even greater gap. Amazon just has to wait for the gap to reach a critical point and then wipe out the nascent video game streaming industry by leveraging its existing expertise from VDI (WorkSpaces). All of this would be a textbook replay of the Amazon Video strategy.
The future of gaming is about to enter a new era. While the AR/VR future is exciting, there is a gaming business-model war looming that will hit well before these technologies reach maturity.
Labels: amazon, aws, business, business intelligence, business model, gaming, lumberyard, mapping, strategy, value chain
Monday, October 20, 2014
(Big) Data is a double-edged sword
Previously, we looked at how not to fall into the mirage of unicorn hunting in your "big data" and why you should not delay too long in adopting data science techniques into your business operations. In this post we will look at why data can be both your best ally and your worst enemy.
Data is a double-edged sword.
The enterprise with the best data gains a significant advantage over its competitors; consequently, enterprises should seek to amass as much data as possible. As we previously learned, an enterprise leveraging its own data gains a competitive edge on the chessboard. However, more often than not, enterprises face a big dilemma: who generates, and consequently who owns, this precious information? Quite often most of it originates from the customer, and in order to alleviate this issue and repatriate the precious data points back to the mothership, enterprises leverage the XaaS model.
These "X" (anything) as a service products benefit consuming companies by lowering the cost of operations and reducing or eliminating CAPEX. To a certain extent they also provide data aggregation, market comparisons, and a range of other useful capabilities. While useful for lowering costs and easing product implementation / service delivery for the deployer, the real beneficiary is in fact the XaaS provider.
The provider can leverage this information by monitoring consumer behaviour and usage of its product in order to identify the spread of new successful innovations. This is basically what Amazon and others have applied quite successfully over the past decade, known as the Innovate - Leverage - Commoditise (ILC) model. In certain extreme cases they enter a market not to make money but simply to collect more data to drive other parts of their business.
As you can see, you have to control which data you need to keep and which you can leak or generate for a third party. Without this understanding, your enterprise might end up being exploited, becoming a mere puppet within a bigger ecosystem you do not own. In fact, more often than not, the service provider is a wolf in sheep's clothing: he presents himself as wanting to 'help out' but, unfortunately, there is less collaboration and more exploitation driving his intentions.
Enterprises therefore face a dilemma: they have to adopt and consume XaaS in order to stay competitive, while trying to avoid leaking their innovations by feeding the ecosystem with more information. One efficient way to counter the latter is to form their own ecosystem and leverage data from it, which in turn enables them to partially work around the enterprise's inherent innovation limitations. However, this is often easier said than done.
The data gathered is as important as the data generated, as either can make or break an enterprise. Creating one's own ecosystem to draw information from will quickly become critical, as an enterprise cannot rely solely on a single source of information to stay competitive.
Maybe what we will see in the near future is the emergence of information exchanges, or even data collectivism among enterprises (a behaviour triggered by a collective prisoner's dilemma), to counterbalance the mastodons of data vacuuming such as Google or Amazon.
Wednesday, September 10, 2014
Links of the day 10 - 09 - 2014
Today's links: #docker , #unikernel , #jit , #virtualization , #IDF14 , Business Model
- Just-In-Time Summoning of Unikernels : spawning cloud OSes (#OSv, #MirageOS, ...) on demand by forwarding DNS. An interesting concept, but probably limited by how fast you can spawn these minions.
- Freemium for Enterprise software : should start-ups and established companies use the gaming industry's freemium model? Not sure... but an interesting perspective.
- Bare-metal, #Docker Containers, and #Virtualization: The Growing Choices for Cloud apps : an #IDF14 presentation which discusses bare metal, containers, and virtualization as different options for running workloads, with interesting comments by Scott Lowe on his blog.
Labels: bare-metal, business model, docker, IDF14, jit, unikernel, virtualization
Friday, January 31, 2014
On avoiding vendor lock-in by leveraging Openstack
One of the main drivers for users to adopt OpenStack is avoiding vendor lock-in (see stats here).
Architecture is rapidly becoming a commodity
Arguably, if you develop your own cloud solution you are locking yourself into yourself. OpenStack in its current state requires so much effort, customization, and maintenance that you end up building your own cage. Managing your maintenance and development costs becomes critical in order to have a good ROI. Unless you plan to resell these services or expose them directly to your customers, you won't benefit from the scaling strategy.
Often you might be better off with vendor lock-in, as you "should" be able to control your costs and ROI more easily. Or better: contract out your OpenStack implementation to a third party and outsource the maintenance and development costs while retaining a certain degree of flexibility.
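The trade-off above can be made concrete with a back-of-the-envelope comparison. Every figure below is invented purely for illustration, not a real quote:

```python
def total_cost(upfront, yearly_opex, years):
    """Naive total cost of ownership over a time horizon (no discounting)."""
    return upfront + yearly_opex * years

# Invented figures for illustration only:
diy = total_cost(upfront=500_000, yearly_opex=300_000, years=3)    # build and run OpenStack in-house
vendor = total_cost(upfront=50_000, yearly_opex=400_000, years=3)  # vendor-managed: low setup, higher fees
print(diy, vendor)  # 1400000 1250000 — lock-in can win on a short horizon
```

The point is not the numbers themselves but that the DiY upfront engineering cost only pays for itself if the lower running costs are amortized over a long enough horizon, or resold at scale.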
Different size, different strategy, different risks
SMB customers can be very aggressive about getting into the cloud, as they have no legacy to deal with, whereas enterprises tend to be very risk-averse. They have to protect what they have and cannot be as aggressive.
As a result, we are seeing a number of mature enterprises looking toward a multi-cloud strategy. Whether that is through multiple platforms or through deploying on an open cloud platform, the outcome they are trying to achieve is the same. Enterprises are increasingly transitioning from general-purpose tools to point solutions as their IT environments become bigger and more complex.
Tuesday, November 19, 2013
OpenStack consumption models: DiY vs Enterprise
Here are some personal comments on OpenStack adoption in the industry and its statistics.
OpenStack statistics:
While the OpenStack consortium is putting great effort into diffusing its statistics to market / broaden its adoption (you can see the latest batch here: http://www.openstack.org/blog/2013/11/openstack-user-survey-statistics-november-2013/), you have to take them with a grain of salt.
One thing that is not immediately visible, but that you can guess, is the consumption model of OpenStack. OpenStack cloud deployments can be broadly classified into two categories: do-it-yourself (DiY), and (semi) contracted out (aka enterprise cloud).
DiY vs Enterprise cloud consumption model
My own home-brew definitions (partially adapted-stolen from some of Simon Wardley's blog posts; check out his great blog):
- DiY : a complete redesign of the infrastructure architecture, most of the time as close as possible to a true public cloud architecture (or an exact copy, in the case of public cloud providers) and/or often to fit very specific needs. These solutions aim for the lowest possible cost by reusing as many open source tools as possible, combined with custom solutions.
- Enterprise: "It's like cloud, but the virtual infrastructure has higher levels of resilience, at a high cost compared to using cheap components in a distributed fashion. The core reason behind Enterprise Cloud is often to avoid any transitional costs of redesigning architecture, i.e. it's principally about reducing disruption risks (including previous investment and political capital) by making the switch to a utility provider relatively painless." I.e. they want to be able to run "legacy" applications alongside new ones without throwing away a lot of their existing investment (hardware, software, skill sets, HR, etc.). These enterprise cloud solutions are sometimes delivered/consumed as (heavily) customized packaged OpenStack solutions from specific vendors: HP, Mirantis, etc. These solutions are slowly making their way into the heavily regulated company segment (where compliance issues require on-premise deployment).
On the other hand, the enterprise deployments tend to be hybrid, with a mix of open source and "enterprise" solutions, sometimes sprinkled with a dose of consulting. End consumers of these clouds are sometimes not aware that the solution they are buying is based on OpenStack (e.g. in PaaS / managed services scenarios). For example, Swisscom's cloud managed services (offering SAP - ERP - BW - etc.) rely on Piston Cloud, which in turn relies on OpenStack.
Often some key components are bought separately for their enterprise features (or performance, or support), e.g. storage: Coraid (pNFS) / Inktank (Ceph), and networking: Arista.
The OpenStack user base is skewed:
Now, why is this distinction relevant to OpenStack statistics? Cost savings, open technology, and avoiding vendor lock-in are the main drivers; this should hint at which of the two categories is predominantly represented among the users surveyed => DiY.
As a result, the stats are quite skewed toward home-grown solutions where most of the cloud stack is built almost exclusively from open source. This is why Ceph, Ubuntu, and other open source solutions heavily dominate the stats, but they do NOT dominate cloud spending (this is reinforced by the fact that most of the surveyed users have a 1-50 node cloud, ~60%+ of the total, and only 20% of the clouds are used for production purposes). I would love to see a Venn diagram of the different dimensions; if anybody has access to the raw survey data I would be happy to draw the diagram for them.
For example, looking at the cloud storage solutions of some enterprise OpenStack deployments I came across, you would see all the usual suspects (EMC - NetApp - HP, etc.). However, they were sometimes relegated to a secondary role while a certain number of newcomers took front stage. These newcomers obviously leverage the emerging-market stage / first-mover advantage to grab market share: Inktank (commercial Ceph version), Coraid (pNFS, popular as it allows a natural transition from a classical NFS setup), and finally the wild card: Intel, which is trying to push its Lustre solution (from its Whamcloud acquisition) into cloud storage. Note: I am not too aware of how GlusterFS is faring.
OpenStack, a complex and rapidly evolving environment:
As you can see, the OpenStack ecosystem is heavily dominated by companies using the DiY model. This is further fuelled by the current state of OpenStack, which is more a toolkit that you need to (heavily) customize to build your own cloud, compared to self-contained solutions like CloudStack, Eucalyptus, Flexiant, etc. It makes me feel like OpenStack is the Debian equivalent for the cloud.
However, its adoption is growing heavily in the corporate world (and as a result the fragmentation risk too): Oracle has its own home-brew version (but contributes nothing back and is completely closed), SAP is starting some efforts, Intel is a big proponent (see IDF 13), and big managed services players are using it as their basic building block (T-Systems, Swisscom, Huawei, IBM, etc.).
A tough ecosystem for startups and established companies alike:
A complex ecosystem, fragmentation, and heavy reliance on custom solutions make it tricky for a company to position itself in this environment. Aim too low and it might be completely ignored due to the heavy DiY proportion, leading to a long struggle (or death) while hoping the crossing of the desert ends soon. Aim too high and it ends up fighting a multitude of OpenStack clones as well as other cloud solutions.
There is no single right decision here; maybe using a coherent layered approach across the OpenStack ecosystem would enable the creation of a consistent revenue stream while limiting the race-to-the-bottom competition (competing on price). I will probably expand on this concept in a follow-up blog post.
PS: as I wrote this post I came across the following Gartner blog post that echoes in some ways my thoughts on the OpenStack ecosystem: http://blogs.gartner.com/alessandro-perilli/what-i-saw-at-the-openstack-summit/. Again, this is to be taken with a grain of salt, as Gartner has a long history of being a big VMware supporter.
Labels: business model, cash, cloud, consumption model, datacenter, DiY, ecosystem, enterprise, eucalyptus, flexiant, money, monetization, openstack, SAP, Swisscom, ubuntu
Tuesday, July 21, 2009
Cloud bursting and the real world.
Cloud bursting is a hot topic, presented as the panacea for companies that want to provide services in-house without overprovisioning for peak loads.
As always, most people talk about the different solutions for how cloud bursting should be or will be done. But they are missing the big picture behind it: cloud is not a technology, it's a business model. Where is the cost model?
With the current state of technology, cloud bursting is highly limited to very specific types of applications. On top of that, the economic model is not really clear and nobody has any real idea of the cost.
One of the main problems is that you pay for your cloud servers on demand, cloud bandwidth, etc. But on top of that you need to pay for the cloud servers to access your data and to synchronize the data back internally (not to mention the security and trust issues).
I think the real question is: how does the cloud bursting cost model stand up against classic peak provisioning, or even better, internal cloud resource repurposing?
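That question can at least be framed as arithmetic. A minimal sketch of the comparison, with every rate and volume invented purely for illustration; the data-synchronization term is the one that is usually forgotten:

```python
def overprovision_cost(peak_servers, server_monthly_cost):
    """Own enough capacity for the yearly peak, all year round."""
    return peak_servers * server_monthly_cost * 12

def burst_cost(base_servers, server_monthly_cost,
               burst_server_hours, cloud_hourly_rate,
               data_synced_gb, per_gb_transfer):
    """Own capacity for the base load; rent the peak and pay to move the data."""
    owned = base_servers * server_monthly_cost * 12
    rented = burst_server_hours * cloud_hourly_rate
    transfer = data_synced_gb * per_gb_transfer      # the often-forgotten synchronization bill
    return owned + rented + transfer

# Invented figures for illustration only:
peak = overprovision_cost(peak_servers=100, server_monthly_cost=200)
burst = burst_cost(base_servers=60, server_monthly_cost=200,
                   burst_server_hours=20_000, cloud_hourly_rate=0.10,
                   data_synced_gb=5_000, per_gb_transfer=0.09)
print(peak, burst)  # 240000 146450.0 — bursting wins here, until sync traffic grows
```

With different assumptions about how much data must cross the boundary, the transfer term can easily dominate, which is exactly why the cost model deserves more attention than the technology.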
Labels: business model, cloud, cost, reality check