Monday, April 30, 2018

Artificial Intelligence Applications Ecosystem: How To Grow Reinforcing Loops?




1. Introduction


Earlier this month the National Academy of Technologies of France (NATF) issued its report on the renewal of artificial intelligence and machine learning. This report is the work of the ICT (Information and Communication Technology) commission and may be seen as a follow-up to its previous report on Big Data. The relatively short 100-page document builds on previous reports such as the France IA report or the White House reports. It is the result of approximately 20 in-depth interviews (two hours each) with various experts from academia, software providers, startups and industry.

AI is a transforming technology that will find its way into most human activities. When writing this report, we tried to avoid what had been previously well covered in the earlier reports – which are summarized on pages 40-45 – and to focus on three questions:

  • What is Artificial Intelligence today, from a practical perspective – from an AI user perspective? Although the “renewal” of interest in AI is clearly a consequence of the extraordinary progress of deep learning, there is more to AI than neural nets. There is also more to AI than the combination of symbolic and connectionist approaches.
  • What recommendations could be made to companies that are urged every day by news articles to jump onto the “AI opportunity” – very often with too much hype about what is feasible today? Many startups that we interviewed feared that the amount of hype would create disillusion (“we bring a breakthrough, but they expect magic”).
  • What could be proposed to public stakeholders to promote the French Artificial Intelligence ecosystem, in the spirit of Emmanuel Macron’s speech? Because of the diverse backgrounds of the NATF members, and because of our focus on technology, software and industrial applications, we have a broad perspective on the AI ecosystem.


We never start writing a report without asking ourselves why yet another report would be needed. In the field of AI, Machine Learning or Big Data, more reports have been written than a bookcase would hold. Our “innovation thesis” – what distinguishes our voice from others – can be roughly summarized as follows:

  • The relative lag that we see in France compared to other countries such as the USA when looking at AI applications is a demand problem, not a supply problem. France has enough scientists, startups and available software to surf the wave of AI and machine learning and leverage the powerful opportunities that have arisen from the spectacular progress in science, technology and practice since 2010.
  • Most successful applications of Artificial Intelligence and Machine Learning are designed as continuous learning and adaptive processes. Most often, success emerges from the continuous improvement that is fed by the iterative collection of feedback data. The “waterfall” cascade “business question -> specification of a smart algorithm -> implementation -> execution” belongs to the past century.
  • There is no successful AI application without software mastery. This is actually a consequence of the previous point as this blog post will make abundantly clear.
  • Similarly, AI success is most often drawn from future data more than from past data. Too much emphasis has been put on “existing data as gold”, while existing data in most companies is often too sparse, too heterogeneous, and lacking the relevant meta-data (annotations). Data collection is a continuous, active process, not something that is done once, to ensure that insights derived from AI are both relevant to the current world and constantly challenged by feedback.


This blog post is intended both as a short summary for English-speaking readers and as a teaser for those who should read the full report. It is organized as follows. Part 2 describes the “toolbox” of available AI techniques and attempts to give a sense of which are better suited to which types of problems. This part represents a large share of the full report, since we found after a few interviews that it was truly necessary to show what AI is from a practical viewpoint. Most existing successful industrial applications of AI today are not deep neural nets … while at the same time getting ready to leverage this “new” technology through massive data collection is urgently necessary. Part 3 details a few recommendations for companies who want to embark on their “AI journey”. From collecting the right amount of data, securing a large amount of computing power, and ensuring a modern and open software environment, to running teams in a “data science lab” mode, these recommendations are focused on favoring the “emergence” of AI success, rather than on a top-down, failure-proof approach. The last part takes a larger perspective on the AI ecosystem and makes some suggestions for improving the health of French AI applications. This last part is a follow-up to a number of brilliant posts about the “Villani report on AI” and the lack of systemic vision, such as Olivier Ezratty’s detailed and excellent review, Philippe Silberzhan’s criticism of the French obsession with “plans” and Nicolas Collin’s superb article about innovation ecosystems.



2. Taking a wide look at AI and ML



The following picture is borrowed from the report (page 52) and tries to separate five categories of AI & ML techniques, sorted along two axes which represent two questions about the problem at hand:
  • Is the question narrowly and precisely defined – such as the recognition of a pattern in a situation – or is it more open-ended?
  • Do we have lots of data available to illustrate the question – and to train the machine learning algorithm – or not?


Sorting the different kinds of AI techniques along these two axes is an obvious simplification, but it helps to start. It also acts as a reminder that there is more than one kind of AI ... and that the benefits of Moore’s Law apply everywhere.








Let us walk quickly through each of the five categories:

  • The “agent / simulation” category contains methods that exploit the available massive computational power to run simulations of smart agents to explore complex problems. Sophisticated interaction issues may be explored through the framework of evolutionary game theory (for instance, GTES: Game-Theoretical Evolutionary Simulation). There exists a very large variety of approaches, which are well suited to the exploration of open questions and complex systems. COSMOTECH is a great illustration of this approach.
  • The “semantic” category is well illustrated by IBM Watson, although Watson is a hybrid AI system that leverages many techniques. The use of semantic networks, ontologies and knowledge management techniques has allowed “robot writers” to become slowly but surely better at their craft, as illustrated by their use in a few news articles. These techniques are well suited to exploring very large amounts of data with an open-ended question, such as “please, make me a summary”.
  • The middle category, whose applicability spans a large scope, is made of “classical” machine learning and data mining algorithms, which are used today in most industrial applications (from predictive maintenance to fraud detection). This category is in itself a “toolbox” with many techniques that range from unsupervised clustering (very useful for exploration) to specific pattern recognition. Getting the best of this toolbox may be achieved through a “meta AI” that helps to select and parameterize the best algorithm or combination thereof (see the sketch after this list). Einstein from Salesforce or TellMePlus are great illustrations of these advanced “meta” AI techniques.
  • The fourth category is a combination of GOFAI (Good Old-Fashioned AI), rule-based systems, constraint and generic problem solvers, theorem provers and techniques from Operations Research and Natural Language Processing. Here also, the toolbox is very large; putting these techniques into an AI landscape usually irritates scientists from neighboring domains, but the truth is that the frontiers are very porous. The interest of these approaches is that they do not require a lot of training data, as long as the question is well defined. Although “expert system” sounds obsolete, many configuration and industrial monitoring systems are based on rules and symbolic logic.
  • The fifth category is deep learning, that is, the use of neural nets with many layers to solve recognition problems. Deep neural nets are extremely good at complex pattern recognition problems such as speech recognition or machine vision. On the other hand, they require massive amounts of qualified data to train. DNNs are the de facto standard method today for perception problems, but their applicability is much larger (portfolio management, geology analysis, predictive maintenance, …) when the question is clear and the data is available.
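
To make the “meta AI” idea of the middle category concrete, here is a minimal sketch of automatic algorithm selection using scikit-learn’s model-selection utilities. It only illustrates the principle – searching over both the algorithm and its parameters on a synthetic data set – and is not a description of how Einstein or TellMePlus actually work.

```python
# Minimal sketch: "meta" selection of a classical ML algorithm and its parameters.
# Illustrative only; assumes scikit-learn and a synthetic data set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# The search space covers both the choice of algorithm and its parameterization.
param_grid = [
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=0)], "clf__n_estimators": [50, 200]},
]
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```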


This is a somewhat naïve classification, but it allows for a complete “bird’s-eye view”. A more up-to-date view, such as PwC’s list of Top 10 AI Technology Trends for 2018, is necessary to better understand where the focus is today, but such lists are usually narrower and miss the contribution of other fields. A short description of deep neural nets (DNN) and convolutional neural nets (CNN) is proposed in our report, but this is a fast-moving field and recent online articles such as “Understanding Capsule Networks” have an edge over written reports. There are still many open issues, recalled in the report, such as explainability (mostly for deep learning – which is one of the reasons the other techniques are often preferred), the robustness of the trained algorithm outside its training set, the lack of “common sense” (a strength of human intelligence versus the machine) and the ability to learn from small data sets.

Even if deep neural nets may not deserve to be called a “revolution”, considering that they have been around for a while, the spectacular progress seen since 2010 is definitely a game changer. From AlphaGo and machine vision to automatic speech recognition, the report recalls a few of the dramatic improvements that the combination of massive computing power, very large annotated data sets and algorithmic improvements has produced. This is “almost a revolution” for two reasons. First, although they require massive amounts of training data and a precise question to work on (of the classification kind), DNNs are very broad in their applicability. With the wide availability of libraries such as TensorFlow, DNNs have become a new, general-purpose tool in the toolbox that may be applied to all types of applied business problems. Second, in a “system of systems” approach, DNNs provide “perception” to smart systems, from machine vision to speech recognition, which may be used as input for other techniques. This is obviously useful when explainability is required. In a biomimetic way, DNNs provide the low-level cognition methods (the recognition of low-level patterns) while more classical methods, ranging from rules to decision trees, propose an explainable diagnosis. Similarly, semantic techniques and DNNs for speech recognition work well together. It is clear that the “revolution” caused by the progress of DNNs is still in front of us, especially because machine vision will become ubiquitous with the combination of cheap cameras and neuromorphic chipsets.
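
To illustrate how low the entry barrier has become, here is a minimal sketch of a small dense network in TensorFlow/Keras trained on a synthetic classification task. It is only a toy illustration of the “general-purpose tool” point; real perception problems call for convolutional or more specialized architectures and far larger annotated data sets.

```python
# Minimal sketch: a small feed-forward network with TensorFlow/Keras on synthetic data.
# Illustrative only; real perception tasks need bigger models and real annotated data.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 32).astype("float32")   # stand-in for real features
y = (X.sum(axis=1) > 16.0).astype("int32")       # stand-in for real labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))           # [loss, accuracy]
```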

The beauty of AI, and what makes it hard to understand from the outside, is the richness and complexity of the techniques that allow the customization and combination of the approaches presented in the previous picture. There is an emerging consensus that the next generation of AI breakthroughs will come from the combination of various techniques. This is already the case today: many of the extraordinary systems, such as Watson, AlphaGo or Todai Robot, use a number of techniques and meta (hybridization) techniques. Here is a short overview to appreciate the richness of possible hybridizations:

  • Reinforcement Learning has been around for a long time; it is one of the oldest AI techniques, based on a continuous loop of incremental changes directed by a reward function (see the sketch after this list). Libratus, the extraordinary AI poker player, is based on the combination of smart reinforcement learning and non-connectionist machine learning.
  • Randomization (such as Monte-Carlo approaches) and massive agent communities are borrowed from simulation methods to help explore large search spaces. They are often combined with “generation” methods, which are used to randomly explore parameterized sets of models, to produce additional data sets when not enough data is available, to explore meta-heuristic parameter sets (the smart “AI factories” such as Einstein or TellMePlus are examples of such approaches) or to increase robustness, as in the “Generative Adversarial Networks” example.
  • Evolutionary Game Theory brings game-theoretical equilibria such as the Nash equilibrium into the iterative reinforcement loop (searching for the fixed point of what game theorists call the “best response”). Evolutionary game theory is great for simulating a smart group of actors that interact with each other (versus a unique single “smart” system).
  • Modular learning is based on the idea that some low-level behavior may be learned on one data set and transferred to another learning system. In the world of neural nets, we speak of “transfer learning”. Modular learning is related to “systems of systems” architecture, which is in itself a complex topic that will deserve a complete blog post. I refer, as an example, to the event-driven architecture proposed in this older post, which is also related to biomimicry (using not only the cortex, but the full cognitive/perception/emotion human system as an inspiration).
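
As announced in the first item of the list, here is a minimal tabular Q-learning sketch on a toy “chain” environment. It is a generic textbook illustration of the reward-driven incremental-update loop, not a description of how Libratus or any production system works.

```python
# Minimal sketch: tabular Q-learning on a 5-state chain (toy illustration only).
# The agent learns, through reward feedback, that moving right reaches the goal.
import random

n_states, n_actions = 5, 2                 # actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0   # reward only at the right end
    return nxt, reward

for _ in range(500):                       # episodes
    state = 0
    for _ in range(20):                    # steps per episode
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # Incremental change directed by the reward function (the learning loop).
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)   # Q-values end up favoring action 1 (move right) in every state
```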


What is not covered in the report, due to lack of time, is the “meta-meta level”: how the previous list of meta-techniques can be combined, as they often are in truly advanced systems. This shows that the first figure is naïve by construction and that practice really matters. Knowing the list of primitive techniques and algorithms is not enough to design a “smart” system that delivers value in a robust manner. There is no surprise there: most of these meta/combination methods have been around for a long time and have been used, for instance, in operations research application development.

 

3. Growing Continuous Learning Data Processing Flows


The first and main message that the report conveys to aspiring AI companies is to build their “training data set” strategy. There is nothing original here – this is the advice that comes from all kinds of experts (see the report) – but there is more to it than it may seem.

  • The foundation of an AI strategy, as of a digital strategy, is to collect, understand and distribute the data that is available to a company. Collecting data into data lakes requires a common shared business data model (data models are the cornerstone of digital transformation, as well as of collaborative and smart systems); collating data into training data sets requires meta-data that captures business know-how as much as possible. I refer the reader to the “Mathematical Corporation” book, which is full of great examples of companies that have started their AI journey with a data collection strategy.
  • Although it is not necessary to experiment with deep learning at first (simpler methods often deliver great results and make for an easier first step in the learning curve), each company should become ready to leverage the transforming power of deep neural nets, especially in the machine vision field. As a consequence, one must get ready for collecting massive data sets (millions to billions of samples) and start to collect images and video in an industrial manner.
  • One must think about AI applications as processes that are built from data, meta-data, algorithms, meta-heuristics and training protocols in a continuous iterative improvement cycle. For most applications, data needs to be collected continuously, both to get fresh data that reflects the present environment (versus the past) and to gather feedback data to evaluate the relevance of the training protocol. Autonomous robots are perfect examples of this continuous data flow acquisition and processing cycle.
  • The most critical expertise that companies will develop over time lies in the annotated data and the training protocols. This is where most of the business knowledge and expertise – hence the differentiation – will crystallize. Algorithms will often be derived semi-automatically from the training sets (cf. the “data is the new code” motto from our previous report).



The second message of the report is that the time to act is now. As shown by the examples presented in “The Mathematical Corporation” or those found in our report, today’s “AI toolbox” is already effective at solving a very large number of business problems and at creating new opportunities.

  • As stated in the introduction, Artificial Intelligence is a transforming technology. The journey of collecting, analyzing, and acting on massive amounts of business data tends to transform business processes and to produce new business insights and competitive IP.
  • AI is a practice; it takes time to grow. The good news is that AI expertise about your business domain is a differentiating advantage; the bad news is that it takes time, and it is hard to catch up with your competitors if you fall behind (because of the systemic effect of the loop that will be shown in the next section).
  • Computing power plays a critical role in the speed at which you may learn from data. In the past two years, we have collected a large body of evidence from companies that have accelerated their learning by orders of magnitude when switching from regular servers to massive specialized resources such as GPUs or TPUs (ASICs).
  • Everything that is needed is already there. Most algorithms are easily available in open-source libraries. Many AI “workbench” solutions are available that facilitate the automatic selection of learning algorithms (to Einstein and TellMePlus we could add Holmes or Solidware, for instance).



The following picture – taken from a presentation at MEDEF last year – illustrates the conditions that need to be developed to be “AI-ready”. In many companies, the right question is “what should we stop doing that prevents our teams from developing AI?” rather than “what should we do/add/buy to develop our AI strategy?”. This picture acts as a “Maslow pyramid”: the foundation is the understanding that creating value from these approaches is an emergent process, which requires “hedging one’s bets” and relying on empowered, distributed and autonomous teams. The second step is, as we just saw, to collect data and to grant access widely, modulo privacy and IP constraints. The third step is to give the teams access to the relevant software environment: as up-to-date as possible, because algorithmic innovation is coming from the outside in a continuous flow, and with enough computing power to learn fast. Last, these teams need to operate with a “data lab culture”, which is a combination of freedom and curiosity (because opportunities may be hidden from first sight) together with scientific rigor and skepticism. False positives, spurious correlations, non-robust classifiers … abound in the world of machine learning, especially when not enough data is available.





The third message that the report addresses to companies is to think of AI in terms of flows (of enriched data) and (business) processes, not of technologies or value-added functions.

  • Although this is not a universal pattern, most advanced AI systems (from Amazon’s or Netflix’s recommendation engines to Criteo’s or Google AdSense’s ad placement engines, through Facebook’s or Twitter’s content algorithms) are built through the continuous processing of huge flows of data as well as the continuous improvement of algorithms with millions of parameters. Distributed systems engineering and large data flow engineering are two critical skill domains for successfully bringing AI to your company.
  • One must walk before running: there is a learning curve that applies to many dimensions. For instance, the path to complex algorithms must be taken step by step. First you play with simple libraries, then you associate with a local expert (a university faculty member, for instance). Then you graduate to more complex solutions and you organize a Kaggle competition or a local hackathon to look for fresh ideas. Once you have mastered these steps, you are in a better position to hire a senior expert at the international level, provided that your problem is worth it.
  • “Universal AI” is not ready: AI for your specific domain is something that needs to be grown, not something that can be bought in a “ready for use” state. This is why most of the experts that we interviewed were skeptical about the feasibility of “AI as a service” today (with the exception of lower-level components such as specific pattern recognition). Today’s (weak) AI is diversified to achieve the best results. The great “Machine Learning for Quantified Self” book is a good illustration: none of the techniques presented in this book are unique to Quantified Self, but the domain specificity (short time series) means that some classical techniques are better suited than others.

  • The numerous examples from the report or “The Mathematical Corporation” show that data collection must expand beyond the borders of the company, hence one must think of oneself as a “data exchange platform”. This is another reason why software culture matters in AI: open data systems have their own ecosystems, mindsets and software culture.




4. How to Stimulate a Demand-Based Application Ecosystem



The following figure is taken from the press conference at which we announced the report. It illustrates the concept of “multiple AI ecosystems” around the iterative process that we have just mentioned. The process is the backbone of the picture: pick an algorithm, apply it to a data set to develop a “smart” system, deliver value/services from running this new system, develop its usage and collect the resulting data that will enrich the current data set (either to continuously enrich the dataset or to validate/calibrate/improve the precision/value of the current system) – hence the loop qualified as “iterative development of AI practice”. Each of the five steps comes with its own challenges, some of which are indicated on the figure with an attached bubble. Each step may be seen as an ecosystem with its players, its dominant platforms, its state-of-the-art practices and its competitive geography.






This figure, as well as a number of twin illustrations that may be found on Twitter, illustrates the competitive state of the French ecosystem:

  • The “upstream” science & algorithms part of the ecosystem is doing pretty well. French scientists are sought after, their open-source contributions are widespread and recognized, and French AI startups are numerous.
  • The “system engineering” step is less favorable to France, since most major players are American (with the exception of China behind its closed technology borders). Because of the massive lead of the US in the digital world, practical expertise about large-scale distributed systems is more common in the US than in France. Trendsetting techniques in system engineering come from the US, where they are better appreciated (cf. the success of Google’s “Site Reliability Engineering” book).
  • The “service ecosystem” reflects the strength of demand, that is, the desire of CEOs and executive committees to leverage exponential technologies to transform their organizations. I am borrowing the wording from “Exponential Organizations” on purpose: there is a clear difference in technology-savviness and risk appetite across the ocean, as recalled by most technology suppliers (large and small) that we interviewed.
  • The “service usage” ecosystem shows the disadvantage of Europe, with its multiple languages and cultures, compared to continent-states such as the US or China. Trust is a major component of the willingness to try new digital services. We found that France is not really different from other European countries or even from the US, but is lagging behind Asian countries such as South Korea or China.
  • Data collection is harder in Europe than elsewhere, mostly because of stricter regulation and heavier bureaucracy. It is fashionable to see GDPR as a chance for Europe, but we believe that GDPR should be softened with application rules that support experimentation.
  • Last, although access to computing technology is ubiquitous, American companies tend to have a small edge as far as accessing large amounts of specialized resources is concerned.


One could argue with each of these assessments, finding them biased or harsh, but what matters is the systemic loop structure. To see where France will stand compared to the US or China five years from now, one must assess our ability to run the cycle many, many times for each “AI + X” domain. Hence a small disadvantage gets amplified many times (which is also why companies that started earlier and have accumulated data and practice tend to learn faster than newcomers).

Stimulating the bottom-up growth of an emerging ecosystem is not easy; it is much more difficult than promoting a top-down strategic plan. The report makes a few proposals, among which I would like to emphasize the following four:
  • Technical literature has been declining because of the change in the Internet business model, and this is especially acute for the French technical press, which is close to extinction. These communication channels used to play a key role in helping small innovative players establish their credibility with large corporate customers. Through hackathons, contests and challenges, technical evaluation, public spending and large-scale flagship projects, etc., public stakeholders must invest to help small but excellent technical players make their voices heard.
  • Stimulating the “pull”, i.e., the demand for AI and machine learning solutions, is itself a difficult task, but it is not impossible. The communication efforts of public stakeholders should focus more on successful applications as opposed to the fascinating successes of technology providers.
  • The NATF proposes to facilitate the setting up of Experimentation Centers associated with “AI + X” domains, through the creation of critical masses of data, talent, computing power and practices. An experimentation center would be a joint effort by different actors from the same business domain – in the spirit of an IRT – to build a platform where new algorithms could be tested against existing data sets and vice versa.
  • Last, following the recommendation of the CGEIET report, the NATF strongly supports the certification of data analytics processes, where the emphasis is on end-to-end certification from collection to continuous usage and improvement.


It should also be said that one will find in the NATF report a summary of the obviously relevant recommendations made in previous reports, such as training or better support for research and science. I strongly recommend the UK report written by Wendy Hall and Jérôme Pesenti, which addresses these topics very thoroughly.

5. Conclusion


The following is another “ecosystem schema” that I presented at the press conference earlier this month to position the NATF report with respect to the large crowd of reports and public figures’ opinions about Artificial Intelligence. The structure is the same as the previous cycle, but one may see that major ecosystem players have been spelled out. The short story is that the upstream ecosystem obviously deserves some attention, but the numerous downstream ecosystems are where the battle should be fought, recognizing that France is really too late to regain a dominant position in the software platform world.



To summarize, here is our contribution to President Emmanuel Macron’s wonderful speech about Artificial Intelligence:
  • Don’t pick your battles too firmly in an emergent battlefield; promote and stimulate the large number of “AI + X” ecosystems. It is hard to guess where France will be best positioned five years from now.
  • Follow Francois Julien’s recommendations on how to effectively promote an emergent ecosystem: it is about growth (agriculture) more than selection (hunting). What is required is stimulation more than action.
  • Focus on demand – how to encourage French companies to actively embrace the power of AI in their business activities – more than on supply. Successful medium- and large-size AI companies will emerge in France if the local market exists.


In the same spirit, here is what the report says to companies:
  • Start collecting, sorting, enriching and thinking hard about your data,
  • “Let your teams work” from a mindset and a working environment perspective,
  • Think long-term, think “platform” and think about data flows.


Let me conclude with a disclaimer: the content of the report is a group effort that reflects as faithfully as possible the wisdom shared by the experts who were interviewed during the past two years. This blog summary is tinted by the author’s own experience with AI over the past three decades.



Sunday, December 3, 2017

Sustainable Information Systems Development and Technical Debt




1. Introduction


This short blog post revisits the concept of sustainable IS development in the digital age, that is, in the age of constant change. Sustainable development for information systems is about making choices so that building the capability to deliver the required services today does not hinder the capacity to deliver, a few years later, the services that will be needed then. It is a matter of IS governance and architecture. Each year money is allocated to building new systems, updating others, replacing some and removing others. Sustainable development is about making sure that the ratios between these categories are sustainable in the long run. It is a business necessity, not an IS decision; sustainable IS development is a classical short-term versus long-term arbitrage.

The initial vision of sustainable IS development comes from a financial view of IS. In a world of constant change, the weight of complexity becomes impossible to miss. This is why the concept of technical debt has made such a strong comeback. “Technical debt” measures the time and effort necessary to bring a system back to a “standard state”, ready for change or upgrade. In a world with little change, technical debt may be left unclaimed, but the intensification of change makes the debt more visible. The “debt” metaphor carries the idea of interest that accumulates over time: the cost and effort of clearing the debt increase as the initial skills that created the system are forgotten, as the aging underlying technologies become more expensive to maintain, and as the growing complexity makes each additional integration longer, more difficult, and more expensive.

From a practical point of view, complexity is the marginal cost of integration. Complexity is what happens inside the information system, but its most direct consequence is the impact on cost when one needs to change or extend the system. If you are a startup or if you begin a new isolated system, there is no such complexity charge. On the other hand, the cost of change for a legacy system is multiplied by the weight of integration complexity. Complexity may be measured as the ratio of the total effort of building and integrating a new function into an information system divided by the cost of developing this new function itself.
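
Written as a formula (the notation below is mine, a direct restatement of the definition above):

```latex
% Complexity as the marginal cost of integration (illustrative notation)
\text{complexity}(IS) \;=\;
  \frac{\text{cost of building and integrating a new function } f \text{ into } IS}
       {\text{cost of developing } f \text{ in isolation}}
```

A ratio close to 1 corresponds to the startup or greenfield case; a much larger ratio is the signature of a heavy legacy.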

A dynamic view of sustainable IS development, therefore, needs to take complexity into account. Sustainable development needs to keep the potential for innovation intact for the next generations (in the software sense, from months to years). Complexity and change do not invalidate the previous financial analysis of sustainable development based on refresh rate and obsolescence cost; they make it more pressing, because the financial impact of technical debt grows as the rate of change grows. Put differently, a static sustainable development model sees change as a necessity to reduce costs, whereas a dynamic model sees the reduction of complexity as a necessity to adapt to external change.

The post is organized as follows. The next section recalls some of the key ideas of “Sustainable Information System Development”. The initial SD framework is drawn from a model of IT costs that looks at the cumulative effects of system aging. The purpose of SD is to derive governance rules that keep the budget allocation stable in the future and balanced between the maintenance of the current system and the need to adapt to new business requirements. Section 3 provides a short introduction to the concept of technical debt. “Technical debt” measures the effort to return to a “ready for change” state and is often measured in days or months. Time is a very practical unit, but it does not make it easier to master technical debt, especially when complexity is involved. Section 4 adds the concept of complexity to the sustainable IS model. Cleaning technical debt is defined both as returning to a “standard” – often defined by rules and best practices – and as reducing integration complexity. There is no such thing as a standard here: it is the essence of IS architecture to define a modular target that minimizes integration complexity.



2. Sustainable IS Development


Sustainable IS Development is a metaphor that borrows from the now universal concept of “sustainable development”:

  • The following definition was proposed by the Brundtland commission: "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs."
  • This is a great pattern for defining sustainable development for information systems: “Sustainable IS development is how to build an information system that delivers the services that are required by the business today without compromising the ability to meet, in a few years, the needs of future business managers.”

This definition may be interpreted at the budgeting level, which makes it a governance framework. This is the basis of what I did 10 years ago – cf. this book or the course about Information Systems that I gave at Polytechnique. It may also be interpreted at the architecture level, as shown in the book “Sustainable IT Architecture: The Progressive Way of Overhauling Information Systems with SOA” by Pierre Bonnet, Jean-Michel Detavernier, Dominique Vauquier, Jérôme Boyer and Erik Steinholtz. In a previous post almost 10 years ago, I proposed a simple IS cost model to derive sustainable IS development as an equation (ISSD). I will not follow that route today, but here is a short summary of the three most important consequences of this model.

The ISSD model is based on a simple view of IS, seen as a collection of software assets with their lifecycles and operation costs (in the tradition of Keen). Governance is defined as an arbitrage between:

  • Adding new components to the system, which is the easiest way to add new functions, modulo the complexity of integration.
  • Maintaining and upgrading existing components.
  • Replacing components with newer versions.
  • Removing components – the term “applicative euthanasia” may be used to emphasize that old legacy applications are never “dead”.

The ISSD model is a set of equations derived from the cost model expressing that the allocation ratios between these four categories remain stable. The main goal (hence the name “sustainable”) is to avoid a situation where consuming too much money today on “additions” makes it impossible to evolve the information system tomorrow. This simple sustainability model (ISSD) shows that the ability to innovate (grow the functional scope) depends on the ability to lower the run cost at “iso-scope”, which requires constant refreshing.

At the core of the SD model is the fact that old systems become expensive to run. This has been proven in many ways:

  • Maintenance & licence fees grow as systems get older, because of the cumulative effect of technical debt on the software provider side, and because at some point there are fewer customers to share the maintenance cost.
  • Older systems become more expensive to operate after a while, when their reliability declines. There is a famous “bathtub” curve that shows the cost of operations as a function of age: while maturation helps at first (fewer bugs), an opposite aging factor is at work later.
  • The relative cost of operations grows (compared to newer systems) because legacy technology and architecture do not benefit from the constant flow of improvements, especially as far as automation and monitoring are concerned. Think of this as the cost of the missed opportunity to leverage good trends.

The good practice from the ISSD model is to keep the refresh rate (which is the inverse of the average application age) high enough to benefit from HW/SW productivity gains and to accommodate the necessary scope increase. Remember that “software is eating the world”, hence the sustainable vision of IS is not one of a stable functional scope. Sustainability must be designed for growth.

The average age of your apps has a direct impact on the build/run ratio. This is a less immediate consequence of the ISSD model, but it says that you cannot hope to change the B/R ratio without changing the age of your apps, hence without the proper governance (whatever technology vendors may tell you about their solution to improve this ratio). This is another way of stating what was said in the introduction: sustainable IS development is a business goal and it is a matter of business governance.
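
The ISSD equations themselves are in the earlier post; the snippet below is only a toy numerical sketch, under assumed numbers (a fixed total budget and a run cost that grows linearly with application age), of why an aging application portfolio mechanically squeezes the share of budget left for “build”.

```python
# Toy sketch (not the ISSD model itself): with a fixed IT budget, run costs that
# grow with application age mechanically squeeze the budget available for "build".
TOTAL_BUDGET = 100.0        # assumed yearly IT budget (arbitrary units)
N_APPS = 20                 # assumed portfolio size
BASE_RUN_COST = 2.0         # assumed run cost of a freshly rebuilt application
AGING_COST_PER_YEAR = 0.3   # assumed extra run cost per year of application age

def build_share(average_age):
    run = N_APPS * (BASE_RUN_COST + AGING_COST_PER_YEAR * average_age)
    return max(TOTAL_BUDGET - run, 0.0) / TOTAL_BUDGET

for age in (3, 5, 8, 10):
    print(f"average application age {age:>2} years -> budget left for build: {build_share(age):.0%}")
```

Refreshing applications (lowering the average age) is what restores the build share, which is exactly the point of the paragraph above.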


3. Technical Debt



The concept of technical debt is attributed to Ward Cunningham, even though earlier references to similar ideas are easy to point out. A debt is something that you carry around, with the option to pay it off, which requires money (effort), or to pay interest until you can finally pay it off. Similarly, the mediocre software architecture that results either from too many iterative cycles or from shortcuts often labelled “quick and dirty” is a weight that you can either decide to carry (pay the interest: accept spending more effort and money to keep the system alive) or pay off (spend the time and effort to “refactor” the code to a better state). For a great introduction to the concept of technical debt, I suggest reading “Introduction to the Technical Debt Concern” by Jean-Louis Letouzey and Declan Whelan.

The key insight about technical debt is that it is expressed against the need for change. I borrow here a quote from Ward Cunningham: “We can say that the code is of high quality when productivity remains high in the presence of change in team and goals.” The debt is measured against an ideal standard of software development: “When taking short cuts and delivering code that is not quite right for the programming task of the moment, a development team incurs Technical Debt. This debt decreases productivity. This loss of productivity is the interest of the Technical Debt”. The most common way is to measure TD with time: the time it would take to bring the piece of code/software to the “standards of the day” for adding or integrating new features.
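
The metaphor can be written down directly (again with my own, purely illustrative notation): carrying the debt costs an “interest” on every change, while paying it off is a one-time remediation effort.

```latex
% P = pay-off (remediation) effort, i = extra effort per change caused by the debt,
% n = number of changes over the period considered
\text{cost of carrying the debt} \;=\; n \cdot i
\qquad \text{vs.} \qquad
\text{cost of paying it off} \;=\; P
```

Refactoring pays off as soon as n·i exceeds P – and, as the introduction recalled, both the interest and the pay-off effort tend to grow over time.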

The concept of “interest” associated with Technical Debt is anything but theoretical. There is a true cost to keeping technical debt in your code. Although TD is subjective by nature (it measures an existing system versus a theoretical state), most of the divergences that qualify as “technical debt” have a well-documented cost associated with them. You may get a first view of this by reading Samuel Mullen’s paper “The High Cost of Technical Debt”. Among many factors, Samuel Mullen refers to maintenance costs, support costs or labor costs. One would find similar results in any of the older cost models from the 80s such as COCOMO II. Another interesting reference is “Estimating the size, cost, and types of Technical Debt” by Bill Curtis, Jay Sappidi and Alexandra Szynkarski. This CAST study focuses on 5 “health factors” (which define the “desired standard”) with the following associated weights:

  • Robustness (18%)
  • Efficiency (5%)
  • Security (7%)
  • Transferability (40%)
  •  Changeability (30%)

Here, TD is the cost to return to standards along these 5 dimensions, and the weight is the average contribution of each dimension to this debt. Other articles point out different costs linked with technical debt, such as the increased risk of failure and the higher rate of errors when the system evolves.

Complexity is another form of waste that accumulates as time passes and contributes to the technical debt. This was expressed a long time ago by Meir Lehman (a quote taken from the CAST paper): “as a system evolves, its complexity increases unless work is done to maintain or reduce it”. Complexity-driven technical debt is tricky because the “ideal state” that could be used as a standard is difficult to define. However, there is no doubt that iterative (one short step at a time) and reactive (each step as a reaction to the environment) development tends to produce unnecessary complexity over time. Agile and modern software development methods have replaced architecture “targets” with a set of “patterns” because targets tend to move constantly, but this makes it more likely to accumulate technical debt while chasing a moving target. Agile development is by essence an iterative approach that creates complexity and requires constant care of the technical debt through refactoring.


4. The Inertia of Complexity


In the introduction, I proposed to look at complexity as the marginal cost of integration because it is a clear way to characterize the technical debt produced by complexity. Let us illustrate this through a fictional example. We have a typical company, and this is the time of the year when the “roadmap” and the workload (for the next months) have been arbitrated and organized (irrespective of an agile or waterfall model, this situation occurs anyway). Here comes a new “high priority” project. As the IS manager you would like either to make substitutions in the roadmap or to let backlog priority work its magic, but your main stakeholders ask: “let’s keep this simple, how much money do you need to simply do this ‘on top’ of everything else?”. We all know that this is anything but simple: depending on the complexity debt, the cost may vary from one to ten, or it may simply be impossible. We are back to this “integration cost ratio” (or overweight) that may be close to 1 for new projects and young organizations while getting extremely high for legacy systems. Moreover, adding money does not solve all the issues, since the skills needed for the legacy integration may be (very) scarce, or the update roadmap of these legacy components may be dangerously close to saturation (the absence of modularity, which is a common signature of legacy systems, may make the analysis of “impact” – how to integrate a new feature – much more difficult than the development itself). This paradox is explained in more detail in my second book.

A great tool to model IS complexity is Euclidean scalar complexity, because of its scale invariance. Scalar complexity works well to represent both the topology of the integration architecture and the positive de-coupling effects of API and service abstraction. Whereas a simple model for scalar complexity only looks at components and flows, a service-abstraction model adds the concept of API, or encapsulation: smaller nodes between components that represent what is exposed by one component to others. The scalar complexity of an information system represents an “interaction potential” (a static maximal measure), but it is straightforward to derive a dynamic formula if we make some assumptions about the typical refresh rate of each component.
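
The exact definition of scalar complexity is given in the author’s earlier work; the snippet below is only an assumed, simplified instantiation of a “components and flows” measure (square root of the summed pairwise coupling, normalized by total system size), meant to show the kind of computation involved and why inserting API nodes lowers the result.

```python
# Toy sketch of a "components and flows" complexity measure, in the spirit of the
# scalar complexity mentioned above. The formula below is an assumed simplification,
# not the exact definition from the author's earlier work.
from math import sqrt

sizes = {"CRM": 5.0, "Billing": 8.0, "Catalog": 3.0, "Portal": 4.0}   # component sizes
flows = [("CRM", "Billing"), ("CRM", "Catalog"), ("Portal", "CRM"), ("Portal", "Catalog")]

def toy_scalar_complexity(sizes, flows):
    total = sum(sizes.values())
    coupling = sum(sizes[a] * sizes[b] for a, b in flows)
    return sqrt(coupling) / total          # normalization keeps the measure scale-free

print(f"toy complexity: {toy_scalar_complexity(sizes, flows):.3f}")
# Exposing a component through a small API node replaces each a*b coupling term with
# two smaller ones (a*api + api*b), which is how service abstraction lowers the measure.
```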

This model of the expected cost of “refreshing” the information system is useful because, indeed, there is a constant flow of change. One of the most common excuses for keeping legacy systems alive is a myopic vision of their operation costs, which are often low compared to renewal costs. The better reason for getting rid of this technical debt is the integration complexity that the presence of these legacy components adds to the system. However, this requires exhibiting a simple yet convincing cost model that transforms this extra complexity into additional costs. Therefore, I will be back in another post with this idea of scalar complexity modelling of integration costs.

Meanwhile, the advice that can be given to properly manage this form of technical debt is to be aware of the complexity through careful inventory and mapping, then to strive for a modular architecture (there is a form of circularity here, since modularity is defined as a way to contain the adverse effects of change – cf. the lecture that I gave at Polytechnique on this topic). Defining a modular information system is too large a topic for one blog post, although defining a common and shared business data model, extracting the business process logic from the applications, crafting a service-oriented architecture through APIs, and developing autonomous micro-services are some of the techniques that come to mind (as one may find in my first book).

A much more recent suggested reading is the article “Managing Technical Debt” by Carl Tashian. This is a great article about how to reduce complexity-driven technical debt. Here are the key suggestions that he makes:

  • Keep points of flexibility. This is at the heart of a good service-oriented architecture and a foundation for microservices. However, as Tashian rightly points out, microservices are not enough.
  • Refactor for velocity. Take this two ways: refactor to make your next project easier to develop, but also to constantly improve performance. This is a great insight: when refactoring, you have the benefit of the performance monitoring of your existing system. It is easier to improve performance while refactoring than in crisis mode, when a run-time problem has already occurred.
  • Keep dependencies to the minimum. Here we see once again the constant search for modularity.
  • Prune the low performers (in usage). A simple but hard-to-follow piece of advice, intended to reduce the total weight.
  • Build with test and code reviews. A great recommendation, that is outside the scope of this post, but obviously most relevant.
  •  Reinforce the IS & software engineering culture.



5. Conclusion


This post is the hybrid combination of two previously well-known ideas:
  • The sustainable management of the IT budget should be concerned with application age, lifecycle and refresh rate. To paraphrase an old advertisement for batteries: “IT technology progress – such as Moore’s law – is only useful when used”.
  • In the digital world, that is, the world of fast refresh rates, the inertia of the system should be kept minimal. This is why the preferred solution is always “no code” (write as little as you can), through a strict focus on value, through SaaS (letting others worry about constant change), through abstraction (write as few lines of code as possible), etc.

The resulting combination states that IT governance must address IS complexity and its impact on both costs and agility in a scenario of constant change. Constant refactoring, both at the local (component) level and at the global Enterprise Architecture level (the IS as a whole), should be a guiding factor in the resource allocation process. Sustainable IS development is a business decision, which requires the ability to assess the present and future cost of IS operations, lifecycle and integration.

Because the digital world is exposed to more variability (e.g., of end-customer usage) and a higher rate of change, best practices such as those reported in Octo’s book “The Web Giants” are indeed geared towards minimizing inertia and maximizing velocity and agility. The exceptional focus of advanced software companies to keep their code minimal, elegant and modular is a direct consequence of sustainable development requirements.


This post was kept non-technical, without equations or models. It is very tempting, though, to leverage the previous work on scalar complexity and sustainability models to formalize the concept of complexity debt. Scalar complexity is a simple way to assess the complexity of an architecture through its graphic representation of boxes (components) and links (interfaces). To assess the dynamic “dimension” of the technical debt associated with complexity, one needs a model of constant change. This way, the metaphor of weight (the inertia of something ponderous like a boat or a whale) may be replaced with a metaphor that captures the level of interaction between moving components.

Devising a proper metaphor is important since the “school of fish versus whale” metaphor is often used and liked by business managers. Complexity debt adds a twist about the “scalar complexity of the school of fish”: it needs to be kept to a minimum to preserve the agility of the actual school of fish (for the metaphor to work). I will conclude with this beautiful observation from complex biological system theory: the behavior of a school of fish or a flock of birds emerges from local behaviors – fish and birds only interact with their local neighbors (in other words, the scalar complexity is low). Thus the “school of fish” metaphor is actually a great one for designing sustainable information systems.






Saturday, September 30, 2017

Hacking Growth, when Lean Management meets Digital Software



1. Introduction



This blog post is both about a truly great book, “Hacking Growth: How Today’s Fastest-Growing Companies Drive Breakout Success” by Sean Ellis and Morgan Brown – which could be qualified as the reference textbook on Growth Hacking – and a follow-up to many previous conversations in this blog and other posts (in French) about growth hacking. There are many ways to see Growth Hacking in relation to Lean Startup. Following the lead of Nathan Furr and Jeff Deyer, I see Growth Hacking as the third step of a journey that starts with design thinking, is followed by the delivery of a successful MVP (minimum viable product) and continues with growth hacking. The goal of design thinking is to produce the UVP (Unique Value Proposition) – it is a first iterative loop that produces prototypes. The goal of the “MVP step” is to produce a “vehicle” (a product) to learn how to deliver the UVP through feedback and iteration. The growth hacking phase is about navigating towards success and growth with this MVP.
I like Nathan Furr’s idea of a “minimum awesome product”, in the sense that very crude prototypes belong to the design thinking phase (to validate ideas). Growth Hacking works with a product that is “out there” with real customers/users in the real world, and it only works with a “minimum awesome product” that delivers the value of an awesome UVP (following Ash Maurya). This means that the classical “Learn, Build, Measure” cycle is a common pattern of the three stages: design thinking, MVP building and Growth Hacking.

This post is also a follow-up to the post about “software culture and learning by doing and problem solving”. Growth Hacking is a perfect example of “lean continuous improvement culture meets agile software development practice”. It is a structured and standardized practice – in the lean sense – for extracting value from a user feedback learning loop, both in a quantitative and a qualitative way. As such, it is more general than typical digital products and may be applied to a large class of software products, on closed as well as open markets. I would argue that learning from users’ feedback and users’ usage is a must-do for any software organization, from IT departments to ISVs.

This post is organized as follows. The next part is about Growth Hacking as a control loop. I will first recall that growth hacking is about turning customer feedback into growth thanks to the (digital) product itself (it may be applied to products other than digital ones, but “hacking” says that it was designed for software-defined products). The third part is about the true customer-centricity of Growth Hacking and the importance of the “aha moment”, when the customer experiences the UVP (Unique Value Proposition). The last part talks about the dual relationship between teams and user communities. The product is used as a mediation between a product team and a group of communicating users organized into a community. The product plays a key role in mediating between the two: it delivers new experiences and gathers new feedback, but there is more than numbers to Growth Hacking.

A last caveat before jumping into our topic: this book is a “user manual” of Growth Hacking for practitioners. A summary does not really do it justice, so I will highlight a few key ideas and associated quotes rather than attempting to cover the book’s material. I urge you to read the book, especially if you are in charge of a digital product or service and trying to grow its usage.


2. The Growth Hacking Control Loop



Growth hacking is about developing market and usage growth through the product itself and a control loop centered on customer feedback. I borrow the following definition from the beginning of the book: “Growth hacking allows companies to efficiently marry powerful data analysis and technical know-how with marketing savvy, to quickly devise more promising ways to fuel growth. By rapidly testing promising ideas and evaluating them according to objective metrics, growth hacking facilitates much quicker discovery of which ideas are valuable and which should be dismissed”. What sets the software world apart is that the product is the media. This is one of the key insights of Growth Hacking (and part of the reason for the word “hacking”). Using the product as the media to communicate with users has many benefits: it is cheap – a fixed cost that is good for scaling; it is effective (there is a “rich bag of tricks” that the book illustrates); it reaches 100% of users; and it is precise, since software analytics makes it possible to know exactly what works and what does not. The authors summarize this as “enabling our users to grow the product for us.”

Growth Hacking may be implemented at all scales – from a startup to a large company – and for a large range of software-defined experiences, from mass-market digital goodies such as mobile apps to B2B commercial software. This is repeated many times in the book: “Nor is it just a tool for entrepreneurs; in fact, it can be implemented just as effectively at a large established company as at a small fledgling start-up”. As said in the introduction, this makes the book valuable for most companies since “software is eating the world”: “General Electric CEO Jeffrey Immelt recently said that “every industrial company will become a software company,” and the same can be said for consumer goods companies, media companies, financial services firms, and more”. What makes the value proposition of Hacking Growth so interesting is the track record of all the Silicon Valley companies that have used this approach: Twitter, Facebook, Pinterest, Uber, LinkedIn … and many more. Even though each story is different and each “growth hack” must be tuned to the specifics of each product, there is a common method: “It wasn’t the immaculate conception of a world-changing product nor any single insight, lucky break, or stroke of genius that rocketed these companies to success. In reality, their success was driven by the methodical, rapid-fire generation and testing of new ideas for product development and marketing, and the use of data on user behavior to find the winning ideas that drove growth”.

The Growth Hacking control loop is built around measurement – in a classical Plan-Do-Check-Act cycle. The first message here is that Growth Hacking is grounded in data. This is very much in the lean startup spirit: decisions are based on data, and the first step of the approach is to collect the relevant data. This is especially true for large companies, as explained in the book with a number of examples: “Recognizing that Walmart’s greatest asset is its data, Brian Monahan, the company’s former VP of marketing, pushed forward a unification of the company’s data platforms across all divisions, one that would allow all teams, from engineering, to merchandising, to marketing, and even external agencies and suppliers, to capitalize on the data generated and collected.” A growth hacking strategy starts with a data analytics strategy. Taking decisions based on data requires quality (for precision) and quantity (for robustness). There is clearly a “data engineering” dimension to this first step: a “data integration architecture and platform” is often needed to start the journey. The story of Facebook is a good case in point: “in January of 2009, they took the dramatic step of stopping all growth experiments and spending one full month on just the job of improving their data tracking, collection, and pooling. Naomi Gleit, the first product manager on Facebook’s growth team, recalls that “in 2008 we were flying blind when it came to optimizing growth.”” This data fuels a PDCA cycle: “The process is a continuous cycle comprising four key steps: (1) data analysis and insight gathering; (2) idea generation; (3) experiment prioritization; and (4) running the experiments, and then circles back to the analyze step to review results and decide the next steps.” Here we see the reference to the Lean approach, or to the TQM heritage of W. Edwards Deming.

Growth hacking is a learning process with a fast cycle time. Most growth hacks do not yield positive results, so it is critical to try as many as possible. In the end, success depends on “the rapid generation and testing of ideas, and the use of rigorous metrics to evaluate—and then act on—those results”. There is a lot of emphasis on the speed (of implementation) and the rhythm (of experimentation). Many examples are given to show the importance of “fast tempo”: “Implementing a method I call high-tempo testing, we began evaluating the efficacy of our experiments almost in real time. Twice a week we’d look at the results of each new experiment, see what was working and what wasn’t, and use that data to decide what changes to test next”. It boils down to the need to explore a large optimization space, without knowing in advance what will work and what will not. The authors quote Alex Schultz from Facebook: “If you’re pushing code once every two weeks and your competitor is pushing code every week, just after two months that competitor will have done 10 times as many tests as you. That competitor will have learned 10 times, an order of magnitude more about their product [than you].” To achieve this fast cycle, one must obviously leverage agile methods and continuous delivery, but one must also use simple metrics, with one goal at a time. A great part of the book deals with the “North Star” metric, the simple and unique KPI that drives a set of experiments: “The North Star should be the metric that most accurately captures the core value you create for your customers. To determine what that is you must ask yourself: Which of the variables in your growth equation best represents the delivery of that must-have experience you identified for your product?”.
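As an illustration of “rigorous metrics to evaluate those results”, here is a minimal sketch (my own, not from the book) of a two-proportion z-test that decides whether a variant moved a conversion-style North Star metric; the numbers and the 5% significance threshold are assumptions.

```python
from math import sqrt, erf

def z_test_conversion(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test comparing control (A) and variant (B) conversion rates.

    Returns the observed lift of B over A and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Example: 4.0% vs 4.6% conversion over 10,000 users per arm.
lift, p = z_test_conversion(400, 10_000, 460, 10_000)
print(f"lift = {lift:.3%}, p-value = {p:.3f}")  # keep the change if p < 0.05
```

The faster the tempo, the more such small, clearly decided tests a team accumulates, which is exactly the compounding learning effect Alex Schultz describes.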


3. Growth Hacking is intensely customer-centric


Growth Hacking starts when the product generates an “Aha moment” for the user. The “Aha moment” happens when the user experiences the promise that was made in the UVP. The product actually solves a pain point and the user gets it. Growth Hacking cannot work if the MVP is not a “minimum awesome product” that delivers the promise of a great UVP (redundant, since the U means unique). As the authors say, “no amount of marketing and advertising—no matter how clever—can make people love a substandard product”, hence “one of the cardinal rules of growth hacking is that you must not move into the high-tempo growth experimentation push until you know your product is must-have, why it’s must-have, and to whom it is a must-have: in other words, what is its core value, to which customers, and why”. The book logically refers to the Sean Ellis ratio and the corresponding survey, which asks customers how disappointed they would be if the product were discontinued. Sean Ellis’s data mining over a very large sample of startups shows that this ratio must reach 40% before one can say that consumers “love your product” and start scaling successfully.
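A back-of-the-envelope sketch of that test (the question wording and response labels below are my paraphrase, not quoted from the book): count the share of respondents who would be “very disappointed” without the product and compare it to the 40% bar.

```python
from collections import Counter

SURVEY_QUESTION = "How would you feel if you could no longer use this product?"
THRESHOLD = 0.40  # the 40% bar cited for the Sean Ellis ratio

def sean_ellis_ratio(answers: list) -> float:
    """Share of respondents answering 'very disappointed'."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts["very disappointed"] / len(answers)

# Illustrative sample of 100 survey answers.
answers = (["very disappointed"] * 43
           + ["somewhat disappointed"] * 35
           + ["not disappointed"] * 22)
ratio = sean_ellis_ratio(answers)
print(f"{ratio:.0%} -> "
      f"{'ready to scale' if ratio >= THRESHOLD else 'keep improving the product'}")
```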

Getting to a product that may deliver an “Aha moment” is the goal of the MVP cycle. This is another topic, but the book gives a few pieces of sound advice anyway. The most salient one is to stick to the “minimum” of the MVP and to resist adding features at the expense of simplicity: “all product developers must be keenly aware of the danger of feature creep; that is, adding more and more features that do not truly create core value and that often make products cumbersome and confusing to use.”


The ultimate goal is to make one’s product a customer habit, up to an addictive one. This is a clear consequence of the growth model made famous by the Pirate Metrics: once acquisition flows, retention becomes the heart of the battle. In our world of digital abundance, retention is only won when the product becomes a habit: “The core mission for growth teams in retaining users who are in this midterm phase is to make using a product a habit; working to create such a sense of satisfaction from the product or service that over time …”. The book is full of suggestions and insights to help the reader design a product that could become a habit. For instance, it leverages the “Hook Model” proposed by Nir Eyal. The hook model has four parts, organized into a cycle: trigger, action, reward, investment. Many of the growth hacks follow this cycle to build up a habit. The book offers a step-by-step set of examples to build triggers (based on customer journeys), to develop all kinds of rewards and to foster customers’ investment into the experience (for instance through personalization and self-customization, leading to the feeling of ownership – a key component of emotional design). As the authors note: “some of the most habit-forming rewards are the intangible ones. There are many kinds of rewards to experiment with in this category. There are social rewards, such as Facebook’s “Like” feature, which has been a strong driver in making the posting of photos and comments habitual.”
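As a rough illustration only (this is not code from the book, and every name below is invented), the hook cycle can be thought of as a small instrumented loop that a growth team wires into the product, swapping triggers and rewards to see which combination best builds the habit:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HookCycle:
    """One pass through the trigger -> action -> reward -> investment loop."""
    trigger: Callable[[str], bool]     # did the prompt (e.g., a notification) reach the user?
    action: Callable[[str], bool]      # did the user perform the core action?
    reward: Callable[[str], None]      # deliver a (possibly variable) reward
    investment: Callable[[str], None]  # ask for a small investment: profile, content, invite

    def run(self, user_id: str) -> bool:
        """Returns True when the full cycle completed for this user."""
        if not self.trigger(user_id) or not self.action(user_id):
            return False
        self.reward(user_id)
        self.investment(user_id)
        return True
```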

The authors advocate leveraging the growing wealth of knowledge produced by psychology and behavioral economics. Obviously, the work of Daniel Kahneman is quoted – cf. his great book “Thinking, Fast and Slow” – but we could also think of Dan Ariely or Richard Thaler. There are many other interesting references to frameworks for influencing customer behavior, such as Robert Cialdini’s six principles. Among those principles, the principle of reciprocity may be used to drive revenue by asking customers to make small commitments that create a solid bond which may be leveraged later on. The principle of social proof is based on research showing that we tend to have more trust in things that are popular or endorsed by people we trust. Another book quoted here is “The Art of Choosing” by Sheena Iyengar – I strongly recommend her video here. It is interesting to understand that choice has a cost, and that some of these choices should be avoided: “Debora Viana Thompson, Rebecca Hamilton, and Roland Rust, found that companies routinely hurt long-term retention by packing too many features into a product, explaining “that choosing the number of features that maximizes initial choice results in the inclusion of too many features, potentially decreasing customer lifetime value.””


4. Growth Hacking: Teams meet Communities



Growth Hacking is a team sport. The importance of teams is a common thread throughout the book. These are teams in the sense of cross-functional, agile and empowered: “the creation of a cross-functional team, or a set of teams that break down the traditional silos of marketing and product development and combine talents”. Following the lean software principles, the cross-functional team is not a group of “siloed experts”, but T-shaped profiles who bring their own skills and talents yet understand each other: “You need marketers who can appreciate what it takes to actually write software and you need data scientists who can really appreciate consumer insights and understand business problems”. I borrow yet another quote on the importance of cross-functionality, since this is a key idea for effectively turning technology into innovation, in a larger context than growth hacking: “growth hacking is a team effort, that the greatest successes come from combining programming know-how with expertise in data analytics and strong marketing experience, and very few individuals are proficient in all of these skills”.

Growth Hacking leverages communities of communicating users. Growth Hacking is a story with three protagonists: the team, the product and a user community – the product being the mediation between the team and the community. The importance of the user community is also superbly expressed by Guy Kawasaki … and Steve Jobs. The community is the preferred tool to get deep insights from users, because analytics is not enough: “Preexisting communities to target for insight into how to achieve the aha moment can also, of course, be identified digitally”. The combination of the “aha moment” that we saw in the previous section and a community of “evangelists” is what is needed to start the growth engine: “Once you have discovered a market of avid users and your aha moment—i.e., once product/market fit has been achieved—then you can begin to build systematically on that foundation to create a high-powered, high-tempo growth machine”. The community of active, engaged, communicating users is needed to get qualitative feedback in addition to the quantitative feedback that one gets from software analytics. This deeper insight is needed to truly understand customer behavior: “it’s crucial that you never assume why users are behaving as they are; rather, you’ve always got to study hard data about their behavior and then query them on the basis of observations you’ve made in order to focus your experimentation efforts most efficiently on changes that will have the greatest potential impact”. This fine understanding of customer behavior is necessary to eradicate friction, which is a key goal of experience design, that is, removing “any annoying hindrances that prevent someone from accomplishing the action they’re trying to complete”.

When striving for growth, just as one should focus on a single metric, it is better to focus on a single – or very few – distribution channels. A large part of the book demonstrates this with illustrative examples. Focusing on one distribution channel helps narrow the diversity of the customer experience and makes the iterative optimization of Growth Hacking better targeted and more efficient: “Marketers commonly make the mistake of believing that diversifying efforts across a wide variety of channels is best for growth. As a result, they spread resources too thin and don’t focus enough on optimizing one or a couple of the channels likely to be most effective”.  Growth Hacking is often associated with virality. Indeed, virality is a key growth engine and, as Seth Godin explained, virality must be designed as part of the product experience: “when you do focus on instrumenting virality, it’s important that you follow the same basic principle as for building your product—you’ve got to make the experience of sharing the product with others must-have—or at least as user friendly and delightful as possible”.  However, virality is only one aspect that may be tuned by the iterative Growth Hacking optimization cycle. Acquisition and Retention come first in the customer journey and should come first in the growth hacking process.
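Virality itself can be modelled with the usual viral coefficient (K-factor) back-of-the-envelope formula; the sketch below and its numbers are my own illustrative assumptions, not figures from the book.

```python
def viral_coefficient(invites_per_user: float, invite_conversion_rate: float) -> float:
    """K-factor: new users generated by each existing user through sharing."""
    return invites_per_user * invite_conversion_rate

def users_after_cycles(seed_users: int, k: float, cycles: int) -> float:
    """Total users after a number of sharing cycles (a geometric series in K)."""
    total = float(seed_users)
    cohort = float(seed_users)
    for _ in range(cycles):
        cohort *= k            # each cohort invites the next one
        total += cohort
    return total

k = viral_coefficient(invites_per_user=3.0, invite_conversion_rate=0.25)  # K = 0.75
print(f"K = {k:.2f}, users after 6 cycles from 1,000 seeds: "
      f"{users_after_cycles(1000, k, 6):,.0f}")
```

With K below 1, the viral loop amplifies acquisition but cannot sustain growth on its own, which is consistent with the authors’ point that virality is only one lever among several.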

As explained in the introduction, a summary would not do justice to this book, which is full of great illustrative examples and relevant data points and metrics. It is definitely useful for growing mobile applications: “For example, for mobile notifications, opt-in rates range from 80 percent at the high end, for services like ride sharing, to 39 percent at the low end for news and media offerings, according to Kahuna, a mobile messaging company”. Growth hacking is based on building growth models that are validated, tuned or invalidated through experiment cycles. The book is filled with key ratios that are more than useful to start this modeling with sensible default values. Here is another example that is truly valuable for anyone who tries to understand her or his application retention numbers: “According to data published by mobile intelligence company Quettra, most mobile apps, for example, retain just 10 percent of their audience after one month, while the best mobile apps retain more than 60 percent of their users one month after installation”. Focusing on measurement is obviously the way to go, but making sense of measurements requires modelling, and this book is a great help to achieve this.
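To make sense of retention figures such as the Quettra numbers quoted above, one typically builds a simple cohort model; the sketch below (my own, with invented data) computes the month-N retention curve per install cohort.

```python
def monthly_retention(installs: dict, active: dict) -> dict:
    """For each install cohort (keyed by 'YYYY-MM'), the share of its users
    still active in that month and each later month."""
    months = sorted(active)
    curves = {}
    for cohort_month, cohort_users in sorted(installs.items()):
        later = [m for m in months if m >= cohort_month]
        curves[cohort_month] = [len(cohort_users & active[m]) / len(cohort_users)
                                for m in later]
    return curves

# Illustrative data: 5 users installed in January, 2 still active after one month.
installs = {"2018-01": {"u1", "u2", "u3", "u4", "u5"}}
active = {"2018-01": {"u1", "u2", "u3", "u4", "u5"},
          "2018-02": {"u1", "u2"},          # 40% retained after one month
          "2018-03": {"u1"}}                # 20% after two months
print(monthly_retention(installs, active))  # {'2018-01': [1.0, 0.4, 0.2]}
```

Comparing such curves against published benchmarks (10% for the average app, 60% for the best) is what turns a raw measurement into a model one can act on.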


5. Conclusion





Growth Hacking is the third loop of the following representation of Lean Startup, which was developed and used at AXA’s digital agency. As explained in the introduction, the goal of the first loop is to produce the proper UVP. No one should ever start developing a product or a service without a first-class UVP – as Ash Maurya said: “life is too short to build products that people will not use”. This is hard work, but many good guides are available, such as Ash Maurya’s Running Lean. Once the UVP is crafted, there are three huge and separate challenges:

  1. To build an MVP that delivers the promise of the UVP. This is actually incredibly difficult for large organizations: there is always a shortcut that seems faster (and the pressure to deliver is huge) and there are too many stakeholders who will contribute to diluting the UVP. My personal experience, from the innovation lab to the hands of the customer, over the last 10 years, is that the UVP is lost 90% of the time. As was stated earlier, Growth Hacking starts when the “aha moment” is delivered, but this is not an all-or-nothing situation and Growth Hacking may be used to debug or improve this “aha moment”.
  2. To craft and deliver the story of the UVP to the customer. I have been amazed, over the same past 10 years, at the number of times a great UVP was built into a product, a service or an app, and customers were simply not aware of it. Each time you would demonstrate the experience to a customer, you would see the “aha moment” and the smile, but one-to-one personal demos are not a scalable method. This book precisely addresses this problem. My experience over the years has been that the crafting of this story should be co-designed with the development team. Understanding the link between the pain points, the promise and the user stories is a key factor in building a consistent and delightful experience.
  3. To help the customer, once the UVP is “in the box” and once the customer has understood what it is, actually find the experience! This is obviously a question of user experience design and usability, but it is a tough one. Here also, Growth Hacking is more than relevant: continuous iteration is the only way to solve this problem.






 