1
PRODUCT
INCREMENT
FEEDBACK
AN
ARCHITECTURE
Massimo Fascinari
Technology Advisor
@mfascinari
massimo-fascinari-98185b14
Pictures: Stara Praga (Warsaw)
2
Business
(Dev)
Customer
(Ops)
EFFICIENCY = more SPEED, lower COSTS
EFFECTIVENESS = more INTIMACY, less GUESSING
DEVOPS
FEEDBACK
THE DIALOGUE
3
WHY IT MATTERS
Each sprint we spend at least
7,000 USD (70%) (*)
on features that are not effective
(*) Source: Massimo's arithmetic skills augmented with MS Excel ;-) and based on the following assumptions and observations:
• 2-week sprint × 7 developers = 560 hours (+ 20 pizzas)
• Experiments have a ~30% probability of success: "Only one third of the ideas tested at Experimentation Platform (Microsoft) achieved the expected improved metrics" (Kohavi, Crook, Longbotham 2009). At Google, "only about 10 percent of these [controlled experiments were] leading to business changes" (Manzi 2012). Avinash Kaushik wrote that 80% of the time we are wrong about what a customer wants. Netflix considers 90% of what they try to be wrong (Moran 2007, 240).
• Avg. software developer hourly rate: 17.48 USD (source: https://www.payscale.com/research/PL/Job=Software_Developer/Salary)
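The headline figure checks out from these assumptions (taking 70% as the ineffective share implied by the ~30% experiment success rate):

```latex
560\ \mathrm{h} \times 17.48\ \mathrm{USD/h} \approx 9789\ \mathrm{USD\ per\ sprint}
0.70 \times 9789\ \mathrm{USD} \approx 6852\ \mathrm{USD} \approx 7000\ \mathrm{USD}
```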
4
CONCEPTUAL
ARCHITECTURE

BEHAVIOUR INSIGHT: what customers do inside the product context
• Cohort Analysis
• Web Analytics
• A/B Testing
• Feature Toggling (audience management) - sketched in code below

CUSTOMER INSIGHT: what customers do outside the product context
• Social media monitoring / Sentiment analysis
• Chatbots and Natural Language Processing
• Geolocation
• Ticketing systems / Feedback

PRODUCT INSIGHT: how the application and infrastructure react
• Telemetry
• Non-functional requirements measurement
• Feature Toggling (performance controlling)

The three views span an axis from INTIMACY (customer side) to CONTROL (product side).
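"Feature Toggling (audience management)" is easiest to see in code. Below is a minimal Python sketch of a percentage rollout; the toggle names, rollout shares, and hashing scheme are illustrative assumptions, not something from the deck.

```python
import hashlib

# Illustrative toggle configuration: which share of the audience
# sees each product increment (names and values are invented).
TOGGLES = {
    "new_checkout_flow": 0.10,        # expose 10% of users
    "contextual_suggestions": 0.50,   # expose 50% of users
}

def bucket(user_id: str, feature: str) -> float:
    """Deterministically map (feature, user) to a value in [0, 1]."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def is_enabled(user_id: str, feature: str) -> bool:
    """A user is in the audience if their bucket falls below the rollout share."""
    share = TOGGLES.get(feature, 0.0)
    return bucket(user_id, feature) < share

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, is_enabled(uid, "new_checkout_flow"))
```

Hashing on (feature, user) keeps the assignment stable across sessions, which is what makes A/B comparisons on the same audience meaningful.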
5
MATURITY
MODEL
Each level is assessed across a DESCRIPTION, three insight views (PRODUCT: application and infrastructure quality properties to improve; BEHAVIOUR: functionalities and features to improve; CUSTOMER: dialogue and conversation to improve), and the GAIN it delivers.

Level 5 - Leading
• Description: Suggestion and improvement hypotheses driven by AI and prediction
• Product: Counters monitored against historical patterns; performance prediction and self-healing
• Behaviour: Audience optimization; contextual suggestions
• Customer: Sentiment analysis focused on the product increment
• Gain: Higher engagement; reduction of experiment costs

Level 4 - Managed
• Description: Insight based on correlated data; decisions taken on the analysed observations
• Product: Active telemetry of all the main application and infrastructure counters; analysis of historical variations
• Behaviour: Cohort analysis variances
• Customer: Customer feedback captured by survey
• Gain: Customer insight gained; enhanced customer experience

Level 3 - Defined
• Description: Hypotheses are formulated; data are collected in silos; decisions require manual data analysis
• Product: Architecture quality attributes actively monitored; toggling to introduce new features smoothly
• Behaviour: Behaviour analytics on the increment scope; A/B testing (see the sketch below)
• Customer: Sentiment analysis
• Gain: Customer behaviour insight; selected product increments are fact based

Level 2 - Repeatable
• Description: Hypotheses are formulated; validations are based on limited performance data
• Product: Minimal architecture quality attribute metrics monitored and alerts set
• Behaviour: Conversion rate, content engagement, bounce rates; behaviour flow analysis
• Customer: No data
• Gain: Customer behaviour observed; MTTR managed and optimized

Level 1 - Ad-Hoc
• Description: Hypotheses are not formulated; subjective view of customers' needs and behaviours
• Behaviour: Functional monitoring (# of users, # of sessions); conversational analysis
• Customer: No data
• Gain: No insight beyond whether it failed or not; quality at risk; MTTR not efficient
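Since A/B testing is the hinge between Level 2 and Level 3, here is a minimal sketch of how a product increment could be validated with a two-proportion z-test in Python; the conversion counts are invented for the example.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of control (A) and increment (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via erf (no SciPy required).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented numbers: 120/2000 conversions on A, 153/2000 on B.
z, p = two_proportion_ztest(120, 2000, 153, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # here p ≈ 0.04 < 0.05, so B looks effective
```

Given the Microsoft and Google base rates quoted earlier, most such tests will come back negative, which is exactly why cheap, repeatable validation matters.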
6
SAMPLE TOOLS
BEHAVIOUR INSIGHT, CUSTOMER INSIGHT, and PRODUCT INSIGHT streams feeding an ML ENGINE:
• AWS CloudWatch / Application Insights / Elasticsearch
• Amazon Alexa
• AWS Kinesis
• DynamoDB / AWS EMR
• Elasticsearch
• FeatureToggle
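On the PRODUCT INSIGHT side, a sketch of pushing custom telemetry into AWS CloudWatch with boto3; the namespace, metric, and dimension names are assumptions for illustration, and running it requires AWS credentials.

```python
import boto3

# Namespace, metric, and dimension names below are illustrative, not from the deck.
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def record_feature_latency(feature: str, latency_ms: float) -> None:
    """Publish one latency sample so alarms can track it against historical patterns."""
    cloudwatch.put_metric_data(
        Namespace="ProductIncrementFeedback",
        MetricData=[{
            "MetricName": "FeatureLatency",
            "Dimensions": [{"Name": "Feature", "Value": feature}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )

record_feature_latency("new_checkout_flow", 182.0)
```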
7
THANK YOU!
> CONNECT
> Pipeline
> Playbook
> Over and beyond Digital Marketing
> EXPLORE
> The Lean Startup - Eric Ries
> Hypothesis-Driven Development - Barry O'Reilly
> Experiment at Scale - Pavel Dmitriev
> Data-Driven Development
@mfascinari
massimo-fascinari-98185b14