title: Combining process data and subjective data to better understand online behavior
Presented at the 3rd Social Sciences and the Internet conference, Eindhoven, July 1, 2016
http://ssi.ieis.tue.nl/
2. • At the start of the century (2000-ish…)
• From paper to computer-based experiments
3. • Running stuff online:
– Gathering additional measures
– Reading time per page
– Clicking patterns
– About 5-10% of data is invalid! (see the reading-time sketch below)
• We should measure time in controlled lab experiments too!
– 2003-2004: process-tracing research with Eric Johnson
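A minimal sketch of the kind of reading-time check mentioned above: flag participants whose per-page reading times are implausibly short. The column names, the example data, and the 2-second threshold are illustrative assumptions, not the criteria used in the original studies.

```python
import pandas as pd

# Hypothetical page-view log: one row per participant per survey page.
log = pd.DataFrame({
    "participant":    [1, 1, 2, 2, 3, 3],
    "page":           ["p1", "p2", "p1", "p2", "p1", "p2"],
    "reading_time_s": [14.2, 9.8, 1.1, 0.9, 12.5, 11.0],
})

MIN_SECONDS_PER_PAGE = 2.0  # assumed threshold; tune to your materials

# A participant is flagged if any page was read implausibly fast.
too_fast = (
    log.assign(fast=log["reading_time_s"] < MIN_SECONDS_PER_PAGE)
       .groupby("participant")["fast"]
       .any()
)
print(too_fast[too_fast].index.tolist())  # flagged participants, e.g. [2]
```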
5. • Trade-off between Target and Competitor
– Price versus Quality
• Adding a third option: decoy D to the {T, C} set
• D is dominated by target T but not by competitor C (and hardly ever chosen)
• P(T; DTC) > P(T; TC) (see the check below)
• Violation of independence of irrelevant alternatives
Attraction Effect (choice shares):
        TC     DTC
T      46%    53%
C      54%    47%
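A quick arithmetic check of the choice shares in the table above: under independence of irrelevant alternatives, adding the (rarely chosen) decoy D should leave the T:C ratio unchanged, yet it shifts in T's favor. A small sketch:

```python
# Choice shares taken from the slide's table.
p_T_TC, p_C_TC = 0.46, 0.54      # binary {T, C} set
p_T_DTC, p_C_DTC = 0.53, 0.47    # trinary {D, T, C} set (D hardly ever chosen)

# IIA implies the T:C ratio is unaffected by the irrelevant decoy D.
ratio_TC = p_T_TC / p_C_TC        # ~0.85
ratio_DTC = p_T_DTC / p_C_DTC     # ~1.13

print(f"T:C ratio without decoy: {ratio_TC:.2f}")
print(f"T:C ratio with decoy:    {ratio_DTC:.2f}")
assert ratio_DTC > ratio_TC       # the attraction effect: IIA is violated
```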
8. • Using icon graphs to plot the process data
• Dynamics (see the phase-splitting sketch below):
– Scanning phase (all acquisitions until all boxes have been opened once)
– Choice phase (all remaining acquisitions)
– For choice of target and not
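To make the phase definitions concrete, here is an illustrative sketch (not the original analysis code) that splits an ordered, mouselab-style acquisition log into the scanning phase and the choice phase; the box names and example sequence are made up.

```python
def split_phases(acquisitions, all_boxes):
    """Split an ordered list of opened boxes into scanning and choice phases.

    Scanning phase: all acquisitions up to (and including) the point where
    every box has been opened at least once. Choice phase: the remainder.
    """
    seen = set()
    for i, box in enumerate(acquisitions):
        seen.add(box)
        if seen == set(all_boxes):
            return acquisitions[: i + 1], acquisitions[i + 1:]
    return acquisitions, []  # not every box was opened: no choice phase

# Illustrative log: 3 options x 2 attributes = 6 boxes.
boxes = ["T_price", "T_qual", "C_price", "C_qual", "D_price", "D_qual"]
log = ["T_price", "C_price", "D_price", "T_qual", "C_qual", "D_qual",
       "T_price", "T_qual", "C_qual"]
scanning, choice = split_phases(log, boxes)
print(len(scanning), len(choice))  # 6 3
```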
10. • Iyengar and Lepper (2000): the jam study
• Apparently, satisfaction is not only a function of attractiveness but also of choice difficulty
[Figure: the more attractive (larger) assortment yielded only 3% sales; the less attractive (smaller) assortment yielded 30% sales and higher purchase satisfaction.]
Choice overload
12. Results
[Path-model figure: the manipulations Top-20 vs Top-5 and Lin-20 vs Top-5 recommendations (Objective System Aspects, OSA) influence perceived recommendation variety and perceived recommendation quality (Subjective System Aspects, SSA), which in turn influence choice difficulty and choice satisfaction (Experience, EXP); movie expertise enters as a Personal Characteristic (PC), and Interaction (INT) is the remaining framework layer. All displayed path coefficients (with standard errors) are significant, p < .05 to p < .001.]
[Bar chart: choice satisfaction by condition (Top-5, Top-20, Lin-20).]
13. • Median choice rank:
– Top-20: 8.5
– Lin-20: 3.0
• Looking time per item:
– Top-20: 2.8 sec
– Lin-20: 1.4 sec
• Acquisition frequency per item:
– Top-20: .64
– Lin-20: .44
(see the sketch below for computing these measures)
Behavioral data
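The behavioral measures above can be derived from a process log. A minimal sketch under assumed column names (condition, participant, item_rank, look_time_s, acquisitions, chosen); the rows are illustrative placeholders, not the study data.

```python
import pandas as pd

# Illustrative process log: one row per participant x recommended item.
events = pd.DataFrame({
    "condition":    ["Top-20", "Top-20", "Lin-20", "Lin-20"],
    "participant":  [1, 1, 2, 2],
    "item_rank":    [1, 9, 1, 3],           # position in the recommendation list
    "look_time_s":  [3.1, 2.5, 1.6, 1.2],   # looking time on the item (seconds)
    "acquisitions": [1, 0, 1, 0],            # times the item's details were opened
    "chosen":       [False, True, False, True],
})

summary = events.groupby("condition").agg(
    look_time_per_item=("look_time_s", "mean"),
    acq_freq_per_item=("acquisitions", "mean"),
)
# Median rank of the finally chosen item, per condition.
summary["median_choice_rank"] = (
    events[events["chosen"]].groupby("condition")["item_rank"].median()
)
print(summary)
```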
14. Psychologists and HCI people are mostly interested in experience…
User-Centric Evaluation Framework
15. Computer scientists (and marketing researchers) would study behavior… (they hate asking the user, or just cannot, as in A/B tests)
User-Centric Evaluation Framework
16. Though it helps to triangulate experience and behavior…
User-Centric Evaluation Framework
17. Our framework adds the intermediate construct of perception, which explains why behavior and experience change due to our manipulations
User-Centric Evaluation Framework
18. And adds personal and situational characteristics
Relations modeled using factor analysis and SEM (see the mediation sketch below)
Knijnenburg, B.P., Willemsen, M.C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the User Experience of Recommender Systems. User Modeling and User-Adapted Interaction (UMUAI), vol. 22, pp. 441-504. http://bit.ly/umuai
User-Centric Evaluation Framework
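The framework's mediation idea (manipulation → perception → experience) can be illustrated with a simplified regression sketch. The actual paper fits latent constructs with factor analysis and SEM, so this stand-in with observed scale scores, assumed variable names, and made-up data is only meant to show the logic of the mediated paths.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: manipulation (OSA), perception (SSA), experience (EXP).
df = pd.DataFrame({
    "top20":               [0, 0, 1, 1, 0, 1, 0, 1],  # 0 = Top-5, 1 = Top-20 list
    "perceived_variety":   [2.1, 2.4, 3.8, 4.0, 2.0, 3.5, 2.6, 4.2],   # SSA (1-5)
    "choice_satisfaction": [3.0, 3.2, 3.9, 4.1, 2.8, 3.6, 3.1, 4.3],   # EXP (1-5)
})

# Path a: manipulation -> perception; paths b and c': perception and manipulation -> experience.
a = smf.ols("perceived_variety ~ top20", data=df).fit()
bc = smf.ols("choice_satisfaction ~ perceived_variety + top20", data=df).fit()

print(a.params["top20"])               # effect of list length on perceived variety
print(bc.params["perceived_variety"])  # effect of perception on satisfaction
print(bc.params["top20"])              # remaining direct effect after mediation
```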
19. • Two cases that clearly show the importance of triangulating behavioral data & subjective data!
• Video recommender service: satisfaction versus clicks and viewing times
• Diversification: continuing the choice-overload work
– Can diversification reduce choice overload?
– Choice difficulty: effort versus cognitive difficulty
20. Video recommender system: EMIC pre-trial in the UMUAI paper
Knijnenburg, B.P., Willemsen, M.C., & Hirtbach, S. (2010). Receiving recommendations and providing feedback: the user experience of a recommender system. E-Commerce and Web Technologies (11th International Conference, EC-Web 2010), Lecture Notes in Business Information Processing, Vol. 61, pp. 207-216.
22. • Diversification and list length as two experimental factors
– List sizes: 5 and 20
– Diversification: none (top 5/20), medium, high
• Dependent measure: choice satisfaction
– Choice difficulty versus attractiveness
– Subjective choice difficulty (scale) and objective choice difficulty (effort: hovers; see the sketch below)
• 159 participants from an online database
– Rating task to train the system (15 ratings)
– Choose one item from a list of recommendations
– Answer user experience questionnaire
Diversification & Choice Satisfaction
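A small sketch of the objective effort measure mentioned above (hover tracking); the condition-assignment helper, the log format, and the data are assumptions made for illustration, not the experiment's actual implementation.

```python
import random
import pandas as pd

LIST_SIZES = [5, 20]
DIVERSIFICATION = ["none", "medium", "high"]

def assign_condition(participant_id: int) -> tuple:
    """Deterministically assign a participant to one of the 2 x 3 cells."""
    rng = random.Random(participant_id)  # illustrative; real studies balance cells
    return rng.choice(LIST_SIZES), rng.choice(DIVERSIFICATION)

# Illustrative hover log: one row per hover over a recommended item.
hovers = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "item":        ["m3", "m7", "m3", "m1", "m2"],
})

# Objective choice difficulty as effort: hovers per participant before choosing.
effort = hovers.groupby("participant").size().rename("hover_count")
print(assign_condition(1), effort.to_dict())
```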
23. • Perceived recommendation diversity
– 5 items, e.g. "The list of movies was varied"
• Perceived recommendation attractiveness
– 5 items, e.g. "The list of recommendations was attractive"
• Choice satisfaction
– 6 items, e.g. "I think I would enjoy watching the chosen movie"
• Choice difficulty
– 5 items, e.g. "It was easy to select a movie" (see the scoring sketch below)
Questionnaire items
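When turning questionnaire items into scale scores, a positively worded item on a negative construct (e.g. "It was easy to select a movie" for choice difficulty) would typically be reverse-keyed. A sketch under that assumption, using simple item means; the study itself models the items with factor analysis rather than mean scores.

```python
import pandas as pd

# Illustrative 5-point Likert responses to three choice-difficulty items.
items = pd.DataFrame({
    "difficulty_1": [2, 4, 3],  # "It was easy to select a movie" (assumed reverse-keyed)
    "difficulty_2": [3, 4, 2],
    "difficulty_3": [2, 5, 3],
})

REVERSE_KEYED = ["difficulty_1"]
SCALE_MAX = 5

scored = items.copy()
scored[REVERSE_KEYED] = SCALE_MAX + 1 - scored[REVERSE_KEYED]  # maps 1<->5, 2<->4, ...
scored["choice_difficulty"] = scored.mean(axis=1)  # mean score per respondent
print(scored["choice_difficulty"])
```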
25. • Perceived diversity increases with diversification
– Similarly for 5 and 20 items
– Perceived diversity increases attractiveness
• Perceived difficulty goes down with diversification
• Effort (behavioral difficulty) goes up with list length
• Perceived attractiveness goes up with diversification
• The diverse 5-item set excels…
– Just as satisfying as 20 items
– Less difficult to choose from
– Less cognitive load…!
[Bar charts: perceived diversity and choice satisfaction (standardized scores) by diversification level (none, medium, high), for 5-item and 20-item lists.]
26. • Behavioral and subjective data are two parts of the same story: you often need both to really get it!
• Try to capture as much of the process as you can, using smart interface designs, event tracking (hovers, clicks), or even cooler stuff such as modern cheap eye trackers (Tobii EyeX, EyeTribe)
• The user-centric framework allows us to understand WHY particular approaches work or not
– Concept of mediation: user perception helps understanding
What you should take away…