際際滷shows by User: ZacharySchendel1 (際際滷Share feed, Tue, 22 Sep 2020 18:54:00 GMT)

DesignTalks TV Innovation at Netflix Schendel 10-2018
How user experience research was used to spark 3 new innovations on the Netflix TV user interface.

DesignTalks TV Innovation at Netflix Schendel 10-2018 from Zachary Schendel
RecSys 2020 A Human Perspective on Algorithmic Similarity Schendel 9-2020 (Tue, 22 Sep 2020 18:49:43 GMT)
In the Netflix user interface (UI), when a row or UI element is named "Because You Watched...", "More Like This", or "Because you added to your list", the overarching goal is to recommend a movie or TV show that a member might like based on the fact that they took a meaningful action on a source item. We have employed similar recommendations in many UI elements: on the homepage as a row of recommendations, after you click into a title, or as a piece of information about why a member should watch a title.

From an algorithmic perspective, there are many ways to define a successful similar recommendation. We sought to broaden that definition of success. To this end, the Consumer Insights team recently completed a suite of research projects exploring the intricacies of member perceptions of similar recommendations. The Netflix Consumer Insights team employs qualitative (e.g., in-depth interviews) and quantitative (e.g., surveys) research methods, interfacing directly with Netflix members to uncover pain points that can inspire new product innovation. The research concluded that, while the typical member believes movies are broadly similar when they share a common genre or theme, similarity is more complex, nuanced, and personal than we might have imagined. The vernacular we use in the UI implies that there should be at least some relationship between the source item and the recommendations that follow. Many of our similar recommendations felt out of place, mostly because the relationship between the source item and the recommendation was unclear or absent. When similar recommendations tell a misleading, incorrect, or confusing story, member trust can be broken.

We will structure the presentation around three new insights that our research found to influence the perception of similarity in the context of Netflix, as well as the research methods used to uncover those insights. First, the reason a member loves a given movie varies: do you want to watch other baseball movies like Field of Dreams, or would you prefer other romances like Field of Dreams? Second, members are more or less flexible about how similar a recommendation actually needs to be, depending on the properties of, and their interactions with, the canvas containing the recommendation. For example, a "Because You Watched" row on the homepage implies vaguer similarity, while a "More Like This" gallery behind a click into the source item implies stricter similarity. Finally, even when we held the UI element constant, we found that similar recommendations are only valuable in some contexts: after finishing a movie, a member might prefer a similar recommendation one day and a change of pace the next. Research methods discussed will include Inverse Multi-Dimensional Scaling [1], survey experimentation, and ways to apply qualitative research to improve algorithmic recommendations.
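In an Inverse MDS exercise, participants arrange items on a 2D canvas and the researcher reads pairwise distances back off the layout as dissimilarity judgments (the inverse of classical MDS, which goes from dissimilarities to coordinates). A minimal sketch of that readout step, where the titles and canvas coordinates are purely hypothetical illustrations, not data from the study:

```python
import math

# Hypothetical placements: where one participant dragged four titles
# on a unit-square canvas (coordinates invented for illustration).
placements = {
    "Field of Dreams": (0.20, 0.80),
    "Moneyball": (0.30, 0.70),
    "Bull Durham": (0.35, 0.75),
    "The Notebook": (0.90, 0.10),
}

def dissimilarity_matrix(points):
    """Pairwise Euclidean distances: closer on the canvas = judged more similar."""
    names = list(points)
    return {
        (a, b): math.dist(points[a], points[b])
        for a in names
        for b in names
    }

d = dissimilarity_matrix(placements)

# Titles this participant grouped together (the baseball films) get small
# distances; a title placed far away gets a large one.
assert d[("Field of Dreams", "Moneyball")] < d[("Field of Dreams", "The Notebook")]
```

Aggregating such per-participant matrices across many members is one way to compare human similarity judgments against an algorithm's similarity scores.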

RecSys 2020 A Human Perspective on Algorithmic Similarity Schendel 9-2020 from Zachary Schendel