SlideShare feed: slideshows by user alansaid (last updated Fri, 01 Sep 2017 07:09:41 GMT)

Replication of Recommender Systems Research
Fri, 01 Sep 2017 | /slideshow/replication-of-recommender-systems-research/79344060
Course held at the 2017 ACM RecSys Summer School at the Free University of Bozen-Bolzano by Alejandro Bellogin (@abellogin) and Alan Said (@alansaid). http://recommenders.net/rsss2017/

Comparative Recommender System Evaluation: Benchmarking Recommendation Frameworks
Wed, 08 Oct 2014 | /slideshow/comparative-recommender-system-evaluation-benchmarking-recommendation-frameworks/40028462
Video available here: http://www.youtube.com/watch?v=1jHxGCl8RXc

Recommender systems research is often based on comparisons of predictive accuracy: the better the evaluation scores, the better the recommender. However, it is difficult to compare results from different recommender systems due to the many options in design and implementation of an evaluation strategy. Additionally, algorithmic implementations can diverge from the standard formulation due to manual tuning and modifications that work better in some situations. In this work we compare common recommendation algorithms as implemented in three popular recommendation frameworks. To provide a fair comparison, we have complete control of the evaluation dimensions being benchmarked: dataset, data splitting, evaluation strategies, and metrics. We also include results using the internal evaluation mechanisms of these frameworks. Our analysis points to large differences in recommendation accuracy across frameworks and strategies, i.e. the same baselines may perform orders of magnitude better or worse across frameworks. Our results show the necessity of clear guidelines when reporting evaluation of recommender systems to ensure reproducibility and comparison of results.
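
A minimal sketch of the controlled setup the abstract describes: one seeded data split and one shared metric implementation applied to every framework, so that accuracy differences can be attributed to the frameworks rather than to the evaluation. The fit/recommend wrapper interface below is a hypothetical placeholder, not any real framework's API.

    # Controlled cross-framework benchmark sketch. The recommender wrappers
    # (fit/recommend) are hypothetical placeholders, not real framework APIs.
    import random
    from typing import Callable, Dict, List, Set, Tuple

    Rating = Tuple[str, str, float]  # (user, item, rating)

    def split_ratings(ratings: List[Rating], test_frac: float = 0.2,
                      seed: int = 42) -> Tuple[List[Rating], List[Rating]]:
        """One shared, seeded split so every framework sees identical data."""
        rng = random.Random(seed)
        shuffled = ratings[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_frac))
        return shuffled[:cut], shuffled[cut:]

    def precision_at_k(recommended: List[str], relevant: Set[str], k: int = 10) -> float:
        """One shared metric, applied identically to every framework's output."""
        return sum(1 for item in recommended[:k] if item in relevant) / k

    def benchmark(frameworks: Dict[str, Callable], ratings: List[Rating], k: int = 10):
        train, test = split_ratings(ratings)
        relevant: Dict[str, Set[str]] = {}
        for user, item, r in test:
            if r >= 4.0:  # assumed relevance threshold
                relevant.setdefault(user, set()).add(item)
        for name, make_recommender in frameworks.items():
            rec = make_recommender()  # hypothetical wrapper around a framework
            rec.fit(train)
            scores = [precision_at_k(rec.recommend(u, k), rel, k)
                      for u, rel in relevant.items()]
            print(f"{name}: P@{k} = {sum(scores) / len(scores):.4f}")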

The Magic Barrier of Recommender Systems - No Magic, Just Ratings
Tue, 08 Jul 2014 | /slideshow/the-magic-barrier-of-recommender-systems-no-magic-just-ratings/36752676
Recommender systems need to deal with different types of users who represent their preferences in various ways. This difference in user behaviour has a deep impact on the final performance of the recommender system, where some users may receive either better or worse recommendations depending, mostly, on the quantity and the quality of the information the system knows about the user. Specifically, the inconsistencies of the user impose a lower bound on the error the system may achieve when predicting ratings for that particular user. In this work, we analyse how the consistency of user ratings (coherence) may predict the performance of recommendation methods. More specifically, our results show that our definition of coherence is correlated with the so-called magic barrier of recommender systems, and thus, it could be used to discriminate between easy users (those with a low magic barrier) and difficult ones (those with a high magic barrier). We report experiments where the rating prediction error for the more coherent users is lower than that of the less coherent ones. We further validate these results by using a public dataset, where the magic barrier is not available, in which we obtain similar performance improvements.
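
To make the idea concrete, here is an illustrative sketch of one plausible per-user coherence proxy: how consistently a user deviates from the community's mean opinion per item. This scoring is an assumption for illustration only and is not necessarily the paper's definition of coherence.

    # Per-user rating-coherence proxy (illustrative assumption, not the
    # paper's exact definition): low variance of a user's deviations from
    # item means = more coherent rating behaviour.
    from collections import defaultdict
    from statistics import mean, pvariance
    from typing import Dict, List, Tuple

    def user_coherence(ratings: List[Tuple[str, str, float]]) -> Dict[str, float]:
        by_item = defaultdict(list)
        for _user, item, r in ratings:
            by_item[item].append(r)
        item_mean = {item: mean(rs) for item, rs in by_item.items()}

        deviations = defaultdict(list)
        for user, item, r in ratings:
            deviations[user].append(r - item_mean[item])

        # Higher score = more coherent user, i.e. a candidate "easy" user
        # with a lower magic barrier.
        return {user: -pvariance(devs) if len(devs) > 1 else 0.0
                for user, devs in deviations.items()}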

A Top-N Recommender System Evaluation Protocol Inspired by Deployed Systems
Sun, 13 Oct 2013 | /slideshow/lsrs/27140115
The evaluation of recommender systems is crucial for their development. In today's recommendation landscape there are many standardized recommendation algorithms and approaches; however, there exists no standardized method for the experimental setup of evaluation -- not even for widely used measures such as precision and root-mean-squared error. This creates a setting where comparison of recommendation results using the same datasets becomes problematic. In this paper, we propose an evaluation protocol specifically developed with the recommendation use case in mind, i.e. the recommendation of one or several items to an end user. The protocol attempts to closely mimic a scenario of a deployed (production) recommendation system, taking specific user aspects into consideration and allowing a comparison of small- and large-scale recommendation systems. The protocol is evaluated on common recommendation datasets and compared to traditional recommendation settings found in the research literature. Our results show that the proposed model can better capture the quality of a recommender system than traditional evaluation does, and is not affected by characteristics of the data (e.g. size, sparsity, etc.).
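
As a hedged illustration of what a deployed-style top-N check can look like (not necessarily the exact protocol proposed in the paper): hide one item the user liked, mix it into a sampled candidate slate, and count a hit if the recommender ranks it inside the top N.

    # Deployed-style top-N evaluation sketch; illustrative, not the paper's
    # exact protocol. `score(user, item)` is an assumed scoring function.
    import random
    from typing import Callable, List

    def hit_at_n(score: Callable[[str, str], float], user: str,
                 held_out_item: str, unseen_items: List[str],
                 n: int = 10, n_candidates: int = 100, seed: int = 7) -> bool:
        rng = random.Random(seed)
        candidates = rng.sample(unseen_items, min(n_candidates, len(unseen_items)))
        candidates.append(held_out_item)
        ranked = sorted(candidates, key=lambda item: score(user, item), reverse=True)
        return held_out_item in ranked[:n]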

Information Retrieval and User-centric Recommender System Evaluation
Mon, 17 Jun 2013 | /alansaid/ercim2
Poster describing the ERCIM-funded project on IR- and user-centric recommender system evaluation currently being undertaken in the Information Access group at CWI. Presented at UMAP 2013.

User-Centric Evaluation of a K-Furthest Neighbor Collaborative Filtering Recommender Algorithm
Wed, 27 Feb 2013 | /slideshow/kfn-static/16813881
Collaborative filtering recommender systems often use nearest neighbor methods to identify candidate items. In this paper we present an inverted neighborhood model, k-Furthest Neighbors, to identify less ordinary neighborhoods for the purpose of creating more diverse recommendations. The approach is evaluated two-fold: once in a traditional information retrieval evaluation setting where the model is trained and validated on a split train/test set, and once through an online user study (N=132) to identify users' perceived quality of the recommender. A standard k-nearest neighbor recommender is used as a baseline in both evaluation settings. Our evaluation shows that even though the proposed furthest neighbor model is outperformed in the traditional evaluation setting, the perceived usefulness of the algorithm shows no significant difference in the results of the user study.
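
A short sketch of the inverted-neighborhood idea: pick the k least similar users and treat their dislikes as positive evidence. The exact scoring below is an illustrative assumption; the paper's formulation may differ in detail.

    # k-furthest-neighbor recommendation sketch (illustrative scoring).
    from typing import Dict, List

    def k_furthest_recommend(target: str,
                             ratings: Dict[str, Dict[str, float]],
                             similarity_to_target: Dict[str, float],
                             k: int = 20, top_n: int = 10,
                             max_rating: float = 5.0) -> List[str]:
        # Furthest neighbors = the k users least similar to the target.
        furthest = sorted((u for u in ratings if u != target),
                          key=lambda u: similarity_to_target.get(u, 0.0))[:k]
        seen = set(ratings.get(target, {}))
        scores: Dict[str, float] = {}
        for neighbor in furthest:
            for item, r in ratings[neighbor].items():
                if item not in seen:
                    # Invert the rating: a dissimilar user's dislike counts
                    # as evidence the target user may like the item.
                    scores[item] = scores.get(item, 0.0) + (max_rating - r)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]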

A 3D Approach to Recommender System Evaluation
Tue, 26 Feb 2013 | /slideshow/a-3d-approach-to-recommender-system-evaluation/16770471
In this work we describe an approach to multi-objective recommender system evaluation based on a previously introduced 3D benchmarking model. The benchmarking model takes user-centric, business-centric and technical constraints into consideration in order to provide a means of comparison of recommender algorithms in similar scenarios. We present a comparison of three recommendation algorithms deployed in a user study using this 3D model and compare the outcome to standard evaluation methods. The proposed approach simplifies benchmarking of recommender systems and allows for simple multi-objective comparisons.

State of RecSys: Recap of RecSys 2012
Tue, 23 Oct 2012 | /slideshow/state-of-recsys-recap-of-recsys-2012/14853993
Recap of some of the papers and presentations at RecSys 2012. Given at the Berlin RecSys Meetup.

RecSysChallenge Opening
Thu, 13 Sep 2012 | /slideshow/recsyschallenge-opening/14273703
The opening slides for the Recommender Systems Challenge at RecSys 2012.

Best Practices in Recommender System Challenges
Tue, 11 Sep 2012 | /slideshow/best-practices-in-recommender-system-challenges/14247114
Recommender system challenges such as the Netflix Prize, KDD Cup, etc. have contributed vastly to the development and adoption of recommender systems. Each year a number of challenges or contests are organized covering different aspects of recommendation. In this tutorial and panel, we present some of the factors involved in successfully organizing a challenge, whether for reasons purely related to research, industrial challenges, or to widen the scope of recommender systems applications.

Estimating the Magic Barrier of Recommender Systems: A User Study
Tue, 14 Aug 2012 | /slideshow/estimating-the-magic-barrier-of-recommender-systems-a-user-study/13966981
Recommender systems are commonly evaluated by trying to predict known, withheld ratings for a set of users. Measures such as the root-mean-square error are used to estimate the quality of recommender algorithms. This process, however, does not acknowledge the inherent rating inconsistencies of users. In this paper we present the first results from a noise measurement user study for estimating the magic barrier of recommender systems, conducted on a commercial movie recommendation community. The magic barrier is the expected squared error of the optimal recommendation algorithm, i.e. the lowest error we can expect from any recommendation algorithm. Our results show that the barrier can be estimated by collecting the opinions of users on already rated items.
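
The abstract's definition (the expected squared error of the optimal algorithm) admits a compact formalization under the usual noise-model assumption; the LaTeX below is a sketch consistent with that definition, not a verbatim quote of the paper:

    % Assumption: an observed rating is a true preference plus zero-mean noise.
    r_{ui} = \mu_{ui} + \varepsilon_{ui}, \qquad \mathbb{E}[\varepsilon_{ui}] = 0
    % The magic barrier in RMSE terms over the rated pairs S: no algorithm,
    % even one predicting \mu_{ui} exactly, can score below it.
    \mathcal{B}_{\mathrm{RMSE}} = \sqrt{ \frac{1}{|S|} \sum_{(u,i) \in S} \mathbb{E}\left[ \varepsilon_{ui}^{2} \right] }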

Users and Noise: The Magic Barrier of Recommender Systems
Fri, 20 Jul 2012 | /slideshow/users-and-noise-estimating-the-magic-barrier-of-recommender-systems/13708791
Recommender systems are crucial components of most commercial websites to keep users satisfied and to increase revenue. Thus, a lot of effort is made to improve recommendation accuracy. But when is the best possible performance of the recommender reached? The magic barrier refers to some unknown level of prediction accuracy a recommender system can attain. The magic barrier reveals whether there is still room for improving prediction accuracy or indicates that further improvement is meaningless. In this work, we present a mathematical characterization of the magic barrier based on the assumption that user ratings are afflicted with inconsistencies: noise. In a case study with a commercial movie recommender, we investigate the inconsistencies of the user ratings and estimate the magic barrier in order to assess the actual quality of the recommender system.
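
A minimal sketch of how such a case study can turn repeated ratings into a barrier estimate, assuming each (user, item) pair was rated several times and the mean of the repeats stands in for the unknown true preference (illustrative, not the paper's exact estimator):

    # Estimate the magic barrier from repeated ratings (illustrative).
    from math import sqrt
    from statistics import mean
    from typing import Dict, List, Tuple

    def estimate_magic_barrier(repeats: Dict[Tuple[str, str], List[float]]) -> float:
        squared_noise: List[float] = []
        for rs in repeats.values():
            if len(rs) < 2:
                continue  # repeats are needed to observe inconsistency
            center = mean(rs)  # proxy for the true preference
            squared_noise.extend((r - center) ** 2 for r in rs)
        # RMSE-style barrier: root of the mean squared rating noise.
        return sqrt(mean(squared_noise))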

Analyzing Weighting Schemes in Collaborative Filtering: Cold Start, Post Cold Start and Power Users
Fri, 30 Mar 2012 | /slideshow/analyzing-weighting-schemes-in-collaborative-filtering-cold-start-post-cold-start-and-power-users-12224804/12224804
Paper presented at ACM SAC 2012, TRECK track.

CaRR 2012 Opening Presentation
Tue, 14 Feb 2012 | /slideshow/carr-2012-opening-presentation/11560026
The opening slides of the 2nd Workshop on Context-awareness in Retrieval and Recommendation.

Personalizing Tags: A Folksonomy-like Approach for Recommending Movies
Thu, 27 Oct 2011 | /slideshow/personalizing-tags-a-folksonomylike-approach-for-recommending-movies/9914714
Presented at HetRec 2011: http://ir.ii.uam.es/hetrec2011/

Inferring Contextual User Profiles - Improving Recommender Performance
Sun, 23 Oct 2011 | /slideshow/inferring-contextual-user-proles-improving-recommender-performance/9848112
Presentation at the 3rd RecSys Workshop on Context-Aware Recommender Systems: www.cars-workshop.com

Using Social- and Pseudo-Social Networks to Improve Recommendation Quality
Sat, 16 Jul 2011 | /slideshow/using-social-and-pseudosocial-networks-to-improve-recommendation-quality/8612641
Short paper presented at the workshop on Intelligent Techniques for Web Personalization (ITWP 2011) at the International Joint Conference on Artificial Intelligence (IJCAI 2011).

Recommender Systems
Thu, 28 Oct 2010 | /slideshow/recommender-systems-5601280/5601280
Video: http://vimeo.com/16537278. An introduction to recommender systems, given at Talis (www.talis.com) on Oct. 28, 2010.
