Effects of relevant contextual features
in the performance of a restaurant recommender system


Blanca Vargas-Govea, Gabriel González-Serna, Rafael Ponce-Medellín
              cenidet - Computer Science Department
                  blanca.vargas@cenidet.edu.mx



                  CARS-2011, October 23, 2011
Outline


  1 Motivation


  2 Surfeous: the test bed


  3 Feature selection


  4 Experiments


  5 Conclusions




                            2 / 28
Context information / personalization




                                        3 / 28
Looking for a restaurant?




                            4 / 28
Unexpected conditions?




                         5 / 28
A better option




                  6 / 28
How much context is useful?




                              7 / 28
How much context is useful?




                              8 / 28
How much context is useful?




                              9 / 28
How much context is useful?




                              10 / 28
How much context is useful?




                              11 / 28
How much context is useful?




                              12 / 28
A huge amount of data can be intrusive.


A lack of information can lead the system to generate poor
                    recommendations.


     Approach: attribute selection, semantic models.




                                                             13 / 28
Prototype: Surfeous

   [Diagram: Surfeous combines two kinds of information.
    Social: opinions and tags from users (delicious, yum!, superb, slow,
    noisy, ugh, awful).
    Context: the user profile (age: 25, likes vegan, entrepreneur, credit
    card, hungry) and the environment (location, sunny/rainy weather,
    indoor/outdoor seating, Chinese cuisine).]
                                                                                    14 / 28
Surfeous: approaches

Social [Tso-Sutter et al., 2008]
   [Diagram: the user-item rating matrix R is extended with user tags
   (R_Tu) and item tags (R_Ti); user-based and item-based collaborative
   filtering are applied to the extended matrices.]

Contextual
   Semantic web: Semantic Web Rule Language (SWRL).
                                                                           15 / 28
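The social component follows the tag-aware fusion of [Tso-Sutter et al., 2008]: the rating matrix R is extended with user tags (R_Tu, tags acting as extra items) and item tags (R_Ti, tags acting as extra users), and user-based and item-based CF are run on the extended matrices. A minimal NumPy sketch of the matrix-extension step only; the toy sizes and random values are placeholders, and the stacking layout is inferred from the diagram rather than taken from Surfeous's code.

    import numpy as np

    # Toy sizes: 4 users, 5 restaurants (items), 3 tags.
    n_users, n_items, n_tags = 4, 5, 3
    rng = np.random.default_rng(0)

    R    = rng.integers(0, 3, size=(n_users, n_items))   # user x item ratings in {0,1,2}
    R_Tu = rng.integers(0, 2, size=(n_users, n_tags))    # user x tag usage (binary)
    R_Ti = rng.integers(0, 2, size=(n_tags, n_items))    # tag x item assignment (binary)

    # Tags as pseudo-items: horizontal extension feeds user-based CF.
    R_user_based = np.hstack([R, R_Tu])   # shape (n_users, n_items + n_tags)

    # Tags as pseudo-users: vertical extension feeds item-based CF.
    R_item_based = np.vstack([R, R_Ti])   # shape (n_users + n_tags, n_items)

    print(R_user_based.shape, R_item_based.shape)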
Contextual models

    Three contextual models (number of attributes):
       service (23)
       user (21)
       environment (2)
                                      16 / 28
Rules and relations: examples

       user - service profile
       person(X) ∧ hasOccupation(X, student) ∧
       restaurant(R) ∧ hasCost(R, low) → select(X, R)

       user - environment profile
       person(X) ∧ isJapanese(X, true) ∧
       queryPlace(X, USA) ∧ restaurant(R) ∧
       isVeryClose(R, true) → select(X, R)

       environment - service profile
       currentWeather(today, rainy) ∧ restaurant(R) ∧
       space(R, closed) → select(R)

       Relations
       likesFood(X, Y)          X: person, Y: cuisine-type
       currentWeather(X, Y)     X: query, Y: weather
       space(X, Y)              X: restaurant, Y: {closed, open}


                                                          17 / 28
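One way to read the rules above is as conjunctions of predicates over the user, environment, and restaurant whose consequent is select(...). The sketch below checks two of them against plain Python dictionaries; the field names (occupation, cost, weather, space) are illustrative stand-ins for the ontology properties, not the actual SWRL encoding used by Surfeous.

    def student_low_cost(user, restaurant):
        # person(X) ∧ hasOccupation(X, student) ∧ restaurant(R) ∧ hasCost(R, low) → select(X, R)
        return user.get("occupation") == "student" and restaurant.get("cost") == "low"

    def rainy_indoor(environment, restaurant):
        # currentWeather(today, rainy) ∧ restaurant(R) ∧ space(R, closed) → select(R)
        return environment.get("weather") == "rainy" and restaurant.get("space") == "closed"

    user        = {"occupation": "student"}
    environment = {"weather": "rainy"}
    restaurant  = {"cost": "low", "space": "closed", "cuisine": "chinese"}

    held = [student_low_cost(user, restaurant), rainy_indoor(environment, restaurant)]
    print("rules that hold:", sum(held))   # later used to rank candidate restaurants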
Generating recommendations


   1. Surfeous gets the user location and searches for the closest
      restaurants.
   2. An ontology is created at execution time.
   3. Relations are created from the attributes of the restaurant profile
      (ambiance, city, cuisine, space, accepts, latitude).
   4. SWRL is applied to match the context models, e.g.
      Person(?x) ^ hasAge(?x, ?y) ^ swrlb:greaterThanOrEqual(?y, 12) ^
      swrlb:lessThanOrEqual...
   5. Results are ranked based on the number of context rules that hold
      for each user query.
   6. Fusion: the social results are added; the mix ranges from
      context-free (only-social, 0%) to context (only-rules, 100%).
                                                                                                                               18 / 28
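Steps 5 and 6 rank each candidate restaurant by how many context rules hold and then add the social results. A minimal sketch of one way to do that blend, assuming both scores are normalized to [0, 1] and combined linearly with a weight alpha that plays the role of the 0%-100% slider; the exact fusion formula is not given on the slide, so this is an assumption. Averaging the output over alpha values between 0.1 and 0.9 would correspond to the fusion baseline described later in the experiments. The restaurant names and scores are made up for illustration.

    def fuse(social_scores, rules_held, n_rules, alpha=0.5):
        """Blend context-free (social) scores with rule-based context scores.

        alpha = 0.0 -> only-social (context-free); alpha = 1.0 -> only-rules (context).
        """
        ranked = {}
        for r, s in social_scores.items():
            context_score = rules_held.get(r, 0) / n_rules if n_rules else 0.0
            ranked[r] = (1 - alpha) * s + alpha * context_score
        return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

    social = {"restaurant_a": 0.8, "restaurant_b": 0.6, "restaurant_c": 0.4}
    held   = {"restaurant_a": 1, "restaurant_b": 3, "restaurant_c": 2}
    print(fuse(social, held, n_rules=3, alpha=0.5))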
Feature selection        [Guyon & Elisseeff, 2003, Yu et al., 2004]



Generalities
   Machine learning.
   Predictive performance.
   Storage requirements.
   Model understanding.
   Data visualization.

Procedures
   [Flow diagram: Original set → Subset Generation → Subset Evaluation
   (goodness of subset) → Stopping Criterion; if the criterion is not met,
   another subset is generated, otherwise the result is validated.]

  Feature selection looks for the minimum subset of attributes such that
  the resulting probability distribution of the data classes is as close
  as possible to the original distribution.

                                                                                                               19 / 28
Algorithm LVF (Las Vegas Filter) [Liu & Setiono, 1996]
  Input: maximum number of iterations (Max), dataset (D),
  number of attributes (N), allowable inconsistency rate (γ)
  Output: sets of M features satisfying the inconsistency
  criterion (Solutions)
  Solutions = ∅
  Cbest = N
  for i = 1 to Max do
     S = randomSet(seed); C = numOfFeatures(S)
     if C < Cbest then
        if InconCheck(S, D) < γ then
           Sbest = S; Cbest = C
           Solutions = {S}
        end if
     else if C = Cbest and InconCheck(S, D) < γ then
        append(Solutions, S)
     end if
  end for
                                                                    20 / 28
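A minimal Python sketch of LVF as listed above. InconCheck is implemented by grouping instances on the candidate subset and counting, per group, the instances that do not belong to the group's majority class; the dataset is assumed to be a list of (attribute_dict, class_label) pairs. This is an illustration of the algorithm, not the implementation used in the experiments.

    import random
    from collections import Counter, defaultdict

    def incon_check(subset, data):
        """Inconsistency rate of an attribute subset over (attributes, label) pairs."""
        groups = defaultdict(list)
        for attrs, label in data:
            groups[tuple(attrs[a] for a in subset)].append(label)
        incon = sum(len(labels) - max(Counter(labels).values())
                    for labels in groups.values())
        return incon / len(data)

    def lvf(data, attributes, gamma, max_iter=1000, seed=0):
        """Las Vegas Filter: random search for minimal subsets below the gamma rate."""
        rng = random.Random(seed)
        solutions, c_best = [], len(attributes)
        for _ in range(max_iter):
            s = rng.sample(attributes, rng.randint(1, len(attributes)))
            if len(s) < c_best and incon_check(s, data) < gamma:
                c_best, solutions = len(s), [s]
            elif len(s) == c_best and incon_check(s, data) < gamma:
                solutions.append(s)
        return solutions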
Toy example
                   space     price   franchise   smoking     RatingA    RatingB
              1      i        low        n          y           0          0
              2      i        low        n          y           1          0
              3      i        low        n          y           2          0
              4      i        low        n          y           1          1
              5      i       high        n         n            0          1
              6      i       high        n         n            1          1
              7      i       high        n         n            2          1
              8      o       high        y         n            1          1
              9      o        low        n         n            1          1
             10      o        low        n          y           2          2


subset A (class = RatingA)
   matching instances: 1, 2, 3, 4
   n = 4, classes = 0,1,2,1; largest = 1 (2 instances)
   Inconsistency count = 4 - 2 = 2
   matching instances: 5, 6, 7
   n = 3, classes = 0,1,2; largest = 1 (1 instance)
   Inconsistency count = 3 - 1 = 2
   Inconsistency rate = (2 + 2)/10 = 4/10 = 0.4

subset B (class = RatingB)
   matching instances: 1, 2, 3, 4
   n = 4, classes = 0,0,0,1; largest = 0 (3 instances)
   Inconsistency count = 4 - 3 = 1
   matching instances: 5, 6, 7
   n = 3, classes = 1,1,1; largest = 1 (3 instances)
   Inconsistency count = 3 - 3 = 0
   Inconsistency rate = (1 + 0)/10 = 1/10 = 0.1
                                                                                              21 / 28
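As a check on the arithmetic above, the snippet below recomputes the inconsistency rates for the toy table, grouping on the four attributes and using either RatingA or RatingB as the class; it reproduces the 0.4 and 0.1 values. The helper is repeated so the snippet runs on its own.

    from collections import Counter, defaultdict

    # (space, price, franchise, smoking, RatingA, RatingB) for instances 1..10.
    rows = [
        ("i", "low",  "n", "y", 0, 0), ("i", "low",  "n", "y", 1, 0),
        ("i", "low",  "n", "y", 2, 0), ("i", "low",  "n", "y", 1, 1),
        ("i", "high", "n", "n", 0, 1), ("i", "high", "n", "n", 1, 1),
        ("i", "high", "n", "n", 2, 1), ("o", "high", "y", "n", 1, 1),
        ("o", "low",  "n", "n", 1, 1), ("o", "low",  "n", "y", 2, 2),
    ]

    def incon_rate(class_index):
        groups = defaultdict(list)
        for r in rows:
            groups[r[:4]].append(r[class_index])      # group on the four attributes
        incon = sum(len(g) - max(Counter(g).values()) for g in groups.values())
        return incon / len(rows)

    print(incon_rate(4), incon_rate(5))   # 0.4 (RatingA) and 0.1 (RatingB)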
Experiments
Data description
   111 users.
   237 restaurants.
   1,251 ratings.
   Rating values: 0, 1, 2.
   Average: 11.2 ratings per user.

   [Histogram: number of restaurants by number of ratings received
   (x-axis: ratings, 0-35; y-axis: restaurants, 0-65).]

Attribute selection
   Service contextual model.
   Input: 5,802 instances.
   Instance: vector of 23 attributes, class = rating.
   Consistency selector algorithm.
   Best-first search.
   Weka [Hall et al., 2009].

Minimum attribute subset
   cuisine, hours, days, accepts, address
   (i.e., 78.26% of the 23 attributes removed).
                                                                                    22 / 28
Tests with Surfeous

Purposes
   To identify relevant contextual attributes.
   To show that, with the minimum attribute subset, the predictive
   performance is at least the same as with the whole attribute set.
   To analyze the effects of relevant contextual attributes.

Experimental setup
   Leave-one-out.
   Seven subsets: All (23), B (5), C-G (4).
   10 executions for each subset.
   Baselines: context-free, fusion (results averaged over mixing
   weights between 0.1 and 0.9), and context (rules only).
                                                                  23 / 28
Results: precision/recall/NDCG
   [Three charts comparing the context-free, fusion, and context
   (rules-only) approaches over the attribute subsets All, B, C, D, E, F, G:
      Precision, roughly in the 0.01-0.09 range.
      Recall, roughly in the 0.05-0.35 range.
      NDCG, roughly in the 0.05-0.55 range.]
  All (23), B (cuisine, hours, days, accepts, address), C (cuisine, hours, days,
  address), D (hours, days, accepts, address), E (cuisine, days, accepts, address),
  F (cuisine, hours, accepts, address), G (cuisine, hours, days, accepts)
                                                                                                                                                                                 24 / 28
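The charts report NDCG over the graded ratings 0, 1, 2. A minimal sketch of NDCG@k with the common linear-gain, log2-discount formulation; the experiments may use a different gain or cutoff, so treat the exact variant as an assumption.

    import math

    def dcg(relevances):
        # Position-discounted cumulative gain: rel_i / log2(i + 2) for rank i (0-based).
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

    def ndcg(ranked_relevances, k=5):
        ideal = dcg(sorted(ranked_relevances, reverse=True)[:k])
        return dcg(ranked_relevances[:k]) / ideal if ideal > 0 else 0.0

    # Ratings (0-2) of the top-5 recommended restaurants, in ranked order.
    print(round(ndcg([2, 0, 1, 2, 0], k=5), 3))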
Best subset per metric and approach:

             Precision   Recall   NDCG
   Fusion        D          C       D
   Rules         F          C       G

Relevant attributes: hours, days, accepts, cuisine.
For recall, most of the subsets outperformed the context-free baseline.
For precision and NDCG, fusion obtained performance similar to the
context-free approach.
Expected items appear in the top-5 list.
Results suggest that a restaurant's opening times and accepted payment
types are likely the most important factors in making a choice.
Although the performance achieved by the semantic rules alone is low,
they provide the social approach with features that enrich the decision
process (recall). A deeper analysis of the rule set is needed.
                                                              25 / 28
Conclusions and future work


     By using a reduced subset of attributes, the system's
     performance was not degraded; moreover, with the fusion
     approach it improved.

     Feature selection techniques can help improve the
     efficiency of a contextual recommender system.

     Identifying relevant contextual features enables a better
     understanding of users' decision criteria.

     As future work, we are extending the approach to the three
     contextual models.



                                                                   26 / 28
Effects of relevant contextual features
in the performance of a restaurant recommender system
             Blanca Vargas-Govea
        blanca.vargas@cenidet.edu.mx


        CARS-2011, October 23, 2011




                                                        27 / 28
Creative Commons licensed images


  s04 - Outdoor restaurant
  s05 - Gray umbrella
  s06 - Indoor restaurant
  s14 - Sunny
  s14 - Red umbrella
  s14 - Indoor restaurant
  s14 - Outdoor restaurant
  s14 - Crowd
  s14 - Chinese restaurant
  s14 - Persian girl
  s18 - Earth
  s18 - Waffle




                                   28 / 28
Guyon, I. & Elisseeff, A. (2003).
An introduction to variable and feature selection.
Journal of Machine Learning Research, 3, 1157–1182.

Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten,
I. H. (2009).
The WEKA data mining software: an update.
SIGKDD Explorations Newsletter, 11, 10–18.

Liu, H. & Setiono, R. (1996).
A probabilistic approach to feature selection - a filter solution.
In 13th International Conference on Machine Learning (pp. 319–327).

Tso-Sutter, K. H. L., Marinho, L. B., & Schmidt-Thieme, L. (2008).
Tag-aware recommender systems by fusion of collaborative filtering
algorithms.
In Proceedings of the 2008 ACM symposium on Applied computing (pp.
1995–1999). New York, USA.

Yu, L., Liu, H., & Guyon, I. (2004).
Efficient feature selection via analysis of relevance and redundancy.
Journal of Machine Learning Research, 5, 1205–1224.

                                                                            28 / 28
