Crowdsourcing was used to build a news query classification dataset by having workers label Web search queries as news-related or not. Providing additional context, such as news headlines and search results, improved labelling quality compared to a basic interface. The final dataset of over 1,000 queries was of high quality, with strong inter-worker agreement and accurate identification of news-related queries. Validation questions helped ensure reliable labels by catching workers who were gaming the system.
2. Introduction
What is news query classification and why would we build a dataset to examine it?
Binary classification task performed by Web search engines
Up to 10% of queries may be news-related [Bar-Ilan et al., 2009]
Have workers judge Web search queries as news-related or not
[Diagram: a user issues the query "gunman" to a Web search engine; if it is news-related the engine returns news results, otherwise ordinary Web search results]
5. News Queries Change Over Time
Query: Octopus. News-related? A sea creature, or World Cup predictions?
A lazy worker may simply answer at random:
  for query in task: return Random(Yes, No)
How can we overcome these difficulties to create a high-quality dataset for news query classification?
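For concreteness, here is a tiny runnable version of that random-labelling behaviour (my own illustration, not code from the study); it ignores the query text entirely, which is exactly the failure mode the gold judgments described later are designed to catch.

```python
# Toy illustration of a lazy worker who ignores the query and answers at random.
import random

def lazy_worker(queries, rng=random.Random(0)):
    """Label every query Yes/No uniformly at random, ignoring its content."""
    return {q: rng.choice(["Yes", "No"]) for q in queries}

print(lazy_worker(["Octopus", "What is May Day?", "protest in Puerto rico"]))
```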
11. Dataset Construction Methodology
How can we go about building a news query classification dataset?
Sample queries from the MSN May 2006 query log
Create gold judgments to validate the workers
Propose additional content to tackle the temporal nature of news queries, and prototype interfaces to evaluate this content on a small test set
Create the final labels using the best setting and interface
Evaluate in terms of agreement
Evaluate against 'experts'
12. Dataset Construction Methodology
Sampling queries: create 2 query sets sampled from the MSN May 2006 query log via Poisson sampling
One for testing (testset): fast crowdsourcing turn-around time, very low cost
One for the final dataset (fullset): 10x the queries, only labelled once
Date        Time      Query
2006-05-01  00:00:08  What is May Day?
2006-05-08  14:43:42  protest in Puerto rico
Testset queries: 91
Fullset queries: 1206
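As a rough sketch of what Poisson sampling over a query log looks like (my own illustration; the file name, field layout and inclusion rate are assumptions, not details from the study), each record is kept or dropped by an independent coin flip:

```python
# Minimal sketch of Poisson sampling over a query log, assumed to be
# tab-separated lines of "date<TAB>time<TAB>query".
import random

def poisson_sample(log_path, inclusion_prob, seed=42):
    """Include each query independently with probability `inclusion_prob`."""
    rng = random.Random(seed)
    sample = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            date, time, query = line.rstrip("\n").split("\t", 2)
            if rng.random() < inclusion_prob:  # independent Bernoulli draw per record
                sample.append((date, time, query))
    return sample

# Illustrative inclusion rate only; the real sampling rate is not stated on the slide.
testset = poisson_sample("msn_may2006_querylog.tsv", inclusion_prob=0.0001)
```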
13. Dataset Construction Methodology
How to check our workers are not 'gaming' the system?
Gold judgments (honey-pot): a small set (5%) of queries to catch out bad workers early in the task
'Cherry-picked' unambiguous queries, with a focus on news-related queries
Multiple workers per query: 3 workers, majority result
Date        Time      Query                    Validation
2006-05-01  00:00:08  What is May Day?         No
2006-05-08  14:43:42  protest in Puerto rico   Yes
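A minimal sketch of how honey-pot validation and three-worker majority voting could be wired together (my own illustration, assuming judgments arrive as (worker, query, label) tuples; the 70% gold-accuracy threshold is an invented example, not a figure from the study):

```python
# Honey-pot validation plus per-query majority voting (illustrative sketch).
from collections import Counter, defaultdict

GOLD = {"What is May Day?": "No", "protest in Puerto rico": "Yes"}

def validate_workers(judgments, min_gold_accuracy=0.7):
    """Flag workers whose accuracy on gold (honey-pot) queries is too low."""
    per_worker = defaultdict(lambda: [0, 0])          # worker -> [correct, seen]
    for worker, query, label in judgments:
        if query in GOLD:
            per_worker[worker][1] += 1
            per_worker[worker][0] += int(label == GOLD[query])
    return {w for w, (ok, seen) in per_worker.items()
            if seen and ok / seen < min_gold_accuracy}

def majority_labels(judgments, rejected_workers):
    """Combine the remaining workers' labels per query by majority vote (ties broken arbitrarily here)."""
    votes = defaultdict(Counter)
    for worker, query, label in judgments:
        if worker not in rejected_workers:
            votes[query][label] += 1
    return {q: c.most_common(1)[0][0] for q, c in votes.items()}
```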
14. Dataset Construction Methodology
How to counter the temporal nature of news queries?
Workers need to know what the news stories of the time were . . . but likely will not remember the main stories of May 2006
Idea: add extra information to the interface (news headlines, news summaries, Web search results)
Prototype interfaces: use the small testset to keep costs and turn-around time low, and see which works best
15. Interfaces: Basic
Shows the query and its date, with binary labelling (news-related or not)
Instructions tell the workers what they need to do and clarify news-relatedness
16. Interfaces: Headline
12 news headlines from the New York Times . . . will the workers bother to read these?
19. Interfaces: LinkSupported
Links to three major search engines . . . each triggers a search containing the query and its date
Also gathers some additional feedback from workers
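A minimal sketch of how such support links might be constructed (my own illustration; the engines and URL templates are assumptions, not necessarily those used in the original interface):

```python
# Build one search URL per engine, combining the query with its log date.
from urllib.parse import quote_plus

SEARCH_TEMPLATES = {
    "Google": "https://www.google.com/search?q={terms}",
    "Bing":   "https://www.bing.com/search?q={terms}",
    "Yahoo":  "https://search.yahoo.com/search?p={terms}",
}

def support_links(query, date):
    """Return a search URL per engine for the query plus its date."""
    terms = quote_plus(f"{query} {date}")
    return {name: url.format(terms=terms) for name, url in SEARCH_TEMPLATES.items()}

print(support_links("protest in Puerto rico", "2006-05-08"))
```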
20. Dataset Construction Methodology
How do we evaluate the quality of our labels?
Agreement between the three workers per query: the more the workers agree, the more confident we can be that the resulting majority label is correct
Compare with 'expert' (me) judgments: see how many of the queries that the workers judged news-related match the ground truth
Date        Time      Query                    Worker  Expert
2006-05-05  07:31:23  abcnews                  Yes     No
2006-05-08  14:43:42  protest in Puerto rico   Yes     Yes
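A small sketch of the expert comparison (my own illustration, not the authors' code): precision and recall over the news-related class, given the workers' majority labels and the expert ground truth.

```python
# Compare worker majority labels against expert labels on the news-related class.
def precision_recall(worker_labels, expert_labels):
    """Both arguments map query -> "Yes"/"No"; "Yes" means news-related."""
    tp = sum(1 for q, l in worker_labels.items()
             if l == "Yes" and expert_labels[q] == "Yes")
    worker_yes = sum(1 for l in worker_labels.values() if l == "Yes")
    expert_yes = sum(1 for l in expert_labels.values() if l == "Yes")
    precision = tp / worker_yes if worker_yes else 0.0
    recall = tp / expert_yes if expert_yes else 0.0
    return precision, recall

workers = {"abcnews": "Yes", "protest in Puerto rico": "Yes"}
experts = {"abcnews": "No",  "protest in Puerto rico": "Yes"}
print(precision_recall(workers, experts))  # (0.5, 1.0)
```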
26. Experimental Setup
Research questions:
How do our interface and settings affect the quality of our labels?
Baseline quality: how bad is it?
How much can the honey-pot bring?
What about our extra information {headlines, summaries, result rankings}?
Can we create a good-quality dataset? Agreement? Vs. ground truth?
Evaluated on the testset, then the fullset
37. How is our Baseline? (Basic Interface)
Metrics:
Kfree: kappa agreement assuming that workers would label randomly
Kfleiss: kappa agreement assuming that workers label according to the class distribution
Precision: the % of queries labelled as news-related that agree with our ground truth
Recall: the % of all news-related queries that the workers labelled correctly
Accuracy: combined measure (assumes that the workers labelled the non-news-related queries correctly)
Findings:
Validation is very important: 32% of judgments were rejected
20% of those were completed VERY quickly: bots? . . . and new users
Watch out for bursty judging
As expected, the baseline is fairly poor, i.e. agreement between workers per query is low (25-50%)
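For reference, a minimal sketch (my own illustration, using the standard definitions) of the two agreement measures named above: free-marginal kappa assumes chance labelling is uniform over the two classes, while Fleiss' kappa derives the chance term from the observed class distribution.

```python
# Agreement measures over per-query vote counts; each row is [#Yes, #No].
def observed_agreement(counts, n_raters):
    """Mean pairwise agreement per query."""
    per_item = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
                for row in counts]
    return sum(per_item) / len(per_item)

def kappa_free(counts, n_raters, n_categories=2):
    p_obs = observed_agreement(counts, n_raters)
    p_exp = 1.0 / n_categories                      # uniform chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

def kappa_fleiss(counts, n_raters):
    p_obs = observed_agreement(counts, n_raters)
    total = len(counts) * n_raters
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_exp = sum(p * p for p in p_j)                 # chance agreement from class distribution
    return (p_obs - p_exp) / (1.0 - p_exp)

# Example: 4 queries, 3 workers each.
votes = [[3, 0], [2, 1], [0, 3], [1, 2]]
print(kappa_free(votes, 3), kappa_fleiss(votes, 3))
```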
38. Adding Additional Information
By providing additional news-related information, does label quality increase, and which is the best interface?
Answer: Yes, as shown by the performance increase
More information increases performance: we can help workers by providing more information
The LinkSupported interface provides the highest performance
Web results provide just as much information as headlines
. . . but putting the information with each query causes workers to just match the text
[Chart: agreement per interface: Basic, Headline, HeadlineInline, HeadlineSummary, LinkSupported]
39. Labelling the FullSet
We now label the fullset (1204 queries) with gold judgments and the LinkSupported interface
Are the resulting labels of sufficient quality?
40. High recall and agreement indicate that the labels are of high quality
Recall: workers got all of the news-related queries right!
Precision: workers found other queries to be news-related
Agreement: workers maybe learning the task? Majority of work done by 3 users
[Chart: precision, recall and agreement on the testset vs. the fullset]
41. Conclusions & Best Practices
Crowdsourcing is useful for building a news query classification dataset
42. We are confident that our dataset is reliable since agreement is high
Best practices:
Online worker validation is paramount: catch out bots and lazy workers to improve agreement
Provide workers with additional information to help improve labelling quality
Workers can learn: running large single jobs may allow workers to become better at the task
Questions?
Editor's Notes
#6: Poisson sampling – the literature says it is representative