SlideShare feed for slideshows by user: laroyo

NeurIPS2023 Keynote: The Many Faces of Responsible AI
Sat, 16 Dec 2023 | /slideshow/neurips2023-keynote-the-many-faces-of-responsible-aipdf/264690062
The slides from my NeurIPS2023 keynote on the role of diversity in human-annotated data quality for achieving responsible AI.

CATS4ML Data Challenge: Crowdsourcing Adverse Test Sets for Machine Learning
Thu, 10 Dec 2020 | /slideshow/cats4ml-data-challenge-crowdsourcing-adverse-test-sets-for-machine-learning/239963886
Presentation at the Google Booth @ NeurIPS2020.

Harnessing Human Semantics at Scale (updated)
Fri, 13 Nov 2020 | /slideshow/harnessing-human-semantics-at-scale-updated/239244634
Presentation at the ACM-W.

Data excellence: Better data for better AI
Thu, 17 Sep 2020 | /slideshow/data-excellence-better-data-for-better-ai/238521278
Keynote at the ODSC2020 conference.

CHIP Demonstrator presentation @ CATCH Symposium
Fri, 27 Mar 2020 | /slideshow/chip-demonstrator-presentation-catch-symposium/230971884
Semantics-driven recommendations and personalized museum tour generation.

Semantic Web Challenge: CHIP Demonstrator
Fri, 27 Mar 2020 | /laroyo/semantic-web-challenge-chip-demonstrator
Presentation of the CHIP demonstrator at the Semantic Web Challenge: semantics-driven recommendations and personalized museum tour generation.

The Rijksmuseum Collection as Linked Data
Wed, 17 Oct 2018 | /slideshow/the-rijksmuseum-collection-as-linked-data/119662963
Presentation at ISWC2018 (http://iswc2018.semanticweb.org/sessions/the-rijksmuseum-collection-as-linked-data/) of our paper originally published in the Semantic Web Journal: http://www.semantic-web-journal.net/content/rijksmuseum-collection-linked-data-2

Many museums currently provide online access to their collections. State-of-the-art research over the last decade shows that it is beneficial for institutions to provide their datasets as Linked Data in order to achieve easy cross-referencing, interlinking and integration. In this paper, we present the Rijksmuseum linked dataset (accessible at http://datahub.io/dataset/rijksmuseum), along with collection and vocabulary statistics, as well as lessons learned from the process of converting the collection to Linked Data. The version of March 2016 contains over 350,000 objects, including detailed descriptions and high-quality images released under a public domain license.

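To make the cross-referencing claim in the abstract concrete, here is a minimal sketch of querying such a collection dump with rdflib. The file name and the dc:creator property below are illustrative assumptions; check the datahub.io dataset page for the actual distribution formats and vocabularies used.

```python
# Minimal sketch, assuming a locally downloaded Turtle dump of the collection
# (the file name is hypothetical) and objects described with dc:creator.
from rdflib import Graph

g = Graph()
g.parse("rijksmuseum-collection.ttl", format="turtle")  # hypothetical local dump
print(f"loaded {len(g)} triples")

# Once the data is RDF, cross-referencing is just a query:
# e.g. the ten creators with the most objects in the collection.
top_creators = g.query("""
    SELECT ?creator (COUNT(?obj) AS ?n)
    WHERE { ?obj <http://purl.org/dc/elements/1.1/creator> ?creator }
    GROUP BY ?creator
    ORDER BY DESC(?n)
    LIMIT 10
""")
for creator, n in top_creators:
    print(creator, n)
```
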
Keynote at International Conference of Art Libraries 2018 @Rijksmuseum
Thu, 04 Oct 2018 | /slideshow/keynote-at-international-conference-of-art-libraries-2018-rijksmuseum/118134235
My keynote at the ICAL2018 conference @ Rijksmuseum.

FAIRview: Responsible Video Summarization @NYCML'18
Mon, 24 Sep 2018 | /slideshow/fairview-responsible-video-summarization-nycml18/116341316
Presentation at the NYC Media Lab (NYCML2018). There is a growing demand for news videos online, with more consumers preferring to watch the news than read or listen to it. On the publisher side, there is a growing effort to use video summarization technology to create easy-to-consume previews (trailers) for different types of broadcast programs. How can we measure the quality of video summaries and their potential to misinform? This workshop will inform participants about automatic video summarization algorithms and how to produce more representative video summaries. The research presented is from the FAIRview project and is supported by the Digital News Innovation Fund (DNI Fund), which is part of the Google News Initiative.

Understanding bias in video news & news filtering algorithms
Mon, 09 Jul 2018 | /slideshow/understanding-bias-in-video-news-news-filtering-algorithms/104955291
A joint demonstrator plan within the #responsible #datascience program @NLeSC @NWO: https://www.linkedin.com/company/netherlands-escience-center/

StorySourcing: Telling Stories with Humans & Machines
Mon, 02 Jul 2018 | /slideshow/storysourcing-telling-stories-with-humans-machines/104000043
Keynote by Lora Aroyo at Narrative Matters 2018.

Data Science with Humans in the Loop
Sun, 24 Sep 2017 | /slideshow/data-science-with-humans-in-the-loop/80107288
Inaugural speech by Lora Aroyo, Human-Computer Interaction chair, Vrije Universiteit Amsterdam. Video: https://youtu.be/9jlCJULSrhc / Video with slides: https://av-media.vu.nl/VUMedia/Play/5745f2482d3f4fe7a547458393af322a1d

Digital Humanities Benelux 2017: Keynote Lora Aroyo
Wed, 05 Jul 2017 | /slideshow/digital-humanities-benelux-2017-keynote-lora-aroyo/77547078
https://dhbenelux2017.eu/programme/keynotes/lora/

DH Benelux 2017 Panel: A Pragmatic Approach to Understanding and Utilising Events in Cultural Heritage
Wed, 05 Jul 2017 | /slideshow/dh-benelux-2017-panel-a-pragmatic-approach-to-understanding-and-utilising-events-in-cultural-heritage/77533061
Panelists: Lora Aroyo, Chiel van den Akker, Marnix van Berchum, Lodewijk Petram, Gerard Kuys, Tommaso Caselli, Jacco van Ossenbruggen, Victor de Boer, Sabrina Sauer, Berber Hagedoorn.

Crowdsourcing ambiguity-aware ground truth - Collective Intelligence 2017
Thu, 15 Jun 2017 | /slideshow/crowdsourcing-ambiguity-aware-ground-truth-collective-intelligence-2017-76985921/76985921
The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods. Crowdsourcing-based approaches are gaining popularity in the attempt to solve the issues related to the volume of data and lack of annotators. Typically, these practices use inter-annotator agreement as a measure of quality. However, this assumption often creates issues in practice. Previous experiments we performed found that inter-annotator disagreement is usually not captured, either because the number of annotators is too small to capture the full diversity of opinion, or because the crowd data is aggregated with metrics that enforce consensus, such as majority vote. These practices create artificial data that is neither general nor reflective of the ambiguity inherent in the data.

To address these issues, we proposed a method for crowdsourcing ground truth by harnessing inter-annotator disagreement. We present an alternative approach for crowdsourcing ground truth data that, instead of enforcing agreement between annotators, captures the ambiguity inherent in semantic annotation through the use of disagreement-aware metrics for aggregating crowdsourcing responses. Based on this principle, we have implemented the CrowdTruth framework for machine-human computation, which first introduced the disagreement-aware metrics and a pipeline to process crowdsourcing data with them. In this paper, we apply the CrowdTruth methodology to collect data over a set of diverse tasks: medical relation extraction, Twitter event identification, news event extraction and sound interpretation. We show that capturing disagreement is essential for acquiring a high-quality ground truth. We achieve this by comparing the quality of the data aggregated with CrowdTruth metrics against majority vote, a method that enforces consensus among annotators. By applying our analysis over a set of diverse tasks, we show that, even though ambiguity manifests differently depending on the task, our theory of inter-annotator disagreement as a property of ambiguity is generalizable.

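The contrast the abstract draws between majority vote and disagreement-aware aggregation can be illustrated with a small sketch. This is not the CrowdTruth implementation (see http://crowdtruth.org for that); the label set and the simple fraction-of-workers score are illustrative assumptions.

```python
# Illustrative sketch (not the CrowdTruth library): contrast majority-vote
# aggregation with a disagreement-aware score over worker annotations.
# Each worker marks, for one unit (e.g. a sentence), the labels they think apply.
from collections import Counter

def majority_vote(annotations, n_workers):
    """Keep only labels chosen by more than half of the workers;
    all graded, ambiguous signal is discarded."""
    counts = Counter(label for labels in annotations for label in labels)
    return {label for label, c in counts.items() if c > n_workers / 2}

def label_scores(annotations, n_workers):
    """Disagreement-aware view: a score in [0, 1] per label (fraction of
    workers selecting it), so ambiguity stays visible in the ground truth."""
    counts = Counter(label for labels in annotations for label in labels)
    return {label: c / n_workers for label, c in counts.items()}

# Seven workers annotate one sentence with candidate medical relations.
workers = [
    {"treats"}, {"treats", "prevents"}, {"prevents"},
    {"treats"}, {"prevents"}, {"treats"}, {"none"},
]
print(majority_vote(workers, len(workers)))  # {'treats'}: the 'prevents' signal is lost
print(label_scores(workers, len(workers)))   # treats ~0.57, prevents ~0.43, none ~0.14
```

Majority vote collapses the graded scores into a single label (or none at all), which is exactly the information the disagreement-aware metrics are designed to preserve.
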
My ESWC 2017 keynote: Disrupting the Semantic Comfort Zone
Thu, 01 Jun 2017 | /slideshow/my-eswc-2017-keynote-disrupting-the-semantic-comfort-zone/76559804
Ambiguity in interpreting signs is not a new idea, yet the vast majority of research in machine interpretation of signals such as speech, language, images, video and audio tends to ignore ambiguity. This is evidenced by the fact that metrics for the quality of machine understanding rely on a ground truth, in which each instance (a sentence, a photo, a sound clip, etc.) is assigned a discrete label, or set of labels, and the machine's prediction for that instance is compared to the label to determine whether it is correct. This determination yields the familiar precision, recall, accuracy and f-measure metrics, but clearly presupposes that this determination can be made. CrowdTruth is a form of collective intelligence based on a vector representation that accommodates diverse interpretation perspectives and encourages human annotators to disagree with each other, in order to expose latent elements such as ambiguity and worker quality. In other words, CrowdTruth assumes that when annotators disagree on how to label an example, it is because the example is ambiguous, the worker isn't doing the right thing, or the task itself is not clear.

In previous work on CrowdTruth, the focus was on how the disagreement signals from low-quality workers and from unclear tasks can be isolated. Recently, we observed that disagreement can also signal ambiguity. The basic hypothesis is that, if workers disagree on the correct label for an example, then it will be more difficult for a machine to classify that example. The elaborate data analysis to determine whether the source of the disagreement is ambiguity supports our intuition that low clarity signals ambiguity, while high-clarity sentences quite obviously express one or more of the target relations. In this talk I share the experiences and lessons learned on the path to understanding diversity in human interpretation and the ways to capture it as ground truth, to enable machines to deal with such diversity.

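The clarity signal described above can be made concrete with a small sketch. This is not the official CrowdTruth metric suite; the label set and the cosine-based score below are illustrative assumptions about one way to turn worker annotation vectors into a per-unit clarity value, where near-consensus units score high and contested units score low.

```python
# Minimal sketch (assumed, simplified take on a clarity signal): each worker
# vector has one slot per candidate label (1 if selected, 0 if not); the unit
# vector is their sum, and a label's score is the cosine similarity between
# the unit vector and the basis vector for that label.
import math

def unit_vector(worker_vectors):
    return [sum(col) for col in zip(*worker_vectors)]

def label_score(unit_vec, label_index):
    norm = math.sqrt(sum(v * v for v in unit_vec))
    return unit_vec[label_index] / norm if norm else 0.0

def clarity(unit_vec):
    """High when workers pile onto one label, low when the votes spread out."""
    return max(label_score(unit_vec, i) for i in range(len(unit_vec)))

# Hypothetical label order: [treats, prevents, causes, none]; seven workers each.
clear_unit     = [[1, 0, 0, 0]] * 6 + [[0, 0, 0, 1]]                       # near-consensus
ambiguous_unit = [[1, 0, 0, 0]] * 3 + [[0, 1, 0, 0]] * 3 + [[0, 0, 0, 1]]  # contested
print(clarity(unit_vector(clear_unit)))      # ~0.99
print(clarity(unit_vector(ambiguous_unit)))  # ~0.69
```

A unit that stays low-clarity even after low-quality workers and unclear task designs are ruled out is then a candidate for genuine ambiguity, which is the hypothesis the keynote explores.
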
Data Science with Human in the Loop @Faculty of Science #Leiden University
Tue, 18 Apr 2017 | /laroyo/data-science-with-human-in-the-loop-faculty-of-science-leiden-university
Software systems are becoming ever more intelligent and more useful, but the way we interact with these machines too often reveals that they don't actually understand people. Knowledge Representation and the Semantic Web focus on the scientific challenges involved in providing human knowledge in machine-readable form. However, we observe that various types of human knowledge cannot yet be captured by machines, especially when dealing with wide ranges of real-world tasks and contexts. The key scientific challenge is to provide an approach to capturing human knowledge in a way that is scalable and adequate to real-world needs. Human Computation has begun to scientifically study how human intelligence at scale can be used to methodologically improve machine-based knowledge and data management. My research focuses on understanding human computation in order to improve how machine-based systems can acquire, capture and harness human knowledge and thus become even more intelligent. In this talk I show how the CrowdTruth framework (http://crowdtruth.org) facilitates the collection, processing and analytics of human computation data.

Some project links:
- http://controcurator.org/
- http://crowdtruth.org/
- http://diveproject.beeldengeluid.nl/
- http://vu-amsterdam-web-media-group.github.io/linkflows/

SXSW2017 @NewDutchMedia Talk: Exploration is the New Search
Mon, 13 Mar 2017 | /slideshow/sxsw2017-newdutchmedia-talk-exploration-is-the-new-search/73103570
My talk at the Dutch #MediaInnovators event at SXSW2017, in a session together with #SoundandVision #VPRO and #VARA.

Europeana GA 2016: Harnessing Crowds, Niches & Professionals in the Digital Age
Sun, 13 Nov 2016 | /slideshow/europeana-ga-2016-harnessing-crowds-niches-professionals-in-the-digital-age/68822638
Presentation at the Annual Europeana meeting.

"Video Killed the Radio Star": From MTV to Snapchat
Thu, 22 Sep 2016 | /slideshow/video-killed-the-radio-star-from-mtv-to-snapchat-lecture-at-knowledge-media-course-2016-vu-university-amsterdam/66300174
Guest lecture at the Knowledge & Media course 2016, VU University Amsterdam.

Specialties:
- Multimedia online
- Human-computer Interaction
- Crowdsourcing
- Recommender systems
- User and Context Modeling
- User-Centered Design for Semantic Systems
- e-Learning, e-Culture and Interactive TV
- Intelligent Educational Systems

lora-aroyo.org