Overview presentation of the Verifying Multimedia Use task by Christina Boididou and Stuart Middleton at the MediaEval 2016 workshop, Hilversum, Netherlands.
1. Verifying Multimedia Use at
MediaEval 2016
Christina Boididou (1), Stuart E. Middleton (5), Symeon Papadopoulos (1), Duc-Tien Dang-Nguyen (2,3), Giulia Boato (2), Michael Riegler (4) & Yiannis Kompatsiaris (1)
(1) Information Technologies Institute (ITI), CERTH, Greece
(2) University of Trento, Italy
(3) Insight Centre for Data Analytics at Dublin City University, Ireland
(4) Simula Research Lab, Norway
(5) University of Southampton IT Innovation Centre, UK
3. Real photo
A photo of Eagles of Death Metal in concert, captured in Dublin's Olympia Theatre.
But it was mislabeled on social media as showing the crowd at the Bataclan theatre just before gunmen began firing.
4. A TYPOLOGY OF FAKE: REPOSTING OF REAL
Photos from past events reposted as being associated with a current event.
Examples: "Eiffel Tower lights up in solidarity with Pakistan"; "Syrian refugee girl selling gum in Jordan".
5. A TYPOLOGY OF FAKE: PHOTOSHOPPING
Digitally manipulated / tampered photos.
Examples: "Sharks in New York during Hurricane Sandy"; "Sikh man as a suspect in the Paris attacks".
8. SUB-TASK
Given an image, return a decision (tampered, non-tampered, unknown) on whether the image has been digitally modified.
[Diagram: IMAGE -> MEDIAEVAL SYSTEM -> TAMPERED / NON-TAMPERED]
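A minimal sketch of such a decision function, assuming an Error Level Analysis (ELA) heuristic; ELA, the thresholds and the function name below are illustrative assumptions, not the method used by the task or its participants.

from io import BytesIO
import numpy as np
from PIL import Image, ImageChops

def ela_decision(path, quality=90, low=8.0, high=15.0):
    """Return 'tampered', 'non-tampered', or 'unknown' for an image file."""
    original = Image.open(path).convert("RGB")
    # Re-save the image as JPEG and measure how much it changes on re-compression.
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = np.asarray(ImageChops.difference(original, recompressed), dtype=np.float32)
    mean_error = float(diff.mean())
    # Naive thresholds (assumed values): a high mean error level loosely suggests editing.
    if mean_error > high:
        return "tampered"
    if mean_error < low:
        return "non-tampered"
    return "unknown"

Real systems would combine several forensic cues rather than a single global score; this only mirrors the input/output contract of the sub-task.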
10. GROUND TRUTH GENERATION
- Multimedia cases were labeled as fake/real after consulting online reports (articles, blogs).
- Collection of the posts associated with these cases was performed using Topsy (historic events) or the streaming and search APIs (real-time events).
- Post set expansion: near-duplicate image search, journalist debunking reports and human inspection were used to increase the number of associated posts (a near-duplicate search sketch follows this list).
- A crowdsourcing campaign was carried out on the microWorkers platform; each worker was asked to provide three cases of multimedia misuse.
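A minimal sketch of the near-duplicate image search step, assuming perceptual hashing with the imagehash library; the library choice and the Hamming-distance threshold are assumptions for illustration, not the organizers' actual pipeline.

from PIL import Image
import imagehash

def find_near_duplicates(query_path, candidate_paths, max_distance=8):
    """Return the candidate images whose perceptual hash is close to the query's."""
    query_hash = imagehash.phash(Image.open(query_path))
    matches = []
    for path in candidate_paths:
        candidate_hash = imagehash.phash(Image.open(path))
        # ImageHash subtraction gives the Hamming distance between the 64-bit hashes.
        if query_hash - candidate_hash <= max_distance:
            matches.append(path)
    return matches

Posts sharing a near-duplicate of an already-labeled image can then inherit its fake/real label, subject to human inspection.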
14. TASK EVALUATION
Main task - target class: Fake
Sub-task - target class: Tampered
Classic IR metrics: precision, recall, F1-score (the main evaluation metric).
Participants were allowed to mark a case as unknown (expected to result in reduced recall).
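A minimal sketch of these metrics with "fake" as the target class; treating an "unknown" prediction as a miss on the target class is an assumption consistent with the note that it reduces recall.

def evaluate(predictions, ground_truth, target="fake"):
    """Compute precision, recall and F1 for the target class."""
    tp = sum(p == target and g == target for p, g in zip(predictions, ground_truth))
    fp = sum(p == target and g != target for p, g in zip(predictions, ground_truth))
    fn = sum(p != target and g == target for p, g in zip(predictions, ground_truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Marking the second case as "unknown" costs recall but not precision.
print(evaluate(["fake", "unknown", "real", "fake"],
               ["fake", "fake", "real", "real"]))  # (0.5, 0.5, 0.5)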
15. TASK SUBMISSIONS
- 10 submissions for the main task
- 2 submissions for the sub-task (from a single team)
- 3 teams submitted (plus the organizers)
16. TRENDS IN APPROACHES
Features being used:
- Text features (most common; a minimal text-feature baseline is sketched after this list)
- Post and user metadata
- Image forensics
- Video quality metadata
- Topics of the post
- Text similarity of posts (per image case)
- Trusted sources attributed in the text
- External online sources mentioned in the post
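A minimal sketch of a text-feature baseline along these lines, assuming a scikit-learn TF-IDF + logistic regression pipeline; the pipeline and the toy posts are illustrative assumptions, not any participant's actual system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: post text paired with a fake/real label.
posts = ["BREAKING: sharks swimming in the NYC subway!!!",
         "Concert photo from the Olympia Theatre in Dublin last night"]
labels = ["fake", "real"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["Unbelievable shark photo during the hurricane!!!"]))

In the actual task, such text features would typically be combined with post/user metadata and forensic scores before classification.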