Bib & ethics wildgaard
BLASPHEMY
the biases and deficiencies of individual citers are
repaired to a tolerable degree by the combined
activity of the many (White, 2001), where
deficiencies are reduced to random noise (Cawkell,
1976) and references can be used on the
aggregate as an indicator of influence (Small, 1987)
The introduction of the h-index in 2005 led to an explosion of researcher-level indicators (FNI), each claiming to be more robust, valid and sophisticated than the rest.
FNI are destructive: they encourage a gaming mentality, support universities in pressuring staff to deliberately raise their indices, and encourage research policy that monitors research output at the individual level (Dahler-Larsen, 2012; Collini, 2012).
Improvements on h
hw (weights citations); hn (field comparisons); ht (identifies priority articles); hpd (seniority, 10 years); hc (seniority, 4 years); Q2 (number/impact of papers in h); hα (granular h ranks); hT (alternative h calculation)
Complementary to h
m-quotient (h normalized for age); m (median C in h core); A (average C in h core); R (square root of A); E (effect of papers not in h); AR (citation intensity & age of C); b (self-citations, top papers); Rational h (distance to higher h); Hm (multi-authorship); n (field comparison); h sequences & matrices (field comparison); hf (field comparison); Alternative h (multi-authorship); Pure h (multi-authorship); Adapted pure h (multi-authorship); Dynamic h (compare peers); hmx (database comparison)
Narrative
IQP (average quality); Index Age & Productivity (academic age on productivity & impact); %HCP (top 20%); Classification of durability (document type & field comparison); DCI (citation age)
Replace h
g, gα (more granular than h); mg-quotient, AWCR (normalized for age); rational g (distance to higher g); hg (compares h & g values); h2 (cumulative achievement); 徴, x (compare field/seniority); ct, at (aging rate); wu (excellent papers);  (production & impact); POP h, AWCRpa (multi-authorship); AW (adjusts highly cited papers)
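To make these families concrete, here is a minimal sketch, not taken from the slides, of how three of the simplest indicators are computed from a plain list of per-paper citation counts; the function names, the example citation counts and the career length of 8 years are illustrative assumptions.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def m_quotient(citations, career_years):
    # h normalized for academic age (years since first publication).
    return h_index(citations) / career_years

def g_index(citations):
    # Largest g such that the top g papers together hold at least g^2 citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

cites = [45, 20, 12, 9, 8, 6, 5, 5, 3, 1, 0, 0]   # hypothetical researcher
print(h_index(cites))          # 6
print(m_quotient(cites, 8))    # 0.75
print(g_index(cites))          # 10

Every variant listed above starts from the same raw counts; what differs is the weighting (age, field, co-authorship, database), which is exactly why values are hard to compare across indicators.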
With no advisory boards, common standards or
contextual assessments, indicators are mostly
incomparable, which in fact impedes the development
of the field and makes the users of scientometric
results mistrustful.
(Vinkler, 1996).
Standards addressing the ethical aspects of evaluative bibliometrics have again recently been proposed.
(Bornmann, 2008; Bach, 2011; Furner, 2014; Hicks et al., 2015)
Contextual bibliometrics
1994/1996: Mathematical standards for the analysis, presentation and interpretation of data
2008: Ethical standards for the evaluation of individuals
2011: Conceptual framework on ethics & bibliometrics in relation to the allocation of funds
2012: DORA: do not use the JIF in the evaluation of individuals
2015: Leiden Manifesto, Metric Tide: informed use and communication
2016: Contextual bibliometrics
Evaluations based on indicators can lead to assumptions about a researcher's productivity and impact that may be undocumented, and that can affect the researcher's self-perception.
FNI GIVE A SNAPSHOT OF THE INDIVIDUAL'S IMAGE AND CENTRAL PERSONALITY TRAITS
COMPARISONS CAN EXPOSE THE INDIVIDUAL
ALL KINDS OF DATA ARE USED TO INCREASE FNI, THE INDIVIDUAL'S VALIDITY WITHIN THE DOMAIN AND THEIR SELF-WORTH
FNI ENRICH AN EVALUATION WITH OBJECTIVITY, REDUCING GENDER, CULTURAL AND RACIAL BIAS
DOCUMENTING THAT ONE DOES NOT PERFORM BETTER THAN ONE'S COLLEAGUES CAN HARM THE RESEARCHER'S SELF-PERCEPTION
FNI DO NOT ALWAYS CONTRIBUTE VALUE-ADDED INFORMATION; THE INFORMATION CAN BE REDUNDANT
SUCCESS IS DEFINED WITHIN THE EVALUATION SYSTEM
CHALLENGES: EVALUATION / OUTPUT / DATA / MODEL / INDEXING
INTERPRETATION REFLECTS REALITIES / SUCCESS IN THE SYSTEM vs SUCCESS IN THE FIELD / ERRORS & DEFICIENCIES
CHALLENGE #1: THE MODELS
Understand the mechanisms in the construction of bibliometric indicators.
Understand what it is our data can show.
Example indicator values: e = 18; h = 8; m = 1;  = 3; CPP = 18; t = 4
S^c(i) = γ · (Y(now) - Y(i) + 1)^(-δ) · |C(i)|
Weighted citation score for an article: Y(now) - Y(i) is the evaluation year minus the publication year, |C(i)| is the number of citations, δ is an aging exponent, and γ is a selectable coefficient (set to 4) so that an article from the evaluation year is assigned a factor of 4.
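A worked sketch of this score, not from the slides: γ = 4 follows the slide text, while the aging exponent δ = 1 and the function name are illustrative assumptions.

def weighted_citation_score(pub_year, citations, eval_year=2016, gamma=4, delta=1):
    # S^c(i) = gamma * (Y(now) - Y(i) + 1)^(-delta) * |C(i)|
    age = eval_year - pub_year + 1      # equals 1 for an article from the evaluation year
    return gamma * age ** (-delta) * citations

print(weighted_citation_score(2016, 10))   # 40.0 -> evaluation-year citations weighted by 4
print(weighted_citation_score(2013, 10))   # 10.0
print(weighted_citation_score(2007, 10))   # 4.0  -> ten-year-old citations are discounted

With δ = 1 an article keeps 4/age of its citation count, so the same 10 citations are worth 40 in the evaluation year but only 4 ten years later; the weighting rewards recent work.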
CHALLENGE #2: DATA
In time-keeping, in trading, in fighting, men counted numbers; and finally, as the habit grew, only numbers counted.
Mumford (2010)
Raw publication/citation counts tell us nothing about research quality.
Observations: study processes and interactions.
Questionnaires: identify opinions, insights and understanding.
Interviews: invite feedback that one would not normally get.
CHALLENGE #3: INDEXING
Indicators are not in themselves neutral!
The largest share of indicators and databases originate in the northern countries, and therefore reflect the characteristics of science in these regions.
How can we develop local systems to compare, create and re-compare, re-create indicator values in multifaceted data, so that we avoid one-dimensional profiles?
CHALLENGE #4: OUTPUT
What we do not count is invisible.
What is a publication?
What is a citation?
What is a researcher?
CHALLENGE #5: EVALUATION
The assessment itself is completely artificial. It's not ranking researchers in accordance with their ability to develop, reach their potential, and explore their creative interests. Those things you're not testing... it's a rank that's mostly meaningless. And the very ranking itself is harmful. It's turning us into individuals who devote our lives to achieving a rank. Not into doing things that are valuable and important.
Noam Chomsky (2015)
Meet those who are being evaluated.
Investigate why they publish where they do.
Argue for the relevance of the indicators.
Advise.
Communicate.
CHOOSE WISELY
Transparency
Demographics
Motivation
Diversity
Openness
Multi-dimensional research assessment matrix (Moed, 2011)

UNIT OF ASSESSMENT | PURPOSE | OUTPUT | BIBLIOMETRIC INDICATOR | OTHER INDICATORS
INDIVIDUAL | ALLOCATE RESOURCES | RESEARCH PRODUCTIVITY | PUBLICATIONS | PEER REVIEW
RESEARCH GROUPS | IMPROVE PERFORMANCE | QUALITY, SCHOLARLY IMPACT | JOURNAL CITATION IMPACT | PATENTS, LICENCES, SPIN-OFFS
DEPARTMENT | INCREASE MULTI-DISCIPLINARY RESEARCH | INNOVATION & SOCIETAL BENEFIT | ACTUAL CITATION IMPACT | INVITATIONS FOR CONFERENCES
INSTITUTION | INCREASE REGIONAL ENGAGEMENT | SUSTAINABILITY & SCALE | INTERNATIONAL CO-AUTHORSHIP | EXTERNAL RESEARCH INCOME
RESEARCH FIELD | PROMOTION, HIRING | RESEARCH INFRASTRUCTURE | CITATION PRESTIGE | PHD COMPLETION RATES
How can the availability of contextual information be improved?
How can we guide individuals in the use of their information?
How do we fulfil the promise of informed peer review?
How can we influence institutions' approach to research evaluation?
Contextualized bibliometrics
Deliver information that can be explored.
Avoid putting too much weight on what can easily be quantified/counted.
Choose simple indicators over complicated indicators.
Be critical. Be proactive.
CALL FOR ACTION
As bibliometricians, we must commit ourselves to substantiating meaningful truths.
Lorna Wildgaard, Ph.D
Lorna.Wildgaard@hum.ku.dk
REFERENCES
• Bach, J. F. (2011). On the proper use of bibliometrics to evaluate individual researchers. Académie des sciences. Retrieved 23-6-2015 from: http://www.academie-sciences.fr/activite/rapport/avis170111gb.pdf
• Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8(1), 93-102.
• Cawkell, A. E. (1976). Understanding science by analysing its literature. The Information Scientist, 10(1), 3-10.
• Chomsky, N. (2015). Creative by Nature. Blog post: https://creativesystemsthinking.wordpress.com/2015/02/21/noam-chomsky-on-the-dangers-of-standardized-testing/
• Collini, S. (2012). Bibliometry. In What are universities for? (pp. 120-131). London: Penguin.
• Dahler-Larsen, P. (2012). The Evaluation Society. California: Stanford University Press.
• Furner, J. (2014). The Ethics of Evaluative Bibliometrics. In B. Cronin & C. Sugimoto (Eds.), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact (pp. 85-107). Massachusetts: MIT Press.
• Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431.
• Moed, H. (2011). The multi-dimensional research assessment matrix. Research Trends, 23 (May 2011): http://www.researchtrends.com/issue23-may-2011/the-multi-dimensional-research-assessment-matrix/
• Mumford, L. (2010 ed.). Technics and Civilization. University of Chicago Press. p. 22.
• Small, H. G. (1987). The significance of bibliographic references. Scientometrics, 12(5-6), 339-341.
• Vinkler, P. (1996). Some practical aspects of the standardization of scientometric indicators. Scientometrics, 35(2), 235-245.
• White, H. D. (2001). Authors as citers over time. Journal of the American Society for Information Science and Technology, 52(2), 87-108.