Becoming metrics literate: An analysis of brief videos that teach about the h-index.

Lauren A Maggio1, Alyssa Jeffrey2,3, Stefanie Haustein2,3, Anita Samuel1.   

Abstract

INTRODUCTION: Academia uses scholarly metrics, such as the h-index, to make hiring, promotion, and funding decisions. These high-stakes decisions require that those using scholarly metrics be able to recognize, interpret, critically assess and effectively and ethically use them. This study aimed to characterize educational videos about the h-index to understand available resources and provide recommendations for future educational initiatives.
METHODS: The authors analyzed videos on the h-index posted to YouTube. Videos were identified by searching YouTube and were screened by two authors. To code the videos the authors created a coding sheet, which assessed content and presentation style with a focus on the videos' educational quality based on Cognitive Load Theory. Two authors coded each video independently with discrepancies resolved by group consensus.
RESULTS: Thirty-one videos met inclusion criteria. Twenty-one videos (68%) were screencasts and seven used a "talking head" approach. Twenty-six videos defined the h-index (83%) and provided examples of how to calculate and find it. The importance of the h-index in high-stakes decisions was raised in 14 (45%) videos. Sixteen videos (52%) described caveats about using the h-index, with potential disadvantages to early researchers the most prevalent (n = 7; 23%). All videos incorporated various educational approaches with potential impact on viewer cognitive load. A minority of videos (n = 10; 32%) displayed professional production quality.
DISCUSSION: The videos featured content with potential to enhance viewers' metrics literacies such that many defined the h-index and described its calculation, providing viewers with skills to recognize and interpret the metric. However, less than half described the h-index as an author quality indicator, which has been contested, and caveats about h-index use were inconsistently presented, suggesting room for improvement. While most videos integrated practices to facilitate balancing viewers' cognitive load, few (32%) were of professional production quality. Some videos missed opportunities to adopt particular practices that could benefit learning.

Year:  2022        PMID: 35522678      PMCID: PMC9075661          DOI: 10.1371/journal.pone.0268110

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

In academia, citation and publication metrics, such as the journal impact factor and h-index, are used to make critical decisions about individuals, including decisions that govern hiring, promotion, retention and funding. The importance of quantitative metrics has created a pressure to publish, leading to a range of adverse effects and scientific misconduct, including duplication of publications, gratuitous self-citations, ‘citation cartels,’ self-plagiarism and so-called ‘salami publishing’ [1-4]. The quantification and oversimplification of research output and impact is harming scholarly communities in all disciplines [5]. These high stakes create an imperative that academics, including administrators, have a robust understanding of how metrics are derived as well as basic knowledge of their strengths and weaknesses. In essence, academic decision makers must be metrics literate. Unfortunately, this does not seem to be the case: multiple studies document metrics misuse [6-12]. In 2020, Ioannidis and Boyack advocated for training in metrics, pointing out that “the lack of training in what they [metrics] mean and what they can and cannot tell us may be the greatest threat associated with them” [13]. Metrics literacies, which are related to the concept of being metrics wise [14], are an integrated set of competencies, dispositions and knowledge that empower individuals to recognize, interpret, critically assess and effectively and ethically use scholarly metrics. We argue that metrics literacies are essential among academics and administrators and that short videos are well suited to educate them [14-16]. Thus, in this manuscript we characterize one educational approach, the use of short online videos, in order to provide recommendations for future educational initiatives.

Scholarly metrics, or more specifically bibliometric indicators, are quantitative statistical measures based on publication and citation counts.
Used carefully and as a complement to qualitative evaluation approaches such as peer review, they can, to a certain extent, inform about the research productivity, collaboration and impact of individual authors, teams, universities and even countries [17,18]. Unfortunately, the application of bibliometric indicators has largely been characterized by the inappropriate use of simplistic and limited indicators in the context of hiring, promotion and funding. The most popular metrics are not those carefully constructed by bibliometric experts, but those that are easily available. The journal impact factor and the h-index are the most popular bibliometric indicators and, despite their known flaws and limitations, are still heavily used and can have significant career implications.

About the h-index

The h-index was created in 2005 by physicist Jorge Hirsch as a “simple and useful way to characterize the scientific output of a researcher” [19], combining the two dimensions of scientific productivity (i.e., number of publications) and impact (i.e., number of citations). Mathematically, it is a simple index defined as the largest number h such that the researcher has published h papers with at least h citations each, meaning that a researcher with an h-index of 15 has published at least 15 papers which have each received at least 15 citations. Hirsch [19] claimed that the h-index is an “estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions”, but due to its arbitrary and methodologically questionable [20] combination of publications and citations, it lacks a clear conceptual foundation. In addition, the h-index has a range of flawed properties which lead to inconsistent results [18,21,22]. The most striking inconsistency appears in the context of absolute performance improvement: if two authors publish exactly the same number of additional publications with exactly the same number of citations, the ranking of these authors relative to each other should remain the same. However, the h-index does not behave that way [22]. It produces inconsistent results insofar as it could either increase or stay the same for both authors, but could also, counterintuitively, increase for one author while staying the same for the other [18]. Due to these inconsistent properties, it “cannot be considered an appropriate indicator” [22]. Created to assess a scientist’s career, the h-index is a time- and size-dependent indicator, meaning that it depends on the duration of each scientist’s career and their total number of publications [18,23,24]. By definition it therefore disadvantages early career researchers [25,26].
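Hirsch's definition, and the ranking inconsistency described above, can be made concrete with a short sketch (a minimal illustration, not code from the study; the citation counts are invented for the example):

```python
def h_index(citations):
    """Largest number h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still "counts" toward h
            h = rank
        else:
            break
    return h

# An author with at least 15 papers cited at least 15 times each has h = 15.
papers = [50, 30, 22, 20, 18, 17, 16, 16, 15, 15, 15, 15, 15, 15, 15, 3, 1]
print(h_index(papers))  # 15

# The inconsistency: author A starts ranked above author B, both gain one
# identical new paper (9 citations), yet the ranking changes to a tie.
a, b = [3, 3, 3], [9, 9]
print(h_index(a), h_index(b))              # 3 2
print(h_index(a + [9]), h_index(b + [9]))  # 3 3
```

The last two lines reproduce the behaviour criticized by Waltman and van Eck [22]: identical additions to two publication records change the authors' ranking relative to each other.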
Similar to the journal impact factor, the h-index also lacks field normalization [26,27], which means that it does not account for differences in publication and citation practices across disciplines [28]. Therefore, the h-index should not be used to compare researchers from different fields [19,26,28]. Additionally, like most other bibliometric indicators, the h-index can vary depending on which database is used, based on the publications and citations indexed and on how the database identifies citations [20,26]. For example, an author’s h-index reported in Google Scholar is almost always larger than that reported by the Web of Science because Google Scholar covers more publications than Web of Science. Despite its range of serious shortcomings, the h-index frequently informs the granting of tenure and promotion, as well as the allocation of academic prizes and grants [10,25,29]. While some universities ask applicants to provide their h-indices, others set h-index thresholds to short-list candidates for appointment to different academic ranks [6,11,30,31]. Even if not used officially, individual evaluators might consult the h-index behind the scenes [9]. When acting as evaluators, researchers, often under time pressure, feel that they need an objective measure to compare applicants [10]. Thus, there is a need for training and related educational resources to ensure that this and other scholarly metrics are well understood and utilized appropriately [13,15,16].

Educational approaches

There have been some attempts to address the lack of metrics literacies in the wider academic community. The San Francisco Declaration on Research Assessment (DORA) includes recommendations for improving the evaluation of research output by refraining from using the journal impact factor to evaluate individuals [32], while the Leiden Manifesto [33] lists ten principles to guide the use of bibliometrics in research evaluation. The Metric Tide [34] introduces a framework for responsible metrics. Other efforts include books and guides, such as Measuring Research, published in Oxford University Press’s What Everyone Needs to Know series [17], which provides an overview of scholarly metrics in an accessible manner. Similarly, Becoming Metric-Wise: A Bibliometric Guide for Researchers [18] seeks to increase knowledge about bibliometric methods and indicators among researchers. The Metrics Toolkit takes a similar step towards communicating scholarly metrics to a general academic audience by providing “evidence-based information about research metrics across disciplines” [35]. These initiatives represent a valuable first step toward educating the broader academic community on the appropriate use of scholarly metrics. However, they are all published as text-heavy articles, monographs, or websites, which require hours of reading, one of the slowest and most laborious means of acquiring and retaining knowledge [36]. Videos, on the other hand, are perceived by learners as more engaging [37] and easily accessible [38]. Thus, we decided to focus our energies on video as a more efficient and effective online education format. Videos are effective educational tools [39-41] that encourage a multi-sensory learning experience [38]. They have long been used for educational purposes [42], and their use has been shown to increase student motivation by approximately 70% [43]. Videos are also effective in conveying complex information.
By showing educational content instead of simply telling it [44], videos can spark learner interest [45-48] and improve comprehension and retention of the material [48,49]. These benefits may be particularly valuable for teaching complex topics (such as scholarly metrics), as videos can provide guidance and clarity and increase learners’ engagement [48,49]. For example, video lectures and podcasts can increase learners’ interaction with academic content, allow them to set their own pace of instruction and improve their learning experience [50-53]. Video use for educational purposes has exploded since the arrival of YouTube, a freely available, web-based video platform. A recent survey found that 51% of adults in the United States use YouTube to learn [54]. In today’s educational landscape, learners are increasingly spending time on mobile devices, creating a demand for content that is mobile-accessible and for tasks that can be performed on the go [52]. As a free platform, YouTube removes barriers to entry; anyone can be a content creator or consumer. With more than 2 billion active users and approximately 500 hours of content uploaded every minute [55], YouTube provides accessible content on various platforms, including mobile devices, which encourages learning on the go and provides just-in-time learning opportunities. However, little is known about how the h-index and other bibliometric indicators have been covered on this platform, raising questions about the characteristics of these videos, including what content is covered and which educational techniques are utilized. Thus, in this study, we aimed to characterize the freely available YouTube videos focused on the h-index in order to understand the state of the available resources and to provide practitioners with practical findings to optimize the creation of future videos.

Methods

We conducted an analysis of publicly accessible videos on the h-index posted to YouTube. As this study did not involve humans, we did not submit this research to an ethics board. On August 26, 2021, we identified 274 videos by searching YouTube via Google.ca (“h index” OR “hirsch index” OR “h-index” site:youtube.com). The three results pages were downloaded as HTML for future access. These 274 videos became our initial data set. Each video was independently viewed by at least two authors and considered in relation to our inclusion and exclusion criteria, with all discrepancies resolved by group consensus. We included videos about the h-index. We considered a video to be about the h-index if the presenters described the metric, such as providing a description of how the h-index is calculated (e.g., a scientist has index h if h of their Np papers have at least h citations each and the other (Np–h) papers have ≤h citations each). We excluded videos that did not describe the metric, but instead either just mentioned it as a generic citation metric or focused on tasks like how to write code to calculate an h-index. We also excluded results that were only a YouTube playlist and did not contain content about the metric. For feasibility, we excluded videos not presented in English. Based on research by Guo et al. [56], who found that median engagement time with videos is about six minutes and that viewers watch less than halfway through videos longer than nine minutes, we also excluded videos over 10 minutes in duration. We did not limit the date range of our search; however, the h-index was first proposed in 2005 [19], so all retrieved videos were published after 2005. To capture video characteristics, we collaboratively created a codebook based on our experience and training as educators as well as the literature on video education [52,57-60].
To determine the videos’ educational quality, we anchored our evaluation in related work by Young and colleagues [61], who considered the efficacy of educational videos for adult learners based on how the videos increased or decreased a viewer’s cognitive load, thus influencing learning. Cognitive load describes the amount of information that an individual can hold at one time in their working memory, which is a limited resource [62]. If a learner’s working memory is overloaded, learning is negatively impacted [63]. Researchers have proposed that educators use Cognitive Load Theory (CLT), an instructional theory, to guide educational design in order to optimize learners’ cognitive load [64]. CLT, introduced by Sweller in 1988, posits that there are three types of cognitive load, intrinsic, extraneous, and germane, that must be accounted for and balanced when designing instruction [62]. In our coding, we identified elements within the videos with the potential to impact each type of cognitive load. To make judgements about the production quality of the videos, which we classified broadly as amateur or professional, we relied on video characteristics as described by industry [65]. The codebook and data resulting from this study and related codes are freely available on Zenodo [66]. The coding tool was operationalized in Google Sheets. AJ, AS, and LM conducted three pilot rounds to test the efficacy of the coding sheet on three separate sets of videos. Following each pilot round, the coders discussed the fitness of the coding sheet for the task and made any necessary updates. After finalizing the coding sheet, each video was coded in duplicate, with each coder working independently. Discrepancies between two coders were resolved by group consensus of all authors, with SH acting as a tiebreaker as needed.
As noted, we had access to 274 videos; however, as our goal was to understand the characteristics of educational videos about the h-index rather than to characterize all available videos on this topic, we did not attempt to comprehensively identify every video about the h-index. Instead, we worked towards data sufficiency, the point at which we could derive a clear and coherent understanding of the key characteristics of the videos and could identify no additional nuances or insights [67]. After each round of coding, we discussed via video conference whether we felt that we had reached data sufficiency. We felt that we reached data sufficiency at 24 videos. However, we reviewed an additional seven videos for certainty, which did not yield any new insights. Our final dataset therefore contained 31 videos. Data from our coding were compiled in Google Sheets and summarized using descriptive statistics.
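The descriptive statistics reported in the Results (means, ranges, standard deviations) are the kind that can be reproduced from a coding sheet with a few lines of Python. The durations below are illustrative placeholder values, not the study's data, which are available on Zenodo [66]:

```python
import statistics

# Illustrative video durations in minutes (placeholder values only).
durations = [3.67, 6.23, 2.67, 2.20, 1.65, 8.05, 9.43]

mean = statistics.mean(durations)
sd = statistics.stdev(durations)  # sample standard deviation
print(f"mean = {mean:.2f}; range = {min(durations):.2f}-{max(durations):.2f}; SD = {sd:.2f}")
```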

Results

We analyzed 31 videos and report their characteristics, content, and factors that have potential to impact a viewer’s cognitive load.

Video characteristics

Videos were posted to YouTube between 2011 and 2021, with a majority (81%) posted between 2019 and 2021. On average, videos were 5.12 minutes in duration (range = 0.47–9.59; SD = 2.89). Comments had been posted to 20 of the videos, with the caveat that some videos did not allow comments. The number of views ranged from 10 to 20,503 (mean = 2,749; SD = 4,618). The most viewed video (20,503 views), titled “H-Index Explained,” was posted by Curtin Library in 2013 (Video 2). See Table 1 for a listing of all videos and key characteristics.
Table 1

Characteristics of YouTube videos on the h-index (n = 31).

Video # (Unique ID) · Title · Duration · Year posted · Views · Video type · Video topic · Production quality

1 (1) · "What is the h-index?" · 3:40 · 2018 · 13,069 · talking head · h-index · Amateur
2 (2) · "H-Index Explained" · 6:14 · 2013 · 20,521 · screencast · h-index · Professional
3 (3) · "EP 05: h-index | 2-MIN METRICS SERIES" · 2:40 · 2020 · 410 · animation · h-index · Professional
4 (4) · "What is an H-index? | How To Research" · 2:12 · 2021 · 49 · talking head; screencast; animation · h-index · Professional
5 (5) · "How to Find out the H-Index of an Author | Author-Level Metrics | Journal Publications & Citations" · 1:39 · 2021 · 56 · screencast · h-index; other metrics · Professional
6 (7) · "What is H INDEX" · 8:03 · 2020 · 487 · screencast · h-index; other metrics · Amateur
7 (8) · "h Index" · 9:26 · 2020 · 6,107 · screencast · h-index; other metrics · Amateur
8 (10) · "What is h-index" · 2:23 · 2021 · 27 · talking head · h-index · Professional
9 (11) · "h - Index: What is h- Index? How To Calculate h- index?" · 6:34 · 2020 · 11,620 · screencast · h-index; other metrics · Amateur
10 (12) · "H - Index in Google Scholars: Why h-index is Important for Your Research Career?" · 7:26 · 2019 · 4,605 · screencast · h-index; other metrics · Amateur
11 (14) · "Understanding Scopus h-index - How does it increase?" · 6:38 · 2020 · 1,705 · screencast · h-index · Amateur
12 (17) · "What is h index and i10 index | phd | Milton Joe" · 8:55 · 2020 · 3,187 · screencast · h-index; other metrics · Amateur
13 (20) · "What is h-index" · 5:55 · 2020 · 1,601 · talking head; screencast · h-index · Amateur
14 (22) · "Citations, i10-index, h-index and m-index" · 8:41 · 2020 · 441 · screencast · h-index; other metrics · Amateur
15 (23) · "Scopus Tip: How to increase your H index and citation in Scopus (English Version)" · 8:30 · 2020 · 1,724 · screencast · h-index · Amateur
16 (26) · "Citation INDEX | How To Calculate i-10 Index and H-Index" · 2:32 · 2020 · 1,888 · animation · h-index; other metrics · Amateur
17 (30) · "Why the h-index cannot be compared across disciplines" · 4:28 · 2020 · 291 · interview · h-index · Amateur
18 (33) · "h index | how to calculate h-index | h-index of an Author | Journal | University or Country h index" · 3:26 · 2021 · 689 · screencast · h-index · Professional
19 (37) · "Google scholar citation| h-index and i10 index| Progress with Prof.Mahamani" · 9:34 · 2020 · 483 · talking head; screencast · h-index; other metrics · Amateur
20 (40) · "Hirch index (h-index) | Citation Index (Notes Included)" · 3:08 · 2021 · 47 · screencast · h-index · Amateur
21 (44) · "What is the h-index? in 5 Min ONLY" · 8:18 · 2021 · 227 · screencast; scribing · h-index; other metrics · Amateur
22 (47) · "h-index" · 1:28 · 2019 · 47 · screencast · h-index · Professional
23 (56) · "Limitations of the h-index for early career researchers" · 1:20 · 2011 · 5,482 · interview · h-index · Professional
24 (58) · "Importance of h index - @Dr. Anand Nayyar - Learning with Chandan" · 5:12 · 2020 · 157 · interview · h-index; other metrics · Amateur
25 (67) · "h index and i10 index I Research Indices I Research to Publication I Dr.V.M.M.Thilak" · 4:58 · 2021 · 28 · talking head · h-index; other metrics · Amateur
26 (74) · "Evaluating h-Index: metric to evaluate authors rank" · 9:18 · 2016 · 2,604 · screencast · h-index; other metrics · Amateur
27 (90) · "H index" · 0:47 · 2017 · 3,150 · screencast · h-index · Amateur
28 (101) · "How To Calculate h index i10 and i20 index" · 3:36 · 2020 · 202 · talking head · h-index; other metrics · Amateur
29 (103) · "H index (English)" · 2:37 · 2021 · 15 · animation · h-index · Professional
30 (148) · "Impact Factor of Non SCI Journals H-Index Half Life JCR IF" · 2:05 · 2020 · 35 · screencast · h-index; other metrics · Amateur
31 (156) · "An Introduction to Bibliometrics" · 9:59 · 2015 · 4,256 · screencast · h-index; other metrics · Professional
The majority of videos (n = 21; 68%) were presented as screencasts (i.e., digital recordings of a computer screen, usually accompanied by descriptive audio). Of the screencasts, 11 videos (35%) presented voice-over presentation slides and seven (23%) were recordings of the presenter navigating online resources, such as a database or library website. Seven videos (23%) used a “talking head” approach, in which the pictured presenter spoke directly to the viewer. Other formats included animation (n = 4; 13%), interviews (n = 3; 10%) and scribing (e.g., overview of a hand drawing/writing) (n = 1; 3%). Twenty-six videos (84%) incorporated human presence. This presence included active depictions of human presenters (e.g., a presenter writing on a whiteboard; a presenter conducting an interview) (n = 7; 23%); still images, such as an academic headshot (n = 17; 55%); or voice-over audio by the presenter (n = 26; 84%). When addressing viewers, 19 presenters (61%) used the second-person pronoun or addressed the viewer as “we”. Seventeen presenters (55%) identified themselves in the videos; however, their roles were not always explicitly stated. Roles included faculty members (n = 10; 32%) and self-described researchers (n = 4; 13%); three presenters gave their names but no information about their roles. Two videos (6.5%) were posted by library YouTube accounts, but it was unclear if the presenters were librarians. The production quality of the videos could broadly be classified as amateur or professional. A majority of the videos (n = 21; 68%) displayed amateur production quality, evidenced by poor camera angles, poor audio quality, and a lack of editing. These presenters spoke in an unscripted, “off-the-cuff” style characterized by repetitions and filler words. For example, “Actually because h-index—just to give you a uh, uhm, historical overview of h-index -, previously, uhhh, researcher they. . .” (Video 6).
Where there was no audio, the slides were text-heavy and difficult to read (Video 30). The videos deemed to be of professional production quality (n = 10; 32%) incorporated professional images and animations along with scripted dialogue that flowed smoothly.

Video content

Content in the 31 included videos varied. Most videos defined the h-index (n = 26; 84%), but only 39% (n = 12) mentioned Jorge Hirsch, the inventor of the h-index. To exemplify the calculation of the h-index, 22 videos (71%) utilized fictional examples. For example, one presenter stated: “an h-index of 12 would mean that out of all the publications by a group or person, 12 articles would have received at least 12 citations each” (Video 1). In addition, 14 presenters (45%) utilized examples of their own h-index or that of another named researcher. Some videos featured content on how to find an h-index (n = 10; 32%). The majority of videos (n = 19; 61%) mentioned that viewers could locate the h-index in Google Scholar, while 45% mentioned Scopus (n = 14) and 39% Web of Science (n = 12). See Table 2 for a summary of the videos’ content. A few videos addressed scholarly metrics broadly (e.g., impact factor, i10-index) (n = 8; 26%), while 10% of videos specifically mentioned how someone can increase their h-index (n = 3).
Table 2

A summary of the content of short YouTube videos on the h-index (n = 31).

Content · Count (%)
Defines the h-index · 26 (84)
Provides a fictional example of the h-index · 22 (71)
Mentions other impact indicators · 16 (52)
Provides a real example of the h-index · 14 (45)
Refers to Jorge Hirsch · 12 (39)
Resources for locating the h-index · 22 (71)
    Google Scholar · 19 (61)
    Scopus · 14 (45)
    Web of Science · 12 (39)
Describes cautions about the h-index · 16 (52)
    Disciplinary differences · 12 (39)
    Database differences · 9 (29)
    Disadvantage to early career researchers · 7 (23)
    Self-citation inflation · 3 (10)
    Author order negated · 3 (10)
    Non-English language publications · 1 (3)
Most videos (n = 28; 90%) described the h-index as an author-level metric. However, five videos (16%) noted that the h-index could also be used as an indicator to describe journals, while three videos (10%) mentioned its applicability to groups of authors, such as researchers based at specific universities or in specific countries. Fifteen videos (48%) described the h-index as indicative of the importance, quality or impact of an author or set of publications. For example, one presenter noted, “To determine how productive and impactful a researcher is, we have something called the h-index” (Video 4). Another presenter stated: “My professor says a good researcher, his h-index should be equal to his age. . .These people are highly cited and it means they are a good researcher. My h-index is 12 and I am 34. This means I am not a good researcher” (Video 11). Of note, while we did not code videos for accuracy, this statement stood out as inaccurate. The importance of the h-index in recruitment, tenure, and promotion decisions was raised in 45% (n = 14) of the videos. For example, the “h-index is increasingly being used in the critical assessment of faculty for tenure and promotion alongside other forms of evaluation” (Video 31). When explaining that he includes the h-index on his CV, one researcher noted: “It is a very good factor [the h-index] to impress someone” (Video 11). To this end, three videos (10%) proposed strategies for a researcher to raise their h-index. For example, one presenter described a step-by-step process of identifying and emailing researchers copies of their articles in a bid to increase citations and, ultimately, their h-index (Video 23). As noted above, the h-index has several issues that make it a problematic metric to use in research evaluation. Over half of the videos (n = 16; 52%) raised caveats about using the h-index, especially for high-stakes situations such as hiring, promotion and tenure decisions.
For example, seven videos (23%) cautioned that using the h-index could disadvantage early career researchers, as they may have published fewer articles and the citations to those articles have had less time to accrue. Twelve videos (39%) warned against comparing scholars in different disciplines, as disciplines can have different citation traditions. Nine videos (29%) described that the h-index value for the same author might differ depending on the database used. Several videos mentioned that citation practices such as self-citation can inflate an h-index (n = 3; 10%) and that the h-index does not account for author order, such that an author’s h-index is influenced equally by articles on which they were a middle author and those on which they were first author. A single video (Video 1) noted that authors publishing in languages other than English are disadvantaged in h-index calculations, because non-English publications have lower citation rates and are largely excluded from citation databases such as Web of Science and Scopus.

Cognitive load

We identified elements and approaches in the videos with potential to impact a viewer’s cognitive load, which can have implications for their ability to learn from the videos. We organized these factors into the three types of cognitive load: extraneous, intrinsic and germane. When designing instructional strategies based on CLT, the aim is to balance these three types of cognitive load in order to ensure that a learner’s limited cognitive capacity is not overwhelmed [63]. See Table 3 for a summary of CLT types in the videos.
Table 3

Cognitive load factors identified in short YouTube videos on the h-index (n = 31).

Factor · Count (%) · Example(s)

Extraneous Cognitive Load
Extraneous elements · 17 (55)
    • Background noise (Videos 11, 26)
    • Poor audio quality (Video 25)
    • Oral cues, e.g., “go to next slide” (Video 19)
Directly addressing viewers (Personalization) · 13 (42)
    • “First of all, you must log in…” (Video 15)
    • “I hope you can see there may be…” (Video 31)
Signaling · 6 (19)
    • Use of magnification to focus on important text (Videos 3, 6)
    • Use of an animated arrow to highlight important text (Video 20)

Intrinsic Cognitive Load
Timing of content · 22 (71)
    • Providing sufficient time to read text on screen before proceeding to the next screen
Segmenting · 7 (23)
    • “Let’s see how to determine both …” (Video 5)
    • “Now we’ll see the advantages and the disadvantages.” (Video 20)
    • “The other quantitative metrics used for …” (Video 25)
Additional resources · 5 (16)
    • Offering assistance: “Reach out to me at RiverwindsConsulting.com with your questions.” (Video 1)
    • Providing links at the end of the video for further information: “for further understanding one may refer to the following link …” (Video 7)

Germane Cognitive Load
Signaling · 28 (90)
    • “In this example, you can see that…” (Video 29)
Dual channel (Modality) · 22 (71)
    • Animated text with voice-over explanation (Videos 3, 4)
Interactive learning elements (Active processing) · 1 (3)
    • Posed a question to the viewer: “Now let me see if you can solve this exercise. I will give you 10 seconds.” (Video 5)
Extraneous cognitive load results from content being presented in a way that demands cognitive processing but does not contribute to learning (e.g., a non-productive distraction). Almost all videos (n = 30; 97%) focused their content exclusively on the topic at hand and did not include distractors, which can help reduce extraneous cognitive load. One exception was a video that began with a cake-cutting ceremony to celebrate a milestone for the presenter (Video 19). This was unrelated to the topic of the h-index and took up the first 1:38 of the 9:34 video; it may distract viewers, who are left wondering how the cake is relevant to the h-index. We observed extraneous elements in 17 videos (55%), which we assume were unintentional but may still contribute to extraneous load, such as background noise (e.g., birds chirping, construction sounds, a ticking clock) and poor audio quality (Videos 7 and 30). Six videos (19%) integrated signaling, the inclusion of cues to highlight important information (e.g., highlighting words; “pay attention to…”), which can decrease extraneous cognitive load by focusing the viewer on what the presenter feels is important. Additionally, 13 presenters (42%) used a personalized approach by directly addressing the viewer (e.g., “You can see that he has an h-index of 53” (Video 3)), which can decrease extraneous cognitive load as it enables the viewer to immediately consider the content from their own perspective. Intrinsic load is considered essential to learning and is associated with the inherent difficulty of a task or content: the more complex or difficult the content, the higher the cognitive load for the learner. While the inherent difficulty of a task or content cannot be changed, the burden of intrinsic cognitive load can be reduced if content is broken down, or segmented, for the learner.
We observed that in seven videos (23%) presenters clearly segmented their content by including explicit transitions between topics. However, 12 videos (39%) presented content in a single continuous block, which can increase intrinsic load, and the remaining presenters (n = 12; 39%) moved between topics without clearly delineating the segments. We also observed that 22 videos (71%) incorporated text, and all of them gave viewers adequate time to comfortably read it, which can lessen intrinsic load. Additionally, five videos (16%) referenced additional resources for viewers interested in learning more, which can help decrease the load carried by a single video.

Germane load refers to the cognitive resources needed to facilitate learning [62]. It can be increased by presenting content multimodally, using dual channels (e.g., content simultaneously presented as an image [visual channel] and the presenter’s narration [oral channel]). Most videos (71%; n = 22) used a dual channel approach by featuring words on the screen plus audio elements. Two videos (6%) employed a simulated, computer-generated voice to present the content (Videos 5, 18), which can increase germane load [68]. Three videos (10%) included neither audio nor text-to-speech conversion tools, which is suboptimal for enhancing germane load. Some videos (19%; n = 6) did not include images and focused only on the presenters; for example, two videos presented a researcher talking directly to the camera without any other images or text to explain the concept (i.e., a talking head video) (Videos 1, 4). Although the use of active learning can increase germane load, only a single video took this approach, integrating a quiz that challenged viewers to calculate the h-index based on presented data (Video 5).

Discussion

The use of scholarly metrics, including the h-index, for making high-stakes decisions, such as granting promotion and funding, requires that the individuals using them be metrics literate: able to recognize, interpret, critically assess, and effectively and ethically use scholarly metrics. In our discussion, we first focus on how the videos presented the h-index as a concept and consider this presentation within the context of metrics literacies. Then, we discuss the videos with respect to CLT with the intention of highlighting potential best practices for the creation of future videos on the h-index and metrics education more broadly.

The YouTube videos we analyzed contain content with potential to enhance a viewer’s metrics literacies. To begin, most of the videos defined the h-index and described how it is derived, which provides viewers with the skills to recognize and interpret the metric at a basic level. However, nearly half of the videos stated that the h-index indicates an author’s importance and the impact and/or quality of their publications. Although this aligns with Hirsch’s original description of the metric, researchers, including Hirsch himself, have pointed to several important caveats that temper this claim [19,22,26]. In the analyzed videos, we observed that nearly half of presenters raised at least one caveat, such as differences in how databases calculate the metric; bias against early-career researchers and scholars not publishing in English; and differences across disciplines. However, these caveats were inconsistently presented across the videos: some videos raised only a single concern, and 48% raised no caveats at all. Additionally, none of the videos mentioned documented concerns such as gender bias, which stems from known biases in citation practices [69], or the ways in which the metric behaves inconsistently [22].
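The derivation most videos walked through — the largest number h such that an author has h papers with at least h citations each — can be sketched in a few lines. This is a minimal illustration for readers, not code taken from any analyzed video; the function name and sample citation counts are our own:

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# A researcher with these citation counts has an h-index of 4:
# four papers with at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that "largest" matters: an author with 17 papers of 17 citations each also has 15 papers with at least 15 citations, so their h-index is 17, not 15.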
Thus, while the majority of currently available videos introduce viewers to the h-index, there is room for improvement in providing comprehensive content that enables viewers to critically assess and effectively and ethically use this scholarly metric. For example, future video creators might integrate brief case studies, or personas of scholars from a variety of fields and backgrounds, to demonstrate how some of these caveats can impact individuals [70]. Several videos described the importance of a high h-index for hiring, promotion, and funding decisions and provided strategies for researchers to boost their metric. This finding aligns with recent research showing that the use of metrics in review, promotion, and tenure evaluations is widely encouraged and that metrics are often inappropriately portrayed as measures of ‘quality’ or ‘prestige’ [71]. The importance of the h-index, along with other quantitative metrics, has in this way created a pressure to publish, which can entail adverse effects or even lead to scientific misconduct [1-4,72]. While these videos are accurate (the h-index is indeed used in these high-stakes decisions), this messaging misaligns with recent initiatives. For example, DORA, endorsed by over 20,000 signatories since 2012, advocates that scholarly metrics not be used as a surrogate marker of quality for a researcher’s career [32]. Thus, while it is important that individuals be made aware of the current (mis-)uses of the h-index, this awareness should be balanced with a critical assessment of the indicator and its aptness for assessing researchers’ careers. Any quantitative metric should be used only to complement qualitative assessments such as peer review, and there are bibliometric indicators (e.g., percentile ranks, field-normalized citation rates) that are more suitable than the h-index for assessing research productivity and impact.
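To make the idea of a field-normalized indicator concrete, one common form divides each paper's citations by the average citation count for its field and year, then averages the ratios. The sketch below is purely illustrative (the function name, field baselines, and paper data are all invented), not an implementation of any specific database's indicator:

```python
def mean_normalized_score(papers, field_baselines):
    """Average of (citations / expected citations) over an author's papers,
    a simplified mean-normalized citation score.

    papers: list of (citations, field, year) tuples.
    field_baselines: {(field, year): mean citations for papers in that field/year}.
    """
    ratios = [
        cites / field_baselines[(field, year)]
        for cites, field, year in papers
    ]
    return sum(ratios) / len(ratios)

# Invented example: each paper is compared to its own field's average,
# so a modestly cited mathematics paper can "count" as much as a
# well-cited oncology paper.
baselines = {("oncology", 2020): 20.0, ("mathematics", 2020): 4.0}
papers = [(30, "oncology", 2020), (6, "mathematics", 2020)]
print(mean_normalized_score(papers, baselines))  # → 1.5
```

Unlike the raw h-index, a score above 1.0 here means "cited more than the field average", regardless of discipline.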
While the existence of these h-index videos is encouraging, it is important to consider how they are presented. In this analysis, we considered potential effectiveness in relation to CLT, which posits that learning materials are optimized when presented in a way that balances extraneous, intrinsic, and germane cognitive load [62,64]. Across the videos, we identified pedagogical practices with the potential to impact all three cognitive load types. For example, most videos used a dual channel approach, with textual, audio, and visual components that can support germane load. Additionally, most videos were tightly focused on the h-index with few irrelevant elements, thus avoiding extraneous load, which can lower viewers’ comprehension of the content [62]. Adding to the focused nature of these presentations, the majority of videos included a human presence. Incorporating a human presence, a so-called focaliser, can create authenticity and connection between the educator and the learner [46]. We encourage future video creators to add this type of focaliser, as it can increase focus and enable smoother transitions into new topics, making learners feel that they are guided through complex material by a relatable human. While many of the videos integrated practices to facilitate balancing viewers’ cognitive load, some did not take advantage of particular practices that future video makers could leverage to benefit viewers. For example, only a single video (Video 5) utilized an active learning/processing approach, which can have a positive impact on germane load [61]. In this particular video, the presenter embedded a brief quiz about halfway into the video.
Future video creators should consider how they can further integrate active learning approaches, such as providing viewers opportunities to pause the video to calculate the h-index or to self-reflect on ways in which the h-index can appropriately be used in research evaluation. Additionally, future video creators could integrate knowledge pre-tests at the start of a video, which can activate a learner’s previous knowledge and give them a sense of what they need to learn. Video creators may also want to be explicit about transitions and the delineation of segments, which can lessen intrinsic cognitive load. For example, as a presenter transitions from the definition of the h-index to where to find the metric, they could include a transition image or verbally announce the transition. Video producers might also provide a content overview so that viewers know, going into the video, what segments to expect.

Limitations

Our study should be considered in light of its limitations. Our inclusion criteria focused on videos posted to YouTube. Had we explored alternate platforms, such as Vimeo or TikTok, or scanned library websites, we might have identified additional videos not posted on YouTube. However, as this was not an attempt to comprehensively characterize videos on the h-index, and we feel that we reached data sufficiency, we propose that this is an adequate base for future researchers to investigate alternate platforms. We limited our sample to videos less than 10 minutes in duration. Had we included longer videos, we might have uncovered additional data points; however, in light of current research on video duration and effectiveness [55], we feel this design decision was warranted. We also excluded videos not presented in English, which may limit the usefulness of our findings for researchers who do not speak English. Additionally, we did not code the videos for accuracy. Future studies might examine this aspect, while keeping in mind that information perceived as inaccurate at the time of coding might have been accurate at the time of a video’s posting. Lastly, this study focused on content provided by videos; however, other multimedia resources, such as infographics and podcasts, also convey information about the h-index. Future researchers should consider examining these types of resources as well.

Conclusion

Given the prolific use of scholarly metrics in high-stakes decisions affecting academic careers, we argue that education about these metrics and their limitations is of utmost importance. Online videos, which are often more efficient, effective, and engaging than text, are a promising format for teaching researchers and research administrators about metrics. This analysis of short videos about the h-index on YouTube demonstrated that, while many presenters make use of practices to balance cognitive load, the videos often lack critical discussion of the indicator's shortcomings. Our search did not identify any videos that fulfilled the aim of our project: high-quality, thoughtfully designed videos discussing the inherent problems of the h-index and its widespread use. We therefore argue that, to make researchers and research administrators metrics literate, videos need to be produced with the involvement of experts in the content area as well as in online education and video production.

25 Feb 2022
PONE-D-22-02394
Becoming metrics literate: An analysis of brief videos that teach about the h-index
PLOS ONE

Dear Dr. Maggio,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 11 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Lutz Bornmann
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.
In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. Thank you for stating the following in your manuscript: “Funding was provided by the Social Sciences and Humanities Research Council of Canada (SSHRC) Insight Grant #435-2021-0108 "Metrics Literacies: Improving the understanding and appropriate use of scholarly metrics in academia”” Please note that funding information should not appear in other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: “SH and AJ were supported by funds from the Social Sciences and Humanities Research Council of Canada (SSHRC) Insight Grant #435-2021-0108 "Metrics Literacies: Improving the understanding and appropriate use of scholarly metrics in academia” (https://www.sshrc-crsh.gc.ca/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions — Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

2.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: N/A

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I highly appreciate this new approach to studying information dispersion/education, in particular in the field of bibliometrics. Some remarks though:

p. 3: The authors’ definition of the h-index does not seem right. They say that a researcher with an h-index of 15 has published at least 15 papers which have received at least 15 citations each. An author having published at least 17 papers which have received at least 17 citations also meets this requirement. I think the authors should add that h is the largest number meeting their requirement.

p. 4: Among the problems with the h-index they should mention (already here) the dependence on the database, especially the difference between a Google h-index and a WoS h-index.

p. 11: Although difficult, I think that the authors should mention factual errors in these videos, or make a remark when they include statements which are clearly without any nuance or unethical, such as “My professor says...”. Also, the statement that publishing in languages other than English is bad for one’s h-index is beside the point. Publishing in other languages leads to fewer citations, and in this way influences the h-index.

p. 15: A similar remark is valid when talking about gender bias. The h-index is not gender biased as such; citations are.

Some minor remarks:

p. 2, abstract, line (-2): I would say a minority (32%) was of professional quality.

p. 6, line 3: I think that the word ‘thus’ should be removed.

p. 8, video 2: Wrong date.

A final remark: The authors introduce the notion of “becoming metrics literate” (fine), but although mentioned, they do not state the difference with the term “metric-wise”. In my understanding, becoming metric-wise means becoming metrics literate AND being able to use it to one’s advantage. I also note that the term “metric-wise” was discussed in the article mentioned by the authors, but actually introduced in (2015) “Metric-wiseness”, JASIST, 66(11), 2389-2389.

Reviewer #2: This interesting article studies how videos on YouTube explain and discuss the h-index. After a period in which the metrics community was mostly oriented inwards (e.g., towards developing more sophisticated indicators), it is a welcome change to see increasing attention to the use of metrics by researchers, practitioners, and administrators. The present paper fits into the latter paradigm. Overall, I think this paper is very well done, with sufficient attention to various aspects that make a video more or less suitable for learning about the h-index and its limitations. Similar studies could be done on other indicators and platforms, and I expect that that will happen in the coming years. The paper is clearly structured and well-written. I have only some fairly minor suggestions for the authors.

The paper presents solid arguments in favor of studying videos in particular. However, I'd like to point out that there are more options beyond mere text on the one hand and video on the other. For instance, the infographic developed by CWTS (https://leidenmadtrics.nl/articles/halt-the-h-index) presents some drawbacks of the h-index in a static yet visual way.

Some additional information on the process that led to the final 31 videos would be appropriate. It is not clear to me if these were the first 31 videos presented by YouTube/Google, or if they constitute a truly random sample out of all 274. Table 1 shows that they represent a mixture of older and more recent videos, as well as more and less popular ones (which suggests some kind of randomization), but some more details should be provided. Can you also describe your findings after reviewing the last 7 videos? Did those yield any new insights or were they all 'more of the same'?

I would change the definition of the h-index on p. 3 to read that it is THE LARGEST NUMBER h such that there are h papers with at least h citations. (Strictly speaking, someone with h-index 15 also has 10 papers with 10 or more citations, but in fact they have even more.)

It would be helpful if the authors could provide some recommendations for future videos on the h-index or other bibliometric indicators: which formats work well? What are issues to be aware of?

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
25 Mar 2022

We have uploaded our response to reviewers in our cover letter and as a separate document. Submitted filename: Response to Reviewers.docx

25 Apr 2022

Becoming metrics literate: An analysis of brief videos that teach about the h-index
PONE-D-22-02394R1

Dear Dr. Maggio,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Lutz Bornmann
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions — Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: (No Response)
Reviewer #2: (No Response)

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)
Reviewer #2: (No Response)

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: (No Response)
Reviewer #2: (No Response)

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: (No Response)
Reviewer #2: (No Response)

6. Review Comments to the Author

Reviewer #1: (No Response)
Reviewer #2: (No Response)

7. Do you want your identity to be public for this peer review?

Reviewer #1: No
Reviewer #2: No

29 Apr 2022

PONE-D-22-02394R1
Becoming metrics literate: An analysis of brief videos that teach about the h-index

Dear Dr. Maggio:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Lutz Bornmann
Academic Editor
PLOS ONE