| Literature DB >> 33012984 |
Bence Ferdinandy1, Ángel Manuel Guerrero-Higueras2, Éva Verderber3, Francisco Javier Rodríguez-Lera2, Ádám Miklósi1,4.
Abstract
Machine learning algorithms are becoming increasingly useful in many fields of science, including areas where computational methods are rarely used. High-performance computing (HPC) is the most powerful way to get the best results from these algorithms, but using HPC requires a variety of skills. Acquiring this knowledge can be intimidating and time-consuming for a researcher with little or no background in information and communications technologies (ICTs), even when the benefits of such knowledge are evident. In this work, we aim to assess how a specific method of introducing HPC to such researchers enables them to start using it. We gave talks to two groups of non-ICT researchers that introduced basic concepts, focusing on the practical steps needed to use HPC on a specific cluster. We also offered hands-on training to one of the groups, aimed at guiding participants through their first steps in using HPC. Participants filled out questionnaires, partly based on Kirkpatrick's training evaluation model, before and after the talk and after the hands-on training. We found that the talk increased participants' self-reported likelihood of using HPC in their future research, but the increase was not significant for the group where participation was voluntary. Moreover, very few researchers participated in the hands-on training, and for these participants neither the talk nor the hands-on training changed their self-reported likelihood of using HPC in their future research. We argue that our findings show that academia and researchers would benefit from an environment that not only expects researchers to train themselves but also provides structural support for acquiring new skills.
Keywords: Education; HPC; Professional development; Supercomputing
Year: 2020 PMID: 33012984 PMCID: PMC7521077 DOI: 10.1007/s11227-020-03438-0
Source DB: PubMed Journal: J Supercomput ISSN: 0920-8542 Impact factor: 2.474
Fig. 1 Background of participants of the talk. Questions were rated from 1 (lowest skill or confidence) to 4 (highest skill or confidence); median values are reported in parentheses. a Rate your computer skills (3). b Rate your programming skills (2). c Rate your confidence in using supercomputing (1). d Expert scoring of the answers to: describe what supercomputing (high-performance computing) is (3)
Kirkpatrick’s four levels of evaluation model applied to study goals [13]
| Level | Description |
|---|---|
| Level 1: reaction | The degree to which participants find the training favourable, engaging and relevant to their research |
| Level 2: learning | The degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the training |
| Level 3: behaviour | The degree to which participants apply what they learned during training in their normal research routines |
| Level 4: results | The degree to which targeted program outcomes occur and contribute to better research capabilities at the departmental level |
Fig. 2 Level 1 evaluation of the introductory talks. Questions were rated from 1 (strongly disagree) to 4 (strongly agree); median values are reported in parentheses. Mann–Whitney U tests were carried out to test for differences between the two talks. The only difference found was in responsiveness (d), where the online lecture was rated slightly less responsive. a The topics presented were what you expected of the presentation (3). b The presentation met your needs (3). c The presentation was of adequate length for the topics presented (4). d The presenter was responsive to the participants (4). e The presenter was knowledgeable in all topics presented (4). f The presenter provided adequate visual aids (4). g The presenter’s style and delivery was effective (4). h Would recommend this presentation to other colleagues (4)
Fig. 3 Level 1 evaluation of the hands-on training. Questions were rated from 1 (strongly disagree) to 4 (strongly agree); median values are reported in parentheses. a The topics presented were what you expected of the workshop (4). b The workshop met your needs (3.5). c The workshop was of adequate length for the topics presented (3.5). d The presenter was responsive to the participants (4). e The presenter was knowledgeable in all topics presented (4). f The presenter provided adequate visual aids (4). g The presenter’s style and delivery was effective (4). h Would recommend this workshop to other colleagues (4)
Fig. 4 Comparison of expert scoring and self-assessment indicates an increase in understanding in both groups after the introductory talk. No change is seen after the hands-on training (considering only the subset of participants who attended the hands-on training)
Fig. 5 For the group with mandatory participation, the talk increased the self-reported probability of using supercomputing in their research (median from 2 to 3), while for the group with voluntary participation there was no significant increase (median of 3). The question was rated on a scale of 1 (certainly not) to 4 (certainly yes)