Tehmina Gladman1, Grace Tylee2, Steve Gallagher3, Jonathan Mair4, Sarah C Rennie1, Rebecca Grainger1.
Abstract
BACKGROUND: To realize the potential of mobile learning for clinical skills acquisition, medical students and their teachers should be able to evaluate how well an app supports student learning of clinical skills. To our knowledge, there is currently no rubric for evaluating the quality or value of apps that is specific to supporting medical student learning. Such a rubric might help students use apps to support their learning with more confidence.
Keywords: digital learning; health occupations students; just-in-time learning; medical education; mhealth; mobile apps; mobile learning; mobile phone; questionnaire design; rubric; self-directed learning
Year: 2020 PMID: 32735228 PMCID: PMC7428912 DOI: 10.2196/18015
Source DB: PubMed Journal: JMIR Mhealth Uhealth ISSN: 2291-5222 Impact factor: 4.773
Figure 1. Process for developing the Mobile App Rubric for Learning. MARuL: Mobile App Rubric for Learning.
Outcome of nominal group votes.
| Term | Initial ranking | First round of voting | Second round of voting | Third round of voting |
| Satisfaction | Yes | Keep | —a | — |
| Ease of use | Yes | Keep | — | — |
| Perceived usefulness | Yes | Keep | — | — |
| Information quality | Yes | Keep | — | — |
| Functionality | Yes | Keep | — | — |
| Engagement | Yes | Keep | — | — |
| In line with professional standards | Yes | Keep | — | — |
| Relevance to course | Yes | Keep | — | — |
| Credibility of developers | Yes | Keep | — | — |
| Privacy of information | Yes | No decision | Discard | — |
| Cost | Yes | Keep | — | — |
| Advantage of using app | Yes | Keep | — | — |
| Efficiency | Yes | Keep | — | — |
| Instructional features | Yes | No decision | Keep | — |
| Capacity to generate learning | Yes | Keep | — | — |
| Aesthetics | Yes | No decision | Keep | — |
| Quantity of information | Yes | No decision | Keep | — |
| User ratings | Yes | No decision | Discard | — |
| Intention to reuse | Yes | Keep | — | — |
| Technical specifications | Yes | Keep | — | — |
| Feedback | Yes | No decision | Keep | — |
| Pedagogy | Yes | Keep | — | — |
| Perceived enjoyment | Yes | No decision | No decision | Discard |
| Perceived importance | Yes | No decision | Keep | — |
| Subjective quality | Yes | No decision | Keep | — |
| Sharing | Yes | No decision | No decision | No decision (discarded) |
| Motivation | Yes | No decision | No decision | No decision (discarded) |
| Transparent | Yes | No decision | Keep | — |
| User experience | Yes | Keep | — | — |
| Purpose | Yes | No decision | Keep | — |
| Self-directedness | Yes | No decision | No decision | No decision (discarded) |
| Playfulness | Yes | No decision | Discard | — |
| Lack of ads | Yes | Keep | — | — |
| Differentiation | Yes | No decision | Keep | — |
| User interactivity | Yes | No decision | Keep | — |
| Product description | No | — | — | — |
aThe decision taken at each round of voting is shown. A Keep or Discard decision at any round ended the decision making for that term; — indicates that no further voting was required.
Reliability statistics for the initial version of the Mobile App Rubric for Learning after review of 10 apps.
| Rubric category | Intraclass correlation coefficienta | Cronbach αb | Teaching and learningc | User centeredc | Professionalc | Usabilityc |
| Teaching and learning | 0.85 | .89 | 1.00 | 0.91 | 0.83 | 0.72 |
| User centered | 0.78 | .96 | N/Ad | 1.00 | 0.72 | 0.71 |
| Professional | 0.71 | .87 | N/A | N/A | 1.00 | 0.49 |
| Usability | 0.71 | .78 | N/A | N/A | N/A | 1.00 |
aIntraclass correlation coefficient: interrater reliability score for each category.
bCronbach α: interitem consistency for each category.
cPearson r: correlations between categories, presented in the top right half of the table only.
dN/A: not applicable.
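The Cronbach α values above summarize interitem consistency within each rubric category. As a rough illustration only (this is not the study's code, and the ratings below are invented, not MARuL data), α for one category can be computed from per-app item scores:

```python
# Hypothetical sketch: Cronbach's alpha for one rubric category.
# Each row is one app's ratings on the category's k items.
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(scores: list[list[float]]) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/variance(totals))."""
    k = len(scores[0])                      # number of items in the category
    items = list(zip(*scores))              # transpose: one tuple per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])  # variance of summed scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Made-up example: 5 apps rated on 3 items of a 0-4 scale.
ratings = [
    [3, 4, 3],
    [2, 2, 1],
    [4, 4, 4],
    [1, 2, 2],
    [3, 3, 2],
]
alpha = cronbach_alpha(ratings)  # ≈ 0.93 for this fabricated sample
```

Values near or above .80, as for most MARuL categories, indicate that the items within a category vary together and can reasonably be summed.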
Figure 2. Rubric development process. MARuL: Mobile App Rubric for Learning.
Item statistics by category for the final version of the Mobile App Rubric for Learning.
| Category and item | Cronbach α | Values, mean (SD) |
| Teaching and learning | | |
| Purpose | .75 | 2.33 (1.35) |
| Pedagogy | .83 | 1.97 (1.27) |
| Generates learning | .90 | 1.61 (1.05) |
| Quantity of information | .85 | 1.80 (1.39) |
| Relevance to study | .88 | 1.90 (1.04) |
| Instructional features | .76 | 1.15 (1.21) |
| User interactivity | .57 | 1.29 (0.95) |
| Feedback | .40 | 0.76 (0.88) |
| Efficiency | .93 | 1.42 (1.16) |
| User centered | | |
| Subjective quality | .93 | 1.24 (1.13) |
| Satisfaction | .95 | 1.51 (1.23) |
| Perceived usefulness | .92 | 1.65 (1.15) |
| Perceived importance | .90 | 1.47 (1.03) |
| User experience | .81 | 2.18 (1.00) |
| Intention to reuse | .94 | 1.33 (1.22) |
| Engagement | .91 | 1.52 (1.12) |
| Professional | | |
| In line with standards | .79 | 2.46 (1.02) |
| Credibility | .82 | 2.94 (1.60) |
| Information quality | .83 | 1.43 (1.54) |
| Usability | | |
| Aesthetics | .83 | 2.47 (0.93) |
| Functionality | .70 | 3.01 (0.94) |
| Differentiation | .76 | 1.40 (0.87) |
| Ease of use | .70 | 2.92 (0.80) |
| Advertisements | .36 | 3.73 (0.86) |
| Technical specifications | .61 | 1.23 (0.99) |
| Advantage of app | .87 | 1.71 (1.07) |