Lyndon Lim, Seo Hong Lim, Rebekah Wei Ying Lim.
Abstract
Tertiary educational institutions have continually implemented various educational technologies to support student learning. One example is adaptive learning systems, within which learners take ownership of their learning experience and accelerate future learning. Learner satisfaction has been widely acknowledged as an indicator of the success of e-learning systems, yet research on adaptive learning technologies for education has concentrated more on tailoring instruction to implement personalised learning than on measuring satisfaction. A review of instruments measuring learner satisfaction with virtual learning environments found that the learner satisfaction questionnaire (LSQ), developed to measure learner satisfaction with e-learning systems and preliminarily validated by means of exploratory factor analysis, was most suitable for adaptation. This study sought to adapt and validate the LSQ for measuring learner satisfaction with an adaptive learning system (AdLeS) developed in-house. A total of 121 tertiary students recruited via availability sampling took part in this study. Hierarchical confirmatory factor analysis was performed as part of the validation. Results presented the adapted LSQ as a 14-item instrument that can be readily deployed on a broad scale, and that yielded valid and reliable satisfaction scores at both the subscale and the overall scale level. Practical implications are discussed, noting that such scores could inform the further development and refinement of AdLeS or similar systems, with the view of benefiting students.
Keywords: learner satisfaction; measurement; tertiary education; validation
Year: 2022 PMID: 36004835 PMCID: PMC9404708 DOI: 10.3390/bs12080264
Source DB: PubMed Journal: Behav Sci (Basel) ISSN: 2076-328X
Studies of antecedents of learner satisfaction in chronological order.
| Author(s) | Participants | Antecedents of Learner Satisfaction | Learner Satisfaction Measure |
|---|---|---|---|
| [ ] | 243 higher education students | Learners' prior background towards online platforms; learners' experience; learners' interaction with the instructor throughout the learning experience; students' autonomy | Of the 27-item questionnaire across five subscales, one five-item subscale was used to measure learner satisfaction. Items were not reported, though the entire measurement model was validated by means of structural equation modelling. |
| [ ] | 44 students who completed the activity as part of an online business management module | Multisensorial experience; olfaction effect; airflow effect | Learner satisfaction was reflected by single items (e.g., 55.6% of participants were satisfied with the platform used to deliver content). |
| [ ] | 120 undergraduate students using an online information-based system | System quality; service quality; high sociability quality | A four-item subscale (e.g., "I like working with the platform"; "I find the platform useful for collaborative learning") out of a 48-item questionnaire was used to measure learner satisfaction. Validation of the questionnaire was by means of structural equation modelling. |
| [ ] | 38 postgraduate students and 9 instructors with e-learning experience | Information quality; the fit between task and technology; system quality; utility value; usefulness | Meta-analysis via the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method. |
| [ ] | 153 students using an e-learning platform (i.e., Moodle) | System quality; course quality; service quality; instructor quality; perceived usefulness; learner satisfaction | 25-item questionnaire across six subscales, with one three-item subscale on learner satisfaction. Validation was limited to exploratory factor analysis. |
| [ ] | 295 undergraduates using an e-learning system | Learners' perceived usefulness; learners' perceived ease of use; learners' satisfaction affects continued intention to use | 35-item questionnaire across 12 subscales, with one two-item subscale on learner satisfaction. Validation of the questionnaire was by means of structural equation modelling. |
| [ ] | 115 students using virtual and digital learning | Diverse, motivational, and clear supportive learning material; easy-to-use and re-usable learning content | 24-item questionnaire across five subscales. Validation of the questionnaire was limited to a reliability measure (i.e., Cronbach's alpha). |
| [ ] | 600 undergraduates using an e-learning system (i.e., Moodle) | Learner dimension; instructor dimension; course dimension; technology dimension; design dimension; environment dimension | 132-item questionnaire across six subscales. Validation of the questionnaire was limited to principal components and parallel analyses. |
| [ ] | 221 participants from undergraduate and graduate online classes | Learner–content interaction; learner–instructor interaction | Five-item subscale on learner satisfaction focussed on satisfaction with the course (e.g., "this course contributed to my educational development"; "in the future, I would be willing to take a fully online course again"). |
| [ ] | Postgraduate participants from six online courses with class sizes averaging around 35 students | Instructor's presence in areas of feedback; quality of teaching; facilitation of productive discourse | Learner satisfaction data (11 items) were collected using the university's standardised course evaluation system. Validation of the learner satisfaction measure was not observed. |
| [ ] | 2196 participants using an e-learning system | Clarity of the course structure; pace of learning; opportunities for self-regulated learning and collaborative learning; task results | Learner satisfaction was reflected by learner expectations and assessment of course outcomes. Validation of the learner satisfaction measure was not observed. |
| [ ] | 212 participants from a blended e-learning course | Performance expectations; learning climate; learning satisfaction; interaction; content feature; system functionality; computer self-efficacy | 21-item questionnaire across seven subscales, with one four-item subscale on learner satisfaction. The questionnaire was validated via confirmatory factor analysis. |
| [ ] | 116 adult learners using an e-learning system | Learner interface; learning community; content; personalisation | 17-item questionnaire across four subscales. Validation of the questionnaire was limited to exploratory factor analysis. |
Participants.
| Course/Semester/Year | Number of Enrolled Students | Number of Students Who Volunteered and Completed the LSQa |
|---|---|---|
| Calculus I/2/2021 | 93 | 80 |
| Calculus II/2/2021 | 56 | 41 |
CFA goodness-of-fit indicators.
| Model | χ² | Δχ² | df | χ²/df | CFI | RMSEA | SRMR | AIC | SBC |
|---|---|---|---|---|---|---|---|---|---|
| One-factor | 160.91 * | − | 77 | 2.09 | 0.87 | 0.10 | 0.08 | 216.91 | 295.19 |
| Correlated 3-factor | 111.72 | | 74 | 1.74 | 0.94 | 0.07 | 0.07 | 173.72 | 260.39 |
| Second-order 3-factor | 111.72 | | 74 | 1.74 | 0.94 | 0.07 | 0.07 | 173.72 | 260.39 |
| Bifactor 3-factor | 96.71 | | 56 | 1.73 | 0.64 | 0.08 | 0.07 | 194.71 | 331.70 |
Note. χ² = chi-squared statistic; Δχ² = chi-squared difference, computed relative to the previous non-rejected model; df = degrees of freedom; CFI = comparative fit index; RMSEA = root mean square error of approximation; SRMR = standardised root mean square residual; AIC = Akaike information criterion; SBC = Schwarz Bayesian criterion. * p < 0.001.
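The AIC and SBC columns above are consistent with the common χ²-based definitions AIC = χ² + 2q and SBC = χ² + q·ln N, where q is the number of free parameters and N = 121. A minimal sketch verifying this, assuming those definitions (the q values are inferred from the reported indices as 105 − df, since 14 items yield 105 unique variances and covariances; they are not stated explicitly in the record):

```python
import math

N = 121  # sample size reported in the study

def aic(chi2, q):
    # chi-squared-based Akaike information criterion: chi2 + 2q
    return chi2 + 2 * q

def sbc(chi2, q):
    # chi-squared-based Schwarz Bayesian criterion: chi2 + q * ln(N)
    return chi2 + q * math.log(N)

# (chi2, inferred free-parameter count q = 105 - df) per model
models = {
    "one-factor":          (160.91, 105 - 77),
    "correlated 3-factor": (111.72, 105 - 74),
    "bifactor 3-factor":   (96.71,  105 - 56),
}

for name, (chi2, q) in models.items():
    print(f"{name}: AIC={aic(chi2, q):.2f}, SBC={sbc(chi2, q):.2f}")
```

Running this reproduces the AIC and SBC values in the table to two decimal places, which supports the inferred parameter counts.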
Standardised loadings, average variance extracted, and construct reliability coefficients of the three-factor first-order model.
| Construct and Items | Standardised Loading | Average Variance Extracted | Construct Reliability |
|---|---|---|---|
| LI (F1) | | 0.64 | 0.98 |
| LI1 | 0.85 | | |
| LI2 | 0.85 | | |
| LI3 | 0.65 | | |
| LI4 | 0.82 | | |
| CONT (F2) | | 0.61 | 0.99 |
| CONT1 | 0.73 | | |
| CONT2 | 0.85 | | |
| CONT3 | 0.76 | | |
| CONT4 | 0.79 | | |
| CONT5 | 0.80 | | |
| CONT6 | 0.75 | | |
| PERS (F3) | | 0.54 | 0.97 |
| PERS1 | 0.75 | | |
| PERS2 | 0.81 | | |
| PERS3 | 0.59 | | |
| PERS4 | 0.78 | | |
Note. LI = learner interface; CONT = content; PERS = personalisation.
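The average variance extracted (AVE) figures can be reproduced from the standardised loadings, since AVE is the mean of the squared loadings within each construct; a minimal sketch (small second-decimal discrepancies are possible because the printed loadings are themselves rounded):

```python
# Standardised loadings from the three-factor first-order model
loadings = {
    "LI":   [0.85, 0.85, 0.65, 0.82],
    "CONT": [0.73, 0.85, 0.76, 0.79, 0.80, 0.75],
    "PERS": [0.75, 0.81, 0.59, 0.78],
}

def ave(lams):
    # average variance extracted: mean of the squared standardised loadings
    return sum(l * l for l in lams) / len(lams)

for name, lams in loadings.items():
    # sqrt(AVE) is the diagonal used in the distinctiveness (Fornell-Larcker) check
    print(f"{name}: AVE={ave(lams):.2f}, sqrt(AVE)={ave(lams) ** 0.5:.2f}")
```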
Distinctiveness of sub-constructs of the three-factor first-order model.
| Construct | LI | CONT | PERS |
|---|---|---|---|
| LI | * | | |
| CONT | 0.81 | * | |
| PERS | 0.66 | 0.81 | * |
Note. * refers to the square root of average variance extracted (on the diagonal); LI = learner interface; CONT = content; PERS = personalisation.
Standardised loadings, average variance extracted, and construct reliability coefficients of the three-factor second-order model.
| Construct and Items | Standardised Loading | Average Variance Extracted | Construct Reliability |
|---|---|---|---|
| LS | | 0.77 | 0.98 |
| LI | 0.81 | | |
| CONT | 1.00 | | |
| PERS | 0.81 | | |
| LI | | 0.64 | 0.98 |
| LI1 | 0.85 | | |
| LI2 | 0.85 | | |
| LI3 | 0.82 | | |
| LI4 | 0.65 | | |
| CONT | | 0.61 | 0.99 |
| CONT1 | 0.73 | | |
| CONT2 | 0.85 | | |
| CONT3 | 0.76 | | |
| CONT4 | 0.75 | | |
| CONT5 | 0.80 | | |
| CONT6 | 0.79 | | |
| PERS | | 0.54 | 0.97 |
| PERS1 | 0.75 | | |
| PERS2 | 0.81 | | |
| PERS3 | 0.78 | | |
| PERS4 | 0.59 | | |
Note. LS = learner satisfaction (second-order factor); LI = learner interface; CONT = content; PERS = personalisation.
Skewness and kurtosis values of LSQa items.
| Variable | Kurtosis | Skewness |
|---|---|---|
| Q_1 | 0.7414892 | −0.8999211 |
| Q_2 | 1.8786178 | −1.0495770 |
| Q_3 | 1.0670180 | −0.8986991 |
| Q_4 | −0.4323551 | −0.4749700 |
| Q_5 | 0.4405646 | −0.5534692 |
| Q_6 | −0.6011248 | −0.2631777 |
| Q_7 | −0.8418520 | −0.2865288 |
| Q_8 | 0.2607167 | −0.6221956 |
| Q_9 | −0.4444183 | −0.5042582 |
| Q_10 | −0.3667205 | −0.3052960 |
| Q_11 | 0.3347129 | −0.6470117 |
| Q_12 | 0.1587396 | −0.5971343 |
| Q_13 | 0.6893963 | −0.6702294 |
| Q_14 | 0.1228709 | −0.7080154 |
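Item-level skewness and kurtosis of this kind are typically screened before confirmatory factor analysis; common rules of thumb treat |skewness| < 2 and |excess kurtosis| < 7 as acceptable for approximate univariate normality. A sketch of such a screen using SciPy, with illustrative Likert responses rather than the study's raw data:

```python
from scipy.stats import skew, kurtosis

# Illustrative 5-point Likert responses for one item (not the study's raw data)
item = [5, 4, 4, 5, 3, 4, 5, 2, 4, 4, 3, 5]

sk = skew(item, bias=False)                   # bias-corrected sample skewness
ku = kurtosis(item, fisher=True, bias=False)  # bias-corrected excess kurtosis (normal = 0)

# Common normality screen applied to each item before CFA
ok = abs(sk) < 2 and abs(ku) < 7
print(f"skewness={sk:.2f}, kurtosis={ku:.2f}, acceptable={ok}")
```

All 14 items in the table above fall comfortably inside these thresholds, consistent with treating the LSQ responses as approximately normal for estimation.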