Developing and Evaluating the Psychometric Properties of the Pediatric Nursing Competency Scale for Nursing Students.

İlknur Bektaş1, Murat Bektaş1, Dijle Ayar1.   

Abstract

AIM: This study aims to develop and evaluate the psychometric properties of the Pediatric Nursing Competency Scale for nursing students.
METHODS: This study was conducted with 318 nursing students, including third-year students enrolled in a pediatric nursing course and fourth-year students completing a pediatric nursing internship. Factor analysis, Cronbach's alpha, item-total score analysis, and known-groups comparison were used to assess the research data. In total, 16 items were eliminated from the scale on the basis of experts' recommendations.
RESULTS: The scale consisted of 39 items and 8 sub-scales. The 8 sub-scales explained 66.4% of the total variance. Both exploratory factor analysis and confirmatory factor analysis (CFA) revealed that all factor loadings were greater than 0.40. The CFA also revealed that all of the fit indices were greater than 0.85, and the root mean square error of approximation was less than 0.08. Cronbach's alpha was 0.96 for the entire scale, and greater than 0.80 for all sub-scales.
CONCLUSION: The Pediatric Nursing Competency Scale for nursing students was found to be valid and reliable.
Copyright © 2020 Florence Nightingale Journal of Nursing.

Keywords:  Competency; nursing student; pediatric nursing; pediatric nursing course

Year:  2020        PMID: 34263192      PMCID: PMC8152165          DOI: 10.5152/FNJN.2020.19065

Source DB:  PubMed          Journal:  Florence Nightingale J Nurs        ISSN: 2687-6442


INTRODUCTION

The competencies of a nurse include possession of the theoretical and clinical knowledge necessary to solve attitudinal and behavioral problems in patients (Axley, 2008; Franklin & Melville, 2015; Wu, Enskär, Lee, & Wang, 2015). However, the roles and responsibilities expected of nurses change along with the changing health care system and, therefore, the definition of competence is continually being challenged (Tilley, 2008). Owing to this ambiguity, many standards have been defined to determine how competence is assessed. These standards also constitute the primary objectives of nursing education (Zasadny & Bull, 2015). In addition, different and evolving definitions of competence contribute to discrepancies in students’ descriptions and evaluations of competence. In this study, competence for students is considered as “having the necessary theoretical knowledge, using this knowledge in the clinical field, and developing the corresponding psychomotor skills” (Beasley, Farmer, Ard, & Nunn-Ellison, 2018; Beogo, Rojas, Gagnon, & Liu, 2016; Burke, Kelly, Byrne, Chiardha, McNicholas, & Montgomery, 2016; Burns & Grove, 2009; Carey, Chick, Kent, & Latour, 2018; Fastré, Van der Klink, & Van Merriënboer, 2010; Yanhua & Watson, 2011). In nursing education, the assessment of students’ competence is as important as its definition. Nursing students are assessed during theoretical courses and clinical applications, using written exams, practice exams, and clinical practice (Bektaş & Kudubeş, 2014; Nurumal, Aung, & Ismail, 2016). Additionally, a few books are available for this purpose; they contain forms that allow students to evaluate their competencies. In these forms, students are expected to evaluate their skills, diagnose problems, and develop solutions in clinical practice (Erdemir, Altun-Yılmaz, Geçkil, Yıldırım, Karataş, & Yener, 2016; Savaşer & Yıldız, 2009).
However, the use of examinations, practices, and books in the assessment of competence may not provide a sufficient indication of the student’s awareness of his or her own qualifications. For this reason, multidimensional assessments by nurses, mentors, and peers, as well as self-assessment by the student, should be made in addition to the assessment of the instructor. Self-assessment by the student, however, is rarely practiced (Beasley et al., 2018; Beogo et al., 2016; Burke et al., 2016; Burns & Grove, 2009; Carey et al., 2018; Fastré et al., 2010; Yanhua & Watson, 2011). In fact, students’ assessment of their own competencies contributes significantly to the identification of their deficiencies, and the instructor’s consideration of both the student’s self-perception and his or her own observations contributes to the development of nursing education. Student self-assessment of competence faces unique challenges in a field such as pediatric nursing, where practicing invasive procedures on children poses potential ethical and legal problems; where the roles and responsibilities that nursing students are able to take on are limited; where difficulties communicating with children of varying age groups and comprehension levels complicate care; and where the fear of hurting children or causing them pain creates emotional and mental strain for nurses. In addition, the need for intensive theoretical knowledge, the special complications involved in applying this knowledge to the practical care of the pediatric patient, and the difficulties in the administration and calculation of pediatric medicines make it difficult for both the student and the instructor to assess the student’s competence in pediatric nursing (Al-Qaaydeh, Lassche, & MacIntosh, 2012; Kajander-Unkuri, Meretoja, Katajisto, Saarikoski, Salminen, Suhonen, et al., 2014; Lassche, Al-Qaaydeh, MacIntosh, & Black, 2013; Özyazıcıoğlu, Aydın, Sürenler, Çinar, Yılmaz, Arkan, et al., 2018).
Therefore, objective tools are needed for both instructors and students to assess competence. In the scholarship conducted so far in this field, however, student competence has mostly been evaluated by instructors, and the assessment tools used in these studies are generally instruments that measure knowledge and have limitations in assessing practice. There is thus a need for tools that assess the competence of students through their own evaluation of their skills (Lassche et al., 2013; Lee & Lin, 2013; Mirlashari, Warnock, & Jahanbani, 2017). Current research emphasizes that, especially in the assessment of competence, valid and reliable tools that consider area-specific competencies rather than general competence should be developed (Franklin & Melville, 2015; Nehrir, Vanaki, Mokhtari Nouri, Khademolhosseini, & Ebadi, 2016). To date, no measurement tool allows nursing students to assess their own pediatric nursing competencies. To fill this gap, the aim of this study is to develop the Pediatric Nursing Competency Scale for nursing students and to evaluate its psychometric properties.

Research Questions

Is the Pediatric Nursing Competency Scale for nursing students valid and reliable?

METHOD

Study Design

This is a methodological study.

Sample

This study was carried out from January to March 2018 at a Faculty of Nursing in western Turkey. The relevant literature reports preferred sample sizes for scale development studies: up to 100 participants is inadequate, up to 200 is adequate, up to 300 is good, up to 500 is very good, and up to 1,000 is excellent (Şencan, 2005; Karagöz, 2016). The study population consisted of 342 nursing students, including 300 third-year nursing students and 42 nursing students in a pediatric nursing internship during the 2017–2018 academic year. Of these, 328 students voluntarily agreed to participate in the study and completed the necessary forms. Statisticians suggest that a preliminary test be performed on a sample of 10–20 people with characteristics similar to those of the study sample, who should not be included in the study sample itself (Şencan, 2005; Karagöz, 2016). For that reason, the 10 students involved in the preliminary test were excluded from the study sample. The final study sample therefore consisted of 318 nursing students, a sampling rate of 92.9%. Inclusion criteria were enrolment in a pediatric nursing course or a pediatric nursing internship and voluntary participation in the study.

Data Collection

The research data were collected from December 2017 to February 2018, using a demographic data collection form and the pediatric nursing competency scale for nursing students. Data were collected following completion of the pediatric nursing course for the third-year nursing students and the three-month pediatric nursing internship for the fourth-year students.

Sociodemographic Data Collection Form

This form consisted of sociodemographic questions pertaining to age, gender, level of income, perceived academic achievements, place of residence, and the number of pediatric health and disease courses completed.

Pediatric Nursing Competency Scale for Nursing Students

After studying the available research and receiving relevant faculty input, the researchers created a pool of 55 items for nursing students to evaluate their pediatric nursing competencies. The preliminary scale was a five-point Likert-type scale (1=I definitely disagree, 2=I disagree, 3=undecided, 4=I agree, 5=I definitely agree). Analysis of the validity and reliability of the preliminary scale was carried out following the steps described in the sections below.

Study Steps

The Pediatric Nursing Competency Scale for nursing students, including the validity and reliability analyses, was developed using the following steps.

Item pool phase

A comprehensive examination of the variable to be measured should be made before designating scale items. Items should cover all of the intellectual, emotional, and operational aspects of the experiences that may be related to the variable or dimension to be measured. As a result, items should represent and account for all dimensions of the variable to be measured (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005). Following an extensive review of the available scholarship in this field, conducted by the researchers involved in this study, 55 items were created for the draft scale.

Expert opinion phase

The literature recommends obtaining at least three expert opinions to determine the content validity of a scale (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005). A total of 12 experts were interviewed for this scale, including 10 faculty members from the Pediatric Nursing Department and 2 faculty members from the Psychiatric Nursing Department. The experts were given the draft form of the scale and asked to rate the suitability of each item from 1 to 4 (1=not appropriate at all; 4=completely appropriate). The scores were assessed using the content validity index.
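
As a worked illustration of the content validity index procedure described above, the following Python sketch computes item-level CVI (proportion of experts rating an item 3 or 4) and the scale-level average; the ratings shown are invented for illustration, not the study's data:

```python
# Illustrative CVI computation with hypothetical expert ratings (1-4).

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(items):
    """Scale-level CVI: average of the item-level CVIs."""
    cvis = [i_cvi(r) for r in items]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from 12 experts for three draft items.
items = [
    [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 4],  # I-CVI = 1.00 -> retain
    [4, 4, 3, 4, 4, 3, 4, 4, 4, 4, 4, 3],  # I-CVI = 1.00 -> retain
    [4, 2, 3, 2, 4, 2, 3, 2, 4, 2, 3, 2],  # I-CVI = 0.50 -> remove (< 0.80)
]

retained = [r for r in items if i_cvi(r) >= 0.80]
print([round(i_cvi(r), 2) for r in items])  # [1.0, 1.0, 0.5]
print(round(s_cvi_ave(retained), 2))        # 1.0
```

With this 0.80 threshold applied to all 55 draft items, the study removed 16 items and retained 39.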

Preliminary test phase

Statisticians recommend testing a draft scale on a sample of 10–20 people with characteristics similar to those being measured, who are not included in the study sample (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005). After revision based on the experts’ recommendations, the draft scale was administered to 10 students with characteristics similar to those included in the study sample. The draft form of the scale was finalized after correcting unclear items.

Validity and Reliability of the Scale

Reliability of the scale

Pearson correlation analysis was used for the item-total score analysis of the scale and sub-scales, and inappropriate items were removed from the scale. The Cronbach’s alpha coefficient was calculated to determine the internal consistency of the scale and sub-scales (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005).
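
The two reliability statistics named above can be sketched in a few lines of numpy; the score matrix below is simulated for illustration and is not the study's data:

```python
# Minimal sketch of Cronbach's alpha and item-total correlations
# on a simulated (respondents x items) Likert score matrix.
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(X):
    """Pearson correlation of each item with the scale total score."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total)[0, 1]
                     for j in range(X.shape[1])])

rng = np.random.default_rng(0)
trait = rng.normal(size=(318, 1))                        # shared latent trait
X = np.clip(np.round(3 + trait + rng.normal(size=(318, 5))), 1, 5)

print(round(float(cronbach_alpha(X)), 2))
print(item_total_correlations(X).round(2))
```

Because the simulated items share a latent trait, alpha comes out clearly positive and every item correlates positively with the total, which is the pattern the study's item-selection step screens for.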

Validity of the scale

Exploratory factor analysis (EFA) was used to determine the item-factor relationships. Confirmatory factor analysis (CFA) was used to determine whether the items and sub-scales explained the original structure of the scale (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005). Data were collected from January to March 2018 at the nursing faculty. The authors provided information about the research to eligible students after lectures. Students who chose to participate in the study were asked to complete the scale and return it to the research team.

Data Analysis

In the data analysis, percentages and means were used for descriptive statistics. The content validity index was used to evaluate the compatibility of the expert opinions. Pearson correlation analysis was conducted for the item-total score analysis of the scale and sub-scales, while the Cronbach’s alpha coefficient was calculated to determine the internal consistency of the scale and sub-scales. The EFA determined the item-factor relationships, and the CFA determined whether the items and sub-scales explained the specific structure of the scale. The Shapiro-Wilk test was performed to determine whether the data were normally distributed before applying the t test and one-way analysis of variance (ANOVA). The t test and one-way ANOVA were used for the known-groups comparisons, and the Scheffe test was used for post-hoc analysis. Pearson correlation analysis was also used to determine the relationships between the scale factors and to evaluate the stability of the scale (Çam & Baysan-Arabacı, 2010; Karagöz, 2016; Nunnally & Bernstein, 2010; Rattray & Jones, 2007; Şencan, 2005). Statistical significance was accepted at p<0.05.
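
A minimal sketch of the normality check and known-groups tests described above, using scipy.stats; the group means and standard deviations are loosely modeled on the Results section, but the samples themselves are simulated, not the study's data:

```python
# Hedged sketch of the Shapiro-Wilk check, t test, and one-way ANOVA
# on simulated total scores (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
third_year = rng.normal(154.6, 22.0, size=276)   # hypothetical totals
fourth_year = rng.normal(172.7, 15.9, size=42)

# Normality check before the parametric tests.
w, p_norm = stats.shapiro(third_year)

# Independent-samples t test between the two class years.
t, p = stats.ttest_ind(third_year, fourth_year)

# One-way ANOVA across three perceived-success groups.
low = rng.normal(136.7, 26.8, size=12)
mid = rng.normal(156.6, 21.1, size=276)
high = rng.normal(167.2, 19.3, size=30)
f, p_anova = stats.f_oneway(low, mid, high)

print(f"t={t:.2f} p={p:.4f}; F={f:.2f} p={p_anova:.4f}")
```

A post-hoc test such as Scheffe's would then locate which group pairs differ, as reported in the Known-Groups Comparison section.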

Ethical Consideration

The Faculty of Nursing approved this study. Furthermore, written approval was obtained from the university’s Ethics Committee (IRB: 2017/29-14). Participants then gave written and verbal consent following an explanation of the study.

RESULTS

Of the participants, 86.8% (n=276) were third-year nursing students, 13.2% (n=42) were fourth-year students, 78.6% (n=250) were female, and 21.4% (n=68) were male. Additionally, 3.8% (n=12) had low academic success, 86.8% (n=276) had average academic success, and 9.4% (n=30) had high academic success. The mean age of the students was 21.26±1.06 years.

Content Validity

In total, 16 items with a content validity index less than 0.80 were removed from the preliminary scale following approval by a panel of 12 experts. The remaining 39 items were found to have an item-content validity index (I-CVI) between 0.89 and 1.00, and the content validity index for the entire scale (S-CVI) was determined to be 0.98.

Exploratory Factor Analysis

As a result of the EFA, the Kaiser-Meyer-Olkin (KMO) coefficient was 0.922, and Bartlett’s test was significant (X2=7843.161, p<0.001). The EFA was performed using principal components extraction with varimax rotation, and factors with eigenvalues of 1 or above were retained. The scale consisted of eight sub-scales and explained 66.4% of the total variance. The sub-scales of competency perceptions for “content,” “physical examination,” “nutrition,” and “drug and fluid administration” explained 38.4%, 7.0%, 4.9%, and 4.1% of the total variance, respectively. The sub-scales of competency perceptions for “complex care,” “interaction with child/family,” “growth/development,” and “pain/fever management” explained 3.4%, 3.1%, 2.8%, and 2.7% of the total variance, respectively. The factor loadings of the items in the sub-scales for “content,” “physical examination,” “nutrition,” and “drug and fluid administration” varied between 0.45 and 0.76, 0.50 and 0.78, 0.67 and 0.81, and 0.49 and 0.76, respectively. The factor loadings for the “complex care,” “interaction with child/family,” “growth/development,” and “pain/fever management” sub-scales varied between 0.65 and 0.72, 0.61 and 0.68, 0.47 and 0.72, and 0.61 and 0.74, respectively (Table 1).
Table 1

Exploratory factor analysis of the scale

Sub-scale / Item                  Factor loading
Content
  m53                             0.76
  m50                             0.76
  m55                             0.71
  m49                             0.71
  m51                             0.70
  m52                             0.69
  m54                             0.63
  m43                             0.48
  m42                             0.45
Physical examination
  m6                              0.78
  m5                              0.75
  m4                              0.75
  m7                              0.60
  m2                              0.53
  m1                              0.50
Nutrition
  m38                             0.81
  m39                             0.80
  m37                             0.68
  m40                             0.67
Drug and fluid administration
  m23                             0.76
  m22                             0.65
  m24                             0.59
  m18                             0.56
  m20                             0.52
  m25                             0.49
Complex care
  m47                             0.72
  m45                             0.72
  m46                             0.65
Interaction with child/family
  m30                             0.68
  m26                             0.65
  m32                             0.61
  m28                             0.61
Growth/development
  m16                             0.72
  m17                             0.69
  m15                             0.63
  m12                             0.47
Pain/fever management
  m11                             0.74
  m10                             0.71
  m9                              0.61

m=Item
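
The factorability checks reported above (the KMO measure and Bartlett's test of sphericity) can be computed directly from a correlation matrix; the sketch below uses simulated data with a two-factor structure, not the study's data:

```python
# Sketch of KMO and Bartlett's sphericity test with numpy/scipy,
# applied to simulated data (not the study's).
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test that the correlation matrix is an identity."""
    n, k = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * k + 5) / 6) * np.log(np.linalg.det(R))
    df = k * (k - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix.
    A = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(R, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (A ** 2).sum())

rng = np.random.default_rng(2)
scores = (rng.normal(size=(318, 2)) @ rng.normal(size=(2, 6))
          + 0.5 * rng.normal(size=(318, 6)))
stat, p_bart = bartlett_sphericity(scores)
print(f"chi2={stat:.1f}, p={p_bart:.4g}, KMO={kmo(scores):.2f}")
```

A KMO above 0.60 and a significant Bartlett result, as in the study (0.922 and p<0.001), indicate that the correlation matrix is suitable for factor extraction.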

Confirmatory Factor Analysis

The CFA indicated the following: X2=1387.42, df=662, X2/df=2.09, root mean square error of approximation (RMSEA)=0.059, Goodness of Fit Index (GFI)=0.85, Comparative Fit Index (CFI)=0.98, Incremental Fit Index (IFI)=0.98, Normed Fit Index (NFI)=0.96, Tucker-Lewis Index (TLI)=0.97, Relative Fit Index (RFI)=0.95. The factor loadings of the items in the sub-scales of competency perceptions for “content,” “physical examination,” “nutrition,” and “drug and fluid administration” varied between 0.49 and 0.84, 0.51 and 0.64, 0.54 and 0.79, and 0.54 and 0.82, respectively. The factor loadings of the items in the sub-scales for “complex care,” “interaction with child/family,” “growth/development,” and “pain/fever management” varied between 0.88 and 0.93, 0.48 and 0.58, 0.55 and 0.80, and 0.69 and 0.82, respectively (Figure 1).
Figure 1

Confirmatory factor analysis of the scale

Reliability Analysis

Cronbach’s alpha was found to be 0.96 for the entire scale. The Cronbach’s alpha values of the sub-scales of competency perceptions for “content,” “physical examination,” “nutrition,” and “drug and fluid administration” were 0.92, 0.87, 0.88, and 0.81, respectively. The Cronbach’s alpha values for the “complex care,” “interaction with child/family,” “growth/development,” and “pain/fever management” sub-scales were 0.80, 0.80, 0.81, and 0.80, respectively. In the split-half analysis of the scale, the Cronbach’s alpha values of the first and second halves were 0.92 and 0.93, respectively. The correlation coefficient between the two halves was 0.79, and the Spearman-Brown and Guttman split-half coefficients were both 0.88 (Table 2).
Table 2

Reliability analysis of scale and sub-scale scores (n=318)

Subscale                         Cronbach α    M ± SD          Min–Max
Content                          0.92          37.21±6.40      9–45
Physical examination             0.87          25.29±3.79      6–30
Nutrition                        0.88          17.06±2.83      4–20
Drug and fluid administration    0.81          22.67±4.69      6–30
Complex care                     0.80          10.69±3.09      3–15
Interaction with child/family    0.80          17.09±2.35      4–20
Growth/development               0.81          15.31±3.26      4–20
Pain/fever management            0.80          11.53±2.59      3–15
Total                            0.96          156.87±22.55    39–195

M: Mean; SD: Standard deviation; Min: Minimum; Max: Maximum
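
The Spearman-Brown coefficient in Table 2 follows directly from the reported half-to-half correlation of 0.79 via the standard step-up formula, r_sb = 2r / (1 + r):

```python
# Spearman-Brown prophecy formula applied to the reported split-half
# correlation; reproduces the 0.88 coefficient given in the text.
r_halves = 0.79                      # correlation between the two halves
r_sb = 2 * r_halves / (1 + r_halves)
print(round(r_sb, 2))  # 0.88
```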

The item-scale total score correlations were found to vary between 0.52 and 0.73. The item sub-scale total score correlations for “content,” “physical examination,” “nutrition,” and “drug and fluid administration” were found to vary between 0.65 and 0.84, 0.73 and 0.84, 0.81 and 0.91, and 0.65 and 0.78, respectively. The item sub-scale total score correlations of the sub-scales of competency perceptions for “complex care,” “interaction with child/family,” “growth/development,” and “pain/fever management” were found to vary between 0.84 and 0.86, 0.76 and 0.83, 0.68 and 0.87, and 0.81 and 0.86, respectively (Table 3).
Table 3

Correlation between the items-total score and items-subscale total scores

Sub-scale / Item                                                              Item-total r*   Item-subscale r*
Content
  42. I can plan education about health promotion for children and families.   0.60   0.65
  43. I can give planned education on health promotion to children and families.   0.64   0.70
  49. The education and practices I received increased my competence in planning nursing care for children of different age groups.   0.70   0.81
  50. The education and practices I received increased my competence in providing nursing care to children of different age groups.   0.71   0.84
  51. The education and practices I received reduced my fears of giving care to children.   0.70   0.81
  52. The education and practices I received improved my clinical decision-making skills.   0.73   0.82
  53. The education and practices I received increased my awareness of events that could affect child health.   0.69   0.83
  54. The education and practices I received increased my sensitivity to children’s rights.   0.69   0.78
  55. The education and practices I received developed the ability to care for children in line with ethical principles.   0.71   0.83
Physical examination
  1. I can do a physical examination of the child.   0.66   0.77
  2. In the physical examination of the child, I can define deviations from the normal.   0.63   0.73
  4. I can measure the child’s blood pressure.   0.59   0.79
  5. I can measure the child’s fever with the right technique appropriate to his/her age.   0.62   0.81
  6. I can evaluate the child’s pulse.   0.63   0.84
  7. I can evaluate the breathing of the child.   0.62   0.75
Nutrition
  37. I can cooperate with the mother to breastfeed a newborn baby.   0.60   0.81
  38. I can give breastfeeding education to the mother.   0.54   0.88
  39. I can assess whether the mother breastfeeds with the right technique.   0.62   0.91
  40. I can tell the mother the principles of transition to solid food.   0.64   0.83
Drug and fluid administration
  18. I can calculate the daily fluid requirement of the child.   0.58   0.65
  20. I can plan the care of the child with fluid-electrolyte loss.   0.57   0.67
  22. I know the age-specific drug administration differences in children.   0.57   0.71
  23. I can calculate the child-specific dose of a prescribed drug.   0.53   0.78
  24. I can evaluate the effects of the administered drug on the child.   0.62   0.76
  25. I can evaluate the interactions of drugs used in children.   0.58   0.72
Complex care
  45. I can plan the care of a child with multiple (complex) health problems.   0.61   0.84
  46. I can manage the care of a child with a chronic disease.   0.65   0.86
  47. I can manage the care of a disabled child.   0.58   0.84
Interaction with child/family
  26. I can use age-specific communication techniques when communicating with the child.   0.57   0.76
  28. I can collect information from the caregiver to determine the child’s care needs.   0.59   0.81
  30. I can work with his/her family/caregiver to administer the child’s care.   0.56   0.83
  32. I can meet the play needs of a hospitalized child.   0.52   0.75
Growth/development
  12. I can evaluate the growth and development characteristics of the child as age-specific.   0.57   0.68
  15. I can tell the family the characteristics of growth and development appropriate to the age of the child.   0.64   0.78
  16. I can plan the care of the child with growth and development delay.   0.59   0.87
  17. I can give care to a child with growth and development delay.   0.59   0.83
Pain/fever management
  9. I can give care to a child with a high fever.   0.65   0.84
  10. I can assess the pain of children appropriate to his/her age.   0.57   0.81
  11. I can give care to a child who has pain.   0.61   0.86

*Correlations significant at the p<0.001 level

Known-Groups Comparison

The class years and perceived academic success levels of the students were used for the known-groups comparisons. The mean competency perception score was 154.58±21.96 for the third-year nursing students taking the pediatric nursing course and 172.74±15.87 for the fourth-year pediatric nursing intern students; this difference was statistically significant (t=5.150, p<0.001). On the competency scale, the mean scores of the students who perceived their academic success as low, medium, or high were 136.67±26.82, 156.62±21.10, and 167.23±19.32, respectively, and the differences between these mean scores were statistically significant (F=8.370, p<0.001). In the post-hoc analysis, the competency scores of students with a low level of perceived academic success were lower than those of students with medium or high levels, and the scores of students with a medium level were lower than those of students with a high level (p<0.01).

DISCUSSION

The results of this study, determined by exploratory and confirmatory factor analyses, support the construct validity of the scale and confirm the scale as a valid tool. Both I-CVI and S-CVI values should be above 0.80 to assume compatibility between expert opinions (Çam & Baysan-Arabacı, 2010; Hayran & Hayran, 2011; Polit, Beck, & Owen, 2007; Şencan, 2005). The expert recommendations were applied to the 55-item preliminary scale used in the study. The revised scale was then reviewed by the experts, and 16 items with a content validity index less than 0.80 were removed. For the remaining 39 items, both I-CVI and S-CVI levels were higher than 0.80. The I-CVI and S-CVI results indicated agreement among the experts, showed that the scale adequately measured the subject, and ensured the scale’s content validity. In this study, the Bartlett’s sphericity test value was p<0.05 and the KMO value was greater than 0.60, indicating that the data were adequate and appropriate for factor analysis. As a result of the EFA, the eight-factor scale explained more than 60% of the total variance, a relatively high value. Current research in this area suggests that the explained variance should be between 40% and 60%, and that the higher the total variance, the stronger the structural validity of the scale (Çam & Baysan-Arabacı, 2010; Hayran & Hayran, 2011; Polit et al., 2007; Şencan, 2005). This result indicates that the scale is structurally strong, supporting its validity. Current scholarship also emphasizes that items with a factor loading less than 0.40 should be removed from the scale (DeVellis, 2012; Hayran & Hayran, 2011; Johnson & Christensen, 2014). As a result of the EFA, the factor loadings of all items in the sub-scales were greater than 0.45, indicating that the scale has a strong factor structure.
As a result of the CFA in this study, the factor loadings of the eight sub-scales were greater than 0.40, the fit indices (GFI, NFI, CFI, and IFI) were greater than 0.85, and the RMSEA was less than 0.08. The ratio of the chi-square value to the degrees of freedom (X2/df) was less than 5; therefore, a strong relationship was found between the scale and its sub-scales. Other studies emphasize that the model fit indices should be higher than 0.85, the X2/df ratio should be less than 5, and the RMSEA should be less than 0.08 (Karagöz, 2016). The CFA results found in this study are in accordance with these criteria. The CFA results showed that the data were compatible with the model and validated the eight-factor structure. The results also indicated that the sub-scales were related to the scale and that the items of each sub-scale adequately identified their own factors. The results of the validity analysis in this study showed that the scale accurately and adequately measures the level of pediatric nursing competency of nursing students. The Cronbach’s alpha coefficient indicates whether scale items measure the same property and whether they are relevant to the subject being measured; a coefficient between 0.80 and 1.00 indicates that the scale is highly reliable (Çam & Baysan-Arabacı, 2010; Hayran & Hayran, 2011; Polit et al., 2007; Şencan, 2005). In this study, the Cronbach’s alpha values of the scale and its sub-scales were greater than 0.80, indicating a high degree of reliability. These values show that the scale items adequately measure the subject in question and are related to the scale (Çam & Baysan-Arabacı, 2010; Hayran & Hayran, 2011; Polit et al., 2007; Şencan, 2005).
In the split-half analysis of the scale, the Cronbach’s alpha values of the first and second halves were higher than 0.80; therefore, a strong and significant relationship was determined between the two halves. Both the Spearman-Brown and Guttman split-half coefficients were above 0.80. These results likewise indicate that the scale has a high level of reliability (Çam & Baysan-Arabacı, 2010; Hayran & Hayran, 2011; Polit et al., 2007; Şencan, 2005). The item-total score analysis examines the relationship between the scores on the scale items and the total score of the scale (DeVellis, 2012; Hayran & Hayran, 2011; Johnson & Christensen, 2014). The Pearson product-moment correlation coefficient is calculated in this analysis; this value should be positive and greater than 0.20 (Şencan, 2005). In this study, both the item-total score and item sub-scale total score correlation coefficients were positive and higher than 0.20, showing that all items of the scale correlated strongly with the total score of the scale and the total scores of its sub-scales. The results also indicate a high reliability for both the scale and its sub-scales. One recommended method for testing the reliability and validity of scales is the known-groups comparison (DeVellis, 2012; Hayran & Hayran, 2011; Johnson & Christensen, 2014). In this study, the students’ class years and perceived academic success levels were used for the known-groups comparison. A statistically significant difference was found between the students’ mean scale total scores according to their class years and perceived academic success levels (p<0.05).
In the post-hoc analysis, the competency scores of the fourth-year students were significantly higher than those of the third-year students, and the students with a high academic success level had significantly higher competency scores than the students with low academic success levels (p<0.05). These results showed that the scale has good discriminative power, measures the target qualities reliably, and can distinguish between known groups, further supporting the scale as a reliable and valid measurement. The results of the reliability analysis indicate that the information provided by the scale is stable, that the results are free of errors, and that the same results would be obtained in a repeated measurement for the same purpose. These results show that the scale can accurately measure the level of pediatric nursing competence of nursing students.

Study Limitations

Despite its documented statistical strengths, this study has some limitations. The first is the use of convenience sampling; therefore, the generalizability of the scale may be limited. The second is that a cutoff point could not be calculated because of the absence of a gold standard; therefore, students’ competency levels could not be classified as low or high.

CONCLUSION AND RECOMMENDATIONS

The scale developed in this study was shown to be highly valid and reliable. It would therefore benefit educators to use the scale together with objective evaluation techniques in the pediatric nursing course to evaluate students’ competencies. By using this scale, it is possible to increase the competency level of students by defining the areas in which students perceive inadequacy and the reasons why they perceive these inadequacies. Additionally, the pediatric nursing curriculum and clinical practices can be improved by using the scale. Furthermore, student competencies can be strengthened by identifying initiatives that will increase their competency in clinical practice and laboratory applications. Students would thus have the opportunity to participate in the assessment of their own competencies, which can contribute to overall academic satisfaction. To increase the generalizability of the scale, it is recommended that the scale be administered to nursing students in other universities. It is additionally recommended that longitudinal studies be planned with the scale to assess nursing students’ competencies during the pediatric nursing course and internship program, and that a web-based pediatric nursing competency program be developed and its effectiveness evaluated with the scale. Additionally, cross-cultural adaptations of the scale should be carried out to enable international studies. To compare the pediatric nursing competency levels of students at different levels of education, Master’s degree students should also be evaluated using the scale. Studying the strengths of the scale in evaluating nursing students will support the execution of similar studies and suggests that comparable scales could offer the same benefits for other groups of students.
Related Articles

1. Al-Qaaydeh S, Lassche M, MacIntosh CI. Exploratory factor analysis of the pediatric nursing student clinical comfort and worry assessment tool. J Pediatr Nurs. 2012.
2. Lassche M, Al-Qaaydeh S, MacIntosh CI, Black M. Identifying changes in comfort and worry among pediatric nursing students following clinical rotations. J Pediatr Nurs. 2012.
3. Franklin N, Melville P. Competency assessment tools: An exploration of the pedagogical issues facing competency assessment for nurses in the clinical environment. Collegian. 2015.
4. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007.
5. Tilley DDS. Competency in nursing: a concept analysis. J Contin Educ Nurs. 2008.
6. Kajander-Unkuri S, Meretoja R, Katajisto J, Saarikoski M, Salminen L, Suhonen R, Leino-Kilpi H. Self-assessed level of competence of graduating nursing students and factors related to it. Nurse Educ Today. 2013.
7. Lee TY, Lin FY. The effectiveness of an e-learning program on pediatric medication safety for undergraduate students: a pretest-post-test intervention study. Nurse Educ Today. 2013.
8. Burke E, Kelly M, Byrne E, Ui Chiardha T, Mc Nicholas M, Montgomery A. Preceptors’ experiences of using a competence assessment tool to assess undergraduate nursing students. Nurse Educ Pract. 2016.
9. Özyazıcıoğlu N, Aydın Aİ, Sürenler S, Çinar HG, Yılmaz D, Arkan B, Çıtak Tunç G. Evaluation of students’ knowledge about paediatric dosage calculations. Nurse Educ Pract. 2017.
10. Fastré GMJ, van der Klink MR, van Merriënboer JJG. The effects of performance-based assessment criteria on student performance and self-assessment skills. Adv Health Sci Educ Theory Pract. 2010.
