Literature DB >> 31426767

Psychometric assessment of the Post-Secondary Student Stressors Index (PSSI).

Brooke Linden, Heather Stuart.

Abstract

BACKGROUND: Previous research has linked excessive stress among post-secondary students to poor academic performance and poor mental health. Despite attempts to ameliorate mental health challenges at post-secondary institutions, there exists a gap in the evaluation of the specific sources of stress for students within the post-secondary setting.
METHODS: The goal of this study was to develop a new instrument to better assess the sources of post-secondary student stress. Over the course of two years, the Post-Secondary Student Stressors Index (PSSI) was created in collaboration with post-secondary students as co-developers and subject matter experts. In this study, we used a combination of individual cognitive interviews (n = 11), an online consensus survey modeled after a traditional Delphi method (n = 65), and an online pre- (n = 535) and post-test (n = 350) survey to psychometrically evaluate the PSSI using samples of students from Ontario, Canada. We collected four types of evidence for validity, including: content evidence, response processes evidence, internal structure evidence, and relations to other variables. The test-retest reliability of the instrument was also evaluated.
RESULTS: The PSSI demonstrated strong psychometric properties. Content and response processes evidence were derived from active student involvement throughout the development and refinement of the tool. Exploratory factor analysis suggested that the structure of the PSSI reflects the internal structure of an index, rather than a scale, as expected. Test-retest reliability of the instrument was comparable to that of existing, established instruments. Finally, the PSSI demonstrated good relationships with like measures of stress, distress, and resilience, in the hypothesized directions.
CONCLUSIONS: The PSSI is a 46-item inventory that will allow post-secondary institutions to pinpoint the most severe and frequently occurring stressors on their campus. This knowledge will facilitate appropriate targeting of priority areas, and help institutions to better align their mental health promotion and mental illness prevention programming with the needs of their campus.

Entities:  

Keywords:  Health promotion; Mental health; Post-secondary; Psychometrics; Scale development; Stress

Mesh:

Year:  2019        PMID: 31426767      PMCID: PMC6700802          DOI: 10.1186/s12889-019-7472-z

Source DB:  PubMed          Journal:  BMC Public Health        ISSN: 1471-2458            Impact factor:   3.295


Background

Over the past decade, chronic stress and mental health problems among Canadian post-secondary students have become a major focus of attention. Research has linked excessive stress among post-secondary students to a number of negative outcomes, including deteriorated mental health [1-3] and poor academic performance [4]. The most recent Canadian edition of the National College Health Assessment II revealed a substantial prevalence of both stress and common mental illnesses, such as anxiety and depression (formally diagnosed, or self-reported through the use of screening tools) [4]. Prevalence estimates for self-reported symptoms of anxiety and depression increased between the 2013 and 2016 iterations of the survey [4].

While many post-secondary institutions have attempted to ameliorate these issues by increasing on-campus treatment options, few have managed to develop effective upstream services, such as mental health promotion and mental illness prevention [5, 6]. In fact, in a recent survey completed by representatives of post-secondary institutions across Canada, only 70% believed that students were well-informed about mental health issues and available services on campus, while almost all indicated that their campus could benefit from expanding mental health promotion and outreach activities [6, 7]. Existing mental health promotion and mental illness prevention may be improved by better targeting the main issues faced by students, but the ability to do so hinges on an improved understanding of student-specific stress. Improving upstream approaches targeting mental health promotion and stress reduction may help to alleviate the burden of mental health problems among student populations, as well as the demand currently placed on overtaxed campus treatment services.

Existing instruments designed to evaluate post-secondary student stress can be improved.
Few have involved a diverse sample of students in the development process (e.g., engaging only students in a particular year, level, or program of study), while others have been developed too narrowly (e.g., items developed based solely on literature, little consideration for student input) or too broadly (e.g., including stress-related items not relevant to, or modifiable by the post-secondary institution). This research details the development and preliminary validation of the Post-Secondary Student Stressors Index (PSSI), an index of student-specific stressors developed and comprehensively validated using a ‘for-students-by-students’ approach.

Methods

The PSSI was developed and validated with students, for students, through a series of steps. The initial pool of items was developed by students through the use of an online survey and focus group interviews in 2018, the results of which are detailed elsewhere [8]. For each stressor on the instrument, respondents were asked to indicate the severity of stress experienced and the frequency with which they experienced it. Response options ranged from 1 (‘not stressful’ and ‘rarely’) to 4 (‘very stressful’ and ‘almost always’), with higher ratings indicating a greater level of stress resulting from an item. An additional option to indicate ‘N/A’ was also available in the event that a stressor did not occur or was otherwise not applicable. In this study, we used a number of methods to refine the preliminary index and collect evidence assessing validity and reliability. Validity describes the degree of confidence we can place on the inferences we make about people based on their scores on an instrument [9]. Reliability refers to the consistency of test scores within a particular population. According to the Standards for Educational and Psychological Testing (“the Standards”), validation of an instrument requires the accumulation of evidence from five sources: content; response processes; internal structure; relations to other variables; and test consequences [10]. This article reports the collection of the first four types of validity evidence for the PSSI. Analyses were completed using R, Version 3.4.1. This research received ethics clearance from Queen’s University’s Health Sciences and Affiliated Teaching Hospitals Research Ethics Board.
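The severity/frequency response format described above maps naturally onto a small coding helper. The study's analyses were run in R; the following Python sketch is illustrative only, and the function name is ours, not part of the published instrument.

```python
def code_response(raw: str):
    """Map one raw PSSI response to a numeric rating.

    Ratings run from 1 ('not stressful' / 'rarely') to 4
    ('very stressful' / 'almost always'); 'N/A' indicates the
    stressor did not occur and is excluded from numeric coding.
    Illustrative helper -- not part of the published instrument.
    """
    if raw.strip().upper() in {"N/A", "NA"}:
        return None  # stressor not experienced / not applicable
    rating = int(raw)
    if not 1 <= rating <= 4:
        raise ValueError(f"PSSI ratings must be 1-4, got {rating}")
    return rating
```

Keeping ‘N/A’ distinct from a rating of 1 matters later, when responses are dichotomized for scoring.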

Response processes evidence

Response processes evidence provides empirical support for the extent to which participants’ responses to the items on an instrument align with the construct under study [10]. We collected response processes evidence from 11 individual cognitive interviews conducted by the first author using a think-aloud technique with verbal probing [11]. Participants were asked to complete the preliminary version of the PSSI on the interviewer’s desktop computer, explaining their interpretation of questions and the reasoning behind their response selections. The interviewer was physically present during the interviews in order to pick up on non-verbal communication, including body language and facial expressions. Notes were taken electronically during the interviews and immediately reviewed afterward. Problems identified by participants were recorded using problem codes developed by Willis [12], described in Table 1. Following each interview, corrective action was taken. This “waterfall” method of correction ensured that every interview provided maximum value to the improvement of the instrument. The only exception was items recommended for removal; these were carefully considered after all interviews had been completed.
Table 1

Cognitive Interviewing Problem Codes

Problem Code | Description | Corrective Action
[1] Clarity | Respondent was confused by the item wording, or felt that it was ambiguous. | Change in wording
[2] Relevance | Respondent felt that the item addressed an issue not relevant to student stress. | Change in wording; delete the item
[3] Recall | Respondent had trouble recalling or remembering information required for answer. | Change in wording; change response categories
[4] Redundancy | Respondent felt the item did not add value in the context of other items. | Delete the item
[5] Bias | Respondent felt the wording of the item encouraged them to respond in a certain way. | Change in wording; change order of items
[6] Comprehensive | Respondent felt there was an indicator missing from the list. | Add an item

Content evidence

Content evidence refers to the degree to which the items on an instrument represent the area of interest [10]. In this study, content evidence was collected through the use of an online consensus survey modeled after a traditional Delphi method [13]. In the context of this study, we considered our subject matter “experts” to be post-secondary students, given their lived experience with stress in the post-secondary setting. Participants were provided with the PSSI and asked to rate the relevance and clarity of each item on two adjectival scales, anchored by 1 (not at all relevant/clear) and 4 (very relevant/clear) [14]. Finally, participants were also invited to add any stressors they felt were missing from the PSSI. We used responses to compute the content validity index for each item (I-CVI), calculated by dividing the number of respondents who rated each item as a 3 (somewhat relevant/clear) or 4 (very relevant/clear) by the total number of respondents [15]. The I-CVI expresses the proportion of agreement on the relevancy and clarity of each item, and lies between 0 and 1 [16]. Based on recommendations in the literature, items with relevance I-CVIs of 0.7 or greater were retained [15, 17]. Clarity ratings were used to assess whether further revision of items was required for readability and comprehension. Content validity indices for the scales in their entirety (S-CVIs) were calculated by taking the average of the relevance I-CVIs for retained items only.
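The I-CVI and S-CVI computations described above are simple proportions and averages. The study's analyses were run in R; the following Python sketch is illustrative, and the function names are ours.

```python
def item_cvi(ratings):
    """I-CVI: proportion of panellists rating an item 3 (somewhat
    relevant/clear) or 4 (very relevant/clear) on the 4-point scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi(relevance_cvis, retained):
    """S-CVI: average of the relevance I-CVIs over retained items only."""
    kept = [cvi for cvi, keep in zip(relevance_cvis, retained) if keep]
    return sum(kept) / len(kept)
```

Under the 0.7 retention rule, an item is kept when at least 70% of panellists rate it 3 or 4 on relevance.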

Internal structure evidence and relations to other variables

A pilot test of the PSSI was conducted among a random sample of over 500 students at an Ontario university in the winter of 2019 to facilitate the collection of internal structure evidence and examine test scores’ relations to other variables. An online survey was developed, comprising the PSSI, eight demographic questions, and three validated scales evaluating like constructs. The 10-item Perceived Stress Scale (PSS-10) is a brief scale designed to measure general, or “global,” stress [18]. The Kessler Psychological Distress Scale (K10) is a 10-item scale designed to measure general distress [19]. Both the PSS-10 and K10 have demonstrated consistent psychometric properties across a number of samples and settings and have previously been used among post-secondary populations. The 10-item Connor-Davidson Resilience Scale (CD-RISC) is designed to measure resilience, conceptualized by the authors as “stress coping ability” [20, 21]. This 10-item version of the scale was created using samples of undergraduate students and has shown strong psychometric properties [21]. Approximately two weeks later, a second survey containing only the PSSI and the PSS-10 was sent to participants who had completed the first. Responses on the two surveys were matched using anonymous, unique identifiers in order to evaluate the test-retest reliability of test scores. Responses to the first online survey were used to assess the internal structure of the PSSI. Internal structure evidence refers to the degree to which the relationships among items in the instrument are consistent with what is expected of the construct under study [10]. The PSSI was designed to be an index, rather than a scale, so individual stress items were conceptualized as causal indicators (e.g., “causes” of stress), rather than effect indicators (e.g., “effects” of stress), as would be the case with a scale [14, 22].
As there is no assumption about the homogeneity of items within an index, we used an exploratory factor analysis to determine whether the PSSI had the dimensionality of an index, as hypothesized. In other words, we hypothesized that a factor analysis would reveal no clear “groupings” of stressors loading on distinct factors. Additionally, we expected some items to be correlated and others not, as is indicative of an index [22]. To assess the test-retest reliability of the instrument, we used the matched responses on the first and second surveys, examining correlations between scores on the PSSI across the two-week interval. We hypothesized that students’ stress levels would remain fairly static over this two-week period, and considered a correlation coefficient of 0.7 or greater to indicate good test-retest reliability [9]. Relations to other variables evidence refers to whether the test scores from the instrument correlate significantly, and in the expected direction, with like and unlike constructs measured by existing, valid instruments [10]. We assessed the PSSI test scores’ relationships to other variables by investigating correlations with scores on the PSS-10, K10, and CD-RISC. As our goal with the PSSI was to develop an instrument to evaluate student stress specific to the post-secondary environment, we hypothesized that the PSSI would have a positive, moderate correlation with the PSS-10; a very strong correlation with a global stress measure would indicate that the PSSI measured stress too generally. Similarly, we hypothesized that the PSSI would have a positive, moderate correlation with the K10, a global measure of distress, as the constructs of stress and distress are closely related [23]. Finally, we expected that participants with higher scores on the PSSI would score lower on the CD-RISC (i.e., be less resilient), resulting in a negative correlation.
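The eigenvalue step of the principal-components analysis described above can be sketched briefly. The study ran a varimax-rotated PCA in R; this NumPy sketch illustrates only the eigen-decomposition and the Kaiser (eigenvalue > 1) retention criterion, omitting rotation, and the function name is ours.

```python
import numpy as np

def kaiser_eigenvalues(X):
    """Eigen-decompose the item correlation matrix of a
    (respondents x items) score matrix.

    Returns the eigenvalues in descending order and the number of
    components with eigenvalue > 1 (the Kaiser criterion). Varimax
    rotation, applied in the study, is omitted here for brevity.
    """
    R = np.corrcoef(X, rowvar=False)            # item-by-item correlations
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    return eigvals, int(np.sum(eigvals > 1))
```

For an index, variance tends to spread across many weak components rather than concentrating on a few dominant factors, which is the flat eigenvalue profile reported in Table 8.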

Results

Eleven cognitive interviews were conducted over a two-month period. Participants were almost evenly split between the undergraduate (45%) and graduate (55%) levels of study, were mostly female (63.6%), and studied in a variety of different areas. Problem codes were used to identify issues with the instrument. Table 2 shows the number of problem codes marked in each category for each participant.
Table 2

Number of Problems Identified by Cognitive Interview Participants

Interviewee | Clarity | Relevance | Recall | Redundancy | Bias | Comprehensive | Response Categories
1 (PHD1) | 5, 1, 1, 1
2 (UG3) | 4, 1, 2
3 (UG3) | 2, 1
4 (MA1) | 2, 1
5 (UG3) | 2, 1
6 (UG2) | 2
7 (PHD2) | 1, 1, 1
8 (MA2) | 2, 1
9 (MA2) | 2
10 (UG4) | 1
11 (PHD2) | 0

Note. UG undergraduate, MA Master’s level graduate student, PHD PhD level graduate student. Number indicates year of study

Examples of interview responses are included in Table 3. No problems with recall or bias were identified by any of the participants. The majority of the problems identified were regarding clarity, with participants asking for clarification on items, and suggesting rewordings to make items more explicit. For example, one participant explained that the item “completing assignments” should be clarified to emphasize having to manage multiple assignments simultaneously. As a result, this item was changed to “having multiple assignments due around the same time.” Both redundancies were identified among stressors within the learning environment theme. Students suggested that “asking my professor a question,” “asking my professor to remark something,” and “asking my professor for clarification on a grade” were largely addressing the same stressor: interaction with faculty. Therefore, these items were collapsed into a single item. Three items were added as per participants’ recommendations to ensure comprehensiveness of the index. Students questioned the relevance of some items within the personal stressors theme to one’s experience as a student (e.g., “worrying about my personal appearance”), though as these problems were rarely identified, the items were ultimately left in for the subsequent phase of testing. Finally, several students struggled with the response options used to evaluate frequency. While these options were initially more specific (e.g., a few times per year, per month, per week), they were altered to be more general (e.g., rarely, sometimes, regularly) in response to participants’ feedback. Additionally, the highest frequency response option was changed from “always” to “almost always” to dissuade participants from shying away from the extreme option.
Table 3

Examples of Cognitive Interview Responses

Item | Discussion
Completing assignments | Interviewer: I noticed you paused … what were you thinking when you read that item?
Respondent: Well […] “completing assignments” is stressful in the sense that you have to do it, but if the assignment is very clear, then it’s not as stressful. Having a bunch of assignments all piling up [at the same time] … now that is stressful.
Lack of guidance from professor | Interviewer: Can you tell me what you were thinking about when you responded to that item?
Respondent: If I ask for clarification on something and they say “just do what’s in the syllabus” then it’s like… okay, why do I even bother?
Waiting to receive grades/marks | Interviewer: I see you went back and changed your answer on that item … can you tell me why?
Respondent: At first I thought, yeah… pretty stressful, [but it’s] only stressful if the next assignment is due soon and I don’t have the [other] grade back yet … I need to know where I’m at, you know?
Pressure to succeed | Interviewer: [Participant scoffed while answering ‘extremely stressful’] What were you thinking when you responded to that item?
Respondent: What if I can’t get a job after my PhD? Was it all a waste of time? A lot of my friends, my [significant other], think that I’m going to be successful just because I’m doing my PhD… but what if I fail? I’m letting so many people down. I think about that every day.
Feeling like my peers are smarter than I am | Interviewer: What made you answer that way?
Respondent: […] when you hear someone contribute to conversation and you’re like… ‘Oh my God, how did they think of that? I don’t belong here.’
Using a snowball sampling approach, we were successful in recruiting a sample of 65 post-secondary students to serve as our panel of participants (Table 4). Our goal at this stage was to recruit a demographically varied sample of students across provinces and areas of study in order to gain a broader perspective on student stress. The majority of participants were female (76.6%) university students (95.7%) from Ontario (79.5%), at the undergraduate (30.4%) or master’s degree level (34.8%), with an average age of 24 years (SD = 3.5).
Table 4

Demographic Characteristics of Consensus Survey Sample (n = 47)

Variable | n | %
Province
 Ontario | 35 | 79.5
 Quebec | 6 | 13.6
 Nova Scotia | 1 | 2.3
 Alberta | 1 | 2.3
 British Columbia | 1 | 2.3
Type of Post-Secondary Institution
 University | 45 | 95.7
 College | 2 | 4.3
Sex
 Female | 36 | 76.6
 Male | 11 | 23.4
Age Group
 19–21 years | 11 | 23.4
 22–25 years | 20 | 42.6
 26–29 years | 15 | 32.0
 30+ years | 1 | 2.0
Level of Study
 1st – 2nd year Undergraduate | 5 | 10.8
 3rd – 4th year Undergraduate | 9 | 19.6
 Masters Student | 16 | 34.8
 Doctoral Student | 6 | 13.0
 Professional Program | 10 | 21.7
Program of Study
 Health Science | 12 | 25.5
 Social Science | 8 | 17.0
 Medicine | 6 | 12.7
 Nursing | 4 | 8.5
 Education | 3 | 6.4
 Speech-Language Pathology | 3 | 6.4
 Business | 3 | 6.4
 Engineering | 2 | 4.3
 Science | 2 | 4.3
 Other | 4 | 8.5

Note. Valid percents reported. Missing data on demographic variables (n = 18)

A total of 38 items demonstrated I-CVIs ≥ 0.7, the recommended cut-off for retention [15, 17]. All items met the cutoff for clarity. While some items’ relevancy ratings did not meet the I-CVI cutoff, we carefully considered these, choosing to retain those we considered important to the overall comprehensiveness of the instrument. For example, some items were retained because they were prominent in focus groups held with students during the item pool development phase of this program of research (e.g., worrying about getting into a new program after graduation, feeling pressured to socialize). Others were retained if we thought the item CVI might have been higher had our sample contained more students to whom the item applied (e.g., having to take student loans, and working on one’s thesis). Items addressing sexual harassment and instances of discrimination on campus were also retained despite falling below the threshold, as we considered these important potential student experiences of campus culture that every institution should seek to monitor. Table 5 displays the retained items along with their relevance and clarity content validity indices, organized by domain of stress.
Table 5

Content Validity Indices for PSSI Items, by Theme

Item | Relevance I-CVI | Clarity I-CVI
Academic
 a) Preparing for exams | 0.99 | 0.95
 b) Writing exams | 0.92 | 0.97
 c) Writing multiple exams on the same day | 0.89 | 0.98
 d) Final exams worth 50% or more | 0.86 | 0.95
 e) Heavily weighted assignments | 0.79 | 0.89
 f) Having multiple assignments due around the same time | 0.94 | 0.98
 g) Managing my academic workload | 0.82 | 0.83
 h) Maintaining a high GPA | 0.91 | 0.94
 i) Working on my thesis | 0.69* | 0.84
 j) Performing well at my professional placement (i.e., practicum) | 0.78 | 0.88
Learning Environment
 a) Poor communication from professor | 0.82 | 0.80
 b) Unclear expectations from professor | 0.89 | 0.93
 c) Lack of guidance from professor | 0.80 | 0.83
 d) Meeting my thesis/placement supervisor’s expectations | 0.80 | 0.98
 e) Lack of mentoring from my thesis/placement supervisor | 0.84 | 0.94
Campus Culture
 a) Adjusting to the post-secondary lifestyle | 0.75 | 0.88
 b) Adjusting to my program | 0.69* | 0.92
 c) Academic competition among my peers | 0.71 | 1.00
 d) Feeling like I’m not working hard enough | 0.87 | 0.96
 e) Feeling like my peers are smarter than I am | 0.87 | 1.00
 f) Pressure to succeed | 0.98 | 0.98
 g) Sexual harassment on campus | 0.62* | 0.94
 h) Discrimination on campus | 0.64* | 0.94
Interpersonal
 a) Making new friends | 0.80 | 0.98
 b) Maintaining friendships | 0.80 | 0.96
 c) Networking with the ‘right’ people | 0.76 | 0.92
 d) Feeling pressured to socialize | 0.66* | 0.94
 e) Balancing social life with academics | 0.88 | 0.98
 f) Comparing myself to others | 0.86 | 0.98
 g) Comparing my life to others on social media | 0.76 | 0.96
 h) Meeting other peoples’ expectations of me | 0.86 | 1.00
 i) Meeting my own expectations | 0.98 | 0.98
Personal
 a) Getting enough sleep | 0.87 | 0.96
 b) Getting enough exercise | 0.83 | 0.98
 c) Making sure that I eat healthy | 0.79 | 0.98
 d) Having to prepare meals for myself | 0.70 | 1.00
 e) Balancing working at my job with academics | 0.79 | 0.98
 f) Balancing my extracurriculars with academics | 0.83 | 0.94
 g) Feeling guilty about taking time for my hobbies/interests | 0.81 | 0.98
 h) Having to take student loans | 0.68* | 1.00
 i) Worrying about paying off debt | 0.72 | 0.98
 j) Worrying about getting a job after graduation | 0.83 | 1.00
 k) Worrying about getting into a new program after graduation | 0.68* | 0.98

Note. * Indicates item was retained despite I-CVI < 0.7


Relations to other variables

A total of 535 participants completed the initial pilot test survey, representing a response rate of 11%. Most participants were female (74.0%), single (64.9%), lived off campus with roommates (62.1%), self-reported their GPA to be between 80 and 89% (41.7%), studied full-time (92.1%), and were at the undergraduate level (65.5%). The majority of participants were between the ages of 19 and 21 years (63.7%), with an overall average age of 24.5 years (SD = 7.0). International students made up about 9% of the sample. Of the 535 students, a total of 350 completed the second survey (response rate 65%), with a similar demographic breakdown (Table 6).
Table 6

Demographic Characteristics of Pilot Test Sample

Variable | Survey 1 (n = 535): n, % | Survey 2 (n = 350): n, %
Sex
 Female | 396, 74.0 | 270, 77.1
 Male | 132, 24.7 | 76, 21.7
 Non-Binary | 3, 0.6 | 2, 0.6
 Prefer not to answer | 4, 0.7 | 2, 0.6
Age Group
 18–21 years | 264, 49.3 | 177, 50.6
 22–25 years | 130, 24.3 | 83, 23.7
 26–29 years | 65, 12.1 | 44, 12.6
 30+ years | 76, 14.2 | 46, 13.1
Relationship Status
 Single | 347, 64.9 | 234, 66.9
 In a Relationship | 111, 20.7 | 66, 18.8
 Married or Common-law | 68, 12.7 | 42, 12.0
 Separated, Divorced, or Widowed | 2, 0.3 | 2, 0.6
 Prefer not to answer | 7, 1.3 | 6, 1.7
Living Arrangement
 Campus residence hall | 35, 6.4 | 22, 6.3
 Other on-campus housing | 13, 2.4 | 6, 1.7
 Off campus with roommates | 332, 62.1 | 224, 64.0
 Off campus alone | 69, 12.9 | 48, 13.7
 Off campus with family | 83, 15.5 | 47, 13.4
 Prefer not to answer | 3, 0.6 | 3, 0.9
Level of Study
 1st - 2nd year Undergraduate | 187, 34.9 | 112, 32.0
 3rd - 5th year Undergraduate | 154, 28.8 | 109, 31.1
 Graduate (Masters) | 72, 13.5 | 47, 13.4
 Graduate (Doctoral) | 53, 9.9 | 37, 10.6
 Professional Program | 47, 8.8 | 30, 8.6
 Other | 21, 3.9 | 14, 4.0
 Prefer not to answer | 1, 0.2 | 1, 0.2
Student Status
 Full-time | 493, 92.1 | 324, 92.6
 Part-time | 32, 6.0 | 21, 6.0
 Other (a) | 8, 1.5 | 4, 1.1
 Prefer not to answer | 2, 0.4 | 1, 0.3
International Student
 No | 486, 90.8 | 324, 92.6
 Yes | 48, 9.0 | 25, 7.1
 Prefer not to answer | 1, 0.2 | 1, 0.3
Approximate GPA
 90–100% | 103, 19.3 | 69, 19.7
 80–89% | 223, 41.7 | 145, 41.4
 70–79% | 147, 27.5 | 98, 28.0
 60–69% | 32, 6.0 | 17, 4.9
 0–59% | 4, 0.7 | 2, 0.6
 Prefer not to answer | 26, 4.9 | 19, 5.4

Note. Valid percents reported

(a) “Other” category includes non-degree seekers, online students, medical residents

To calculate “scores” on the PSSI, we dichotomized responses to reflect whether participants experienced some level of stress in response to a stressor (coded as 1) or did not experience it or did not find it stressful (coded as 0). For each respondent, we then summed the number of stressors experienced to derive an absolute count, which was treated as a “score”. We used the non-parametric Spearman’s rho to calculate correlations between the PSSI scores and those on the other instruments, as the data were not normally distributed. Table 7 and Fig. 1 show the correlations between each of the instruments tested. As hypothesized, the PSSI demonstrated a positive, moderate correlation with both the PSS-10 and K10. As expected, the PSSI also demonstrated a negative correlation with the CD-RISC, indicating that as the number of stressors experienced increased, participants’ resilience scores decreased [20]. While the correlation coefficient was relatively modest, this is not surprising given the subjectivity associated with stressful experiences, as well as the individual nature of psychological resilience [24, 25].
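The dichotomize-and-sum scoring and the rank correlation just described can be sketched as follows. The study used R; these Python helpers are illustrative, and their names are ours (N/A responses are represented as None, as in the coding scheme above).

```python
def pssi_score(item_ratings):
    """Dichotomize-and-sum scoring: a stressor counts 1 if the
    respondent reported any stress from it (severity rating above
    'not stressful'), 0 if it was not experienced (None / N/A) or
    rated 1; the score is the resulting count of stressors."""
    return sum(1 for r in item_ratings if r is not None and r > 1)

def _ranks(xs):
    """Midranks (ties receive the average rank), as Spearman's rho requires."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the midranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

In practice one would pass each respondent's PSSI count alongside their PSS-10, K10, or CD-RISC total to `spearman_rho`.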
Table 7

Test-Retest Reliability Correlation Coefficients for PSSI and PSS-10

Measure | rs | 95% CI | n
Total Sample
 PSSI | 0.78 | (0.74, 0.82) | 348
 PSS-10 | 0.79 | (0.74, 0.83) | 349
Extreme Events Removed
 PSSI | 0.78 | (0.73, 0.82) | 273
 PSS-10 | 0.79 | (0.74, 0.83) | 278
Extreme Events Only
 PSSI | 0.76 | (0.64, 0.84) | 71
 PSS-10 | 0.67 | (0.52, 0.78) | 71

Note. Spearman’s rho (rs) correlation coefficients for non-parametric data reported

Fig. 1

Graphical Depiction of the Correlational Relationships between Instruments


Internal structure evidence

Results of an exploratory factor analysis (Table 8) supported our hypothesis that the PSSI would take the internal structure of an index, rather than a scale. That is to say that no clear groups of items emerged as viable subscales, making it appropriate to treat each item as an individual causal indicator of our underlying construct of post-secondary student stress.
Table 8

Factor Loadings for Items in PSSI

Item | Component loadings
Academics
 1 | .774
 2 | .773
 3 | .673
 4 | .762
 5 | .527, .450
 6 | .490, .509
 7 | .354, .501
 8 | .450, .320
 9 | .453, .490, .355
 10 | .837
 11 | .551
Learning Environment
 12 | .788
 13 | .782
 14 | .778
 15 | .352
 16 | .902
 17 | .862
Campus Culture
 18 | .668
 19 | .740
 20 | .586
 21 | .535, .514
 22 | .582, .479
 23 | .597, .327
 24 | .807
 25 | .816
Interpersonal
 26 | .767
 27 | .782
 28 | .324, .567
 29 | .744
 30 | .306, .501, .344
 31 | .738, .319
 32 | .584, .334
 33 | .676, .327
 34 | .699
Personal
 35 | .584
 36 | .713
 37 | .736
 38 | .642
 39 | .327, .340
 40 | .388, .367, .336
 41 | .494, .332
 42 | .894
 43 | .309, .915
 44 | .316, .481
 45 | .747
 46 | .428
Eigenvalues (components 1–11) | 12.1, 3.3, 2.6, 1.9, 1.8, 1.6, 1.4, 1.4, 1.2, 1.2, 1.1

Note. KMO Statistic > 0.7 for all items

Bartlett’s Test of Sphericity p < 0.001

Principal Components Analysis (Varimax rotation with Kaiser normalization)

With respect to test-retest reliability, a total of 365 students completed the second survey, and 350 responses were successfully matched using unique identifiers. Respondents completed the first iteration of the survey over the course of a four-week period. Invitations to complete the second survey were staggered in order to ensure that at least two weeks had passed between responses. As described above, we summed the absolute count of stressors experienced at each time point and examined the correlation between PSSI “scores” at the two time points for the total sample. We sought the recommended minimum reliability coefficient of 0.7 [9]. The PSSI demonstrated strong test-retest reliability (rs = 0.78; 95% CI 0.74, 0.82), comparable to that of the Perceived Stress Scale (PSS-10, rs = 0.79; 95% CI 0.74, 0.83), which has been demonstrated to have consistent test-retest reliability averaging around 0.7 for a two-week interval [26]. While we hypothesized that students’ stress levels were likely to remain fairly static over a two-week period, we added a variable to the second survey in order to account for the possibility of a significantly stressful event producing a change in PSSI scores. Respondents were asked whether an event had occurred that caused them extreme stress since they submitted their last survey. Removing all respondents who indicated ‘yes’ (21.4%) from the dataset, we repeated our correlation analysis (n = 273), which revealed the test-retest reliability of the PSSI to be largely unchanged (rs = 0.78; 95% CI 0.73, 0.82). Finally, we repeated the correlation analysis among only those who experienced an extremely stressful event (n = 71). Here, we saw a slight decrease in the correlation for the PSSI scores (rs = 0.76; 95% CI 0.64, 0.84).
Table 7 depicts all tests conducted for test-retest reliability.
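Interval estimates like those in Table 7 are commonly obtained via the Fisher z-transform. The paper does not state its CI method, so the following Python sketch is an assumption for illustration; it uses the plain 1/sqrt(n - 3) standard error (some authors inflate this slightly for Spearman's rho), and the function name is ours.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation coefficient via the
    Fisher z-transform (assumed method; the paper does not specify).

    r: observed correlation; n: number of matched pairs;
    z_crit: normal quantile (1.96 for a 95% interval).
    """
    z = math.atanh(r)                 # Fisher z-transform of r
    se = 1 / math.sqrt(n - 3)         # approximate standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
```

For rs = 0.78 with n = 348 matched pairs, this yields approximately (0.74, 0.82), consistent with the interval reported above for the PSSI.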

Discussion

By emphasizing mental health promotion and mental illness prevention, post-secondary institutions can provide students with the skills and resources needed to navigate stressors inherent to the post-secondary experience and thrive in the face of challenge. In order to develop and deliver efficacious promotion and prevention programming, however, there must first be a valid and reliable method of assessing post-secondary students’ exposure to stress. The Post-Secondary Student Stressors Index was created to fill this gap. Unlike many of its predecessors, the PSSI was developed for students based on student input. We engaged four diverse samples of Canadian post-secondary students in the development, refinement, and testing of the instrument using a combination of qualitative and quantitative data collection approaches. Engaging students throughout the process facilitated the collection of robust content evidence for validity. Students were treated as subject matter experts, who were invested in helping us develop a holistic tool that accurately reflected student experiences of stress. In addition to content evidence, we collected response processes evidence by conducting one-on-one interviews with students to help us refine the tool and ensure that any item ambiguity and comprehension issues were addressed prior to moving forward to the testing stage. Next, we conducted a pilot test to examine the internal structure of the tool as well as its relations to other variables. As expected, exploratory factor analysis revealed the PSSI to have the internal structure of an index, rather than a scale. That is to say that items on the PSSI did not “group” together, or load on distinct factors, as would be expected of a scale. The PSSI also demonstrated strong test-retest reliability over a two-week period, comparable to established measures like the PSS-10. 
Finally, the PSSI demonstrated good relationships with like measures of stress, distress, and resilience, in the hypothesized directions. As desired, the PSSI had a moderate correlation with the PSS-10, indicating that not only were we successful in creating a tool that adequately measured stress, but that it measured a specific type of stress (i.e., the stress experienced by students) rather than providing a more general assessment of stress. Similarly, the PSSI demonstrated a moderate correlation with the K10. Finally, the PSSI demonstrated a weak, negative correlation with the CD-RISC, indicating that students who experienced more stressors had lower resilience scores. The PSSI is an inventory composed of 46 stressors across five major domains within the post-secondary setting: academics, the learning environment, campus culture, interpersonal, and personal. The tool evaluates each stressor by both severity and frequency. This dual-assessment approach will allow institutions to easily determine the priority areas (i.e., the most severe and frequently occurring stressors) for improvement on their campus, allowing for the tailored targeting of mental health promotion and mental illness prevention programming.
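The dual severity/frequency assessment lends itself to a simple priority ranking. The sketch below is hypothetical: the stressor names, rating scales, and the scoring rule (mean severity times mean frequency) are illustrative choices, not prescriptions from the paper.

```python
# Hypothetical use of PSSI-style dual ratings to rank campus priority areas.
# Each student rates each stressor on severity and frequency (here, 1-5).
import numpy as np

rng = np.random.default_rng(2)
stressors = ["exam workload", "finances", "housing", "social isolation"]
n_students = 200

# Simulated ratings: rows are students, columns are stressors
severity = rng.integers(1, 6, size=(n_students, len(stressors)))
frequency = rng.integers(1, 6, size=(n_students, len(stressors)))

# Priority score: stressors that are both severe and frequent rank highest
priority = severity.mean(axis=0) * frequency.mean(axis=0)
ranking = sorted(zip(stressors, priority), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

An institution could then direct promotion and prevention programming at the top-ranked stressors rather than spreading resources uniformly.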

Limitations

Despite its strengths, there are some limitations to this study. Item pool development was conducted at a single, mid-sized Ontario university. During the item pool refinement stage, we intended to collect a sample of students from various regions in Canada in order to garner more regional representation in the development of the instrument. Despite our attempts to engage post-secondary students from across Canada using a snowball sampling approach for the consensus survey, we were only able to engage a handful of students from outside of Ontario. The pilot testing of the instrument was also conducted at one Ontario university. As a result, it is possible that this tool may be missing stressors that are prevalent in other regions of the country. Future research should continue to work towards the validation of this instrument among samples of students in different regions of the country and a more demographically varied sample in order to test and improve upon its generalizability.

Conclusions

The PSSI was developed to fill the gap left by existing instruments that have unsuccessfully attempted to evaluate post-secondary student stress. Where previous instruments have been developed without taking students’ experiences into consideration, have demonstrated weak psychometrics, or have not collected sufficient evidence for validity, the PSSI excels. Overall, the PSSI demonstrated strong psychometric properties, suggesting that it is an effective tool for assessing post-secondary students’ exposure to stress. The ability to not only identify the sources of student stress (i.e., stressors), but also quantify the severity and frequency with which these stressors are experienced will provide institutions with a roadmap for the development of upstream approaches that can effectively target the sources of student stress and mental health problems on their campus.
References (10 of 19 shown)

1. Streiner DL. Being inconsistent about consistency: when coefficient alpha does and doesn't matter. J Pers Assess. 2003.
2. Kessler RC, Barker PR, Colpe LJ, Epstein JF, Gfroerer JC, Hiripi E, Howes MJ, Normand SL, Manderscheid RW, Walters EE, Zaslavsky AM. Screening for serious mental illness in the general population. Arch Gen Psychiatry. 2003.
3. de Villiers MR, de Villiers PJT, Kent AP. The Delphi technique in health sciences education research. Med Teach. 2005.
4. Ong AD, Bergeman CS, Bisconti TL, Wallace KA. Psychological resilience, positive emotions, and successful adaptation to stress in later life. J Pers Soc Psychol. 2006.
5. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007.
6. Campbell-Sills L, Stein MB. Psychometric analysis and refinement of the Connor-Davidson Resilience Scale (CD-RISC): validation of a 10-item measure of resilience. J Trauma Stress. 2007.
7. Patel V, Flisher AJ, Hetrick S, McGorry P. Mental health of young people: a global public-health challenge. Lancet. 2007.
8. Connor KM, Davidson JRT. Development of a new resilience scale: the Connor-Davidson Resilience Scale (CD-RISC). Depress Anxiety. 2003.
9. Gollust SE, Eisenberg D, Golberstein E. Prevalence and correlates of self-injury among university students. J Am Coll Health. 2008.
10. Eisenberg D, Gollust SE, Golberstein E, Hefner JL. Prevalence and correlates of depression, anxiety, and suicidality among university students. Am J Orthopsychiatry. 2007.
