Angelo Rega, Raffaele Nappo, Roberta Simeoli, Mariangela Cerasuolo.
Abstract
While the negative impact of the total COVID-19 lockdown on youth mental health has been studied extensively, findings collected during subsequent waves of the pandemic, when restrictions were partially eased, remain sparse. Here, we explore perceived psychological distress during the partial lockdown of the third wave in Southern Italy in a large sample of students, focusing on age and gender differences. We also assessed whether the type of education attended could play a protective role for students' psychological well-being. An online survey was completed by 1064 southern Italian students (age range: 8-19 years; males = 368) from March to July 2021. The survey consisted of a set of questions covering general sociodemographic information as well as several aspects of students' psychological well-being. Psychological distress was higher in high school students than in both elementary and middle school students. In addition, we found gender differences, but only among high school students, with females reporting higher psychological distress than males. Finally, our mediation analysis showed a mediating role of face-to-face schooling in the relationship between age and psychological distress. In conclusion, this study highlights age-related differences in psychological distress during the pandemic and the protective role of in-person schooling for the mental health of Italian students.
Keywords: COVID-19 pandemic; psychological distress; students
Year: 2022 PMID: 35564927 PMCID: PMC9101009 DOI: 10.3390/ijerph19095532
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 4.614
Demographic characteristics of the sample.
| Age Groups | N | % |
|---|---|---|
| 8–10 years | 232 | 21.89 |
| 11–12 years | 171 | 16.13 |
| 13–15 years | 212 | 20.00 |
| 16–19 years | 444 | 41.89 |

| School Levels | N | % |
|---|---|---|
| Elementary | 249 | 23.49 |
| Middle | 210 | 19.81 |
| High | 600 | 56.60 |

| Gender | N | % |
|---|---|---|
| Males | 368 | 34.72 |
| Females | 691 | 65.19 |
Pairwise comparisons among elementary, middle and high school students.
| Pairwise Comparison | Mann–Whitney U |
|---|---|
| Elementary–Middle | −123.96 |
| Elementary–High | −450.02 |
| Middle–High | −326.06 |
Notes: All p values were adjusted using Bonferroni correction.
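For illustration only, the pairwise procedure above can be sketched in Python as Mann–Whitney U tests between school levels with Bonferroni-adjusted p values. The data frame, column names, and group labels below are hypothetical assumptions, not the study's data.

```python
# A minimal sketch of pairwise Mann-Whitney U tests with Bonferroni correction.
# The data and the "school"/"distress" column names are hypothetical.
from itertools import combinations

import pandas as pd
from scipy.stats import mannwhitneyu

# One row per student: a school level and a psychological distress total score.
df = pd.DataFrame({
    "school": ["elementary"] * 3 + ["middle"] * 3 + ["high"] * 3,
    "distress": [10, 12, 11, 14, 15, 13, 20, 22, 21],
})

pairs = list(combinations(df["school"].unique(), 2))
n_tests = len(pairs)  # Bonferroni multiplies each p by the number of comparisons

for g1, g2 in pairs:
    u, p = mannwhitneyu(
        df.loc[df["school"] == g1, "distress"],
        df.loc[df["school"] == g2, "distress"],
    )
    p_adj = min(p * n_tests, 1.0)  # Bonferroni-adjusted p value, capped at 1
    print(f"{g1} vs {g2}: U = {u:.1f}, adjusted p = {p_adj:.4f}")
```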
Figure 1. Psychological distress total score among elementary, middle, and high school students. Significant between-group comparisons are reported. Data are presented as medians. *** p < 0.001.
Figure 2. Gender differences in psychological distress total score among elementary, middle, and high school students. Significant within-group comparisons are reported. Data are presented as medians. *** p < 0.001.
Figure 3. The mediating effect of type of education on psychological distress. The arrows report the β values for the effect of age on type of education (a), the effect of type of education on psychological distress (b), the total effect of age on psychological distress (c), and the direct effect of age on psychological distress controlling for the mediator (c′). All p values were <0.001.
Results of the three regression analyses: (a) with “Psychological distress” as the dependent variable; (b) with “Age” and “Psychological distress” as independent variables and “Type of education” as the dependent variable.

(a)
| Predictor | β | t | p |
|---|---|---|---|
| Age | 0.65 | 25.74 | <0.001 |

(b)
| Predictor | β | t | p |
|---|---|---|---|
| Age | 0.92 | 74.39 | <0.001 |
| Psychological distress | 0.65 | 24.31 | <0.001 |
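As a minimal sketch of the mediation logic summarized in Figure 3 and the regression tables above, the snippet below estimates paths a, b, c, and c′ with ordinary least squares on simulated data. All variable names, simulated relationships, and coefficients are illustrative assumptions, not the paper's analysis pipeline.

```python
# A hedged sketch of a three-regression mediation analysis with OLS.
# "age", "schooling" (the mediator), and "distress" are hypothetical variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(8, 19, n)
schooling = 0.9 * age + rng.normal(0, 1, n)              # mediator depends on age
distress = 0.5 * schooling + 0.2 * age + rng.normal(0, 1, n)

def fit(y, predictors):
    """OLS of y on the given predictors plus an intercept."""
    return sm.OLS(y, sm.add_constant(np.column_stack(predictors))).fit()

a = fit(schooling, [age]).params[1]        # path a: age -> mediator
c = fit(distress, [age]).params[1]         # path c: total effect of age on distress
m = fit(distress, [age, schooling])        # age and mediator jointly predict distress
c_prime, b = m.params[1], m.params[2]      # path c' (direct) and path b

print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}, c' = {c_prime:.2f}")
print(f"indirect effect a*b = {a * b:.2f}  vs  c - c' = {c - c_prime:.2f}")
```

For OLS, the indirect effect a·b equals the difference between the total and direct effects, c − c′, which the last line verifies numerically.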
Checklist for Reporting Results of Internet E-Surveys (CHERRIES).
| Checklist Item | Explanation | Page Number |
|---|---|---|
| Describe survey design | Describe target population, sample frame. Is the sample a convenience sample? (In “open” surveys this is most likely.) | Under the Methods section. |
| IRB approval | Mention whether the study has been approved by an IRB. | Under the Methods section. |
| Informed consent | Describe the informed consent process. Where were the participants told the length of time of the survey, which data were stored and where and for how long, who the investigator was, and the purpose of the study? | Under the Methods section. |
| Data protection | If any personal information was collected or stored, describe what mechanisms were used to protect unauthorized access. | Under the Methods section. |
| Development and testing | State how the survey was developed, including whether the usability and technical functionality of the electronic questionnaire had been tested before fielding the questionnaire. | Under the Methods section. |
| Open survey versus closed survey | An “open survey” is a survey open for each visitor of a site, while a closed survey is only open to a sample which the investigator knows (password-protected survey). | The survey was closed. Parents received the link to access the survey and forwarded it to their children, as reported under the Methods section. |
| Contact mode | Indicate whether or not the initial contact with the potential participants was made on the Internet. (Investigators may also send out questionnaires by mail and allow for Web-based data entry.) | Information about the contact mode is reported under the Methods section: “Participants and procedure” (page 3, lines 117–120) |
| Advertising the survey | How/where was the survey announced or advertised? Some examples are offline media (newspapers), or online (mailing lists—If yes, which ones?) or banner ads (Where were these banner ads posted and what did they look like?). It is important to know the wording of the announcement as it will heavily influence who chooses to participate. Ideally, the survey announcement should be published as an appendix. | The survey was advertised on research-related websites and social media groups as reported under the Methods section: “Participants and procedure” (page 3, lines 117–120) |
| Web/E-mail | State the type of e-survey (e.g., one posted on a Web site, or one sent out through e-mail). If it is an e-mail survey, were the responses entered manually into a database, or was there an automatic method for capturing responses? | Under the Methods section. |
| Context | Describe the Web site (for mailing list/newsgroup) in which the survey was posted. What is the Web site about, who is visiting it, what are visitors normally looking for? Discuss to what degree the content of the Web site could pre-select the sample or influence the results. For example, a survey about vaccination on an anti-immunization Web site will have different results from a Web survey conducted on a government Web site. | Under the Discussion section (page 9, lines 339–348) |
| Mandatory/voluntary | Was it a mandatory survey to be filled in by every visitor who wanted to enter the Web site, or was it a voluntary survey? | Participants were all volunteers as reported under the Methods section: “Participants and procedure” (page 3, lines 129–130) |
| Incentives | Were any incentives offered (e.g., monetary, prizes, or non-monetary incentives such as an offer to provide the survey results)? | There were no incentives offered for the participants |
| Time/Date | In what timeframe were the data collected? | See the Methods section. |
| Randomization of items or questionnaires | To prevent biases, items can be randomized or alternated. | Items were not randomized. |
| Adaptive questioning | Use adaptive questioning (certain items, or only conditionally displayed based on responses to other items) to reduce number and complexity of the questions. | We did not use adaptive questioning. |
| Number of Items | What was the number of questionnaire items per page? The number of items is an important factor for the completion rate. | Under the Methods section. |
| Number of screens (pages) | Over how many pages was the questionnaire distributed? The number of items is an important factor for the completion rate. | Under the Methods section. |
| Completeness check | It is technically possible to do consistency or completeness checks before the questionnaire is submitted. Was this done, and if “yes”, how (usually JAVAScript)? An alternative is to check for completeness after the questionnaire has been submitted (and highlight mandatory items). If this has been done, it should be reported. All items should provide a non-response option such as “not applicable” or “rather not say”, and selection of one response option should be enforced. | We checked for completeness after the questionnaire was submitted, as reported under the Results section. |
| Review step | State whether respondents were able to review and change their answers (e.g., through a Back button or a Review step which displays a summary of the responses and asks the respondents if they are correct). | Respondents were able to review and change their answers. |
| Unique site visitor | If you provide view rates or participation rates, you need to define how you determined a unique visitor. There are different techniques available, based on IP addresses or cookies or both. | N/A. The survey was not embedded in a website. |
| View rate (Ratio of unique survey visitors/unique site visitors) | Requires counting unique visitors to the first page of the survey, divided by the number of unique site visitors (not page views!). It is not unusual to have view rates of less than 0.1% if the survey is voluntary. | N/A. The survey was not embedded in a website. |
| Participation rate (Ratio of unique visitors who agreed to participate/unique first survey page visitors) | Count the unique number of people who filled in the first survey page (or agreed to participate, for example by checking a checkbox), divided by visitors who visit the first page of the survey (or the informed consents page, if present). This can also be called “recruitment” rate. | Under the Results section: “Demographics” |
| Completion rate (Ratio of users who finished the survey/users who agreed to participate) | The number of people submitting the last questionnaire page, divided by the number of people who agreed to participate (or submitted the first survey page). This is only relevant if there is a separate “informed consent” page or if the survey goes over several pages. This is a measure for attrition. Note that “completion” can involve leaving questionnaire items blank. This is not a measure for how completely questionnaires were filled in. (If you need a measure for this, use the word “completeness rate”.) | Under the Results section: “Demographics” |
| Cookies used | Indicate whether cookies were used to assign a unique user identifier to each client computer. If so, mention the page on which the cookie was set and read, and how long the cookie was valid. Were duplicate entries avoided by preventing users access to the survey twice; or were duplicate database entries having the same user ID eliminated before analysis? In the latter case, which entries were kept for analysis (e.g., the first entry or the most recent)? | No information is available on the page on which the cookie was set and read, or on how long the cookie was valid. |
| IP check | Indicate whether the IP address of the client computer was used to identify potential duplicate entries from the same user. If so, mention the period of time for which no two entries from the same IP address were allowed (e.g., 24 h). Were duplicate entries avoided by preventing users with the same IP address access to the survey twice; or were duplicate database entries having the same IP address within a given period of time eliminated before analysis? If the latter, which entries were kept for analysis (e.g., the first entry or the most recent)? | N/A. The Survey Monkey platform |
| Log file analysis | Indicate whether other techniques to analyze the log file for identification of multiple entries were used. If so, please describe. | N/A. The Survey Monkey platform used cookies to prevent users from accessing the survey twice. |
| Registration | In “closed” (non-open) surveys, users need to login first and it is easier to prevent duplicate entries from the same user. Describe how this was done. For example, was the survey never displayed a second time once the user had filled it in, or was the username stored together with the survey results and later eliminated? If the latter, which entries were kept for analysis (e.g., the first entry or the most recent)? | N/A. The survey was an open survey with targeted respondents. |
| Handling of incomplete questionnaires | Were only completed questionnaires analyzed? Were questionnaires which terminated early (where, for example, users did not go through all questionnaire pages) also analyzed? | Under the Results section: “Demographics” |
| Questionnaires submitted with an atypical timestamp | Some investigators may measure the time people needed to fill in a questionnaire and exclude questionnaires that were submitted too soon. Specify the timeframe that was used as a cutoff point, and describe how this point was determined. | No time stamp was provided. |
| Statistical correction | Indicate whether any methods such as weighting of items or propensity scores have been used to adjust for the non-representative sample; if so, please describe the methods. | No statistical approach was used to weight the responses. |
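As a worked example of the view, participation, and completion rate definitions in the checklist above, the following Python snippet computes the three CHERRIES ratios. All counts are purely hypothetical, for illustration only.

```python
# Hypothetical counts illustrating the CHERRIES rate definitions.
unique_site_visitors = 5000    # unique visitors to the hosting site
unique_survey_visitors = 400   # unique visitors to the first survey page
agreed_to_participate = 300    # e.g., checked the informed consent checkbox
finished_survey = 250          # submitted the last questionnaire page

view_rate = unique_survey_visitors / unique_site_visitors
participation_rate = agreed_to_participate / unique_survey_visitors
completion_rate = finished_survey / agreed_to_participate

print(f"View rate: {view_rate:.2%}")                    # 8.00%
print(f"Participation rate: {participation_rate:.2%}")  # 75.00%
print(f"Completion rate: {completion_rate:.2%}")        # 83.33%
```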