
Participants' evaluation of the project P.A.T.H.S.: are findings based on different datasets consistent?

Daniel T L Shek1, Rachel C F Sun.   

Abstract

Subjective outcome evaluation findings based on the perspective of the participants of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in nine datasets collected from 2005 to 2009 (n = 206,313 program participants) were examined in this paper. Based on the consolidated data with schools as units, results showed that the participants generally had positive perceptions of the program, the implementers, and the benefits of the program. More than four-fifths of the participants regarded the program as beneficial to their holistic development. Multiple regression analysis revealed that the perceived qualities of the program and the program implementers predicted the perceived effectiveness of the program. Based on the subjective outcome evaluation findings, the present study provides support for the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. in Hong Kong.


Year:  2012        PMID: 22701349      PMCID: PMC3373125          DOI: 10.1100/2012/187450

Source DB:  PubMed          Journal:  ScientificWorldJournal        ISSN: 1537-744X


1. Introduction

In his review of adolescent developmental issues in Hong Kong, Shek [1] drew several conclusions based on the available statistics and research findings. First, the adolescent substance abuse problem was part of a changing scene in Hong Kong. Second, although the overall youth crime trend was relatively stable in the past decade, the crimes of shoplifting and stealing deserved concern. Third, adolescent mental health problems were a growing issue. Fourth, unhealthy adolescent lifestyles, involving issues such as smoking, early sex, and moral confusion, were of concern. Fifth, economic disadvantage among adolescents was a growing problem in Hong Kong, with one-quarter of children and adolescents affected. Sixth, unemployed and nonengaged youth were emerging problems of the past two decades. Seventh, family and parenting problems in families with adolescents deserved attention, and the Social Development Index showed a gradual drop in family solidarity over the past decade. In the past few years, adolescent substance abuse has appeared to deteriorate, and it is a topic attracting much public attention in Hong Kong [2, 3].

With reference to these adolescent developmental problems, one important question is how we can promote holistic development in young people in Hong Kong. To this end, The Hong Kong Jockey Club Charities Trust approved HK$400 million to launch a project entitled “P.A.T.H.S. to Adulthood: A Jockey Club Youth Enhancement Scheme” based on the perspective of positive youth development. The word “P.A.T.H.S.” denotes Positive Adolescent Training through Holistic Social Programmes. There are two tiers of programs (Tier 1 and Tier 2 Programs) in the P.A.T.H.S. Project.
Whereas the Tier 2 Program is a selective program for students with greater psychosocial needs, the Tier 1 Program is a universal positive youth development program in which students in secondary 1 to 3 normally participate, with 20 h of training in the school year at each grade, involving 40 teaching units developed with reference to 15 positive youth development constructs [4]. These constructs include promotion of bonding, cultivation of resilience, promotion of social competence, promotion of emotional competence, promotion of cognitive competence, promotion of behavioral competence, promotion of moral competence, cultivation of self-determination, promotion of spirituality, development of self-efficacy, development of a clear and positive identity, promotion of beliefs in the future, provision of recognition for positive behavior, provision of opportunities for prosocial involvement, and fostering of prosocial norms. Because of the overwhelming success of the program, the project has been extended for another cycle, from 2009 to 2012, with an additional earmarked grant of HK$350 million.

There were two implementation phases in the original phase of the project: the experimental implementation phase and the full implementation phase. For the experimental implementation phase (January 2006 to August 2008), 52 secondary schools participated in the project with the objectives of accumulating experience in program implementation and familiarizing the front-line workers with the program design and philosophy. In the 2006/2007 school year, the programs were implemented on a full scale at the secondary 1 level. In the 2007/2008 school year, the programs were implemented at the secondary 1 and 2 levels. In the 2008/2009 school year, the programs were implemented at the secondary 1, 2, and 3 levels. The experimental and full implementation phases for the first cycle were successfully completed [5].
To provide a comprehensive picture of the effectiveness of the project, a wide range of evaluation strategies was employed to examine the program effect, including objective outcome evaluation utilizing a randomized group trial; subjective outcome evaluation based on quantitative and qualitative data collected from the program participants and instructors; qualitative evaluation based on focus groups involving students and instructors; in-depth interviews with program implementers; student products, such as weekly diaries; process evaluation involving systematic observations of delivery of the program; and interim evaluation. The available evaluation findings consistently provide strong evidence that the Project P.A.T.H.S. has a beneficial influence on students [6-9]. To examine the perceptions of the program participants concerning the effectiveness of the project, subjective outcome evaluation, or the client satisfaction approach, was used. In human services, involving service users or program participants in evaluation is widely advocated, and subjective outcome evaluation has thus become a popular way to capture the viewpoints of the participants. Client satisfaction surveys are commonly used as feedback for transforming services, to meet the users' needs for planning and administration purposes, or simply as an indicator of program effectiveness from the participants' perspective for research purposes. Although there are many criticisms of this approach, the client satisfaction approach is widely used in different service settings. As pointed out by Royse [10], “despite the generally positive bias and the problems associated with collecting representative samples of clients, there is much to recommend client satisfaction studies as one means of evaluating a program.
Because professionals do not experience the agency in the same way as the clients, it is important to ask clients to share their experiences” (pp. 264-265). Subjective outcome evaluation is a popular approach employed by professionals in fields such as education, social work, psychology, medicine, and the allied health professions. The most commonly used method employs closed-ended rating scale items to quantify client satisfaction. For example, standardized rating scales, such as the medical interview satisfaction scale, consumer satisfaction questionnaire, and client satisfaction questionnaire, were developed to gauge client satisfaction and the perceived helpfulness of a program. In fact, it is commonly argued that, with the use of valid and reliable measures of the perceptions of the program participants, subjective outcome evaluation can yield an objective picture of program effectiveness. Previous studies showed that roughly four-fifths of the program participants generally had positive perceptions of the program, instructors, and benefits of the P.A.T.H.S. Project, and the findings were fairly stable across different cohorts of students in the experimental and full implementation phases [11-13]. Furthermore, program content and program instructors were found to be significant predictors of the perceived benefits of the program. As the Project P.A.T.H.S. was implemented with different cohorts of students from 2005 to 2009, it would be illuminating to aggregate findings across cohorts to form an overall picture of participant satisfaction. With data collected from large samples over time, a more stable picture of the subjective outcome evaluation findings can be generated. Against this background, the present paper attempts to describe the profile of subjective outcome evaluation findings based on the perspective of the participants.
In addition, predictors of the perceived effectiveness of the program were examined through aggregation of the different datasets.

2. Methods

2.1. Participants and Procedures

From 2005 to 2009, a total of 244 schools participated in the Project P.A.T.H.S., with 669 school participations at the secondary 1 level, 443 at the secondary 2 level, and 215 at the secondary 3 level accumulated across the nine datasets (Table 1). Altogether, 223,101 students participated in the Tier 1 Program in these 5 years. Across the three grades, the mean number of students per school was 167.28 (range: 5–280), with an average of 4.61 classes per school (range: 1–8). Among them, 46.27% of the respondent schools adopted the full program (i.e., the 20 h program involving 40 units), whereas 53.73% adopted the core program (i.e., the 10 h program involving 20 units). The mean number of sessions used to implement the program was 22.77 (range: 3–66). While 51.54% of the respondent schools incorporated the program into the formal curriculum (e.g., liberal studies, life education), 48.46% used other modes (e.g., form teachers' periods and other combinations) to implement the program.
Table 1

Description of data characteristics from 2005 to 2009.

 | S1 2005/06 EIP | S1 2006/07 FIP | S1 2007/08 FIP | S1 2008/09 FIP | S2 2006/07 EIP | S2 2007/08 FIP | S2 2008/09 FIP | S3 2007/08 EIP | S3 2008/09 FIP
Total schools that joined P.A.T.H.S. | 52 | 207 | 213 | 197 | 49 | 196 | 198 | 48 | 167
(i) 10 h program | 23 | 95 | 108 | 104 | 27 | 113 | 110 | 29 | 104
(ii) 20 h program | 29 | 112 | 105 | 93 | 22 | 83 | 88 | 19 | 63

Tier 1 Program:
Mean no. of sessions of program implementation | 17.75 (3–50) | 23.55 (2–50) | 23.61 (5–60) | 23.54 (5–65) | 23.76 (10–40) | 22.81 (7–60) | 23.04 (4–48) | 24.07 (10–44) | 22.78 (7–66)
No. of schools incorporated into formal curriculum | 21 | 101 | 116 | 98 | 26 | 108 | 99 | 30 | 85
No. of schools incorporated into other modes | 31 | 106 | 97 | 99 | 23 | 88 | 99 | 18 | 82
Mean no. of classes per school | 4.58 (2–7) | 4.66 (1–8) | 4.69 (1–8) | 4.56 (1–8) | 4.51 (1–7) | 4.62 (1–8) | 4.64 (1–8) | 4.56 (1–8) | 4.67 (1–8)
Total no. of students | 8,679 | 35,735 | 36,343 | 31,280 | 8,167 | 33,449 | 33,583 | 7,708 | 28,157
Mean no. of students per school | 166.90 (37–240) | 172.63 (17–280) | 171.05 (16–267) | 158.78 (5–251) | 166.67 (32–240) | 170.66 (12–280) | 169.61 (15–263) | 160.58 (26–240) | 168.60 (28–240)
Total no. of student respondents | 8,057 | 33,693 | 33,867 | 29,100 | 7,406 | 30,731 | 31,197 | 6,830 | 25,432
Mean no. of student respondents per school | 154.94 (37–212) | 162.77 (15–265) | 159.00 (14–267) | 147.72 (3–251) | 151.14 (32–220) | 156.80 (12–243) | 157.56 (15–263) | 142.29 (23–213) | 152.29 (22–229)

Note: S1: secondary 1 level; S2: secondary 2 level; S3: secondary 3 level; EIP: Experimental Implementation Phase, FIP: Full Implementation Phase.

After completing the Tier 1 Program, the students were invited to respond to a Subjective Outcome Evaluation Form for Students (Form A) developed by the first author. From 2005 to 2009, a total of 206,313 questionnaires were completed (104,717 for the secondary 1 level, 69,334 for the secondary 2 level, and 32,262 for the secondary 3 level). The overall response rate was 92.48%. To facilitate the program evaluation, the research team developed an evaluation manual with standardized instructions for collecting the subjective outcome evaluation data. In addition, adequate training was provided to the implementers during the 20 h training workshops on how to collect and analyze the data collected by Form A. On the day when the evaluation data were collected, the purpose of the evaluation was mentioned and the confidentiality of the data collected was repeatedly emphasized to all of the respondents. The respondents were asked to indicate if they did not want to respond to the evaluation questionnaire (i.e., “passive” informed consent was obtained). All respondents responded to all scales in the evaluation form in a self-administration format. Adequate time was provided for the respondents to complete the questionnaire.

2.2. Instruments

The Subjective Outcome Evaluation Form (Form A) [11-13] was used to measure the program participants' perceptions of the Tier 1 Program. Broadly speaking, the evaluation form has the following parts:
(a) participants' perceptions of the program, such as program objectives, design, classroom atmosphere, interaction among the students, and the respondents' participation during class (10 items);
(b) participants' perceptions of the workers, such as the preparation of the instructor, professional attitude, involvement, and interaction with the students (10 items);
(c) participants' perceptions of the effectiveness of the program, such as promotion of different psychosocial competencies, resilience, and overall personal development (16 items);
(d) the extent to which the participants would recommend the program to other people with similar needs (1 item);
(e) the extent to which the participants would join similar programs in the future (1 item);
(f) overall satisfaction with the program (1 item);
(g) things that the participants learned from the program (open-ended question);
(h) things that the participants appreciated most (open-ended question);
(i) opinion about the instructor(s) (open-ended question);
(j) areas that require improvement (open-ended question).
For the quantitative data, the implementers collecting the data were requested to input the data into an EXCEL file developed by the research team that automatically computed the frequencies and percentages associated with the different ratings for each item. When the schools submitted the reports, they were also requested to submit a soft copy of the consolidated datasheets. After the funding body received the consolidated data, the research team aggregated the data to “reconstruct” the overall profile based on the subjective outcome evaluation data.
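The tallying performed by the consolidated datasheet (frequencies and percentages per rating option for each item) can be sketched as follows; the function name and sample ratings are illustrative, not the research team's actual EXCEL template:

```python
from collections import Counter

def rating_profile(ratings, options=range(1, 7)):
    """Frequency and percentage of each rating option (1-6) for one item,
    mirroring what the consolidated datasheet computes per school."""
    counts = Counter(ratings)
    n = len(ratings)
    return {opt: (counts[opt], round(100 * counts[opt] / n, 2)) for opt in options}

# A school's report would then show, e.g., the share of positive
# responses (options 4-6) for this item.
profile = rating_profile([4, 5, 6, 6, 3, 4, 5, 2])
positive_pct = sum(pct for opt, (cnt, pct) in profile.items() if opt >= 4)
```

Summing the percentages for options 4-6 in this way yields the "respondents with positive responses" figures reported in Tables 2-5.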

2.3. Data Analyses

Percentage findings were examined using descriptive statistics. A composite measure of each domain (i.e., perceived qualities of program content, perceived qualities of program implementers, and perceived program effectiveness) was created by dividing the total score of each domain by the number of items. Pearson correlation analysis was used to examine whether program content and program implementers were related to program effectiveness. Multiple regression analysis was performed to examine which factors predicted program effectiveness. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 17.0.
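As an illustrative sketch of these analyses (composite scores, Pearson correlation, and multiple regression with standardized coefficients), the following uses simulated school-level data; all numbers are invented, and the study itself used SPSS 17.0 rather than this code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated school-level composites (schools as units of analysis);
# the values and relationships are illustrative only.
n_schools = 244
content = rng.normal(4.26, 0.31, n_schools)                  # program content
implementers = content + rng.normal(0.33, 0.10, n_schools)   # program implementers
effectiveness = 0.6 * content + 0.2 * implementers + rng.normal(0, 0.10, n_schools)

def composite(item_scores):
    """Composite measure of a domain: total score divided by number of items."""
    item_scores = np.asarray(item_scores, dtype=float)
    return item_scores.sum() / item_scores.size

# Pearson correlation between program content and perceived effectiveness.
r = np.corrcoef(content, effectiveness)[0, 1]

# Multiple regression on standardized variables yields standardized betas.
def zscore(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([zscore(content), zscore(implementers)])
y = zscore(effectiveness)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
r_squared = 1.0 - ((y - X @ betas) ** 2).sum() / (y ** 2).sum()
```

Because both predictors and the outcome are standardized, the least-squares coefficients are directly comparable standardized betas, as in Table 8.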

3. Results

The quantitative findings based on the closed-ended questions are presented in this paper. Several observations can be highlighted from the findings. First, the participants generally had positive perceptions of the program (Table 2), including clear objectives of the curriculum (83.50%), much interaction among students (81.84%), and well-planned teaching activities (81.76%). Second, a high proportion of the participants had a positive evaluation of implementers' performance (Table 3). For example, the participants thought that the implementers were very involved (88.63%), ready to help them when needed (88.22%), and encouraged them to participate in activities (88.12%). Third, as shown in Table 4, many participants perceived that the program promoted their development, including moral competence (84.05%), compassion for others (81.75%), social competence (82.11%), and overall development (83.24%). Fourth, 77% of the participants would recommend the program to students with similar needs. Fifth, 65.37% of the participants expressed that they would participate in similar courses again in the future. Finally, 84.48% of the respondents were satisfied with the program on the whole (Table 5).
Table 2

Summary of the students' perception towards the program.

Respondents with positive responses (options 4–6)

Item | S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
(1) The objectives of the curriculum were very clear. | 87,337 | 83.96 | 56,778 | 82.43 | 26,979 | 84.11 | 171,094 | 83.50
(2) The design of the curriculum was very good. | 83,446 | 80.30 | 53,948 | 78.41 | 25,821 | 80.55 | 163,215 | 79.75
(3) The activities were carefully planned. | 84,793 | 81.75 | 55,532 | 80.83 | 26,465 | 82.70 | 166,790 | 81.76
(4) The classroom atmosphere was very pleasant. | 81,986 | 79.18 | 54,047 | 78.79 | 26,137 | 81.76 | 162,170 | 79.91
(5) There was much peer interaction among the students. | 83,730 | 81.21 | 55,507 | 81.16 | 26,486 | 83.15 | 165,723 | 81.84
(6) Students participated actively during lessons (including discussions, sharing, games, etc.). | 84,124 | 81.08 | 54,932 | 79.97 | 25,896 | 80.91 | 164,952 | 80.65
(7) The program had a strong and sound theoretical support. | 79,513 | 76.69 | 52,063 | 75.78 | 25,018 | 78.17 | 156,594 | 76.88
(8) The teaching experience I encountered enhanced my interest in the course. | 79,692 | 77.11 | 51,635 | 75.35 | 24,872 | 77.88 | 156,199 | 76.78
(9) Overall speaking, I have a very positive evaluation of the program. | 78,676 | 75.96 | 51,580 | 75.13 | 25,049 | 78.33 | 155,305 | 76.47
(10) On the whole, I like this curriculum very much. | 79,811 | 77.27 | 51,527 | 75.19 | 24,944 | 78.13 | 156,282 | 76.86

Note: all items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, 6 = strongly agree. Only respondents with positive responses (options 4–6) are shown in the table.

Table 3

Summary of the students' perception towards the performance of program implementers.

Respondents with positive responses (options 4–6)

Item | S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
(1) The instructor(s) had a good mastery of the curriculum. | 89,359 | 86.21 | 58,707 | 85.52 | 28,035 | 87.49 | 176,101 | 86.41
(2) The instructor(s) was well prepared for the lessons. | 91,324 | 88.18 | 59,819 | 87.19 | 28,313 | 88.36 | 179,456 | 87.91
(3) The instructor(s)' teaching skills were good. | 89,201 | 86.33 | 57,929 | 84.64 | 27,734 | 86.66 | 174,864 | 85.88
(4) The instructor(s) showed good professional attitudes. | 90,771 | 87.79 | 59,356 | 86.63 | 28,179 | 87.99 | 178,306 | 87.47
(5) The instructor(s) was very involved. | 91,902 | 88.85 | 60,149 | 87.80 | 28,558 | 89.25 | 180,609 | 88.63
(6) The instructor(s) encouraged students to participate in the activities. | 91,453 | 88.49 | 59,791 | 87.26 | 28,350 | 88.60 | 179,594 | 88.12
(7) The instructor(s) cared for the students. | 89,526 | 86.59 | 58,496 | 85.34 | 27,864 | 87.08 | 175,886 | 86.34
(8) The instructor(s) was ready to offer help to students when needed. | 91,220 | 88.25 | 59,903 | 87.47 | 28,467 | 88.93 | 179,590 | 88.22
(9) The instructor(s) had much interaction with the students. | 87,310 | 84.41 | 57,329 | 83.64 | 27,562 | 86.07 | 172,201 | 84.71
(10) Overall speaking, I have very positive evaluation of the instructors. | 91,458 | 88.24 | 59,992 | 87.43 | 28,511 | 88.99 | 179,961 | 88.22

Note: all items are on a 6-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree, 5 = agree, 6 = strongly agree. Only respondents with positive responses (options 4–6) are shown in the table.

Table 4

Summary of the students' perception towards the program effectiveness.

Respondents with positive responses (options 3–5)

The extent to which the course (i.e., the program that all students have joined) has helped you:

Item | S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
(1) It has strengthened my bonding with teachers, classmates, and my family. | 80,951 | 77.97 | 52,227 | 76.04 | 25,008 | 78.28 | 158,186 | 77.43
(2) It has strengthened my resilience in adverse conditions. | 83,598 | 80.59 | 53,837 | 78.43 | 25,707 | 80.53 | 163,142 | 79.85
(3) It has enhanced my social competence. | 85,847 | 82.89 | 55,517 | 81.02 | 26,272 | 82.43 | 167,636 | 82.11
(4) It has improved my ability in handling and expressing my emotions. | 85,024 | 82.11 | 54,974 | 80.24 | 26,026 | 81.69 | 166,024 | 81.35
(5) It has enhanced my cognitive competence. | 84,679 | 81.80 | 54,765 | 79.93 | 25,952 | 81.41 | 165,396 | 81.05
(6) My ability to resist harmful influences has been improved. | 86,182 | 83.30 | 55,872 | 81.52 | 26,387 | 82.75 | 168,441 | 82.52
(7) It has strengthened my ability to distinguish between the good and the bad. | 87,909 | 84.94 | 56,851 | 83.02 | 26,809 | 84.18 | 171,569 | 84.05
(8) It has increased my competence in making sensible and wise choices. | 86,504 | 83.61 | 56,168 | 82.02 | 26,444 | 83.02 | 169,116 | 82.88
(9) It has helped me to have life reflections. | 83,686 | 80.84 | 54,753 | 79.94 | 26,111 | 81.96 | 164,550 | 80.91
(10) It has reinforced my self-confidence. | 82,632 | 79.88 | 53,058 | 77.49 | 25,093 | 78.77 | 160,783 | 78.71
(11) It has increased my self-awareness. | 84,337 | 81.54 | 54,135 | 79.03 | 25,813 | 80.99 | 164,285 | 80.52
(12) It has helped me to face the future with a positive attitude. | 84,703 | 81.92 | 54,804 | 80.06 | 26,135 | 82.02 | 165,642 | 81.33
(13) It has helped me to cultivate compassion and care about others. | 84,892 | 82.06 | 55,279 | 80.73 | 26,252 | 82.45 | 166,423 | 81.75
(14) It has encouraged me to care about the community. | 82,269 | 79.58 | 53,431 | 78.02 | 25,276 | 79.73 | 160,976 | 79.11
(15) It has promoted my sense of responsibility in serving the society. | 83,747 | 80.93 | 54,230 | 79.15 | 25,580 | 80.57 | 163,557 | 80.22
(16) It has enriched my overall development. | 86,743 | 83.80 | 56,245 | 82.12 | 26,596 | 83.81 | 169,584 | 83.24

Note: all items are on a 5-point Likert scale with 1 = unhelpful, 2 = not very helpful, 3 = slightly helpful, 4 = helpful, 5 = very helpful. Only respondents with positive responses (options 3–5) are shown in the table.

Reliability analysis with the schools as the units of analysis showed that Form A was internally consistent (Table 6): 10 items related to the program (α = 0.98), 10 items related to the implementers (α = 0.99), 16 items related to the benefits (α = 1.00), and 36 items measuring overall program effectiveness (α = 0.99). Results of correlation analyses showed that both program content (r = 0.85, P < 0.01) and program implementers (r = 0.74, P < 0.01) were strongly associated with program effectiveness (Table 7).
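For reference, the Cronbach's alpha used in this reliability analysis can be computed directly from an observations-by-items score matrix; this sketch uses made-up data and the standard formula, not the study's SPSS output:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observations x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Sanity check with simulated data: ten perfectly parallel items
# (every item identical across 50 "schools") give alpha = 1.0.
rng = np.random.default_rng(0)
base = rng.normal(4.26, 0.31, size=(50, 1))
parallel_items = np.repeat(base, 10, axis=1)
```

The near-1.00 alphas in Table 6 similarly reflect items within each domain that vary almost in lockstep across schools.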
Table 6

Means, standard deviations, Cronbach's alphas, and mean of interitem correlations among the variables by grade.

Variable | S1 M (SD) | S1 α (mean#) | S2 M (SD) | S2 α (mean#) | S3 M (SD) | S3 α (mean#) | Overall M (SD) | Overall α (mean#)
Program content (10 items) | 4.28 (0.29) | 0.98 (0.85) | 4.22 (0.32) | 0.99 (0.89) | 4.26 (0.31) | 0.99 (0.87) | 4.26 (0.31) | 0.98 (0.87)
Program implementers (10 items) | 4.62 (0.30) | 0.99 (0.93) | 4.54 (0.31) | 1.00 (0.95) | 4.58 (0.32) | 1.00 (0.95) | 4.59 (0.31) | 0.99 (0.94)
Program effectiveness (16 items) | 3.41 (0.26) | 1.00 (0.94) | 3.31 (0.28) | 1.00 (0.95) | 3.33 (0.29) | 1.00 (0.95) | 3.36 (0.28) | 1.00 (0.94)
Total effectiveness (36 items) | 3.99 (0.26) | 0.99 (0.80) | 3.91 (0.28) | 0.99 (0.83) | 3.94 (0.28) | 0.99 (0.82) | 3.95 (0.28) | 0.99 (0.82)

# Mean interitem correlations.

Table 7

Correlation coefficients among the variables.

Variable | 1 | 2 | 3
(1) Program content (10 items) | — | |
(2) Program implementers (10 items) | 0.91** | — |
(3) Program effectiveness (16 items) | 0.85** | 0.74** | —

**P < 0.01.

Table 8 presents the multiple regression analysis results. More positive views of the program and the program implementers were associated with higher perceived program effectiveness (P < 0.01). Further analyses showed that program content (β = 0.75) was a significantly stronger predictor than program implementers (β = 0.24). The model explained 95% of the variance in program effectiveness. Interestingly, these relationships and the amount of variance explained were consistent across grade levels.
Table 8

Multiple regression analyses predicting program effectiveness.

Group | Program content β a | Program implementers β a | R | R2
S1 | 0.75** | 0.24** | 0.97 | 0.94
S2 | 0.78** | 0.21** | 0.98 | 0.95
S3 | 0.80** | 0.18** | 0.97 | 0.94
Overall | 0.75** | 0.24** | 0.97 | 0.95

a Standardized coefficients.

**P < 0.01.

4. Discussion

The purpose of this study was to evaluate the Tier 1 Program of the Project P.A.T.H.S. via the subjective outcome evaluation approach based on the perspective of the program participants, using the data collected in the experimental and full implementation phases (2005–2009) of the project. There are several characteristics of this study. First, a large sample of schools (more than 200 schools per grade) and students (n = 206,313) was involved. Second, different datasets collected at different points in time were analyzed. Third, responses of students in different grades were collected. Fourth, this is the first known scientific study of the subjective outcome evaluation of a positive youth development program based on different cohorts in China. Finally, this is also the first study of subjective outcome evaluation based on such a large sample of participants in the global context.

Generally speaking, the quantitative findings showed that a high proportion of the respondents had positive perceptions of the program and the workers; roughly four-fifths of the respondents regarded the program as helpful to them. The findings basically replicated those reported previously based on the perspective of the program participants, and they are also consistent with those based on the perspective of the program implementers. In fact, an examination of the percentages of responses to different items revealed that the figures were very similar across different studies. In conjunction with findings based on other evaluation strategies, the present integrative evaluation study showed that the Tier 1 Program of the Project P.A.T.H.S. was well received by the program participants, and over four-fifths of them were of the view that the program was beneficial to their development. There are several contributions of the present study.
First, in view of the lack of positive youth development programs and related evaluation findings in the Chinese context, the present study is a pioneering study. Besides showing that the Project P.A.T.H.S. is effective, it also demonstrates how subjective outcome evaluation based on a large sample can be carried out. Second, the findings show that the subjective outcome measure is reliable. Because there are few validated measures in Chinese culture [14, 15], the present study contributes to the assessment literature on psychosocial measures in the Chinese context. Finally, findings on the predictors of subjective outcome evaluation are important because there are currently few conceptual models of the determinants of subjective outcomes. There has been some discussion in the literature on how the quality of a program can be enhanced by tailoring an appropriate program to suit the values and needs of target populations [16, 17]. For example, using the Youth Program Quality Assessment (YPQA) instrument, researchers found that the effect of program delivery qualities varied with the students' ages [18]. In addition, the positive youth-oriented approach was found to be more beneficial for high-school students, while staff-oriented pedagogy was more appropriate for elementary school students [17, 18]. Unfortunately, although program components and their interactions with individual factors are important determinants of the effectiveness of youth programs, very few studies have examined the effect of different program components on perceived program effectiveness, especially in the Chinese context. The present findings fill an important gap in the formulation of theoretical models of the determinants of subjective outcome evaluation of positive youth development programs.
Although utilization of subjective outcome evaluation or the client satisfaction approach in evaluation has a long history in human services, there are arguments against its use. For example, subjective outcome evaluation has been criticized as biased and unable to reflect real behavioral changes in the program participants [19, 20]. Nevertheless, several features of this study may be used to argue against such criticisms. First, a very large sample was used, with 206,313 students in roughly half of the secondary schools in Hong Kong. Such a large sample substantially enhances the generalizability of the research findings and their credibility. Second, different aspects of subjective outcome, including views on the program, the worker, perceived effectiveness, and overall satisfaction, were covered in the study. The present findings also showed that the Form A rating items were reliable with reference to the sections and the whole scale. According to Royse [10], the lack of standardized assessment tools for conducting a client satisfaction survey also introduces biases for the client satisfaction approach. As such, he recommended the use of an assessment tool with known reliability and validity that would “eliminate many of the problems found in hastily designed questionnaires” (p. 265). Third, because the findings reported in this paper were “reconstructed” based on the reports submitted anonymously by the participating schools, the possibility that the students reported in an over-cooperative manner was not high. Finally, previous research findings based on the same project have shown that subjective outcome evaluation findings actually converged with objective outcome evaluation findings [21, 22]. In view of the lack of research data in this area, such studies point to the value of collecting subjective outcome evaluation data.
Of course, the use of schools as the units of analysis might mask individual differences. However, in view of the large number of schools involved, this is not a particularly acute problem. Despite these limitations, the present findings suggest that the Tier 1 Program of the Project P.A.T.H.S. and its implementation were perceived in a positive manner by the program participants. In conjunction with other evaluation findings, the present study suggests that the Tier 1 Program of the Project P.A.T.H.S. was perceived to be beneficial to the development of the program participants. With reference to the gradual decline of parental control in the early teenage years of Chinese adolescents in Hong Kong, positive youth development programs such as the Project P.A.T.H.S. are important initiatives to promote psychosocial competencies in Chinese adolescents in Hong Kong [23]. Furthermore, although subjective outcome evaluation is a popular approach used in human services in Western contexts [24-28], there are comparatively few published studies in Chinese contexts, particularly in the area of positive youth development. As such, the present integrative study and the related studies can be regarded as groundbreaking in the field of positive youth development in different Chinese contexts.

Table 5

(a) If your friends have needs and conditions similar to yours, will you suggest him/her to join this course?

Respondents with positive responses (options 3-4)

S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
82,177 | 79.86 | 51,261 | 75.20 | 24,078 | 75.94 | 157,516 | 77.00

Note: The item is on a 4-point Likert scale with 1 = definitely will not suggest, 2 = will not suggest, 3 = will suggest, 4 = definitely will suggest. Only respondents with positive responses (options 3-4) are shown in the table.

(b) Will you participate in similar courses again in the future?

Respondents with positive responses (options 3-4)

S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
70,007 | 68.05 | 43,382 | 63.70 | 20,392 | 64.35 | 133,781 | 65.37

Note: The item is on a 4-point Likert scale with 1 = definitely will not participate, 2 = will not participate, 3 = will participate, 4 = definitely will participate. Only respondents with positive responses (options 3-4) are shown in the table.

(c) On the whole, are you satisfied with this course?

Respondents with positive responses (options 4–6)

S1 n | S1 % | S2 n | S2 % | S3 n | S3 % | Overall n | Overall %
87,596 | 85.19 | 56,692 | 83.21 | 26,975 | 85.04 | 171,263 | 84.48

Note: all items are on a 6-point Likert scale with 1 = very dissatisfied, 2 = moderately dissatisfied, 3 = slightly dissatisfied, 4 = slightly satisfied, 5 = moderately satisfied, 6 = very satisfied. Only respondents with positive responses (options 4–6) are shown in the table.
