
How are public engagement health festivals evaluated? A systematic review with narrative synthesis.

Susannah Martin, Charlotte Chamberlain, Alison Rivett, Lucy E Selman.

Abstract

The evaluation of public engagement health festivals is of growing importance, but there has been no synthesis of its practice to date. We conducted a systematic review of evidence from the evaluation of health-related public engagement festivals published since 2000 to inform future evaluation. Primary study quality was assessed using the Mixed Methods Appraisal Tool. Extracted data were integrated using narrative synthesis, with evaluation methods compared with the Queen Mary University of London public engagement evaluation toolkit. 407 database records were screened; eight studies of varied methodological quality met the inclusion criteria. Evaluations frequently used questionnaires to collect mixed-methods data. Higher quality studies had specific evaluation aims, used a wider variety of evaluation methods and had independent evaluation teams. Evaluation sample profiles were often gender-biased and not ethnically representative. Patient involvement in event delivery supported learning and engagement. These findings and recommendations can help improve future evaluations. (Research Registry ID reviewregistry1021).


Year: 2022 | PMID: 35998157 | PMCID: PMC9398006 | DOI: 10.1371/journal.pone.0267158

Source DB: PubMed | Journal: PLoS One | ISSN: 1932-6203 | Impact factor: 3.752


Introduction

Engagement and collaboration with the public are increasingly recognised as a core aspect of all research, particularly health-related research [1, 2]. Reasons for such engagement include conversing with the public about research to raise awareness and trust; conducting citizen science; using two-way dialogue to inform and improve research; disseminating research results and sharing knowledge; and influencing policy [3, 4]. There are many overlaps between public engagement (PE) and the long-standing practice of Patient & Public Involvement (PPI) in medical and healthcare research. Commonly accepted definitions of these terms are given below, as set out by leading organisations in these two fields: the National Coordinating Centre for Public Engagement (NCCPE) and the National Institute for Health Research (NIHR), respectively.

Public engagement

The myriad of ways in which the activity and benefits of higher education and research can be shared with the public. Engagement is by definition a two-way process, involving interaction and listening, with the goal of generating mutual benefit [5].

Patient & Public Involvement

Research being carried out ‘with’ or ‘by’ members of the public rather than ‘to’, ‘about’ or ‘for’ them. It is an active partnership between patients, carers and members of the public with researchers that influences and shapes research [6].

The definitions above demonstrate that whilst PPI is a relatively tightly defined concept as understood by healthcare practitioners and researchers, PE is a much more amorphous term [7, 8], encompassing many ways of engaging with the public and not necessarily only about research. PE that involves engagement with specific research projects or research-related matters (e.g. research ethics), rather than engagement around a wider subject area or topic, is sometimes specifically referred to as ‘Public Engagement with Research’ (PER); this part of the engagement spectrum overlaps most with PPI. Whereas PPI is generally a formally defined process within a healthcare research project, PE activities are often more informal, sometimes ad hoc, and can be delivered in a multitude of ways for a wide variety of audiences [9, 10]. If the precise meaning of engagement is vague, then a catch-all definition of ‘the public’ is even harder to pin down [11, 12]. What is commonly accepted in the PE sphere is that ‘the public’ should never be considered a single entity, but a multi-dimensional spectrum of people with widely varying levels of expertise, lived experience, interests and opinions [13, 14]. It is critical that any PE activity is tailored to the specific audience it is aimed at, and perhaps even co-developed with that group of people. In the context of this paper, Facer’s (2020) understanding of ‘publics’ as “gatherings of people, things, objects and ideas convened around a matter of concern” is helpful [15].

PPI and PE both play an important role in research related to human health. The UK’s National Health Service (NHS) and the USA’s Institute of Medicine support the co-production of healthcare plans with patients, increasing patient control over their health and emphasising disease prevention [16, 17]. People may therefore, more than ever, have reason to seek out and engage with health-related research. While relatively few members of the public have the opportunity to take part in a formal PPI process, public engagement opportunities might more readily present themselves.

Science festivals are one increasingly popular format for communication of, and public engagement with, health research [18]. Such festivals offer audiences a time-limited opportunity to engage directly with scientists and research [18, 19], but vary in their budget, venues, activity format, size and theme. With the proliferation of PE activity comes the need to understand how specific types of PE, such as festivals, work, who they work for and why [18]. Good quality evaluation of science and health-related festivals, with reflection on and learning from current evaluation practice, is therefore essential [19-21]. A previous review of science festival evaluation by Peterman and colleagues [21] examined the methods and results reported in published science festival evaluations and research. Their review examined the literature from an expert standpoint within the context of visitor studies and informal science learning; however, it did not use systematic review methods, included only evaluations published after 2011, and excluded studies of individual activities within festivals.
Attendees of health-related PE events are likely to include patients and users of health services, including families and informal carers, as well as health and social care professionals. Given the needs of this audience and the potential demand for and interest in health-related science festivals, understanding best practice in the evaluation of these events is crucial. However, there are no published syntheses of evidence in this area. While guidance is available for researchers evaluating a PE event [22, 23], PE evaluation efforts have been criticised for poor design, execution and interpretation [20], for example, the use of a restricted range of evaluation methods [21] and the use of evaluation as a token activity to justify funding [24].

The Queen Mary University of London (QMUL) public engagement evaluation toolkit [25, 26] was developed as an open-access, pragmatic, generic toolkit applicable to diverse forms of academic PE, and has been proposed as a “common ‘evaluation standard’” [27]. The toolkit gives practical advice about evaluation methods, the adoption of which, the authors suggest, could result in more consistent and higher quality PE evaluations, offering valuable data about the impact and value of health-related engagement activities at festivals [28]. We chose the QMUL toolkit as an appropriate comparator for this review because it is familiar to health researchers [29] and is applicable to a wide variety of engagement activities, including those evaluated in the studies included in this review, which utilise multiple different PE approaches and frameworks.

In this review we aimed to comprehensively synthesise the evidence from evaluations of health-related PE festivals. Our primary research question was: What methods and outcomes are reported in published evaluations of health-related public engagement festivals? Our secondary question was: How do the evaluation methods used in these reports compare to those outlined in the QMUL public engagement toolkit [25, 26]?

Methods

We conducted a systematic review with narrative synthesis [30] to comprehensively describe and synthesise the methods and outcomes of health-related PE festival evaluations. The protocol for the review was registered prospectively on Research Registry (ID reviewregistry1021; S1 File). No amendments were made to the protocol.

Search strategy

The following databases were searched on 28/12/2020: MEDLINE, Embase, and CINAHL (all via OvidSP) and Web of Science—core collection, with the search restricted to publications since 1 January 2000. Literature scoping and discussion with a subject librarian helped to inform the choice of databases and the search strategy. The search strategy was adapted for each database by combining the same groups of search terms, namely, “public engagement”, engagement type (i.e. “festival” or “event”) and topic (i.e. “science”, “research” or “health”). Search strings for each database can be found in S1 Table.
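For illustration only, a hypothetical Ovid-style search might combine the three groups of terms as sketched below. This is a sketch of the structure described above, not the strings actually used, which are reported in S1 Table:

    1. "public engagement".ti,ab.
    2. (festival* or event*).ti,ab.
    3. (science or research or health).ti,ab.
    4. 1 and 2 and 3
    5. limit 4 to yr="2000 -Current"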

Inclusion and exclusion criteria

Inclusion and exclusion criteria were established a priori. To be included in the synthesis, studies had to: self-identify the evaluated event as a ‘festival’; state public engagement, i.e. two-way dialogue with the public [3], as one of the festival aims; provide evaluation data on adults; be a single- or multi-year festival; be on a human health-related topic; and be an arts, culture or science festival which had an identified health-related theme or activity, with evaluation of the health-related element. We included studies where festival audiences were members of the general public, i.e. non-specialists not in academia or teaching.

The following definitions of ‘public engagement’ and ‘festival’ were developed for this review to support application of the inclusion criteria:

‘Public Engagement’: Two-way dialogue between health-related researchers (including social scientists) or PE practitioners and members of the general public [3]. We focus here on engagement in relation to health-related research or a health topic, including medicine and applied health.

‘Festival’: A live event which engages the public in health-related science or a health-related topic. The event had to be transient, provide a brief and concentrated focus on the topic, and take place in a specific place or region.

Studies were excluded if they: (1) used festivals to recruit participants for research, policy or service planning or prioritisation; (2) implemented the festival primarily as a health intervention (i.e. to bring about a change in health-related behaviour); (3) were published before 2000; (4) evaluated festivals with no health-related science or research remit/not on a health topic; or (5) evaluated PE events which did not fit our definition of ‘festival’. Searches were limited to English-language reports of empirical studies published since the year 2000, since most PE festivals have emerged in the last twenty years [18].

We chose to include only reports of studies where the audience included adult participants, to ensure the festivals and their evaluations were comparable. Evaluation of the PE impact on children is often mediated by adults (e.g. teachers and parents), and child-focused PE uses different delivery formats, purposes, venues and times compared to adult-orientated PE events [21]. Studies of mixed populations of families/children and adults were included if the adult data could be extracted for the synthesis. Festivals which evaluated the impact only on children or on student and teacher participants were excluded.

Study selection

Records were managed and deduplicated in EndNote [31]. Titles and abstracts of retrieved records were screened for eligibility (SM), with 2% independently assessed by a second reviewer (LS/CC). Full text screening for study inclusion was undertaken by SM, with a random 20% sample screened by LS and CC. Citation tracking and hand searching of the reference lists of included papers was undertaken to identify any further eligible papers (SM). LS and CC independently reviewed 10% of the data extraction (performed by SM) to check for refinement or omission of data, and 10% of the quality assessment. Where there was uncertainty over study eligibility, data extraction or quality rating, this was discussed between the three researchers to reach consensus.

Data collection

A data extraction table was developed and piloted during the screening process. Data were extracted under the following headings: First author’s name, report title, year of publication, location, name of festival/ event, aim of festival, aim of the evaluation, evaluation methods, evaluation outcomes, evaluation conclusions, researcher relationship to the festival, internal or independent evaluators, sample size/ response rate and total festival/ audience size. Data were also extracted specifically for appraisal against the QMUL toolkit under the headings of design, delivery and impact [25, 26] and the additional QMUL toolkit subheadings (S2 Table).

Quality appraisal

A validated critical appraisal tool, the Mixed Methods Appraisal Tool (MMAT) Version 2018 [32] was used to assess the quality of included studies. The MMAT allows for methodological quality appraisal of qualitative, quantitative, mixed-methods, randomised controlled and non-randomised studies. As recommended in the MMAT user guide, studies were not excluded based on their quality. However, the narrative synthesis reflects and includes discussion on the quality of the included studies.

Data analysis

A narrative synthesis of the collected data was carried out following the framework stages proposed by Popay, Roberts, Sowden et al. [30]. Narrative synthesis was selected a priori because the studies identified during literature scoping included a range of designs and aims, and were insufficiently similar for meta-analysis or meta-ethnography [33]. The framework stages used in this review were [30]: developing a preliminary synthesis; exploring relationships in the data; and assessing the robustness of the synthesis product. Comparison with the QMUL toolkit further refined the appraisal and synthesis of the included studies and informed recommendations.

For the preliminary synthesis, we tabulated and grouped evaluation methods and outcomes. Evaluation outcomes which were conceptually similar were grouped, and data were cross-tabulated based on recurring data, potential moderating factors and factors implicated by the existing literature, e.g. study methodology, demographics and sample size [4, 19]. This cross-tabulation and concept mapping enabled visual representation and exploration of the data and the relationships within it [33]. The strength of the evidence was examined using the quality appraisal data and consideration of bias in the included studies. Summaries and conclusions were drawn from this data interrogation. SM led the synthesis, with regular meetings with LS and CC to review preliminary findings and patterns in the data.
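To illustrate the cross-tabulation step, the minimal sketch below shows how extracted study attributes could be cross-tabulated in Python/pandas. This is a hypothetical example (the attribute values are invented); the review does not report using code for this step.

    import pandas as pd

    # Hypothetical extracted study attributes (illustrative values only,
    # not the review's actual data).
    studies = pd.DataFrame({
        "study": ["#1", "#5", "#7"],
        "evaluation_method": ["evaluation form", "feedback form", "feedback form"],
        "mmat_quality": ["good", "low", "low"],
    })

    # Cross-tabulate evaluation method against appraised quality to look
    # for patterns, e.g. whether certain methods cluster with quality ratings.
    print(pd.crosstab(studies["evaluation_method"], studies["mmat_quality"]))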

Results

Database searches identified 407 records after deduplication, with one further reference identified through hand-searching the reference lists of included studies and relevant reviews (Fig 1) [34]. Eight studies met the inclusion criteria [35-42].
Fig 1

PRISMA flow diagram.

Key study characteristics are described in Table 1. Six of the eight included studies were published between 2015 and 2020 [36–39, 41, 42]. Seven of the eight studies used mixed-methods research [35–40, 42], with one study using quantitative methods alone [41].
Table 1

Summary of included studies.

Study #1: Brooks, 2019
Title: “Evaluating the acceptability of a co-produced and co-delivered mental health public engagement festival: Mental Health Matters, Jakarta, Indonesia”
Location: Jakarta, Indonesia
Audience size: 737 attendees over the 6-day festival.
Aim of festival/event: “1. To improve knowledge of mental health amongst attendees through a co-designed and co-delivered mental health festival. 2. To strengthen relationships between community organisations, health services and higher education institutes and explore the potential for future festivals. 3. To promote future engagement in mental health research.”
Evaluation aim: “1. To explore the impact of the festival on knowledge/understanding of mental health and future behavioural intentions. 2. To develop understanding on the acceptability of undertaking mental health festivals in Indonesia to raise awareness of mental health.”
Evaluation methods and researcher relationship: Mixed methods: post-event structured and unstructured evaluation forms. Researchers separate to the festival organising committee.
Evaluation sample size (response rate): Forms distributed to all attendees; 324 attendees completed forms (43.9% response rate).
Outcomes assessed: QUANT: Design: marketing/advertising (how did you hear about the event); acceptability (why did they attend). Delivery: demographics (age, gender, role); experience; which events did you attend; quality of the festival events and of the festival overall. Values: relevance to life/times. Emotional engagement: did you feel moved or inspired/engaged in the experience. Understanding and knowledge: increased understanding of the topic. Attitude: want to find out more; exposed to new ways of thinking; intention to be involved in research. QUAL: Values: importance/relevance. Understanding: enhanced understanding via speakers, venue, community and service users (credibility). Emotional engagement: with community and experts by experience. Engagement: via arts-based activities/films. Experience: improvement, size of venue and event, publicity, event duration and frequency.
Evaluation conclusion: A co-produced arts-based mental health festival is an acceptable way to increase understanding of emotional health issues and promote public mental health to Indonesian populations.
MMAT quality appraisal: Good quality. Qualitative data higher quality than quantitative data; acknowledged self-selecting sample. No reporting of the rationale for using a mixed-methods design and limited reporting of study limitations, e.g. no discussion of the impact of missing data on results and no comparison of the festival population to the sample.

Study #2: McCauley, 2019
Title: “B!RTH: a mixed-methods survey of audience members’ reflections of a global women’s health arts and science programme in England, Ireland, Scotland, and Switzerland”
Location: Edinburgh, Scotland
Audience size: Estimated total attendance based on theatre capacity.
Aim of festival/event: “To raise awareness and debate regarding global inequalities in access to, availability and quality of maternal healthcare.”
Evaluation aim: “To assess the views and experiences of audience members who had just watched a play and/or been involved in an expert panel discussion as part of the B!RTH programme.”
Evaluation methods and researcher relationship: Mixed methods: post-event questionnaire with structured and open questions. It is unclear if the researchers were part of the B!RTH multi-disciplinary team or separate.
Evaluation sample size (response rate): 176 respondents in Edinburgh: 17 responses to the extended questionnaire and 159 to the short questionnaire (42% response rate, estimated). All audience members were asked to complete a questionnaire.
Outcomes assessed: QUANT: Demographics (age, gender, ethnicity, theatre views). Experience/emotional engagement: emotionally moving; identified with the characters; felt challenged and provoked. Attitude: made me think differently; concern about the topic; wanting to find out more. Knowledge and understanding/learning: learnt something new; opened eyes to new ideas. QUAL: Free text, five themes: thanks/positive feedback (views); innovative use of arts and science (experience); personal feelings (engagement); need for action (views); suggestions for use in schools/education (views).
Evaluation conclusion: “The B!RTH programme can be used as an effective tool to engage the public/stimulate debate, deliver key messages, and raise awareness of [global women’s health] issues.”
MMAT quality appraisal: High quality. Appropriate use of thematic analysis and coding of qualitative data, with data quoted in results under themed headings. Clear discussion of limitations, including the estimated response rate, self-selection bias and convenience sampling strategy, but missing data not discussed.

Study #3: Verran, 2018
Title: “Fitting the message to the location: engaging adults with antimicrobial resistance in a World War 2 air raid shelter”
Location: Stockport, England
Audience size: 37 (out of 40 spaces available; sold out on Eventbrite).
Aim of festival/event: An event about AMR, with activities designed to address five questions which framed the event: “(i) How important are antibiotics to us today? (ii) How did we cope without them? (iii) Can we find new antibiotics? (iv) Can we develop alternatives to antibiotics? (v) Why is AMR an issue and what is being done to address it?”
Evaluation aim: “To develop, deliver and evaluate an event designed to engage an adult audience with anti-microbial resistance.”
Evaluation methods and researcher relationship: Mixed methods: activity output and observation. Researchers involved in event planning, and the “lead author introduced the audience to the event.”
Evaluation sample size (response rate): 37 (35 gingerbread men returned, 62 agar plates swabbed, Flickr accessed 57 times).
Outcomes assessed: Engagement with activities. QUANT: frequency counts of activity output (gingerbread men, agar plates, Flickr access); not pre-specified: percentage of bacteria names recognised. QUAL: recording of questions and observation of level of engagement.
Evaluation conclusion: “Hands on practical engagement with AMR can enable high-level interaction and learning in an informal and enjoyable environment.”
MMAT quality appraisal: High quality. High response rate; clear description of appropriate data collection methods. Qualitative and quantitative data were integrated and a rationale for mixed methods given. Some findings not supported by data. Minimal missing data.

Study #4: Rose, 2017
Title: “Engaging the Public at a science festival: Findings from a panel on human gene editing”
Location: Wisconsin, USA
Audience size: 125 people attended the panel.
Aim of festival/event: The effects of an engagement activity on audience perception of a controversial science topic. “1. To increase participants’ basic knowledge about human gene editing. 2. To increase both risk and benefit perception related to human gene editing, therefore increasing participants’ understanding of the complexity of the issue and potentially avoiding polarization based solely on moral concerns.”
Evaluation aim: “Explore: 1. if the Wisconsin Science Festival human gene editing panel increased familiarity or perceived knowledge levels; 2. if participating increases risk perception; 3. if participating increases benefit perception; 4. how will the panel affect attendees’ moral and ethical views of the technology.”
Evaluation methods and researcher relationship: Quantitative pre-post structured survey. Researchers separate to the organisers and delivery personnel; the festival “is organized by public and private institutions” and the panel “were all university faculty members”.
Evaluation sample size (response rate): 34 responses to the pre-test survey (94.1% response rate); 26 responses to the post-test survey (100% response rate). “Randomised selection… every 5th person to enter/leave, plus $2 incentive.” (16 people engaged with the panel during the discussion period.)
Outcomes assessed: QUANT: Reach: demographics to check whether the pre- and post-samples were comparable (age, gender, degree). Knowledge: perceived knowledge level before/after. Attitude: risk and benefit perception, one ethics question and two morals questions, all before/after.
Evaluation conclusion: Attendees felt an increase in “perceived knowledge levels, risks perceptions, benefit perceptions and moral acceptability.”
MMAT quality appraisal: High quality. Detailed discussion of limitations (e.g. acknowledgement of small sample size) and strengths (e.g. use of randomised sampling, statistical comparison between pre- and post-samples and between the audience and the wider population).

Study #5: Brookfield, 2016
Title: “Informal Science Learning for Older Adults”
Location: Edinburgh, Scotland
Audience size: 50 people registered for the stand-alone event and 40 people registered for the festival event.
Aim of festival/event: “A free-to-attend science festival style event that presented ‘content’ linked to a project (MMP) and that incorporated several different learning formats and engagement techniques in order to gain insights into what worked” (run as a stand-alone event and within the Edinburgh International Science Festival).
Evaluation aim: “Report on the process involved in creating and promoting the event and the overall experience of delivering it in two different settings.” “To reflect on where the event succeeded and how it could have been improved, and consider its performance as a vehicle for older adults’ learning.”
Evaluation methods and researcher relationship: Mixed methods: post-event audience feedback forms and commentary. Researchers developed, delivered and evaluated the event.
Evaluation sample size (response rate): 38 forms in total; 39 people attended the stand-alone event and 18 people attended the festival event.
Outcomes assessed: QUANT: numbers registered/attended. Demographics: age. Experience: enjoyableness/usefulness of events. Knowledge: whether attendees would share/use what they learnt. Attitude: whether they would attend again. Behaviour: distance travelled. Marketing: tweets. QUAL: Behaviour: previous event attendance, levels of participation, questions asked. Experience: engagement (with event and researchers), satisfaction with venue, mix of activities, favoured activity. Marketing: researcher labour time and costs.
Evaluation conclusion: “There is appetite for informal science learning among older adults.” This type of event might be an “appealing and appropriate vehicle for informal science learning”, but access can be restricted by venue choice; using an established science festival can support administration and marketing.
MMAT quality appraisal: Low quality. Unclear and unreported approach to qualitative data collection and analysis; unclear which data support the conclusions, or conclusions unsupported by data or quotes. Unclear sampling strategy and missing data not reported. No rationale for using a mixed-methods design.

Study #6: Fogg-Rogers, 2015
Title: “‘Knowledge is Power’: a mixed-methods study exploring adult audience preferences for engagement and learning formats over 3 years of a health science festival”
Location: Auckland, New Zealand
Audience size: Approx. 3,000 attendees at the event each year.
Aim of festival/event: “To communicate information about brain health and disease along with current neuroscience research, while also engaging publics in the ongoing research process.”
Evaluation aim: Explore audience preferences for engagement styles at science festivals. Research question: “what formats do audiences at a science festival prefer and why?”
Evaluation methods and researcher relationship: Mixed methods: post-event questionnaire with structured and open questions. Researchers separate to the festival organisers/delivery team. The festival is part of “nationwide events coordinated by the Neurological Foundation of New Zealand”; “the event is staffed by volunteer neuroscience researchers and students”.
Evaluation sample size (response rate): 661 questionnaires returned over 3 years (annual response rate approx. 7%), with a mean of 220.3 returned each year (SD = 24.6). Aiming for a cross-sectional sample.
Outcomes assessed: QUANT: Demographics: age, gender, ethnicity and education levels; living with a brain disease (not pre-specified). Experience: audience format preference via three quantitative questions: perceived attractiveness, perceived usefulness and (Behaviour) attendance. Likert scales: Experience: was it a good day out for the family. Knowledge and understanding: helped me learn more; lectures are a good way to get information; I did not learn anything; I cannot understand neuroscience. QUAL: Open questions on why respondents had certain preferences/rankings of formats, grouped thematically: interested in learning (knowledge); knowledge is power (knowledge); career and professional development (knowledge); research and expert opinion (experience); engaging in curiosity (experience).
Evaluation conclusion: “Health Literacy as an Asset: Festival formats employing traditional public understanding of science style communication, namely lectures, were preferred by the majority of adult participants, with the primary motivation being non-formal learning. Lectures in an asset-based model means expert dissemination of research findings are central to an engagement model, building on the knowledge, skills and understanding that people already hold.”
MMAT quality appraisal: High quality. Clear rationale for using mixed methods (to triangulate data); questionnaire piloted; handling of missing data described; thematic analysis of qualitative data explained, with quotes used in results under themed headings. Discussion of limitations, including acknowledgement of potential sampling and response bias.

Study #7: Bird, 2013
Title: “Getting Science to the Citizen—’Food Addiction’ at the British Science Festival as a Case Study of Interactive Public Engagement with High Profile Scientific Controversy”
Location: Aberdeen, Scotland
Audience size: Over 170 attended the workshop; sold out in advance.
Aim of festival/event: “The event addressed the controversial and high-profile area of ’food addiction’”, “to engage the public in dialogue about funded science.”
Evaluation aim: Not stated, other than “the event was evaluated”/to describe the event.
Evaluation methods and researcher relationship: Mixed methods: post-event feedback form. Not explicitly identified as evaluation methods: voting buttons, show of hands (frequency counts/percentages) and an interactive challenge. Researchers initiated and presented the event.
Evaluation sample size (response rate): 121 completed forms.
Outcomes assessed: Feedback form: QUANT: Demographics. Experience: enjoyment and interest; met expectations. Knowledge: learnt something new. Values: relevant to life. Show of hands/interactive challenge (no data reported); voting buttons (knowledge, activity output/engagement). QUAL: what did you enjoy? Experience: interactive, presentation, accessible/pacing. Knowledge: information, interesting. Subjective researcher reflection: engagement, e.g. researcher-perceived shock and resonance in the audience.
Evaluation conclusion: “There is public appetite for events related to real-life health issues; the event was a successful formula for a controversial topic.” “Audience appeared receptive to new information and clarification.” “It was a positive experience for presenter and audience.”
MMAT quality appraisal: Low quality. Vague research question; unstated methods for qualitative data analysis; some results unsupported by data; sampling strategy unreported; missing data unreported; no stated reason for using a mixed-methods design. Data insufficiently reported to comment on divergences in the data.

Study #8: Quinn, 2011
Title: “The impact of a national mental health arts and film festival on stigma and recovery”
Location: Glasgow and Lanarkshire, Scotland
Audience size: 3,000 at the festival in total; 1,318 at the evaluated events. Attendance at individual events ranged from n = 15 to 113.
Aim of festival/event: To challenge stigma and discrimination against people with mental health problems: 1) to promote positive attitudes towards mental health amongst opinion formers and the public through arts and culture; 2) to strengthen the links between arts, community and public organisations, and explore the evidence and support for an annual festival.
Evaluation aim: (i) To identify who might attend mental health arts events; (ii) to identify the impact upon knowledge, attitudes and likely future behaviour; (iii) to explore whether specific components of stigma (e.g. social distance, perceived dangerousness, possibility of recovery and unpredictability) were influenced by specific arts events; (iv) to learn lessons for developing an evaluation framework for complex events in real-life circumstances.
Evaluation methods and researcher relationship: Mixed methods: quantitative pre- and post-event questions and qualitative post-event questions on evaluation cards. The festival was organised by multiple organisations, and one of the researchers worked with one of these organisations (the Mental Health Foundation).
Evaluation sample size (response rate): 20 out of 31 events evaluated; 415 respondents out of the 1,318 who attended the 20 events. Response rates ranged from 9.7% to 63.6%.
Outcomes assessed: QUAL: Experience: felt inspired; reflection on attendees’ own mental health; role of the arts is important; enjoyment. Attitude: acceptance (of difference/society); need for support; behavioural intent (won’t change behaviour, will act positively, will participate in arts, will change work practices, will change personal health-related behaviour, will be an activist). Knowledge: of mental health (understanding of recovery, awareness of social factors/opened eyes, understanding of different mental health perspectives, understanding of the impact of mental health). Behaviour: numbers of attendees. QUANT: Demographics (age, gender); Attitude: baseline attitude (pre-event) and change in attitude pre-post event.
Evaluation conclusion: Modest evidence that an arts festival can impact stigma. “A collaborative national arts festival can contribute towards reducing stigma and should integrate with other national initiatives that address stigma and promote public mental health.”
MMAT quality appraisal: High quality. Reason for the use of mixed methods clearly stated and explained; detailed discussion of limitations, including justification of the opportunistic sampling method, selection effects and ceiling effects. Missing data reported in the results.
Most included studies were conducted in the UK (four in Scotland [35, 36, 39, 40] and one in England [42]); one study each was from Indonesia [37], the USA [41] and New Zealand [38]. Five of the evaluations were of events embedded within larger festivals [35, 36, 39, 41, 42]. One event took place in an air raid shelter [42] and three in performing arts spaces [37, 39, 40]. Two of the eight festivals were on the topic of mental health [37, 40]. One of the studies aimed to evaluate the whole festival [36], whilst the other seven evaluated a specific element of the festival.

A summary of study quality is given in Table 1, and a detailed study quality analysis is presented in S4 Table. Five of the eight studies had superior methodological quality, meeting four or more of the five criteria in the relevant category for their study design [38-42]. The purely quantitative study was of high methodological quality [41]. Two studies were rated as low methodological quality due to inadequate reporting [35, 36]. Data extracted on study characteristics showed that the studies with separate researcher or evaluation teams were of higher methodological quality [37, 38, 40–42]. One study did not explicitly state the relationship between the evaluation and festival teams [39]. Across the mixed-methods studies, researchers frequently omitted key information needed to assess the quality of either their qualitative or quantitative methods [35–37, 39]. For the qualitative component, some reports did not state the theoretical position underpinning the research question and did not describe any data analysis methods [35, 36]. The quantitative aspects of the mixed-methods studies commonly underreported their sampling strategy [35, 36] and sometimes failed to report missing data or its management [35–37, 39]. Examples of good quality data analysis included stating hypotheses for testing and using statistical tests to compare festival attendees to the general population [41].

Evaluation methods

The most prevalent method of evaluation (n = 6/8) was a self-completed post-event questionnaire with structured and open questions [35-40] (Table 2). Studies with large sample sizes used questionnaires, while the two studies with the smallest evaluation samples used more labour-intensive evaluation methods, e.g. observation [42] and in-person surveys [41] (see supplementary material). These latter studies also had separate evaluators collecting the data and higher response rates, and were of high methodological quality. Two studies of higher quality also collected pre-event data [40, 41], while another used an electronic voting system for the audience [39]. Two studies without separate evaluation teams had broad or unspecified evaluation aims and poorer methodological quality [35, 36]. Studies of higher methodological quality used a wider range of evaluation methods [38, 40–42].
Table 2

Evaluation tools utilised.

Evaluation tool | Higher quality studies | Lower quality studies | Total studies
Structured post-event self-completion questionnaire | #2, #6, #8 | #1, #5, #7 | 6
Open-question post-event self-completion questionnaire | #2, #6, #8 | #1, #5, #7 | 6
Structured pre-event self-completion questionnaire | #8 | - | 1
Festival activity output | #3 | #7 | 2
Spoken audience questions (recorded) | #3 | - | 1
Observation of engagement | #3 | - | 1
Structured pre-event survey (administered) | #4 | - | 1
Structured post-event survey (administered) | #4 | - | 1
Social media analytics | - | #5 | 1

Evaluation outcomes

Evaluation outputs and outcomes (as defined by Grant (2011) [43]) were grouped into four conceptual themes: reach, attitude, knowledge and experience (Fig 2). Four of the studies evaluated outputs/outcomes in all four themes [37-40]. One study exclusively evaluated the attendees’ experience [42]. Reach, knowledge and experience were assessed by seven out of eight studies [35, 36, 38–42] and attitude by six out of eight [36-41].
Fig 2

Conceptual map of evaluated outcomes and outputs.

The studies often used the terms ‘participant’, ‘audience’, ‘visitor’ and ‘attendee’ somewhat interchangeably to describe the people involved in the festival activity. Although the term ‘audience’ might indicate a more passive level of engagement (e.g. just listening) and ‘participant’ a more active style of engagement (e.g. sharing opinions), these studies generally did not define such terms.

Reach

All except one evaluation [42] assessed participant age; five out of eight assessed gender [37-41] (Fig 2). More women than men attended the festivals [41] and completed the evaluations [37-40] (Table 3). Only two studies reported demographic data on ethnicity [38, 39], with the largest proportion of participants self-identifying as “white” [39] or “New Zealand/ European descent” [38]. Data on attendee education level [38, 41] or occupation [37], though only measured in three studies, indicated that visitors represented in the evaluation samples were largely well educated.
Table 3

Demographic data for studies reporting reach.

Study #1 (evaluation sample): Gender: female 88.6% (n = 286), male 10.8% (n = 35). Age: mean 22.5 years, range 17–51 years. Ethnicity: n/a. Occupation: student 50.3% (n = 163), patient/public 20.4% (n = 66), professional 5.3% (n = 17), missing 24.1% (n = 78). Education: n/a.

Study #2 (evaluation sample): Gender: female 12 (71%), male 5 (29%). Age: 18–30 = 1 (6%), 31–40 = 1 (6%), 41–50 = 3 (18%), 51–60 = 7 (41%), >60 = 5 (29%). Ethnicity: Asian/Asian British 0, Black/Black British 0, Mixed 0, Other 0, White 17 (100%). Occupation: n/a. Education: n/a.

Study #4 (festival population): Gender: female 52%. Age: median 41 years. Ethnicity: n/a. Occupation: n/a. Education: college degree 91%.

Study #5 (evaluation sample): Gender: n/a. Age: >65 years (n = 21 of the 38 people who completed the evaluation form). Ethnicity, occupation and education: n/a.

Study #6 (evaluation sample, data aggregated across three years): Gender: female 66.4%. Age: range 7–87 years, mean 48.5 years; 50–64 years = 25.5% (dominant age category). Ethnicity: New Zealand European descent 64.1%, Asian 11.2%, Maori 1.9%, Pacific Islanders 1.7%. Occupation: 16% of respondents cited the festival as relevant to their “career path or job”, e.g. updating professional knowledge, directing a future career path, or knowledge gained being useful in their profession. Education: post-graduate studies 42.3%, undergraduate education or a trade certificate 25.2%, no formal education post-secondary school 26.6%.

Study #7 (evaluation sample): Gender: n/a. Age: “majority 19–40 years”. Ethnicity, occupation and education: n/a.

Study #8 (evaluation sample and festival population): Gender: in the evaluation sample, gender ratios were reported per event (two events had an all-female audience); in the festival, male 30.8%, female 69.2%. Age: average age and range reported per event; average ages per event ranged between 25.2 and 71.9 years, and audience ages ranged from 13 to 89. The festival had a “higher proportion of younger people (especially those between 25 and 34 years) than the Scottish population”, given as a percentage per age range. Ethnicity, occupation and education: n/a.

Note: study #3 [42] did not report demographic characteristics.

Only two studies reported on the marketing of their public engagement event [36, 37], with one study including social media analytics as part of their marketing assessment.

Attitude

Of the six studies which evaluated attendee attitude, only two measured it using a pre-post quantitative methodology [40, 41]; both had high methodological quality. All other attitude outcomes were evaluated post-attendance. Quantitative methods were frequently used to evaluate attitude, with only two studies adopting qualitative methods, both of high methodological quality [39, 40]. Three studies looked at attitudinal outcomes involving attendee behavioural intent [36, 37, 40]. Two studies evaluated four or more outcomes related to attitude [39, 40].

Knowledge

Outcomes related to audience knowledge were evaluated quantitatively in five studies [35, 37–39, 41] and qualitatively in two [38, 40]. Only one study, of high methodological quality, used mixed methods [38]. Evaluations commonly asked attendees after the event to indicate if they felt they had learnt something new. Five of the six studies evaluating knowledge were of high methodological quality [37-41].

Experience

All studies except one [41] assessed audience experience, with multiple indicators used (Fig 2). Audience experience was commonly measured through an engagement-related outcome or output, e.g. emotional engagement, degree of engagement, or a mediator of engagement such as format. Engagement was evaluated by all except one of the seven studies which evaluated experience [42]. Attendee emotional engagement was assessed by five of these seven studies [35–37, 39, 40]. Characteristics reported to enhance engagement at health festivals included the format (e.g. theatre, film, lecture) [35, 37–39] and the use of community, patient or research experts [37, 38]. Three studies reported on the perceived relevance of the content to attendees’ lives, society or times, and whether it met participant expectations [35, 37, 39]. Five studies recorded and evaluated audience outputs, such as audience questions and counts of the number of visitors [35, 36, 38, 40, 42]. One study used the physical output from activities within the festival as a more objective measure of engagement, e.g. the number of materials used or submitted [42]. This study also measured the degree of engagement qualitatively, using observation.

QMUL toolkit

The conceptual map in Fig 2 indicates whether the studies evaluated design, delivery/outcomes or longer-term impact (e.g. evaluation activities some time after the original project is completed), as specified in the QMUL toolkit [25, 27]. Seven of the eight studies (with [42] the exception) evaluated festival design; all studies evaluated festival delivery, and no studies evaluated long-term impact. One study used aggregate data across three years [38]. The QMUL toolkit offers a range of 21 different tools for evaluation [26]. Evaluation methods described in the toolkit and applied in the studies included ‘structured questionnaires’, ‘public lecture multiple-choice questions’ and ‘event feedback forms’ [35-41]. Additional methods employed in the included studies, but not listed in the toolkit, included frequency counts of physical outputs from the festival and observation of audience behaviour [35, 42].

Discussion

In this systematic review of health-related PE festivals, eight studies were eligible for inclusion, all published in the last ten years, despite the science festival scene burgeoning over the last two decades [18]. PE was evaluated predominantly via mixed methods, often using self-report questionnaires. Evaluated outcomes included reach, experience, knowledge and attitude. A limited range of evaluation methods was used compared to an existing evaluation toolkit, and no long-term outcomes were evaluated.

The studies’ frequent use of evaluation forms is in line with observations of overreliance on audience self-report [21]. However, the higher quality studies used a wider range of evaluation methods. Researchers are encouraged to use technology-based and unobtrusive evaluation methods [21], to reduce the feedback burden on attendees and provide alternative ways to capture data [18, 44]. One of the studies we included used social media data [36] and another avoided using an evaluation form completely, in favour of observation and frequency counts of activity output [42]. Evaluators must ensure, though, that their methods are appropriate, as certain evaluation aims require specific research methods; for example, measuring changes in audience attitudes requires a pre-post design as a minimum standard [20], yet of the six studies which assessed attitude, only two used this design [40, 41].

The evaluations collected outcome data on attendee reach. Existing literature highlights that science festival audiences are often biased towards people already interested in science and rarely include good representation of minority ethnic groups [19, 24, 45]. For health-related PE festival evaluations, the ‘already interested’ population includes health professionals; indeed, one study indicated that visitors attended the festival to further professional knowledge or career development [38]. Festivals often target schools and families as part of formal education or to capitalise on those already interested in science [19, 46]. However, families from more deprived areas and parents or adults without degrees are less well represented at festivals [21, 44–46]. Given these known biases, it is concerning that reach was not more thoroughly evaluated in the studies we identified. Audience members in our studies were mostly female, well educated and ethnically white, indicating a gender bias and a lack of ethnic diversity in the evaluation samples. Coupled with the high education status of attendees, this indicates that the included festivals were restricted in their reach. It is encouraging that one study discussed improving reach as a future aim [39] and another specifically attempted to reach an under-represented older age group [36].

The literature suggests that science festival audiences value interaction with scientists or experts [21, 24]. The evaluation data we identified corroborate this finding and further suggest that learning and engagement are positively mediated by contact with experts by experience, as a result of patients and other health service users being involved in the delivery and design of the PE events [37, 38]. For example, patient stories may play an important role in audience engagement [40]. Since involving community partners or patients is a distinctive feature of human health-related festivals, more research is warranted to establish how and why the public can best influence festival design, delivery and impact. Alongside existing literature [47], data on engagement from co-produced events [37] and events tailored to specific audiences [36] could provide useful guidance to other science festival organisers on how to include and engage a diverse audience.

It was important that the studies evaluated audience attitude, because PE with research can involve ethically challenging discussion [41] or be concerning to the public [39]. One study acknowledged the responsibility of festival organisers to provide adequate reassurance or support to audience members engaging with emotive topics [39], and another discussed how a dialogue-based delivery format enabled conversations about attitude [41]. Responsibility for audience well-being and the impact on audience attitude are particularly important for health-related topics, where attendees may be personally affected. This is exemplified by both studies on mental health topics, which evaluated audience attitude [37, 40].

It could seem regressive that all but two of our studies [36, 42] assessed an outcome related to attendees’ knowledge or learning, because the literature notes that science festivals have moved from informing audiences to actively engaging them [19]. However, in addition to knowledge, the studies also evaluated a range of other outcomes, e.g. experience and attitude. This suggests that, as recommended [21, 24], health-related PE festivals are not just unloading knowledge onto a passive audience via what is known as the ‘deficit model’ [48], but are using the two-way interaction which festivals enable to achieve multi-dimensional impact. Fogg-Rogers et al. (2015) [38] argue that, uniquely for health, knowledge gained at a festival could improve health literacy. This is in line with Ko’s (2016) [48] view that knowledge outcomes are still required to ensure factual comprehension is accurate, thus helping to prevent negative physical or legal health-related consequences. Whilst health-related festivals are trying to be more dynamic in their evaluations, there is still a place for evaluating knowledge.

All except two of the festival evaluations were published before the QMUL toolkit [27], and none of the studies were informed by the toolkit. The evaluations universally assessed the design or delivery of their festivals, with none assessing longer-term impact. Established alternative and creative qualitative data collection methods are listed in the QMUL toolkit, such as interviews and focus groups, and the toolkit also offers some technology-based evaluation ideas, e.g. a mobile event app and aerial photography [26], but these were not evident in the range of methods used by the studies in our review. It is important for evaluators to clearly define and differentiate immediate evaluation outcomes and outputs from any other impacts, and this clarity can be supported through consistent use of terminology [27]. At present, terms are used inconsistently: for instance, Quinn et al. (2011) [40] evaluate the “impact” on stigma by assessing attendee attitude immediately after the event, while Verran et al. (2018) [42] use audience engagement, assessed via outputs and observations, as an indicator of impact. Whilst strict adherence to the QMUL toolkit could restrict the creative development of evaluations [21], application of the toolkit, including adoption of its terminology, might improve individual evaluation quality, increase the learning derived from each festival and facilitate comparison.

We therefore recommend that PE evaluations clearly define the concepts being evaluated; the QMUL toolkit may provide a useful reference point in this regard. Relatedly, evaluations would benefit from more explicit discussion of the aims, framework and assumptions underlying PE initiatives. Differences in conceptualisation and fundamental approach (e.g. regarding the role of the public in the engagement experience) have implications for the choice of appropriate outcomes and evaluation methodology [8, 49, 50].

Assessing the longer-term and broader impacts of festival activities can be practically difficult within a time-limited research grant, but more reflective opinions of a festival, and accounts of whether, for example, potential changes in behaviour translated into actual changes in behaviour, might still be relevant, especially when PE festivals are ongoing [18, 21, 27, 51]. Such longer-term evaluations could help explain the complex effects and interactions at play and help develop a better understanding of the active ingredients and mechanisms of action of PE via festivals. One of the studies evaluating behavioural intent acknowledged that future research could address whether attendees followed through with their intentions [37]. Three other studies also discussed the need for longitudinal follow-up to their evaluations [39, 40, 42]. Health-related PE festivals are still relatively rare, which might account for the paucity of longer-term evaluations. However, alongside our finding that not all studies had separate evaluation teams, underinvestment and limited evaluation resources might also account for the lack of impact evaluations in the literature. We found that the higher quality studies which used specific evaluation aims, a wider range of methods [41, 42] and pooled data [38] all had separate evaluation teams. This supports findings that the paucity of ringfenced time and resources for PE evaluation has a detrimental effect on evaluation quality [18]. Better-resourced evaluation teams enable alternative and more rigorous evaluation methods to be planned and deployed, and independent evaluators might have more evaluation expertise than is present in a PE event team.

A strength of this review is the use of established narrative synthesis methods, which enabled the mixed-methods findings to be combined conceptually, overcoming methodological differences [30, 33]. Limitations, related to the resources available for the review, include not searching the grey literature, using only adult data, restricting the review to English-language study reports, and requiring that authors self-identified their PE as a festival. There might be relevant records which have not been identified in this study, particularly as, in this nascent field, terminology is not always used consistently and not all evaluations are published in peer-reviewed journals. Using additional methods to identify unpublished studies might have yielded further eligible studies, including studies on a wider range of health-related topics. It can also be difficult to categorically distinguish a PE activity from an intervention, particularly for health-related topics; however, by clearly defining and reporting our inclusion and exclusion criteria, we have been transparent in our methods.

The results of this review enable us to make some recommendations to evaluators of future health-related PE and suggestions for future research. Given the need to broaden the reach of PE events and improve inclusion, particularly of underrepresented or minority groups [19, 21], evaluations should include a range of demographic indicators, including ethnicity, gender, occupation and a measure of socio-economic status/deprivation level. We found that patient/service user involvement in event delivery supported learning and engagement [37, 38]. With the increasing focus on co-producing and co-delivering health-related PE events with patients and communities, future research is needed to understand and assess how the public can best influence festival design, delivery and impact. Health-related PE festivals should deliver evaluations which use consistent terminology and high-quality methodologies. Evaluators should be creative in their use of evaluation methods and open to considering a variety of different outcomes, depending on the aims of the festival and the evaluation. Using the QMUL evaluation toolkit [25, 26] might help researchers achieve this. Consideration should also be given to the use of independent evaluators with specific expertise and distance from the PE event. The current lack of assessment of long-term impact highlights the need for more investment in PE evaluation, which should include comparison of the impact of different PE methods as well as optimisation of PE evaluation methods.

In conclusion, whilst there are examples of high-quality reports and creative data collection methods, there is still a need to address the reach of health-related PE events and improve PE evaluation. The QMUL evaluation toolkit [25, 26] may help improve the consistency and quality of evaluation methodology and reporting. More robust evaluation of PE festivals could help improve our understanding of how to engage every part of a community, clarify which design and delivery methods work for which topics and audiences, and show how best to improve reach and impact.

Supporting information

PRISMA-P 2015 checklist (DOCX).

Registered protocol (S1 File; PDF).

Search strategy (S1 Table; DOCX).

Queen Mary University of London (QMUL) toolkit headings (S2 Table; DOCX).

Study sample size, response rate and evaluation method (DOCX).

Quality assessment using the Mixed-Methods Appraisal Tool (MMAT) (S4 Table; DOCX).
PONE-D-21-31416
How are public engagement health festivals evaluated? A systematic review with narrative synthesis
PLOS ONE

Dear Dr. Selman,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by 30 Jan 2022. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When submitting your revision, please include: a rebuttal letter that responds to each point raised by the academic editor and reviewer(s) ('Response to Reviewers'); a marked-up copy of your manuscript that highlights changes made to the original version ('Revised Manuscript with Track Changes'); and an unmarked version of your revised paper without tracked changes ('Manuscript').

Kind regards,
Professor Benjamin Tan, BNSc MMed PhD RN
Academic Editor
PLOS ONE

Journal Requirements:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.
We will update your Data Availability statement to reflect the information you provide in your cover letter.

Additional Editor Comments:

Thank you for submitting your manuscript "How are public engagement health festivals evaluated? A systematic review with narrative synthesis" to PLOS ONE. The editorial team have assessed your submission and a few concerns have been raised regarding the precision of terms used in the paper. Please see the reviewers' comments for further details about necessary revisions.

Reviewers' comments:

1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes. Reviewer #2: Partly.

2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes. Reviewer #2: Yes.

3. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: Yes. Reviewer #2: Yes.

4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes. Reviewer #2: Yes.

5. Review Comments to the Author

Reviewer #1: Overview: This is a relevant topic of interest to public health practitioners and researchers in the field of public engagement. Nonetheless, I suggest some substantial reforms to the authors. They are at liberty to address these suggestions to improve the readability and quality of the manuscript.

Topic: The authors may consider changing the term "systematic review" to "scoping review". Systematic reviews are mostly followed by meta-analysis, while scoping reviews usually adopt narrative synthesis.
Abstract: Line 13, the authors may specifically state the "findings and recommendations" that may help improve future evaluations.

Search strategy: The authors may highlight the justification for searching the listed databases.

Conclusion: The authors may highlight certain areas that may require future research concentration, based on the review of public engagement festivals.

Reviewer #2: Thank you for writing this paper and inviting me to comment on it. I found it a clear paper to read and thought the research was interesting, informative and at times surprising. My criticisms of the paper are not in the research itself, but in the framing of the paper and the lack of real precision about terms used in the paper, which then has implications for the interpretation of the data and conclusions.

The argument for looking at health-related science festivals starts from a public engagement with research (PER) frame, with festivals being a potential site of PER, and health-related science festivals being a subsection of festivals and being worthy of investigation. I am not convinced that this is the most robust case for looking at health-related science festivals. I was surprised, for example, that there was no mention of the long-standing practice of PPI in health research and how PER at festivals relates to PPI. An alternative argument would be to start with science festivals in general, how they can be a site for PER, and for health PER specifically. There are others!

In the paper there are mentions of "audience experience", "informal science learning", "visitor studies" and "health literacy", alongside "public engagement with research", but there is little demonstration of an understanding of the differences and similarities in evaluation that arise from these different frames. An understanding of these different areas of practice would have helped with interpreting the data. In particular, there were three areas where this came through: how publics are presented, why the QMUL toolkit was used, and the observation that there was no longitudinal work presented.

1. In the paper we encounter terms such as "general public", "lay citizen", "patients", "users of health services", "professionals", "citizens" and "audiences". There is a substantial body of work that examines what we mean when we use these terms; for example, we would largely argue that there is no such thing as a "general public". We can describe how publics form or respond to particular interventions or circumstances. We can think about what the publics' roles are in the engagement experience: are they audiences there to listen, or participants with experience and insight to share? We can think about current levels of knowledge, attitude and behaviour with respect to the topic under discussion. We can refute the idea that all publics are citizens. This lack of critical engagement with these concepts, particularly in the different areas of evaluation practice mentioned in the previous paragraph, comes through repeatedly in the introduction and therefore in the interpretation of the data (particularly with respect to what outcomes are being assessed).

2. I would have found it useful to have some sense of why the QMUL Evaluation Toolkit was cited and used. Again, this links back to the framing issue. Does PER really need its own evaluation toolkit given the decades of, e.g., audience research that we can draw on? Is there something unique to PER that justifies a new toolkit? How does the PER Evaluation Toolkit extend or develop existing literature from the fields of, e.g., audience research or informal science learning? Considering the reference to "informal science learning", I would have expected reference to, or critique of, the Generic Learning Outcomes framework.

3. The observation that there were no examples of long-term impact studies, and the suggestion (in line 406) that they "should ideally be done", reinforces that lack of in-depth understanding of the different traditions of evaluation. We know that attending a science festival will be one of many interactions with science (in this case health research) over a lifetime (Archer's Science Capital work is useful here). These interactions can contradict or reinforce the attitudes to science that the person currently holds when they attend the activity. What that person takes from the experience is affected not only by the activity, but by how they are feeling that day and what else is going on in their lives outside of attending a festival event. I think it's naïve to think that it's possible to track the long-term impacts of a single festival intervention considering the other "noise" that happens in a person's life before, during, and after participating.

In lines 114-117 you offer a definition of public engagement which seems sensible, although I always make a distinction between public engagement with research (PER) and public engagement with a topic (in this case health). PER has to involve the researchers/academics, while PE practitioners could do PE with a topic without direct researcher involvement. It would help the paper to be really clear whether you are looking at PER or PE with health.

In the section about inclusion criteria (lines 105 and 106) you state that the studies self-identified as being festivals and as being PE. I have seen a lot of things called festivals that weren't festivals, and a lot of things called PE(R) that aren't PER. Given the small number of examples that were actually looked at, are you confident that they were all festivals and all PER?

I hope these comments are useful and can help to improve the next version; do let me know if anything needs clarifying. I found the methods and results sections very clear to read and understand, and I agree with the limitations you identified in the study. Thank you again for an interesting read.

6. Do you want your identity to be public for this peer review? Reviewer #1: Yes: Kofi Aduo-Adjei. Reviewer #2: Yes: Helen Featherstone.
11 Jan 2022

10 January 2022

Dear Editors,

Re: PONE-D-21-31416 How are public engagement health festivals evaluated? A systematic review with narrative synthesis

Thank you for sending the reviewers' comments on the above article. We have revised the paper in light of the comments and respond below on a point-by-point basis. We believe the paper is stronger as a result and would like to thank the reviewers for their helpful feedback.

We have ensured that the manuscript meets PLOS ONE's style requirements. As this is a review, there is no primary data to make available and reference in the 'Data Availability Statement': all the included papers are referenced in the review together with the search strategy, and the data extracted is presented in a table. We now state in our Data Availability Statement that the article does not contain data and the data availability policy is not applicable.

We look forward to hearing from you in due course.

Best wishes,
Dr Lucy Selman

Reviewer #1

Comment (change "systematic review" to "scoping review"): A scoping review is defined as a type of research synthesis that aims to 'map the literature on a particular topic or research area and provide an opportunity to identify key concepts; gaps in the research; and types and sources of evidence to inform practice, policymaking, and research' (Daudt et al., BMC Med Res Methodol, 2013). In contrast, narrative synthesis is an approach to the systematic review and synthesis of findings from multiple studies that relies primarily on the use of words and text to summarise and explain the findings of the synthesis (Popay et al., 2006). While scoping reviews may commonly use narrative synthesis, narrative synthesis is also very commonly used in systematic reviews; for a few recent examples, see Pian et al. (2021), Karran et al. (2020) and Habtewold et al. (2020). We have therefore not changed the title or the terminology of the methods, as 'systematic review with narrative synthesis' accurately captures our evidence synthesis approach.

Comment (Abstract, line 13, state the findings and recommendations): We have added the word 'these' to the abstract to be clear that we are referring to the findings just mentioned: "Higher quality studies had specific evaluation aims, used a wider variety of evaluation methods and had independent evaluation teams. Evaluation sample profiles were often gender-biased and not ethnically representative. Patient involvement in event delivery supported learning and engagement. These findings and recommendations can help improve future evaluations."

Comment (justify the choice of databases): Now added on page 8: "Literature scoping and discussion with a subject librarian helped to inform the choice of databases and the search strategy."

Comment (highlight areas requiring future research): Thanks for this suggestion. We discuss areas for future research as well as directions for future PE evaluation on page 32, and have added specific wording to clarify this, e.g. "The results of this review enable us to make some recommendations to evaluators of future health-related PE and suggestions for future research… With the increasing focus on co-producing and co-delivering health-related PE events with patients and communities, there is a need for future research to understand and assess how the public can best influence festival design, delivery and impact."

Reviewer #2

Comment (framing of the paper; no mention of PPI and how PER at festivals relates to it): Thank you for your really helpful engagement with the review and your suggestions to clarify our use of terms and revisit how we frame the review. We have now added a substantial discussion of public engagement and patient and public involvement (definitions and differences) to the Introduction, referencing the importance of both within health research: the NCCPE and NIHR definitions of PE and PPI, the overlap between PE, PER and PPI, the multiplicity of 'publics', and the role both PE and PPI play in research related to human health. We go on to discuss science festivals as one increasingly popular format for communication of, and public engagement with, health research.

Comment (1. lack of critical engagement with terms for publics and with the different evaluation frames): We agree that different approaches to evaluation are reflected in and associated with the use of different kinds of terms in public engagement. However, it was not in the remit of this review to examine differences in evaluation approaches associated with different PE frames or traditions; our aims were more practical than theoretical in this respect. We have now added to the Introduction a passage reflecting on definitions of the public, noting that 'the public' should never be considered as one single entity but a multi-dimensional spectrum of people, and drawing on Facer's (2020) understanding of 'publics' as gatherings of people, things, objects and ideas convened around a matter of concern [15]. In our interpretation of the data, we use the evaluation outputs and outcomes defined by Grant (2011). We have also added the following point to the Discussion: "Related to this, evaluations would benefit from more explicit discussion of the aims, framework and assumptions underlying PE initiatives. Differences in conceptualisation and fundamental approach (e.g. regarding the role of the public in the engagement experience) have implications for the choice of appropriate outcomes and evaluation methodology [8, 49, 50]."

Comment (2. why the QMUL Evaluation Toolkit was cited and used): Our aim in this review was to synthesise evidence on health-related PE festivals from a range of traditions using different framings of concepts. One of the identified studies investigated 'informal science learning', but this type of PE was not a focus of the review. The QMUL toolkit seemed appropriate as a comparator given the different approaches and frameworks employed in the studies (most of which were not explicit about which PE research tradition they were based on). The toolkit is widely used by health researchers and has been developed to strengthen the evaluation of academic PE initiatives and simplify the process. Many non-PE academics are unfamiliar with the theories and traditions of PE; rather than explicate these, our aim was to add to the utility of our review by comparing published evaluations with a well-recognised toolkit. We have no connection with the toolkit or its development or use, in case that was a concern. We have added the following on page 7: "The Queen Mary University of London (QMUL) public engagement evaluation toolkit [25, 26] has been developed as an open-access, pragmatic, generic toolkit applicable to diverse forms of academic PE and proposed as a 'common evaluation standard' [27]. We chose the QMUL toolkit as an appropriate comparator for this review as it is familiar to health researchers [29] and is applicable to a wide variety of engagement activities, such as the studies included in this review, which utilise multiple different approaches and frameworks."

Comment (3. tracking the long-term impacts of a single festival is naïve given the "noise" in a person's life): Thanks for these interesting points. Our intention was not to imply that long-term follow-up was needed in the sense of identifying the impact of a specific festival in someone's life separate from the 'noise' of other factors. Rather, we were referring to the fact that the evaluations we identified focus on immediate impact (e.g. views immediately after the festival), i.e. outcomes which are extremely proximal to the event. There is little attempt to ask for more reflective opinions or to ascertain whether, for example, changes in behaviour posited as a result of the festival were actual rather than potential. For example, someone could report that at a festival they learnt how antibiotics function and thought this would change their behaviour; it would be relevant to ask at a later timepoint whether this learning had actually resulted in changes in behaviour or information sharing with others. We believe these kinds of outcomes would be relevant and interesting: not to 'track' long-term effects as such, but to unpick the complex effects and influences at play (which much evaluation of other interventions tries to do, e.g. through the development of logic models or using a realist approach). We have modified the relevant section to reflect this (p. 30): "Assessing the longer-term and broader impacts of festival activities can be practically difficult within a time-limited research grant, but more reflective opinions of a festival and accounts of whether, for example, potential changes in behaviour translated into actual changes in behaviour, might still be relevant, especially when PE festivals are ongoing [18, 21, 27, 51]. Such longer-term evaluations could help explain the complex effects and interactions at play and help develop a better understanding of active ingredients and mechanisms of action in PE via festivals."

Comment (be clear whether the paper looks at PER or PE with health, lines 114-117): Thanks for flagging this. As it wasn't a requirement for researchers or academics to be present at the festival, we have edited the definition to be clear about this (p. 7): "Two-way dialogue between health-related researchers (including social scientists) or PE practitioners and members of the general public [3]. We focus here on engagement in relation to health-related research or a health topic, including medicine and applied health."

Comment (confidence that the self-identified festivals were all festivals and all PER, lines 105 and 106): We chose to focus on studies which self-identified as evaluations of festivals, and used our definition of 'festival' to determine whether or not their self-description should be included in the review. We could have chosen another approach and reviewed all health PE evaluations to identify those which met the definition; however, that much wider review was not possible within the resource constraints of this project. It would, though, be an interesting approach in a future evidence synthesis for the reasons you have outlined. We have added to our Limitations (p. 31): "Limitations related to the resources available for the review include not searching the grey literature, using only adult data, restricting the review to English language study reports only, and requiring that authors self-identified their PE as a festival." There is always some degree of subjectivity involved in applying inclusion and exclusion criteria; however, we attempted to minimise this by scoping and considering the literature as a first step, developing precise inclusion/exclusion criteria based on established definitions of PE and 'festival', and considering complex cases as a team to determine the application of the criteria. We are confident that the included studies meet the inclusion criteria.

Thank you for your very helpful comments.
Submitted filename: Response to Reviewers.docx

4 Apr 2022
PONE-D-21-31416R1
How are public engagement health festivals evaluated? A systematic review with narrative synthesis

Dear Dr. Selman,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

Kind regards,
Professor Benjamin Tan, BNSc MMed PhD RN
Academic Editor, PLOS ONE

Additional Editor Comments: The reviewers' comments have been well addressed in the revised manuscript. The academic editor is satisfied with your revision. A decision has therefore been made to accept your paper for publication.

Reviewers' comments:

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here. Reviewer #1: All comments have been addressed. Reviewer #2: All comments have been addressed.

2. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes. Reviewer #2: Yes.

3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes. Reviewer #2: N/A.

4. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: Yes. Reviewer #2: Yes.

5. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: No. Reviewer #2: Yes.

6. Review Comments to the Author. Reviewer #1: (No Response). Reviewer #2: (No Response).

7. Do you want your identity to be public for this peer review? Reviewer #1: No. Reviewer #2: Yes: Helen Featherstone, PhD.

12 Aug 2022
PONE-D-21-31416R1
How are public engagement health festivals evaluated? A systematic review with narrative synthesis

Dear Dr. Selman:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor Benjamin Tan
Academic Editor
PLOS ONE
References: 14 in total

1.  The impact of a national mental health arts and film festival on stigma and recovery.

Authors:  N Quinn; A Shulman; L Knifton; P Byrne
Journal:  Acta Psychiatr Scand       Date:  2011-01       Impact factor: 6.392

2.  Defining issue-based publics for public engagement: climate change as a case study.

Authors:  Helen Featherstone; Emma Weitkamp; Katy Ling; Frank Burnet
Journal:  Public Underst Sci       Date:  2009-03

3.  Addressing epistemologic and practical issues in multimethod research: a procedure for conceptual triangulation. (Review)

Authors:  R L Foster
Journal:  ANS Adv Nurs Sci       Date:  1997-12       Impact factor: 1.824

4.  Fitting the message to the location: engaging adults with antimicrobial resistance in a World War 2 air raid shelter.

Authors:  J Verran; C Haigh; J Brooks; J A Butler; J Redfern
Journal:  J Appl Microbiol       Date:  2018-07-23       Impact factor: 3.772

5.  Why people attend science festivals: Interests, motivations and self-reported benefits of public engagement with research.

Authors:  Eric Jensen; Nicol Buckley
Journal:  Public Underst Sci       Date:  2014-07

6.  The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration.

Authors:  Alessandro Liberati; Douglas G Altman; Jennifer Tetzlaff; Cynthia Mulrow; Peter C Gøtzsche; John P A Ioannidis; Mike Clarke; P J Devereaux; Jos Kleijnen; David Moher
Journal:  BMJ       Date:  2009-07-21

7.  Evaluating the acceptability of a co-produced and co-delivered mental health public engagement festival: Mental Health Matters, Jakarta, Indonesia.

Authors:  Helen Brooks; Irmansyah Irmansyah; Herni Susanti; Bagus Utomo; Benny Prawira; Livia Iskandar; Erminia Colucci; Budi-Anna Keliat; Karen James; Penny Bee; Vicky Bell; Karina Lovell
Journal:  Res Involv Engagem       Date:  2019-09-06

8.  Getting science to the citizen--'food addiction' at the British Science Festival as a case study of interactive public engagement with high profile scientific controversy.

Authors:  Sue P Bird; Michelle Murphy; Tina Bake; Ozgür Albayrak; Julian G Mercer
Journal:  Obes Facts       Date:  2013-03-06       Impact factor: 3.942

9.  B!RTH: a mixed-methods survey of audience members' reflections of a global women's health arts and science programme in England, Ireland, Scotland and Switzerland.

Authors:  Mary McCauley; Joanne Thomas; Cristianne Connor; Nynke van den Broek
Journal:  BMJ Open       Date:  2019-12-30       Impact factor: 2.692

10.  Investigating diversity in European audiences for public engagement with research: Who attends European Researchers' Night in Ireland, the UK and Malta?

Authors:  Aaron Michael Jensen; Eric Allen Jensen; Edward Duca; Joseph Roche
Journal:  PLoS One       Date:  2021-07-14       Impact factor: 3.240

