
Comparing the effect of virtual and in-person instruction on students' performance in a design for additive manufacturing learning activity.

Anastasia M K Schauer1, Kenton B Fillingim2, Anna Pavleszek1, Michael Chen1, Katherine Fu1.   

Abstract

The goal of this work is to compare the outcome of a design for additive manufacturing (DfAM) heuristics lesson conducted in a virtual learning environment to the same lesson conducted in an in-person learning environment. Prior work revealed that receiving DfAM heuristics at different points in the design process impacts the quality and novelty of designs produced afterward, but this work may have been limited by the solely virtual format. In this work, an identical experiment was performed in a face-to-face learning environment. Results indicate that neither learning format presents an advantage over the other when it comes to the quality of designs produced during the intervention. Participants across all experimental groups reported an increase in self-efficacy after the intervention, with improved performance on quiz-type questions. However, the novelty and variety of the designs produced by the in-person experimental groups were significantly lower than those of the virtual experimental groups. In addition to validating the effectiveness of virtual instruction as a teaching method, these results also support the authors' hypothesis that the priming effect is stronger in an in-person classroom than in a virtual classroom.
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. Author self-archiving of the accepted manuscript version of this article is solely governed by the terms of the publishing agreement and applicable law.


Keywords:  Design for additive manufacturing; Engineering education; Virtual learning

Year:  2022        PMID: 36065429      PMCID: PMC9434074          DOI: 10.1007/s00163-022-00399-8

Source DB:  PubMed          Journal:  Res Eng Des        ISSN: 0934-9839            Impact factor:   2.964


Introduction

Although fully online degree programs have gained popularity in the twenty-first century, virtual learning was not widespread until 2020, when the COVID-19 pandemic suddenly forced students and educators out of the physical classroom and into the virtual one. The return to in-person learning over the following semesters has presented a unique opportunity for researchers to make direct comparisons of student performance in different learning environments. This paper compares undergraduate students’ learning in a sophomore-level mechanical design course by replicating a study that was previously performed virtually (Schauer, Fillingim and Fu 2022). The goal of this paper is to answer the following research question: How is students’ learning impacted when design for additive manufacturing heuristics are presented in a virtual environment compared to an in-person environment? To answer this question, engineering students received a lesson on DfAM heuristics, completed a design problem, and completed pre- and post-assessments containing quiz-type questions and self-efficacy questions about their DfAM abilities. The quality and novelty of their design problem solutions, as well as their performance on the quiz-type questions, were analyzed to quantify any differences that exist in learning between the groups. The self-efficacy data were used to evaluate students’ perceptions of their DfAM abilities before and after the heuristics lesson.

Background

Additive manufacturing, colloquially referred to as “3D printing,” is a relatively new manufacturing process that requires a shift in designers’ mindsets. Researchers have compiled lists of design for additive manufacturing (DfAM) heuristics, or rules of thumb, from expert knowledge, existing products, and public 3D printing databases (Blösch-Paidosh and Shea 2017; Adam and Zimmer 2014; Urbanic and Hedrick 2016). DfAM heuristics have been tested experimentally as a training tool to convey DfAM knowledge to designers (Prabhu et al. 2018a; b; Prabhu et al. 2020). Schauer et al. (2022) built upon existing work on DfAM heuristics by exploring whether receiving heuristics at different points in the design process impacts the novelty and quality of designs produced. Although students reported feeling more confident in their DfAM abilities, analysis of the designs they produced did not indicate that the groups with DfAM knowledge created higher-quality designs than groups without DfAM knowledge, regardless of design process timing. This contradicted the hypotheses; the authors suggested that the fully virtual study format was the cause, as it may have invited multitasking behaviors (Lepp et al. 2019). Students who have a tendency to multitask are more likely to multitask in online courses than in face-to-face classroom settings (Srivastava et al. 2016; Lepp et al. 2019). Tendencies toward multitasking are associated with lower GPAs in both in-person and virtual courses (Alghamdi et al. 2020). Conversely, students who are intrinsically motivated, potentially indicating a lower tendency to multitask, are more successful in asynchronous online courses, contributing more frequent and higher-quality responses to discussions (Lee 2013). The effectiveness of online learning can vary depending on the field.
For example, online learning is especially beneficial when it affords opportunities that would not be possible in a face-to-face classroom environment, such as allowing foreign language students to interact with native speakers (Allen et al. 2004). Active student engagement (Prince et al. 2020; Meade and Parthasarathy 2020) and increased access to online course materials (DeNeui and Dodge 2006) also contribute to better performance by students. Reich (2020) presents the concept of an “online penalty” that disproportionately impacts underprivileged students, while students who are affluent and already highly educated benefit disproportionately from online learning (Reich and Ito 2017). Students who are already successful in a field benefit more from online learning compared to their face-to-face counterparts (Heppen et al. 2012), whereas students who are disadvantaged or struggling with a topic are negatively affected by online learning (Heppen et al. 2017). Additionally, teachers may encounter difficulties teaching the online version of a class, which can interfere with the success of active interaction and material delivery (Sol et al. 2021). Because the study by Schauer et al. (2022) was conducted fully virtually, this paper will utilize the same DfAM intervention in a face-to-face format and compare the results of in-person and virtual learning, as discussed in the Methodology section. In the initial study, priming was a significant factor impacting the novelty of designs developed, and it is expected to remain relevant in this study. Priming in this context refers to the effect in which participants who receive information that is similar to a design problem tend to fixate on that information and produce less novel designs than participants who are not given priming information (Tseng et al. 2008). Literature has found that priming is present in virtual environments (Peña et al. 2009; Lu and Davis 2018), and that the effects of priming in the real world transfer over to performance in a virtual environment (Eskinazi and Giannopulu 2021). Virtual reality simulations are a popular setting for priming-related research (Bhagwatwar et al. 2013; Qu and Brinkman 2013), bringing into question the effects introduced by the less-immersive virtual classroom. Hypotheses based upon this background information will be introduced after the experimental groups have been described in the following section.

Methodology

The participants recruited for this study were undergraduate students from seven class sections of an introductory mechanical engineering design course at the Georgia Institute of Technology in Atlanta, Georgia, USA. In a 90-min intervention during their regular class period, all students received a lecture covering DfAM heuristics and participated in two design activities. This intervention was performed virtually in Fall 2020 and in person in Summer 2021. In the virtual iteration of the study, the researcher lectured over Microsoft Teams and presented the heuristic slides using the screen share feature. Design sketches were submitted virtually by participants, either by scanning and uploading a hand sketch or by uploading a virtual drawing from a touchscreen device. In the face-to-face iteration of the study, participants sketched designs on paper and handed the physical copy to the researcher. The researcher lectured at the front of a classroom with heuristic slides projected on a large screen. After the researcher introduced the study and obtained consent, the study followed the format of that conducted by Schauer et al. (2022). The researcher used a script to navigate participants through the remainder of the study. First, all participants were given ten minutes to take an online pre-assessment containing self-efficacy questions testing students’ perceived capacity to perform DfAM-related tasks, as well as quiz-type questions covering key DfAM topics. After these initial steps, the procedure varied slightly for the different experimental groups, as shown in Fig. 1. The Heuristics-First group received a 35-min heuristics lesson, completed a 10-min design activity, took a 5-min break, and completed a 10-min redesign activity. The Heuristics-Between group received the heuristics lesson between the first design activity and the break. The Control group received the heuristics lesson after completing the design and redesign activities.
Fig. 1

Timeline of study (Schauer et al. 2022)

After completing these tasks, all experimental groups were given 10 min to take the post-assessment, which was identical to the pre-assessment. Finally, they were given one week to model their redesign sketch in a CAD program and upload it into Cura, a 3D printing software (Ultimaker Cura https://ultimaker.com/software/ultimaker-cura). The students submitted a screenshot of the virtual print bed displaying the estimated print time, filament usage, and support material use. The DfAM heuristics presentation comprised four sections: (1) an overview of additive manufacturing, (2) a comparison of traditional and AM methods, (3) guidance on when to utilize 3D printing for prototyping, and (4) a set of heuristics making up the majority of the presentation slides. Heuristics were selected from various larger sets of DfAM rules (Blösch-Paidosh and Shea 2017; Fillingim et al. 2020), as well as existing classroom curricular materials (Kranz et al. 2015). The slides presented one heuristic per slide, shown in Fig. 2, and made use of written explanation as well as figures and photos.
Fig. 2

Sample heuristic slide

The objective of the design problem was to design a 3D-printable soap dish, based on the study by Fillingim et al. (2020), in which participants applied support-structure-related heuristics by redesigning a 3D-printed soap dish. The soap dish was simple enough for students to sketch and model within the timeframe, even for students with no CAD experience. The precise wording of the task was: Design a soap dish to hold a bar of soap in your shower. The dish should allow water to drain away from the bar of soap. The soap measures 2” by 3”, with a height of 0.5”. The redesign problem was stated as follows: Improve your soap dish design from the first activity. As before, the soap dish should be designed to hold a bar of soap in your shower. The dish should allow water to drain away from the bar of soap. The soap measures 2” by 3”, with a height of 0.5”.

Participant demographics

In total, 122 students signed the consent form and agreed to participate in the study. Participants were given extra credit in the course for participating in the study, and students who did not give consent to participate in the study were offered an alternative opportunity for extra credit. Table 1 shows the number of participants in each experimental condition. The virtual Heuristics-Between group was composed of two combined class sections, while all other experimental groups were made of one class section each.
Table 1

Experimental group populations

           Heuristics-First   Heuristics-Between   Control
Virtual           20                  31              17
In-person         17                  19              18
Participants were given the option to fill out a demographic survey before or after the study. The study consisted of 122 student participants, although four of them did not choose to fill out the demographic survey. Of the 118 students who completed the survey, 26 were women, 89 were men, two identified as non-binary or other, and one did not provide their gender. One participant chose not to provide their race, while 26 identified as Asian, Native Hawaiian, or Other Pacific Islander, 66 identified as white, 10 identified as Black or African-American, 6 identified as Hispanic or Latino, and 9 identified as multiracial. Although 2 students were pursuing a business major, 116 of them were pursuing a degree in mechanical engineering. The participants were in varying stages of degree completion: 1 was in their first year, 36 were in their second year, 57 in their third year, 19 in their fourth year, and 5 participants were in their fifth year of undergraduate studies. The demographic survey also contained questions to gauge participants’ current level of AM experience. The in-person group reported significantly less 3D printing experience than the virtual group: 25 out of 54 members of the in-person group reported having no experience using 3D printers, while 18 out of 64 members of the virtual group reported the same (χ2 = 4.175, p = 0.041).
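The reported chi-square result can be reproduced directly from these counts. A minimal sketch using SciPy, assuming a 2 × 2 Pearson test without continuity correction (which matches the reported statistic):

```python
# Chi-square test of independence on prior 3D-printing experience.
# Rows = learning format; columns = [no experience, some experience].
from scipy.stats import chi2_contingency

counts = [
    [25, 54 - 25],  # in-person: 25 of 54 reported no experience
    [18, 64 - 18],  # virtual: 18 of 64 reported no experience
]

# correction=False yields the uncorrected Pearson statistic.
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(round(chi2, 3), round(p, 3))  # → 4.175 0.041
```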

Design assessment

Two researchers utilized the coding schemes refined by Schauer et al. (2022) for evaluating the quality and novelty of the design solutions. The five components of the Quality score were Functionality, Print Strength, Support Material, Interfacing Items, and Ease of Assembly, with Functionality being given a weight of 50% toward the overall quality score and the other four categories weighted evenly. Designs were assigned a positive score of + 1, a neutral score of 0, or a negative score of − 1 for each criterion, then overall quality scores were normalized to range between 0 and 1. Novelty assessment was based on the metric developed by Shah et al. (2003). Five different categories were identified: drainage design, soap-holding method, number of additively manufactured parts, mounting style, and off-the-shelf (OTS) parts required (Schauer et al. 2022). Novelty scores were calculated for each category as a function of how many designs used the same solution for each category; then, the scores from each of the five categories were averaged together to obtain a total novelty score. Two researchers independently examined and rated the quality and novelty of 25% of the data. Inter-rater agreement across the quality criteria resulted in 91.6% agreement and a sufficient Cohen’s Kappa of 0.78, while novelty data ratings resulted in 91.1% agreement, so one researcher coded the remainder of the data for quality and novelty. With the data that were coded for novelty, the variety within each experimental group could also be calculated. Variety indicates how much of the solution space was explored by a group, rather than how unique a solution is compared to all the other solutions generated (Shah et al. 2003). The total number of unique solutions identified during novelty coding was summed. Variety was calculated for each experimental group as the number of those unique solutions that occurred within that group.
A higher variety score indicates that a higher number of unique solutions occurred within that group, while a lower variety score indicates that fewer unique solutions were developed in that group.
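The scoring scheme above can be sketched in code. The sample data and the Shah-style novelty formula (T − C)/T are illustrative assumptions based on the cited metric, not values or formulas taken verbatim from the paper:

```python
from collections import Counter

# Quality: five criteria scored -1/0/+1, Functionality weighted 50%,
# the other four criteria splitting the remaining 50% evenly.
WEIGHTS = {"functionality": 0.5, "print_strength": 0.125,
           "support_material": 0.125, "interfacing_items": 0.125,
           "ease_of_assembly": 0.125}

def quality_score(ratings):
    """ratings: criterion -> -1, 0, or +1; returns a score normalized to [0, 1]."""
    raw = sum(WEIGHTS[c] * r for c, r in ratings.items())  # falls in [-1, 1]
    return (raw + 1) / 2

def novelty_scores(solutions):
    """Shah-style novelty per design within one category: (T - C) / T,
    where T is the total design count and C how many share the solution."""
    total, counts = len(solutions), Counter(solutions)
    return [(total - counts[s]) / total for s in solutions]

def variety(solutions):
    """Variety of a group: the number of unique solutions it produced."""
    return len(set(solutions))

ratings = {"functionality": 1, "print_strength": 0, "support_material": 1,
           "interfacing_items": -1, "ease_of_assembly": 0}
print(quality_score(ratings))     # → 0.75
drainage = ["holes", "holes", "slats", "ramp"]  # hypothetical category codes
print(novelty_scores(drainage))   # → [0.5, 0.5, 0.75, 0.75]
print(variety(drainage))          # → 3
```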

Hypotheses

Overall, the DfAM intervention was expected to have a greater impact on the in-person group than the virtual group. This hypothesis was broken down into sub-hypotheses for ease of analysis. The in-person groups were expected to produce higher-quality designs than the virtual groups due to the increased pressure to avoid multitasking in an in-person classroom environment (Hypothesis 1). While none of the virtual groups experienced an increase in quality between the design and redesign session, the in-person groups were expected to increase the quality of their designs in the redesign phase, especially within the in-person Heuristics-Between group. While all groups were expected to produce less novel designs after exposure to the heuristics, due to the tendency of designers to simplify their designs after exposure to DfAM heuristics (Prabhu et al. 2018a; b), this effect was expected to be amplified in the in-person groups. Due to the increased attention paid to the DfAM heuristics, the in-person groups were expected to be more strongly affected by the priming effect, resulting in lower novelty and variety within these groups (Hypothesis 2).

Results

Self-efficacy assessment

Self-efficacy was evaluated on a five-point scale ranging from “Extremely Uncomfortable” to “Extremely Comfortable,” with scores converted to a numerical 1–5 scale for data analysis purposes. An average score consisting of responses to six self-efficacy questions immediately related to DfAM skills was used for analysis. These six questions measured students’ self-efficacy related to designing a part for 3D printing, determining if a part is a good fit for 3D printing, understanding the types of 3D printing, determining part orientation on a print bed, determining when support structures should be added, and choosing part infill. The Wilcoxon Signed-Rank test was used to test for within-subjects effects on the continuous dependent variable, as the requirements for the test were met through the experimental design (Clark-Carter 1997). This test revealed that the mean self-efficacy scores increased for all experimental groups from the pre-assessment to the post-assessment, indicating that participants felt more confident in their ability to perform DfAM-related tasks after the intervention, as shown in Fig. 3. The mean self-efficacy scores of the virtual groups (mean = 3.346) were significantly higher than the scores of the in-person groups (mean = 2.812) at the pre-assessment (U = 1317.500, z =  − 2.676, p = 0.007), as assessed by the Mann–Whitney U Test. Again, this test was used because the experimental design fulfilled the requirements of a between-subjects experiment with a continuous dependent variable (Clark-Carter 1997). However, there was no significant difference between the groups at the post-assessment, with a mean score of 4.130 compared to 4.019 (U = 1586.500, z =  − 1.294, p = 0.196), due to a significantly larger increase in AM self-efficacy scores across the in-person groups than across the virtual groups, with an average increase of 1.207 compared to 0.784 (U = 2370.500, z = 2.759, p = 0.006).
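Both tests map directly onto standard SciPy calls. A sketch on hypothetical score arrays (not the study's data): the Wilcoxon signed-rank test for the paired pre/post comparison, and the Mann–Whitney U test for the independent virtual vs. in-person comparison.

```python
from scipy.stats import wilcoxon, mannwhitneyu

# Within-subjects: paired pre/post self-efficacy scores (hypothetical).
pre  = [2.5, 3.0, 2.8, 3.2, 2.6, 3.1, 2.9, 3.3]
post = [3.5, 4.1, 4.0, 4.5, 4.0, 4.0, 3.7, 4.0]
stat, p_within = wilcoxon(pre, post)  # Wilcoxon signed-rank test
print(p_within < 0.05)  # → True: every participant improved

# Between-subjects: independent virtual vs in-person scores (hypothetical).
virtual   = [3.2, 3.5, 3.4, 3.6, 3.3]
in_person = [2.7, 2.9, 2.8, 3.0, 2.6]
u, p_between = mannwhitneyu(virtual, in_person, alternative="two-sided")
print(p_between < 0.05)  # → True: the groups differ significantly
```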
Fig. 3

Mean self-efficacy scores pertaining to additive manufacturing tasks. Error bars show ± 1 SE

This trend was supported by a higher increase in self-efficacy in the in-person Heuristics-Between group compared to the virtual Heuristics-Between group (U = 421.500, z = 2.542, p = 0.011), as shown in Fig. 4.
Fig. 4

Mean self-efficacy scores for each experimental group at both phases of the design experiment. Error bars show ± 1 SE

The pre- and post-assessments also contained questions to test the participants’ knowledge of AM concepts. Question #8 covered print orientation in the context of maximizing the chance of a successful print and reducing the likelihood of part breakage under stress. According to McNemar’s test for dichotomous dependent variables in a within-subjects experiment (Clark-Carter 1997), there was a significant increase in the proportion of participants in the virtual groups who answered the question correctly between the pre- and post-assessment (z = 5.786, p = 0.013), although there was no significant increase in the proportion of participants in the in-person groups who answered the question correctly. The test of two proportions for dichotomous dependent variables in a between-subjects experiment (Clark-Carter 1997) revealed that there was no significant difference in the scores of the groups at either the pre-assessment or post-assessment stage. Question #14 of the assessment asked students to fill in a heuristic on the topic of surface finish, requiring recall of the staircase effect that can occur when a surface angle is not 0 or 90 degrees. The test of two proportions again revealed that there was no significant difference in the scores of the groups at either the pre-assessment or post-assessment stage. There was a significant increase in the proportion of participants in the virtual groups (z = 14.815, p < 0.0005) and in-person groups (z = 18.375, p < 0.0005) who answered the question correctly between the pre- and post-assessment.
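As a concrete illustration of McNemar's test for paired dichotomous outcomes (the counts below are hypothetical, not the study's data), the exact form reduces to a binomial test on the discordant pairs:

```python
# Exact McNemar's test for paired pre/post dichotomous answers.
from scipy.stats import binomtest

# Hypothetical paired 2x2 table for one quiz question:
#              post wrong   post right
# pre wrong        20           18      <- b = 18 improved
# pre right         4           30      <- c = 4 regressed
b, c = 18, 4  # only the discordant pairs matter

# Two-sided binomial test of b successes in b + c trials with p = 0.5.
result = binomtest(b, b + c, 0.5)
print(result.pvalue < 0.05)  # → True: a significant pre-to-post change
```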

Design novelty

Figure 5 shows the mean novelty scores across all experimental virtual and in-person groups at both experimental phases. Because the Control group did not have heuristic access at any point in the design sketch phases, their scores were omitted from the aggregate scores for the experimental conditions. At the design phase, the mean novelty score of the virtual group was not significantly different from the mean novelty score of the in-person group. While there were no significant differences between the virtual/in-person counterpart groups, it can be noted in Fig. 6 that the mean novelty score of the in-person Heuristics-First group (mean = 0.350) was significantly lower than the mean novelty scores of the in-person Heuristics-Between (mean = 0.415, U = 236.500, z = 2.401, p = 0.016) and Control groups (mean = 0.423, U = 225.500, z = 2.424, p = 0.015).
Fig. 5

Mean novelty scores for combined experimental groups at both phases of the design experiment. Error bars show ± 1 SE

Fig. 6

Mean novelty scores for each experimental group at both phases of the design experiment. Error bars show ± 1 SE

Although there was again no significant difference between the virtual and in-person groups’ mean novelty scores in the redesign phase, the mean novelty score of the in-person Heuristics-First group (mean = 0.355) was significantly lower than the mean novelty score of the virtual Heuristics-First group (mean = 0.447, U = 93.500, z =  − 2.351, p = 0.018). Both Control groups produced high novelty scores relative to other groups in the same learning format: the in-person Control group had higher mean novelty (mean = 0.458) than the in-person Heuristics-First group (mean = 0.355, U = 232.500, z = 2.649, p = 0.007), while the virtual Control group had higher mean novelty (mean = 0.512) than the virtual Heuristics-Between group (mean = 0.436, U = 338.000, z = 2.087, p = 0.037). Examining each of the novelty subcategories individually revealed that a trend in off-the-shelf component usage supported the trend in novelty at the design phase. As shown in Fig. 7, the virtual group had higher mean novelty in their use of OTS components (mean = 0.419) compared to the in-person group (mean = 0.224, U = 650.000, z =  − 3.108, p = 0.002). However, this gap was no longer significant in the redesign phase. Further inspection of design phase breakdown scores revealed that the virtual Heuristics-Between group had significantly higher mean OTS component novelty (mean = 0.437) than the in-person Heuristics-Between group (mean = 0.242, U = 202.000, z =  − 2.186, p = 0.029).
Fig. 7

Mean OTS component novelty scores for combined experimental groups at both phases of the design experiment. Error bars show ± 1 SE

Variety was calculated from the solution categories identified during novelty coding. As 39 unique solutions were identified between the five novelty categories, variety scores ranged between 5 and 39. Figure 8 shows that while the variety of the virtual groups (25 unique solutions) was higher than the variety of the in-person groups (21 unique solutions) in the design phase, this gap was closed almost completely in the redesign phase.
Fig. 8

Variety scores for combined experimental groups at both phases of the design experiment

Variety was also assessed as the solution space explored by a group across the design and redesign phases together. It can be seen in Fig. 9 that both Heuristics-First groups had low variety compared to the other two groups in the same learning format. Across all three experimental conditions, the in-person groups had lower variety than their virtual counterparts.
Fig. 9

Variety score for each experimental group across both phases of the design experiment


Design quality

Figure 10 shows no significant difference in the performance of the virtual and in-person groups. Neither group had a significant change in quality scores from the design to redesign phase, nor were differences between experimental groups statistically significant.
Fig. 10

Mean weighted quality scores for combined experimental groups at both phases of the design experiment. Error bars show ± 1 SE

While coding data for quality and novelty, it was observed that some participants had chosen not to make any changes to their soap dish design during the redesign session. In the in-person groups, nearly all participants iterated on their designs, even if they simply tweaked a minor detail. Only 3 out of 54 participants in the in-person groups chose not to make changes to their designs, while 18 out of 66 participants in the virtual groups kept their designs the same.

Print settings

Three pieces of data were collected from the participants’ 3D printing follow-up assignment: (1) the amount of filament used to print the soap dish, (2) the amount of time needed to complete the print job, and (3) the percentage of the total print time dedicated to printing support material. For analysis purposes, outliers were removed from the filament and print time data sets. Outliers were defined as data points that were over 1.5 times the interquartile range away from the median of all the data; outliers were identified by examination of a box-and-whisker plot. This was done because it was difficult to determine if extreme data points were genuine outliers or due to errors by the participants in modeling and using the printing software, as some of them had little to no experience in doing so. Within the virtual group, the Heuristics-Between group (mean = 3.54%) used significantly less support material than the Heuristics-First group (mean = 7.37%, U = 170, z = − 2.238, p = 0.025). In the in-person group, the Heuristics-Between group (mean = 2.11%) used less support material than the Control group (mean = 8.13%, U = 209, z = 2.566, p = 0.025). Other variation in support material usage between groups was not statistically significant, as shown in Fig. 11. Additionally, there was no significant difference in the groups’ usage of print time or print filament.
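The outlier rule described above (dropping points more than 1.5 × IQR from the median, rather than the more common Tukey fences at the quartiles) can be sketched as follows; the print-time values are hypothetical:

```python
# Remove points lying more than 1.5 * IQR from the median, as described.
import statistics

def remove_outliers(data):
    q1, median, q3 = statistics.quantiles(data, n=4)  # quartiles
    iqr = q3 - q1
    return [x for x in data if abs(x - median) <= 1.5 * iqr]

print_times = [42, 55, 48, 51, 60, 45, 190, 52]  # minutes; 190 is extreme
print(remove_outliers(print_times))  # → [42, 55, 48, 51, 60, 45, 52]
```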
Fig. 11

Mean amount of support material used by experimental groups. Percentages indicate the percentage of printing time spent printing support material. Error bars show ± 1 SE


Discussion

The increase in self-efficacy scores across all groups indicated that students felt more confident in their ability to perform DfAM-related tasks after undergoing the intervention, supporting the use of heuristics as an education tool. This finding was reinforced by an improvement in participants’ performance on the objective quiz-type questions in the assessments. Although the virtual groups reported higher self-efficacy scores than the in-person groups during the pre-assessment, this gap was closed at the post-assessment as the in-person groups reported a significantly higher increase in self-efficacy scores. This supports the main hypothesis, which predicted that the DfAM intervention would have a greater impact on students in a face-to-face learning environment. The initial higher self-efficacy of the virtual groups may be attributable to the higher level of 3D-printing-related experience reported by the virtual groups on the demographic survey. The results of the study supported Hypothesis 2, which predicted that a stronger in-person priming effect would result in lower novelty and variety of designs in the in-person groups. The low novelty of the in-person Heuristics-First group compared to its virtual counterpart suggests that the priming effect may be stronger and more likely to cause design fixation in a face-to-face learning environment. Although the influence of priming has been established in virtual environments (Peña et al. 2009; Lu and Davis 2018), priming relies on subconscious recall of implicit memories (Schacter and Buckner 1998). In order for memories to form, participants must have paid attention to the information given. Although some students were observed using cell phones or other devices during the face-to-face heuristics lecture, students are more likely to perform multitasking behaviors during a virtual class than a face-to-face class (Srivastava et al. 2016; Lepp et al. 2019), increasing the likelihood that some participants in the virtual group failed to form and store memories of the heuristic information. Students who had multitasked during the lecture would thus be less likely to display evidence of the priming effect. The existence of the priming effect in this experiment is strengthened by the high novelty of both Control groups at the redesign phase compared to the other groups in the same learning format, as they were the only groups that had not received priming information. Analysis of the OTS Component Usage novelty subcategory revealed that students in the in-person group made innovations in the way their designs interacted with off-the-shelf components in the redesign phase while the virtual group did not, showing that the incubation effect (Ritter and Dijksterhuis 2014; Sio and Ormerod 2009; Yang et al. 2012) may have been at play during the break between design sessions, closing the gap caused by initial priming in the design phase. The theory of the stronger in-person priming effect is corroborated by the variety scores: each in-person experimental group had lower variety than its virtual counterpart, as shown in Fig. 9. However, while the priming effect can clearly be seen in the low variety performance of both Heuristics-First groups, it is less clear why the Control groups follow this pattern, having received no priming information. The difference in Control groups may be attributable to environmental factors: factors as simple as the height of a ceiling in a room (Meyers-Levy and Zhu 2007) or levels of ambient light (Steidle and Werth 2013) can impact creativity levels. While all in-person participants experienced the same environmental conditions, these variables were uncontrollable for the virtual groups. Figure 10 shows no significant difference in the quality scores of the virtual and in-person groups, as well as a lack of change in quality scores between the design and redesign phases.
This result was unexpected and contradicted Hypothesis 1, which predicted an overall increase in quality across the groups. It is possible that because participants were overly familiar with the design problem, there was a lack of significant diversity in responses and little room for improvement in the redesign session. In the previous section, potential environmental factors that may have had an impact on creativity were discussed. Another uncontrollable but relevant variable is the mental health of the students, especially during the pandemic. The COVID-19 pandemic and associated increased screen time corresponds to increased depression and stress in adults (Madhav et al. 2017; Savage et al. 2020; Browning et al. 2021), which in turn are correlated with lower-quality designs (Paige et al. 2021). Despite these potential factors, there were no significant differences in quality scores when each virtual group was compared with its in-person counterpart, including the Control groups. However, the virtual group had a higher proportion of students who chose not to iterate upon their design during the redesign process, potentially indicating that the virtual environment caused the students to exert less effort toward the activity. There was no significant difference between the virtual and in-person groups in their use of support material, filament, or print time. This lack of variability in solutions was potentially due to the fact that designs were constrained to the same build envelope, as participants had been given the dimensions of the print bed and the bar of soap.

Limitations

The researchers' lack of control over the participants' environment in the virtual condition, discussed in the previous section, was an unavoidable source of error given pandemic-imposed limitations, but it could have been accounted for by including environment-related questions in the surveys. Deviations from the hypotheses may have been attributable to environmental factors such as these. In addition, the design problem was relatively simple, with well-known existing solutions; in conjunction with the constrained build envelope and the low sample size, this may have caused the general lack of variability in solutions. It is also possible that participants in the virtual groups, despite being instructed not to, looked up existing solutions online during the break in the experiment. An additional limitation stemmed from the setup of the experimental groups. Due to class size constraints, the virtual Heuristics-Between group comprised two class sections, making that experimental group larger than the others and potentially affecting the power of the effects. Prior 3D printing experience was also not evenly distributed between the groups, with the in-person groups having less experience than the virtual groups. Finally, the design of the experiment was vulnerable to serial-position effects such as primacy or recency bias (Murdock Jr 1962) at multiple points. Post-assessment performance may have been affected by students better recalling heuristics presented at the beginning or end of the lecture, which may also have influenced how participants applied the various heuristics to their sketches; additional uncertainty comes from prior work showing that the order in which different types of DfAM knowledge are presented can impact creativity (Prabhu et al. 2021).
Although the results of this study indicate that students absorb and use DfAM concepts equally well in virtual and in-person environments, this conclusion is based solely on analysis of the designs the students produced. Further investigation of more subjective measures of the classroom experience, such as students' mental health and enjoyment of classroom activities, is recommended.

Conclusion

The work presented in this paper contributes to the literature on the use of design heuristics for additive manufacturing as a teaching tool. The main research question was: How is students' learning impacted when design for additive manufacturing heuristics are presented in a virtual environment compared to an in-person environment? Quantitative analysis comparing virtual and in-person heuristics use showed that students receiving virtual instruction created designs of quality equivalent to those of their in-person peers. However, due to a stronger priming effect in the in-person environment, students receiving in-person instruction were actually hindered in developing creative and novel ideas. These results reinforce the finding from Schauer et al. (2022) that DfAM heuristics in lecture form can be a valuable education tool, but caution must be taken in their application to avoid priming and design fixation, especially in an in-person environment. These findings are especially relevant given the recent growth of hybrid and fully virtual learning.
References (6 in total)

1. Sio UN, Ormerod TC. Does incubation enhance problem solving? A meta-analytic review. Psychol Bull. 2009.
2. Schacter DL, Buckner RL. Priming and the brain. Neuron. 1998.
3. Ritter SM, Dijksterhuis A. Creativity: the unconscious foundations of the incubation period. Front Hum Neurosci. 2014.
4. Madhav KC, Sherchand SP, Sherchan S. Association between screen time and depression among US adults. Prev Med Rep. 2017.
5. Eskinazi M, Giannopulu I. Continuity in intuition and insight: from real to naturalistic virtual environment. Sci Rep. 2021.
6. Browning MHEM, Larson LR, Sharaievska I, Rigolon A, McAnirlin O, Mullenbach L, Cloutier S, Vu TM, Thomsen J, Reigner N, Metcalf EC, D'Antonio A, Helbich M, Bratman GN, Olvera Alvarez H. Psychological impacts from COVID-19 among university students: risk factors across seven states in the United States. PLoS One. 2021.