
Quantifying creativity: can measures span the spectrum?

Dean Keith Simonton

Abstract

Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they also differ regarding whether they tap into "little-c" versus "Big-C" creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum.

Keywords:  creativity; measurement; performance; person; process; product; self-report

Year:  2012        PMID: 22577309      PMCID: PMC3341645     

Source DB:  PubMed          Journal:  Dialogues Clin Neurosci        ISSN: 1294-8322            Impact factor:   5.986


Introduction

Creativity is a very important psychological phenomenon that has attracted increased research interest in the cognitive neurosciences.[1-3] At the same time, creativity is an extremely complex phenomenon that renders such research rather more difficult than studying a more basic cognitive process, such as attention or memory.[4] Because of these difficulties, the empirical research does not always generate consistent results.[5] In part, these inconsistencies can be attributed to the immense variety of creativity measures. [6-7] There are far more ways of measuring creativity than there are of measuring general intelligence, for example, and these diverse methods do not even have to agree with each other.[8] Furthermore, these measures are often tapping into rather distinct cognitive events. The goal of this brief report is to survey the alternative routes to assessing creativity and to suggest an integrative approach to such assessment. However, before we can do so, it is first necessary to define what creativity means. It would be most unwise to start measuring something before we know what we are trying to measure.

Defining creativity

Unfortunately, researchers have been somewhat too creative in their definitions, with over a dozen possibilities being suggested in the literature. Most investigators seem to favor a two-criterion definition: an idea or response is said to be creative if it is (i) novel or original; and (ii) useful, adaptive, or functional.[9-10] The drawback to this definition is that it is perfectly feasible for an idea to be novel and useful without necessarily being surprising. Algorithmic solutions are of this nature. Because the cognitive processes supporting algorithmic problem solving are quite unlikely to be similar to the processes supporting more heuristic problem solving, it is advisable to add a third criterion, namely, surprising[11] or “nonobvious” as determined by the standards established by the United States Patent Office.[12] This three-criterion definition has several repercussions, including the increased necessity of engaging in blind-variation and selective-retention (BVSR) processes.[13] Yet, from the standpoint of this brief note, the main implication is that creativity must be separated from both general intelligence and domain-specific expertise, neither of which can produce anything surprising, because each is dedicated to converging on the single most correct response. Convergent thinking seldom induces surprise. Indeed, the convergent thinking witnessed in the application of general intelligence and domain-specific expertise is designed for different kinds of problems than the divergent thinking and other processes seen in creativity. A nice illustration is the distinction between reasonable problems that “can be reasoned out step by step to home in on the solutions” (eg, anagrams and crossword puzzles) and unreasonable problems that “do not lend themselves to step-by-step thinking. One has to sneak up on them” (eg, all true insight problems).[14] Because solutions to unreasonable problems involve some problem restructuring (eg, serendipitous changes in problem representation), such solutions tend to involve a Eureka or “aha!” experience, and accordingly involve different cognitive processes.[5]

Measuring creativity

Given the foregoing definition, we then have to figure out the optimal procedures for assessing creativity. It turns out that the options are, if anything, too numerous.[6-7,15] Many researchers attempt to measure the processes presumed to be responsible for the generation of creative ideas, such as divergent thinking (DT)[16-17] and remote associations (RAT).[18] Other researchers concentrate on assessments of the creative person, most often via some personality measure, such as the Creative Personality Scale (CPS) of the Gough Adjective Check List.[19] In addition, because individual differences in creativity strongly correlate with both the openness-to-experience factor in the Five-Factor Model[20-21] and the psychoticism scale of the Eysenck Personality Inventory,[22-23] these latter measures can be used as indirect predictors.[24] Taking a different tack, other investigators focus on the creative product, often using the Consensual Assessment Technique (CAT).[25] Although distinct, these three approaches do share some conceptual overlap. For example, scores on the CPS correlate positively with divergent thinking.[26] And both openness to experience and psychoticism correlate positively with defocused attention or reduced latent inhibition, which has been identified as an important process in creative thought.[23,27-30] Moreover, the creativity of persons can be gauged by the number of creative products or actions they have generated, identified through either self-reports or bibliographic sources.[26,31] Because creative productivity is strongly associated with achieved eminence, some researchers use expert evaluations or conspicuous awards as indicators of creativity.[32-34] Such historiometric measures have been shown to have some highly desirable features, including high reliability and face validity.[35-37] Implicit in the above inventory of measures is a subtle shift in the magnitude of the creativity assessed. At the lower level is everyday, psychological, or “little-c” creativity, whereas at the higher level is eminent, historical, or “Big-C” Creativity.[11,38] On the one hand, lower-level creativity is most often gauged using a process measure, such as the unusual uses test,[16] or an everyday product measure, such as the CAT.[25] On the other hand, higher-level creativity is most often measured using an eminence or productivity indicator.[35-36,39] Another important difference is that little-c creativity is usually assessed using generic instruments that are assumed to be applicable to any domain (eg, the RAT), whereas Big-C Creativity is most often quantified via measures that are inherently domain-specific. Thus, the creative output of a scientist might be recorded by domain-specific publications and citations as well as award recognition.[32,34]

Integrating assessment

The key question is whether it is possible to create a comprehensive measurement tool that does for creativity what “IQ tests” do for intelligence. That is, can we devise a scale that taps creativity from almost trivial problem solving to the accomplishments of creative genius and everything in between, without a single hiatus? Most desirably, this measure should be applicable to every major form of creativity rather than being tied down to a particular domain. At present, no such instrument exists, but I would like to suggest the most promising starting point for future developments: the Creative Achievement Questionnaire, or CAQ.[26] Although the CAQ concentrates on actual achievements, these achievements are scaled from an effective zero point (none whatsoever; the person claims no talent or training) through various degrees of little-c creativity (eg, having written a poem or short story), and ending with domain-specific accomplishments of a very high order (having received a national prize). The CAQ also assesses creativity in several distinct domains, including scientific inquiry, creative writing, humor, theater and film, visual arts (painting, sculpture), architectural design, music, dance, inventions, and culinary arts. Finally, scores on this measure positively correlate with such person measures as openness and the CPS, and with such process measures as divergent thinking (including its components fluency, originality, and flexibility), and thereby tap into more than just product assessment. The CAQ has already joined the inventory of creativity measures used in the cognitive neurosciences.[2,40] Even so, it would appear that the next step should be an integrative battery of tests that combines the product-oriented CAQ with both process and person measures that would better anchor the lower end of the underlying creativity dimension.
In addition, the upper end of the scale can be further refined by introducing measures of broader impact, such as citation measures and domain-specific awards that differentiate the best from the very best.[41-42] Within the sciences, a Nobel Laureate dwells at a more elite level than elevation to the National Academy of Sciences.[43-44] Precisely merging these diverse assessments at opposite ends of the CAQ would not be an easy task, to be sure. Interpolating such heterogeneous measures into a single indicator would require extremely careful calibration based on large samples of research participants who vary greatly in creativity. Complicating matters even further, the calibration of the upper end of the scale would have to be executed separately for each domain and even subdomain. The eminence of physicists cannot be scaled in exactly the same way as the eminence of psychologists. A closely related complication concerns the transition from subjective assessments of creative achievement in the middle portion of the scale to objective assessments of creative achievement at the upper end. On the one hand, the CAQ asks respondents to self-report their products and awards, a clearly subjective judgment that might differ from one respondent to another. On the other hand, productivity, eminence, and similar historiometric measures of achievement depend on an objective consensus established at the disciplinary or societal level. It may require some additional empirical research, again largely domain-specific, to learn how the former method can be made to dovetail properly with the latter.
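
The domain-specific calibration problem described above can be made concrete with a small sketch. The Python fragment below (all data, scores, and names are invented for illustration; this is not the CAQ's actual scoring procedure) shows one textbook approach to placing heterogeneous measures on a common scale: standardize scores within each domain before pooling them, so that a high achiever in one domain occupies a comparable position to a high achiever in another even when the raw metrics differ by orders of magnitude.

```python
from statistics import mean, stdev

def standardize(scores):
    """Convert raw scores to z-scores (mean 0, sample SD 1)."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def merge_domains(domain_scores):
    """Standardize each domain separately, then pool onto one scale."""
    pooled = {}
    for domain, people in domain_scores.items():
        names, raw = zip(*people.items())
        for name, z in zip(names, standardize(list(raw))):
            pooled[name] = z
    return pooled

# Toy data: raw achievement indicators on very different scales
# (hypothetical; e.g., a citation-based index vs a checklist count).
physics = {"P1": 12, "P2": 40, "P3": 95}
poetry = {"W1": 1, "W2": 3, "W3": 8}

common = merge_domains({"physics": physics, "poetry": poetry})
# P3 and W3 now occupy comparable positions on the common scale,
# even though their raw scores differed by an order of magnitude.
```

Of course, this sketch assumes within-domain samples drawn from comparable populations; as the text notes, the real difficulty lies in collecting large, domain-stratified samples that make such a standardization defensible, and in reconciling self-reported with consensus-based inputs.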

Conclusion

These difficulties aside, some kind of psychometric integration of creativity measures is required if we are ever going to be able to differentiate Einstein's brain from the brains of his less distinguished colleagues, as well as separate the brain of a competent but noneminent scientist from that of someone who is struggling to pass a university science course. If we can gauge intelligence across its full population variance, we must be able to do the same for creativity. Besides IQ, we would then possess something that might be styled CQ. Until we obtain a proper CQ instrument, our neuroscientific understanding of creativity will remain compromised.
References  (6 in total)

Review 1.  Creativity.

Authors:  Mark A Runco
Journal:  Annu Rev Psychol       Date:  2004       Impact factor: 24.137

Review 2.  A review of EEG, ERP, and neuroimaging studies of creativity and insight.

Authors:  Arne Dietrich; Riam Kanso
Journal:  Psychol Bull       Date:  2010-09       Impact factor: 17.737

Review 3.  Neuroimaging creativity: a psychometric view.

Authors:  Rosalind Arden; Robert S Chavez; Rachael Grazioplene; Rex E Jung
Journal:  Behav Brain Res       Date:  2010-05-19       Impact factor: 3.332

4.  A meta-analysis of personality in scientific and artistic creativity.

Authors:  G J Feist
Journal:  Pers Soc Psychol Rev       Date:  1998

5.  The associative basis of the creative process.

Authors:  S A Mednick
Journal:  Psychol Rev       Date:  1962-05       Impact factor: 8.934

6.  Neuroanatomy of creativity.

Authors:  Rex E Jung; Judith M Segall; H Jeremy Bockholt; Ranee A Flores; Shirley M Smith; Robert S Chavez; Richard J Haier
Journal:  Hum Brain Mapp       Date:  2010-03       Impact factor: 5.038

