Running head: CONTINUOUS IMPROVEMENT
Applying a Continuous Improvement Model to Creativity:
A Method to Increase the Quantity and Quality of Brainstorming Output
Doug Hall and Chris Stormann
Richard Saunders International
Jonathan A. Plucker
Indiana University
Jeffrey Stamp
Richard Saunders International
November 1, 1999
Address correspondence to the second author at:
Richard Saunders International
3849 Edwards Road
Newtown, OH 45244
513/271-9911x159
chris@EurekaRanch.com
Applying a Continuous Improvement Model to Creativity:
A Method to Increase the Quantity and Quality of Brainstorming Output
Abstract
The facilitated brainstorming session includes a collection of creativity techniques
designed to increase the quantity and quality of ideas produced by a group of participants. First,
this article introduces a measure that estimates the effectiveness of an individual creativity
technique. Second, this article describes how the evaluative measure—inspired by the concept of
quality control charting—can be integrated into the brainstorming session and used within a larger
framework for continuous improvement of the brainstorming process. Results from two analyses
using client survey data with an exceptionally high response rate (92%), captured in a real-world
creative environment, find a statistically significant relationship between the ratings of a
technique's effectiveness and the quantity of ideas (r = .30, p < .022) and the number of refined
new product and service concepts (r = .58, p < .014) originating from the collective group.
These results point to the ability to do two things. First, one can monitor the effectiveness of a
brainstorming session and make needed changes while the ideas are being created. Second, the
results point to the ability to improve future brainstorming sessions as substantive knowledge of
the creativity techniques and methods accumulates within a larger framework of quality control
and experimentation.
One of the commonplace approaches to the creation of new ideas for business application
is the so-called brainstorming session (see Osborn, 1953; Parnes, 1999). First developed by
Alex Osborn and later adopted in the 1960s to inspire broader thinking in developing effective
advertising campaigns, the basic philosophy of the brainstorming session as an out-of-the-box
creative method remains relatively unchanged over the past 30 years. During that same period,
however, hundreds of brainstorming exercises have been created and promoted as techniques to
improve the output of idea generation sessions, namely the quantity and quality of new and
different ideas (Nickerson, 1999; Rickards, 1999; Van Gundy, 1981). With relatively little
quantitative comparison of brainstorming techniques (Runco, 1991), the current tendency in the
creativity industry is to search for more and expanded techniques rather than to seek an
understanding of the relationship between the exercise and the quantity and quality of ideas
produced. Several
researchers have focused on whether brainstorming techniques work at all (e.g., Weisberg,
1993), but little attention has been paid to the conditions under which brainstorming
effectiveness may be maximized. A notable exception is the examination of group versus
individual contributions to brainstorming sessions (e.g., Diehl & Stroebe, 1986; Finke, Ward, &
Smith, 1992), but even this line of research is rather summative in its evaluations of idea
generation activities' success or lack thereof. Thus, a significant opportunity exists for those
providing creative services such as brainstorming to find an appropriate set of measures and a
process that can be put in place to monitor and increase creative output.
The survey method can be used to achieve a better understanding of systems and
processes (Chang & Kelley, 1994). When survey methods are instituted as an integral part of the
brainstorming process, and relative standards of output and quality are established through
repeated use, manipulation, and the setting of goals against norms, the system becomes a form of
quality control and internal benchmarking. This article discusses how the survey method can be
used to capture information about process effectiveness for companies that perform
brainstorming.
Brainstorming and a World of Uncertainty
Improving the quality and quantity of ideas originating from a brainstorming session is
important because doing so increases the likelihood that an idea will survive development and
testing to become a marketplace success. Indeed, a fair degree of uncertainty surrounds
brainstorming, and data suggest that it takes 3,000 raw ideas to come up with just one
commercially viable success (Stevens & Burley, 1997). The sheer size of this ratio
demonstrates the need for more efficient methods of brainstorming.
In a similar vein, project leaders have uncertainties not only about the estimated results of
a brainstorming session but also about its proper objective (e.g., the scope or type of ideas).
That is, in addition to the incertitude surrounding how to get the ideas they need, project
leaders do not always know what kinds of ideas they want. More
troublesome, however, are the instances where direction is purposely withheld by a project
leader because of a fear that overt direction would be limiting at such an early stage in the
creative process. Regardless of the reasons mentioned above, experience teaches us that
communication is more often beneficial than not, and speaking from the point of view of an
ideation provider, it is difficult to fully deliver if clients do not delineate their general objectives.
To the point, without proper communication, the value of brainstorming is diminished because
the ideas originating from a loosely defined project objective are less likely to be pertinent and in
alignment with expectations. Those familiar with projects requiring substantive amounts of
creativity (e.g., architects, advertisers, graphical and interior designers) recognize and accept this
circularity because uncertainty is a necessary part of creativity that can bring about innovation.¹
Arguably, the key to more successful brainstorming sessions is to find a balance between
uncertainty and direction.
Monitoring Effectiveness During Brainstorming
But how does one provide direction while maximizing creative imagination? Several
researchers and theorists have raised concerns about the use of evaluation and assessment
techniques during ideation or expressly discussed the negative role of assessment (e.g., Amabile,
1987, 1998; Baer, 1993, 1994; Davis, 1999). As Williams and Yang (1999) note, "traditional
concepts of organizations that so heavily emphasize control have had the effect of minimizing
employee creativity" (p. 374, emphases in original). Indeed, Osborn (1953) based the creation of
brainstorming on the idea of deferred (i.e., summative) evaluation as opposed to formative
evaluation. All of these concerns about over-evaluating ideation certainly have merit, but these
warnings should not be overgeneralized to conclude that assessment and evaluation, when
properly managed, cannot serve as a positive force during idea generation sessions (Carson &
Carson, 1993; Cramond, 1994; Davis, 1999; Plucker & Runco, 1998, 1999; Schroder, 1994).
For example, Lingle and Schiemann (1996) found that measurement-managed companies
outperform non-measurement-managed companies in areas where uncertainty abounds (see also
Czarnecki, 1998). Specifically, measurement-oriented companies outperformed non-
measurement-managed companies in agreement on strategy, clarity of communications, focus
and alignment efforts, and organizational culture. Though the Lingle and Schiemann (1996)
results do not focus specifically on brainstorming, their data suggest that measurement-managed
¹ However, too little direction means false starts and the familiar "I don't know what it is that
we're looking for but I know this isn't it" phenomenon. Moreover, false starts are particularly
wasteful because rework often means starting over from scratch. That is, the retracing of steps is
avoided in ideation because it will take the ideas to the same place.
brainstorming sessions would result in increased efficiency if the assessment information helped
invest creative resources in a more productive manner. Measurement in this way would help
ideation directly by potentially increasing the quantity and quality of ideas produced from any
one brainstorming exercise. Formative evaluation (i.e., consistent and continual measurement), a
process where measurements are taken at different times throughout the brainstorming session,
would keep participants on target with expectations in a somewhat, and necessarily, chaotic
environment. In sum, the impact of monitored brainstorming would be not only direct through a
greater understanding of the ideation process, but also indirect because real time client
participation in the evaluation process would result in ideas more germane to the client’s
objective.
However, using measurement and statistical methods of quality control to monitor the
effectiveness of creative endeavors involves several challenges for those in service related
industries. First, there is a misconception that only product manufacturers can use statistical
quality control. This belief exists because traditional instruments of measurement (e.g., rulers,
scales, calipers and chemical analyses) and units of measurement (e.g., length, weight, volume,
and parts per count) used in quality control do not apply to the intangible features of a service.
However, consider that the framework of inputs, outputs, and outcomes applies equally to services,
be they creative brainstorming or otherwise. For example, inputs include the raw materials used
in creation and the people brought to bear on some task (e.g., pens, paper, computers, creativity
techniques and the creators). Outputs are the products or items that are created (e.g., drawings,
ideas, concepts and customized services or products). Finally, outcomes are results related to the
output (e.g., profits, sales, returning customers). This perspective is obviously a pragmatic one,
but that is the environment in which new product and service development teams work: Clients
demand quantifiable results.
Second, and related to the first, companies that provide creativity-based services are
overwhelmed with the difficulties of directly measuring their output and development processes.
Indeed, in the creative domain of new product development, there is outright resistance to
measuring output and processes. Common sources of resistance are related to the beliefs
that creativity is an "art," undefinable, and thus not subject to scientific measurement (Khatena,
1982; Plucker & Renzulli, 1998; Plucker & Runco, 1998) and judgments based on traditional
quantitative measures will limit innovation and creativity. Regardless of the validity of these
claims, creativity-for-profit demands that some level of quality be attained even if quality is
defined differently depending on the user and task. More importantly, understanding how
quality is achieved and improving systems for doing so encourages growth and allows for the
flexibility necessary for survival in a changing marketplace. Service providers and companies
with a creative emphasis are not unique in this respect.
Third, service providers have become complacent and rely heavily on a too distant proxy
for quality control. This proxy takes many forms but it is commonly referred to as consumer
satisfaction. A measure of satisfaction certainly has benefit (e.g., it allows for comparisons over
time and against competitors), but in its present form it is not an accurate replacement for quality
control and output related measurement systems. The disadvantage of the satisfaction construct
is twofold. First, satisfaction is generally assessed summatively (i.e., as an outcome after the
service is concluded). This means that by the time the data are collected, it is either too late to
make a difference or, at a minimum, there is an unnecessary increase in cost considering the extra time
and wasted resources that could have been invested earlier or elsewhere. Second, the
relationship between satisfaction and quality is theoretical and controversial. That is, the link is
theoretical because the consumer, unless well versed in the industry, generally does not know
what the product or service could be or how it could be created better.
The purpose of this study is to determine if measurement and a system of quality control
charting can be used to evaluate the creative process (i.e., are formative evaluations of idea
generation processes predictive of the quantity and quality of ideas produced?). If the formative
assessments are not associated with evidence of predictive validity, using them to guide the
brainstorming activities would be pointless. If the assessments are associated with evidence of
predictive validity, a foundation will emerge from which the quality control charting process can
be developed.
METHOD
Description of the Facilitated Brainstorming Session
All sessions used in this research took place at Richard Saunders International’s (RSI’s)
Eureka! Ranch facility. The Eureka! Ranch is a specialized center for creative idea generation
and is designed to accommodate corporate groups in all aspects of creative activity and comfort.
Sessions last for three days in order to take the clients out of their normal corporate
environments, which many authors have suggested may be creativity stifling (Amabile, 1998;
Van Gundy, 1987; Williams & Yang, 1999).
Each brainstorming session at the Eureka! Ranch is preceded by an "immersion" process.
An immersion is a fact-gathering process designed to better understand a client’s products,
services, category, and any additional unique industry circumstances they may face. Immersions
can include on-site visits, videoconferences, reviews of financial and market research reports,
and informal telephone calls with a cross-section of organizational members, suppliers, and
customers. This information is used to develop creativity techniques and for the selection of
stimuli used in the techniques. Stimuli can include anything related or unrelated to the project
objective. For example, related stimuli for the maker of a hot sauce may range from a collection
of hot sauces currently available on the market to a curling iron (i.e., it is also hot). Unrelated
stimuli may be a picture of a laptop computer or a pair of running shoes. Stimuli are found
almost anywhere. Particularly rich sources of stimuli can be found in grocery aisles, magazine
headlines, catalogs, and the various superstores common to most areas.
The human makeup of a typical brainstorming session used in this study consists of 11 to
15 participants brought by the client company. The client team is matched with an equal number
of brainstorming facilitators called "Trained Brains®." These facilitators are a collection of
independent entrepreneurs with a proven record of experience and openness to new ideas that
challenge the status quo.
The clients and facilitators meet on Day One for an eight-hour period with various
breaks. An icebreaker is used at the start of a brainstorming session to set an enjoyable tone,
break down any perceived barriers between the participants, and mitigate fears (see
Amabile, 1998, and Ford, 1999, for a discussion of barriers in corporate settings).
The brainstorming portion of the session normally consists of five to seven creativity
techniques. Participants are randomly assigned to one of four small groups for each creativity
technique. These groups are identified by the color of the couch they are assigned to (e.g.,
green, red, blue, and brown), and randomization yields the benefit of different group
compositions throughout the day. After completing the five to seven creativity techniques during
Day One, the majority of clients and Trained Brains are finished with the session and leave the
facility. Five to seven core members of the client team remain to sort through what generally
amounts to more than 1,000 "seed" ideas originating from the brainstorming session. Seed
ideas are the mere sparks of ideas that can be a name, a product feature, a technology, a package,
an advertising slogan, etc. All seed ideas are reviewed, and those with even a hint of promise are
transferred to small index cards and saved for review. This process is referred to as "Interact®."
The index cards are then spread out on the floor and voted upon by the client team. These
activities complete the first day.
That night, selected seed ideas are transformed into written concepts by RSI staff.
Briefly defined, a concept contains the name of the product, a tag line stating the key benefit of
the product, and a paragraph of copy further describing the concept. Concept copy normally
offers the benefit of the product in greater detail with a supporting reason to believe (e.g.,
demonstration, guarantee, or endorsement), and a statement illustrating how it is new and
different, though in many cases the concept's uniqueness is self-evident. Concepts also receive
supporting artwork (added later in the session) that further depicts key features and core
components of the concept (i.e., benefits, reasons to believe, how it is new and different).
Day Two of the session begins with the clients reviewing the concepts. During a typical
session, 10 - 40 concepts are presented to the clients. After an individual review of the new
product concepts by each of the remaining clients, the rest of the day is made up of group
discussions about the concepts. The five to seven corporate team members reviewing the
concepts (e.g., senior management and persons with the ultimate say in which products will be
developed) participate in this discussion, and totally new concepts often emerge from this
process. These new concepts are drafted and discussed as well. Necessary changes are made and
the concepts are returned to the clients on the morning of Day Three. The session is generally
concluded on the afternoon of Day Three after final thoughts, edits, and strategies are reviewed.
Creativity Techniques
In our work, we define a creativity technique as an exercise in mental gymnastics, a way
to stimulate the mind in new and different directions and force unlikely associations. This
can be accomplished with relatively simple techniques for individuals (e.g., display thinking on a
blank sheet of paper) or with more complex exercises that require groups of individuals, multiple
steps, and mixtures of physical, written, verbal, and picture stimuli. These examples are by no
means exhaustive, and at times the pool of creativity techniques to choose from seems infinitely
large. There are, however, approximately 50 to 70 techniques found to be productive, and even
these techniques can be twisted and evolved into various permutations and combinations (see
Hall & Wecker, 1995; and Van Gundy, 1981, for an extensive list and description of creativity
techniques).
A complete discussion of creativity techniques and technique differences is beyond the
scope of this paper. In general, most techniques are similar to the multitude of ideational
thinking exercises that have traditionally been used to generate new product ideas. In using these
various activities, the staff at the Eureka! Ranch noticed that some activities worked better with
certain clients than with others. In a prior research effort, the authors designed a survey
describing over 50 different brainstorming techniques in an attempt to find differences among
the techniques. The survey was sent out to past brainstorming participants familiar with
creativity exercises (n = 103). When asked how effective participants felt the creativity
techniques were regarding the creation of new ideas, significant variation existed in the
responses. Though the techniques fared well overall, individual techniques ranged from a high
of 96.1% to a low of 45.6% of respondents stating that a particular technique is effective.
Indeed, estimating the relative value of creativity techniques in different contexts was the
major motivating force for the development of a quantitative system that provides formative
feedback during the idea generation phase of new product development. Moreover, the
development of a continuous feedback system will also facilitate evaluation of and comparisons
among creativity techniques, comparisons which Runco (1991) and Nickerson (1999) have noted
are surprisingly lacking in the literature. Before that can take place, however, a measure
demonstrating evidence of predictive validity must be established.
Instrumentation
Ideation participants complete a survey instrument made up of three closed-ended
questions and one open-ended question (see Appendix A). The three closed-ended measures use
a 0-10 response set in the form of a Likert scale. This scale is used because evidence suggests
that it facilitates adequate variation and better prediction than narrower five-point
scales (Kahle, Hall, & Kosinski, 1997; Kalwani & Silk, 1982). Participants complete this survey
multiple times during an ideation session. Recall that each creativity session is made up of five
to seven techniques and this equates to five to seven completed instruments per participant for
each session.
The first question asks, "Overall, how much did you dislike or like this creativity
exercise?" This question is included because both common perception and prior research
suggest a positive correlation between liking a technique and its ability to create ideas for those
who like using it (see Van Gundy, 1992, p. 119). Endpoints for the response scale aligned with
this measure range from dislike a lot to like a lot. The second question asks, "With this exercise
and your group, how effective were you individually in generating quality ideas?" Endpoints for
the response scale aligned with this measure range from not very effective to very effective. The
third question asks, "with this exercise, how effective was your group, as a whole, in generating
quality ideas?" Endpoints for the response scale aligned with this measure range from not very
effective to very effective. Indeed, questions two and three in the questionnaire are to some
extent redundant. What follows is the reasoning for the inclusion of these two questions in the
same instrument.
At the time of questionnaire development there was some concern about whether or not
asking a participant to estimate their group’s effectiveness towards the generation of ideas would
be too great a challenge. Stated differently, the concern was "can we expect participants to be
accurate judges of a creativity technique's ability to generate ideas when they have little, if any,
experience using these techniques?" The question was included, however, because it more
closely and conceptually matches the response variable (i.e., quantity and quality of ideas
originating from the group). Moreover, the measure was included because its predictive validity
still remained an empirical question.
Strategies were considered to bring the ideation output variable down to a lower unit of
analysis to match the individual participant; however, this was deemed infeasible for three
reasons. First, disentangling the ideas coming from the small groups and/or assigning ideas to
individuals within the groups was ruled out because individuals interact and feed off each other
to create the ideas. It would be very difficult to say who did what. Second, these were paying
clients and there was a limit to how disruptive we could actually be regarding selection of the
best research design. Third, and for reasons beyond the scope of this text, the "ownership" of
ideas is something to be handled with great care when creating new ideas within a group context.
In sum, the choice of disaggregating the small group ideation output was not a choice at all.
Thus, attention was directed towards an additional self-report measure that might capture the
individual's contribution to the small group's sum of ideas.
Though less preferred, the question measuring an individual's contribution towards the
generation of the small group's sum of ideas is believed to be a reasonable addition and proxy.
This measure is considered to be a reasonable proxy because the individual participant’s scores
would be aggregated for each small group anyway, and more importantly, random assignment to
the small groups would allow for the control of any individual differences.
Procedure
Study 1. To gather evidence on the formative assessments’ ability to predict the quantity
of potentially successful ideas emerging from brainstorming activities, a study was conducted
where an independent observer recorded and scored the ideas from each small group. These data
were gathered from two brainstorming sessions. The first session included a major beer brewing
company in need of new alcoholic beverage ideas, and the second session included an
established brand of candy bars in search of new confectionery product ideas.
The observer counted the number of unique, substantive ideas coming from the collection
of small groups (e.g., if the idea was only a name and the purpose of the technique was not for
generating names, the idea was not counted). The observer did not participate in any of the small
groups generating the ideas. The first study included 60 small groups, each consisting of five to
six participants. Participants were randomly assigned to these groups and a total of 659 reported
ideas were recorded. The actual sample size is 60 small groups, though the data before
aggregation include 399 surveys.
Study 2. A second study was conducted in an attempt to replicate the finding from the
first study, with a stronger emphasis on outcomes of quantity and quality of ideas. This shift in
emphasis from idea quantity to quality exists because the number of fully written concepts is
used as the response variable rather than the number of neophyte ideas and idea fragments.
The number of fully written concepts in the second data set is 411. For a rough
approximation of the proportion between the response variable in study one compared to the
response variable in study two (i.e., rough ideas vs. fully written concepts), consider that the
number of fully written concepts from the rough ideas in the first study was 44. This means that
in the case of the first study, only 7% of the ideas made it to the stage of a fully written concept
(44 concepts/659 rough ideas) and recall that this occurred after the observer had already made
initial assessments of idea quality from the more primitive pool of raw or "seed" ideas. The
actual percentage is likely less than 7%, since the 7% is considered to be a rough approximation
(i.e., ideas are often combined to create a concept and the earliest stage seed ideas will
sometimes serve as stimulus for new concepts). Thus, the 411 concepts in the second study are
considered to be of higher quality than the prior set of ideas because they have survived multiple
rounds of selection, attrition, and development.
In the second study, the group effectiveness question was examined across 17 unique
creativity sessions with only clients included in the survey evaluations. Trained Brains were
excluded in order to provide for a clearer understanding of the relationship between the
effectiveness measure and the totality of our client-based brainstorming effort. As a point of
reference, recall that each session is made of five to seven exercises, each exercise is made up of
four small groups, and each small group includes approximately three to four clients and three to
four facilitators. The actual sample size is 17 sessions; however, the data before aggregation
include 1,345 client surveys, and this provides confidence that the estimates will be closer to the
population than would normally be expected with a sample of this size. Nonetheless, results
should be interpreted with caution. The response rate for the survey method used in the two
studies is 92%.
RESULTS
Study 1
Two of the three measures are significantly correlated with the count of ideas coming
from the small groups. Specifically, the individual effectiveness measure (r = .304, p = .018) and
the group effectiveness measure (r = .296, p = .022) reveal a positive and statistically significant
relationship with the output of ideas coming from the small groups. However, we find no
significant relationship between the liking of a creativity technique and the number of ideas
produced (r = .192, p = .141). This is surprising considering that the "liking of a creativity
technique" is argued to be a causal factor for stimulating creativity [i.e., if I like the creativity
exercise, I am more likely to engage and participate more and thus realize more creative output
from my efforts (see Van Gundy, 1992, p. 119)].
Tables 1 and 2 below illustrate the data used to calculate the correlations seen above
(values have been rounded to one decimal place). The data points under the headings for
each of the three measures (i.e., liking, individual and group effectiveness) are the mean values
created from the participants' survey scores originating from their respective small groups. Also
included are the small group n sizes and the number of ideas counted from each of the small
groups by the independent observer.
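To make the computation concrete, the sketch below (a minimal Python illustration prepared for this presentation, not part of the original analysis) shows how the group-level correlations are obtained. Only the first four groups from Table 1 are transcribed here; the reported coefficients use all 60 small groups from Tables 1 and 2.

    from scipy.stats import pearsonr

    # Group-level means and idea counts transcribed from Table 1
    # (first four small groups only; the full analysis uses all 60
    # groups from Tables 1 and 2).
    liking      = [9.0, 8.3, 8.3, 7.2]  # mean degree-of-liking rating
    individual  = [8.4, 7.5, 8.7, 6.8]  # mean individual-effectiveness rating
    group       = [8.6, 8.1, 8.3, 7.3]  # mean group-effectiveness rating
    idea_counts = [12, 8, 10, 8]        # ideas tallied by the independent observer

    for label, ratings in [("liking", liking),
                           ("individual", individual),
                           ("group", group)]:
        r, p = pearsonr(ratings, idea_counts)  # Pearson product-moment correlation
        print(f"{label:>10}: r = {r:.3f}, p = {p:.3f}")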
Findings from the first study determined what measures would be retained for further
analysis and data collection. Therefore, because the participant rating of liking a creativity
technique failed to demonstrate predictive validity, the question was dropped from the survey
instrument and the collection of data for this measure ended. Further, because the group
effectiveness measure was found to be significantly related to the output of ideas, this measure
was retained for further exploration and the second, less preferred measure (i.e., individual
effectiveness) was discontinued.
Study 2
Table 3 includes data from the second study. The Pearson product-moment correlation
coefficient relating the aggregate effectiveness values across the creativity sessions to the number
of refined and higher quality concepts is statistically significant and substantively strong (r = .58,
p < .03). Returning to the 0-10 scale used for the effectiveness question, a unit increase (i.e., one
full point) on the scale equates to roughly 10 fully written concepts.
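This slope can be recovered from the session-level data in Table 3. The following minimal Python sketch, offered for illustration only and not part of the original analysis, computes the correlation and the least-squares slope from the table's effectiveness and concept columns.

    from scipy.stats import linregress

    # Session-level data transcribed from Table 3: mean effectiveness
    # score and number of refined concepts for each of the 17 sessions.
    effectiveness = [6.8, 6.5, 8.1, 7.3, 7.6, 7.5, 7.0, 7.4, 6.2,
                     6.9, 7.6, 7.6, 7.0, 6.8, 7.1, 7.3, 7.3]
    concepts      = [23, 21, 36, 36, 40, 24, 30, 32, 10,
                     36, 27, 23, 25, 26, 25, 41, 43]

    fit = linregress(effectiveness, concepts)
    print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")  # r = .58, as reported
    print(f"concepts per scale point = {fit.slope:.1f}")  # roughly 10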
Given the separation in time between the exercise evaluations and the many intervening
steps that transform a large number of seed ideas into a smaller number of fully written
concepts in the second study, the significance and strength of the relationship between client
scores and ideation output is noteworthy. Taken together, these findings demonstrate the
effectiveness measure's predictive validity and provide evidence that clients are reasonably
accurate assessors of an exercise's ability to create quality ideas. More
importantly, these results add confidence that the effectiveness measure can be used within a
larger system to evaluate ideation efforts. Indeed, it is for this reason that the two studies were
designed and performed.
DISCUSSION
The application of a quality control system during brainstorming, though still in its
infancy, is noteworthy because it is used by a service provider of ideas that exist only on paper
and in the mind. The outputs produced are ideas and no two ideas are ever completely alike.
This is in contrast to more traditional applications of quality control where the goal, generally
speaking, is to produce more uniform outputs. The specific innovation described here is a
measure and a system that allows for the improvement of brainstorming processes used to
generate original ideas. A valuable byproduct of this technique is the ability to experiment and
evaluate totally new methods of inspiring creative thoughts. In a very real sense, the
measurement system allows for an ideation provider to become a working creativity laboratory.
Measurements are taken with surveys after each brainstorming technique and these data
are used to focus and maximize creative efforts toward the client’s needs. The client benefits
directly from this process for two reasons. First, their assessments are acted upon in real time
during the ideation process making the brainstorming session more responsive to the flow of
ideas as they are being created. Data from the surveys are acted upon in real time because
creativity techniques and methods are altered whenever an exercise score is outside of the control
limits. For example, a fast-paced exercise might be chosen from a pool of eligible exercises in
response to a low score. Second, a valuable body of knowledge concerning the success and
failure of each creative session is acquired and this knowledge can be applied to future sessions.
The point to be made here is that a quantifiable system of selecting and evaluating
brainstorming techniques is an improvement over prior practices that relied solely on the
personal judgments of developers and facilitators. It should be reinforced, however, that many
factors contribute to the success of a creativity technique in addition to the value a participant
might assign to an idea originating from that technique. Some of the factors known to affect the
success of a creativity technique can be further acted upon (e.g., stimulus, participants, energy
level), while other features pertaining to the idea (e.g., ideas can be more or less valuable to an
organization depending on technological abilities, internal norms, expectations and marketplace
conditions) are less amenable to change via the use of a different creativity technique.
There are also qualitative benefits of the survey system. The surveys keep clients
actively involved in the creative process and active involvement can result in the following
benefits: 1) clients as a group are more likely to feel the ideas are their own and 2) they are more
likely to stand behind the ideas and defend them in the face of criticism. Moreover, the guidance
received from the survey results taken at different points helps immensely because 1) the ideas
are more valuable and relevant to the client’s organization and 2) the results are gathered and
interpreted in a supportive environment without breaking the creative spirit of participants.
Also important is the finding that participants do not object to the obtrusiveness of
multiple surveys. Indeed, clients have given positive feedback about the surveys in the open-
ended solicitation for comments within the survey. More importantly, however, 92% of the
surveys are completed and returned. This high response rate is another indication of the clients'
willingness to participate in the creativity quality measurement process.
The data from the surveys also provide three internal benefits to the brainstorming
facilitator: they provide immediate quantification of the effectiveness of brainstorming
techniques, they provide a tangible system for providing feedback, goals and rewards to those
who have developed and participated in the exercise, and they provide for a long-term charting
system to measure the impact of systemic changes to the brainstorming process. Though more
research is necessary, these are tangible gains when just one idea can make all the difference.
Implementing the Measure into a Quality Based System
This study provides evidence that formative assessments during idea generation activities
are associated with predictive validity. However, finding a useful measure is only a single step
in the development of a system of continuous improvement and evaluation. For example,
parameters must be set to determine whether or not a brainstorming project qualifies for
measurement and data collection. In the case of our brainstorming sessions, many of the projects
meet the necessary criteria; however, a project far from the norm will bias results and make
comparisons less meaningful. Those employing this technique should set out in writing what
does and what does not constitute a normative task. Our criteria for inclusion include matters
such as the number of participants, length of time for completion, objective, and project cost.
These criteria are only a guideline and will differ depending on the application and user.
For qualifying sessions, a mean value is created from the effectiveness question.
Specifically, a total mean value is created from the four small groups for each exercise and this
value is the estimate of the effectiveness for each creativity technique. The results are also
totaled for an overall mean indicating the effectiveness of the creativity session (i.e., the mean of
all exercises combined). Over time, a value of 7.18 has been found to be the average estimate of
effectiveness, both for individual exercises and for the creativity session as a whole. Thus, the
value of 7.18 is now a benchmark we continually try to reach and increase over time.
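A minimal sketch of this aggregation follows (Python, for illustration only; the survey scores and couch-color labels are hypothetical examples, not data from the studies).

    from statistics import mean

    # Hypothetical survey scores for one exercise, keyed by couch color.
    exercise_scores = {
        "green": [8, 7, 9, 8, 7],
        "red":   [6, 7, 7, 8, 6],
        "blue":  [7, 8, 8, 7, 9],
        "brown": [6, 6, 7, 8, 7],
    }

    # The mean of the four small-group means estimates the technique's
    # effectiveness; the session mean is the mean across all exercises.
    group_means = [mean(scores) for scores in exercise_scores.values()]
    technique_effectiveness = mean(group_means)
    print(f"technique effectiveness: {technique_effectiveness:.2f}")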
From these data, lower and upper limits are set, and the process for setting these limits is
straightforward. For an average session we find a standard deviation (or sigma) of .46. Values
at one standard deviation above or below the mean are considered to be outside of the control
limits. The control limit serves as an alarm indicating something different from the norm has
occurred. Our upper control limit at one sigma is 7.64 and our lower control limit at one sigma
is 6.72. This choice of sigma level is lower than what is traditionally seen in quality control.
That is, manufacturers commonly use values at three sigma and many are turning to values of six
sigma for their upper and lower control limits. The choice of a low sigma in this case is a
practical one because of the time it takes to generate cases for inclusion. Moreover, this is
especially true when the unit of analysis is an entire brainstorming session rather than an
individual exercise. Specifically, even with hundreds of cases of exercise scores, a control limit
at three sigma would allow for the examination of only a handful of cases (e.g., less than 1%)
falling outside either of the control limits.
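To make the charting logic concrete, a minimal sketch follows. The mean of 7.18 and sigma of .46 are the values reported above; the exercise scores fed to the function, and the suggested responses, are illustrative assumptions rather than prescriptions from the studies.

    # One-sigma control chart for exercise effectiveness scores.
    MEAN, SIGMA = 7.18, 0.46
    UCL = MEAN + SIGMA  # upper control limit: 7.64
    LCL = MEAN - SIGMA  # lower control limit: 6.72

    def check_score(score: float) -> str:
        """Flag an effectiveness score against the one-sigma limits."""
        if score > UCL:
            return "above UCL: search for the cause and try to repeat it"
        if score < LCL:
            return "below LCL: intervene (e.g., switch to a fast-paced exercise)"
        return "within limits"

    for score in [7.9, 7.3, 6.5]:  # hypothetical exercise means
        print(score, "->", check_score(score))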
As more cases are collected and a better understanding of the process is reached, the
control limit can be increased to greater sigma levels. There is nothing very complicated about
the sigma level; it is merely a device to keep one honest in one's assessment of progress.
Determining what causes progress is up to the interpreter and this effort makes up the bulk of our
future research.
Values at or above one sigma indicate something exceptional has happened. Though
traditional quality control holds that a datum at this level is out of control, within the context
of this procedure it is evidence of a very good happenstance. When a value near or above one
sigma occurs we delve deeply and search for the underlying cause and attempt to repeat it. In
doing so the bar is raised on quality and knowledge becomes actionable. Additionally, and
though random fluctuations around the mean will occur, it is also informative to look simply at
scores above and below the mean. These scores can be informative, especially during the early
stages of data collection, because it is often difficult to find or force large fluctuations in the
scores. After multiple attempts the user will begin to reach a better understanding of the degree
of process change necessary for a resultant change in effectiveness scores. Furthermore,
reviewing the open-ended comments from the surveys will aid in this endeavor and allow for a
more refined second attempt at isolating cause and effect relationships.
Looking at the individual exercise results from a particular session can also lead to
valuable insights. For example, regardless of the choice of exercise, we have found that scores
tend to increase in the first or second exercise after breaking for lunch. In an attempt to
reproduce this effect at other points in the day, power breaks were added so that the participants
could play arcade-style games, get some refreshments, conduct a mock vote on the best idea
given, or simply walk away to collect
their thoughts. Another tactic used when scores dip below the one sigma control limit is to
increase the speed of the following exercise or purposely choose a fast-paced exercise to ignite
thinking. Experience with these tactics suggests that any break in the process helps to increase
the effectiveness scores by mentally refreshing the participants. This finding is not
revolutionary; however, the ability to detect when such a change is needed is nearly as
important.
Another insight gathered by looking at the collective set of exercises used for a particular
session is the consistent appearance of autocorrelation in the form of a step pattern that can be seen
clearly when exercise results are displayed graphically in temporal order. This pattern has
consistently appeared in varying degrees for many creativity sessions. The session illustrated in
Figure 1 below is indicative of the step pattern. The session portrayed is the same beer brewing
session seen in Table 1, though scores may differ because only the client scores are included.
Though somewhat speculative, and problematic for estimating statistical parameters, the
pattern is believed to be a form of regression to the mean. That is, when clients are completing
the evaluation of an exercise, it is believed that they are affected by the last score they gave
which results in a tendency to rate the next exercise relatively higher or lower. In part,
autocorrelation is expected because the outcomes for each exercise (i.e., the effectiveness rating)
are now acted upon during the session with the intention of causing change. Nonetheless, and
because the change is not a monotonic increase, the pattern suggests an additional factor
competing with a host of other complex relationships that must be identified and controlled.
Future research is necessary to see if this finding is unique to these data or if it is a fundamental
part of the process employed.
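One way to quantify the step pattern is the lag-1 autocorrelation of exercise scores taken in temporal order; a negative value reflects the alternating rises and falls described above. A minimal sketch follows (the score sequence is hypothetical, not data from the studies).

    def lag1_autocorrelation(scores: list[float]) -> float:
        """Lag-1 autocorrelation of exercise scores in temporal order."""
        n = len(scores)
        m = sum(scores) / n
        num = sum((scores[t] - m) * (scores[t + 1] - m) for t in range(n - 1))
        den = sum((s - m) ** 2 for s in scores)
        return num / den

    # Hypothetical session in which each score rebounds in the direction
    # opposite to the previous one, producing the step pattern.
    session = [7.9, 6.8, 7.6, 6.9, 7.4, 6.7, 7.2]
    print(f"lag-1 autocorrelation: {lag1_autocorrelation(session):.2f}")  # negative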
CONCLUSION
In this research, a quality control model was chosen because a significant literature
surrounds the various methods of quality control and their implementation. Moreover, the
benefits are well established and the literature is replete with successful case histories from those
employing quality control (Walton, 1986). Nonetheless, service providers rely heavily on a too
distant proxy for quality control. This proxy takes many forms but it is commonly referred to as
consumer satisfaction (i.e., a summative evaluation).
A measure of satisfaction certainly has recognized benefits, but differs from quality
control and output related measurement systems. As stated previously, the disadvantage of the
satisfaction construct is that the link of satisfaction with quality is theoretical because the
consumer, unless well versed in the industry, does not know what the product or service could be
or how it could be created better. Indeed, recall that there was no statistically significant
correlation between the client "liking of the exercise" scores and the number of ideas produced.
Finding out what a product or service could be is facilitated with detailed knowledge
about originating systems and processes. To the point, without process knowledge it is difficult
to identify those features propelling both achievement and lackluster performance. The system
of measurement introduced here is arguably an improvement over the general client satisfaction
survey because the ideal cause of client satisfaction (i.e., the quantity and quality of ideas clients
take home with them) is targeted directly.
Armed with immediate and direct feedback, we tamper with what works, drop what fails
to produce (e.g., this includes facilitators), and hypothesize to create new and better creative
methods. Further, new creativity techniques are purposely introduced and tried in addition to the
optimization of current techniques. New techniques are evaluated through fast, inexpensive and
flexible forays with the understanding that an initial failure does not necessarily mean the demise
of a new technique. Failing quickly and trying the technique in another session helps to establish
whether the technique is poor on its face or whether the technique’s ideas were off strategy with
the client’s expectations and objectives. Future research will focus on the identification of
exercises consistently producing more and better ideas.
Mechanisms for change are not limited to altering creativity techniques. Future research
will also focus on the testing of additional measures. For example, measures tapping the fear
of failure, the impact of stimulus components within the techniques, and, though somewhat
controversial, the impact of different participant thinking and creativity styles and attitudes
toward ideation (Basadur & Hausdorf, 1996; Kirton, 1987; Leonard & Straus, 1997; Grigorenko
& Sternberg, 1997; Rickards, 1993; Runco & Basadur, 1993) will be explored.
Certainly the work in this study is in its initial stages and likely a reflection of the social
context in which businesses, especially in the service industry, are currently finding themselves.
Indeed, according to recent research "there is little question that large numbers of firms in
virtually every industry have been trying to simplify their business processes, develop and
monitor metrics of performance, benchmark themselves against world-class competitors, and
continuously improve" (Lynn, Morone, & Paulison, 1996, p. 9). A seemingly unavoidable
result of this competitive and progressive environment is that corporations are expecting more
from their creativity investments. In conclusion, it is progress, and in some instances survival,
that requires us to meet the challenge.
REFERENCES
Amabile, T. M. (1987). The motivation to be creative. In S. G. Isaksen (Ed.), Frontiers of
creativity research (pp. 223-254). Buffalo, NY: Bearly Limited.
Amabile, T. M. (1998, Sept.-Oct.). How to kill creativity. Harvard Business Review, 77-87.
Baer, J. (1993, December/January). Why you shouldn't trust creativity tests. Educational
Leadership, 80-83.
Baer, J. (1994, October). Why you still shouldn't trust creativity tests. Educational Leadership,
72-73.
Basadur, M., & Hausdorf, P. A. (1996). Measuring divergent thinking attitudes related to
creative problem solving and innovation management. Creativity Research Journal, 9,
21-32.
Carson, P. P., & Carson, K. D. (1993). Managing creative enhancement through goal-setting
and feedback. Journal of Creative Behavior, 27, 36-45.
Chang, R. Y., & Kelly, P. K. (1994). Improving through benchmarking: A practical guide to
achieving peak process performance. Irvine, CA: Richard Chan Associates, Inc.
Cramond, B. (1994, October). We can trust creativity tests. Educational Leadership, 70-71.
Czarnecki, M. T. (1998). Managing by measuring: How to improve your organization's
performance through effective benchmarking. New York: American Management
Association.
Davis, G. A. (1999). Barriers to creativity and creative attitudes. In M. A. Runco & S. R.
Pritzker (Eds.), Encyclopedia of creativity. Volume one (pp. 165-174). San Diego, CA:
Academic Press.
Diehl, M., & Stroebe, W. (1986). Productivity loss in brainstorming: Toward the solution of a
riddle. Journal of Personality and Social Psychology, 53, 497-509.
Finke, R. A., Ward, T. B., & Smith, S. M. (1992). Creative cognition: Theory, research, and
applications. Cambridge, MA: MIT Press.
Flora, R., & Kenney, M. (1990). The breakthrough illusion. New York: Basic Books.
Ford, C. M. (1999). Corporate culture. In M. A. Runco & S. R. Pritzker (Eds.), Encyclopedia
of creativity. Volume one (pp. 385-393). San Diego, CA: Academic Press.
Grigorenko, E. L., & Sternberg, R. J. (1997). Styles of thinking, abilities, and academic
performance. Exceptional Children, 63, 295-312.
Hall, D., & Wecker, D. (1995). Jump start your brain. New York: Warner Books.
Kahle, L. R., Hall, D. B., & Kosinski, M. J. (1997). The real-time response survey in new
product research: It's about time. Journal of Consumer Marketing, 14, 234-248.
Kalwani, M. U., & Silk, A. (1982). On the reliability and predictive validity of purchase
intention measures. Marketing Science, 1, 243-286.
Khatena, J. (1982). Myth: Creativity is too difficult to measure! Gifted Child Quarterly, 26, 21-
23.
Kirton, M. J. (1987). Kirton Adaption-Innovation Inventory (KAI) manual (2nd ed.). Hatfield,
UK: Occupational Research Centre.
Leonard, D., & Straus, S. (1997, July-Aug). Putting your company’s whole brain to work.
Harvard Business Review, 111-121.
Lingle, J. H., & Schiemann, W. A. (1996, March). From balanced scorecard to strategic gauges:
Is measurement worth it? Management Review, 56-61.
Lynn, G. S., Morone, J. G., & Paulson, A. S. (1996). Marketing and discontinuous innovation:
The probe and learn process. California Management Review, 38, 8-37.
Nickerson, R. S. (1999). Enhancing creativity. In R. J. Sternberg (Ed.), Handbook of creativity
(pp. 392-430). New York: Cambridge University Press.
Osborn, A. (1953). Applied imagination. New York: Charles Scribner and Sons.
Parnes, S. J. (1999). Programs and courses in creativity. In M. A. Runco & S. R. Pritzker
(Eds.), Encyclopedia of creativity. Volume two (pp. 465-477). San Diego, CA:
Academic Press.
Plucker, J., & Renzulli, J. S. (1999). Psychometric approaches to the study of human creativity.
In R. J. Sternberg (Ed.), Handbook of human creativity (pp. 35-60). New York:
Cambridge University Press.
Plucker, J., & Runco, M. (1998). The death of creativity measurement has been greatly
exaggerated: Current issues, recent advances, and future directions in creativity
assessment. Roeper Review, 21, 36-39.
Plucker, J., & Runco, M. (1999). Enhancement of creativity. In M. A. Runco & S. R. Pritzker
(Eds.), Encyclopedia of creativity. Volume one (pp. 669-675). San Diego, CA:
Academic Press.
Rickards, T. (1993). Creative leadership: Messages from the front line and the back room.
Journal of Creative Behavior, 27, 46-56.
Rickards, T. (1999). Brainstorming. In M. A. Runco & S. R. Pritzker (Eds.),
Encyclopedia of creativity. Volume one (pp. 219-227). San Diego, CA: Academic Press.
Runco, M. A. (1991). The evaluative, valuative, and divergent thinking of children. Journal of
Creative Behavior, 25, 311-319.
Runco, M. A., & Basadur, M. S. (1993). Assessing ideational and evaluative skills and creative
styles and attitudes. Creativity and Innovation Management, 2(3), 166-173.
Schroder, H. M. (1994). Managerial competence and style. In M. Kirton (Ed.), Adaptors and
innovators: Styles of creativity and problem solving (rev. ed.) (pp. 91-113). New York:
Routledge.
Stevens, A., & Burley, J. (1997). 3000 Raw Ideas = 1 Commercial Success. Research and
Technology Management, 40, 16-27.
Van Gundy, A. (1981). Techniques of structured problem solving. New York: Van Nostrand
Reinhold Company, Inc.
Van Gundy, A. (1987). Organizational creativity and innovation. In S. G. Isaksen (Ed.),
Frontiers of creativity research (pp. 358-379). Buffalo, NY: Bearly Limited.
Van Gundy, A. (1992). Idea power: Techniques & resources to unleash the creativity in your
organization. New York: AMACOM, American Management Association.
Weisberg, R. W. (1993). Creativity: Beyond the myth of genius. New York: Freeman.
Williams, W. M., & Yang, L. T. (1999). Organizational creativity. In R. J. Sternberg (Ed.),
Handbook of creativity (pp. 373-391). New York: Cambridge University Press.
Table 1. Group Mean Effectiveness Scores and Number of Ideas: Results from the
Beer Brewer Brainstorming Session.
Group #   Group Size   Degree of Liking   Group Effective   Individual Effective   Number of Ideas

Creativity Technique 1
   1          5              9.0                8.6                  8.4                  12
   2          6              8.3                8.1                  7.5                   8
   3          6              8.3                8.3                  8.7                  10
   4          6              7.2                7.3                  6.8                   8
Creativity Technique 2
   5          6              6.2                7.7                  6.2                  11
   6          7              7.3                6.9                  7.0                  22
   7          6              6.0                6.8                  6.5                  12
   8          3              8.0                7.3                  6.7                   8
Creativity Technique 3
   9          7              7.3                7.7                  8.1                  13
  10          5              7.2                7.6                  6.8                  13
  11          6              6.8                7.5                  6.8                  15
  12          5              6.6                6.6                  5.8                   8
Creativity Technique 4
  13          6              7.3                7.3                  6.7                  12
  14          5              5.8                6.0                  5.8                   4
  15          5              4.8                4.2                  3.8                   6
  16          4              6.3                6.8                  6.5                   8
Creativity Technique 5
  17          7              6.9                7.9                  6.9                  11
  18          5              8.8                8.4                  6.5                  17
  19          5              7.6                6.4                  6.2                  16
  20          5              6.8                6.4                  6.8                   9
Creativity Technique 6
  21          3              8.3                6.3                  7.3                  13
  22          5              8.0                8.0                  8.2                  24
  23          5              3.6                4.6                  4.4                  16
  24          4              8.3                8.0                  8.3                  14
Creativity Technique 7
  25          5              5.8                5.8                  5.8                   9
  26          6              5.0                6.2                  5.7                  10
  27          4              6.5                5.8                  6.0                   7
  28          3              8.0                7.7                  7.3                  15
Table 2. Group Mean Effectiveness Scores and Number of Ideas: Results from the
Candy Company Brainstorming Session.
Group #   Group Size   Degree of Liking   Group Effective   Individual Effective   Number of Ideas

Creativity Technique 1
  29          6              7.7                8.5                  7.2                  24
  30          6              7.5                8.2                  7.7                  27
  31          6              7.2                7.6                  6.3                  12
  32          5              8.8                8.4                  8.0                  16
Creativity Technique 2
  33          6              6.7                7.5                  6.3                  21
  34          5              7.8                7.0                  7.2                  21
  35          5              8.0                6.8                  6.8                   4
  36          4              8.8                8.0                  7.8                  12
Creativity Technique 3
  37          5              8.0                7.4                  6.8                   8
  38          6              7.7                8.0                  7.8                  25
  39          5              7.4                7.2                  8.2                   5
  40          7              6.4                5.1                  6.1                  11
Creativity Technique 4
  41          6              6.2                6.2                  6.0                   5
  42          5              5.4                5.0                  4.8                   6
  43          5              9.4                8.8                  8.8                   9
  44          5              5.4                5.4                  5.2                  16
Creativity Technique 5
  45          5              0.6                1.4                  1.2                   3
  46          6              6.2                6.8                  6.0                   6
  47          5              5.4                6.6                  6.6                   5
  48          6              5.5                6.5                  5.5                   5
Creativity Technique 6
  49          6              6.2                5.8                  5.7                  11
  50          6              7.7                6.8                  6.8                   9
  51          5              6.6                6.2                  6.8                   4
  52          4              7.3                5.3                  5.8                  16
Creativity Technique 7
  53          5              7.0                7.2                  7.0                  12
  54          5              9.4                9.8                  9.6                  14
  55          5              6.0                6.0                  5.2                   9
  56          6              9.7                9.7                  9.5                   8
Creativity Technique 8
  57          6              7.7                7.8                  5.8                   1
  58          6              8.7                8.3                  7.2                   1
  59          4              9.0                6.8                  6.8                   1
  60          6              7.3                6.5                  5.8                   1
Table 3. Mean Effectiveness Scores for the Ideation Session and the Number of Refined
Concepts for the Session.
Product Orientation        Number of   Total          Exercises   Mean            # of Refined
                           Clients     Participants   Completed   Effectiveness   Concepts
Beverages                      11          24             7            6.8             23
Food                           12          23             8            6.5             21
Food                           12          27             7            8.1             36
Food Service                   11          27             7            7.3             36
Telecommunications             15          31             7            7.6             40
Health and Beauty Aids         11          26             7            7.5             24
Health and Beauty Aids         12          25             7            7.0             30
Financial Services             15          31             6            7.4             32
Food                           16          30             5            6.2             10
Candy                          13          30             7            6.9             36
Health and Beauty Aids         15          27             7            7.6             27
OTC Pain Reliever              15          27             6            7.6             23
Food                            9          21             6            7.0             25
Food                           14          27             6            6.8             26
Financial Services             18          31             6            7.1             25
Baby Care                      13          29             6            7.3             41
Paint and DIY Supplies         15          30             5            7.3             43
Appendix A
Creativity Technique I
Company Name Here
FULL Name (please print)________________________________________________________
Group Color (PLEASE CIRCLE):     GREEN     RED     BLUE     BROWN
PLEASE ANSWER THE FOLLOWING QUESTIONS BASED ON WHAT YOU JUST DID
BY CIRCLING A VALUE BETWEEN 0 AND 10.
=====================================================================
1) Overall, how much do you DISLIKE or LIKE this creativity exercise?
DISLIKE A LOT                                                         LIKE A LOT
0 1 2 3 4 5 6 7 8 9 10
2) With this exercise and your group, how effective were YOU INDIVIDUALLY in
generating quality ideas?
NOT AT ALL EFFECTIVE                                          VERY EFFECTIVE
0 1 2 3 4 5 6 7 8 9 10
3) With this exercise, how effective was YOUR GROUP, as a whole, in generating
quality ideas?
NOT AT ALL EFFECTIVE                                          VERY EFFECTIVE
0 1 2 3 4 5 6 7 8 9 10
Do you have any..........ADVICE?............SUGGESTIONS?...........COMMENTS?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
Figure 1. Beer Brewer session: group effectiveness scores (0-10 scale) plotted in temporal order
across the seven exercises, spanning the morning and afternoon.