# What is Conjoint Analysis?

Conjoint Analysis is for discovering the relative importance to stakeholders – e.g. consumers or citizens – of the attributes underpinning a product or other alternative of interest.

*This article explains the main ideas behind Conjoint Analysis. Written in general, non-technical terms, the article is intended for people who are new to Conjoint Analysis or in need of a ‘refresher’.*

Conjoint Analysis (CA) – also known as Choice Modelling or Discrete Choice Experiments (DCE) – is widely used for marketing research and in the social sciences for finding out what people care about when making choices involving trade-offs.

CA addresses questions like:

- Which attributes (or features / characteristics) of a product or other alternative of interest (e.g. a government policy) are most important to consumers or citizens?
- What is the relative importance (weights) of these attributes?
- How are different products or other alternatives of interest ranked relative to each other, and which product/alternative is best?

In short, CA involves these four key components (with somewhat specialised terminology!):

- *Attributes*: The features or characteristics of the product or other alternative of interest
- *Part-worth utilities*: Values (or weights) representing the relative importance of the attributes
- *Concepts* (or *Profiles*): Particular products or other alternatives of interest, represented as combinations of the attributes
- *Participants*: Whose preferences are to be discovered, usually via a survey

Most CA applications include fewer than a dozen attributes, which may be quantitative or qualitative in nature, with 5-7 attributes being typical.

For example, a CA survey could be used to discover the preferences of car buyers with respect to the relative importance – expressed in terms of ‘*part-worth utilities*’ – of these ‘*attributes*’ associated with possible car designs: fuel efficiency, top speed, safety features, price, etc.

This information about the relative importance of the attributes can be used to rank different car design ‘*concepts*’ (i.e. combinations of the attributes), including choosing the ‘best’ concept (design) – e.g. for a car manufacturer to produce. Scenarios involving competing concepts (designs) can be evaluated and compared, enabling market shares for each concept to be predicted.

The following link is to a CA survey set up for finding out what participants (you!) care about when choosing a breed of cat as a pet! This light-hearted example neatly demonstrates many of the features of a CA survey from a participant’s perspective.

app.1000minds.com/survey/157/cats-demo

Of course, CA is also used for more ‘heavy-weight’ applications!

The information presented in the remainder of this article is intended to be practical and user-oriented. Via a simple example of a CA survey, the range of outputs available from a CA – directly or with a little additional analysis – is presented.

Suppose the CA survey is to discover what consumers of ‘flavoured milk drinks’ care about (generalisable to other products or alternatives of interest too).

Without going into details here, the survey would usually involve each survey participant answering a series of questions involving trade-offs between attributes associated with flavoured milk drinks – e.g. taste, nutrition, price, shelf life, brand image.

From each participant’s answers to the survey questions, ‘*part-worth utilities*’, representing the relative importance of the attributes, are calculated. These utilities are then used to rank different flavoured milk drink ‘*concepts*’ (i.e. combinations of the attributes), including choosing the ‘best’ concept – e.g. for a manufacturer to produce.

These basic CA outputs are now presented and analysed in various useful ways. Though the example here has a marketing-research focus, the ideas illustrated below can be generalised to other CA applications too (e.g. with a government policy focus).

The outputs below are from a 1000minds Conjoint Analysis survey, implementing the PAPRIKA pairwise comparisons method. A major strength of the PAPRIKA method is that part-worth utilities are generated for each *individual* participant, in contrast to other methods that produce *aggregate* results only. Individual-level data enables more in-depth analysis, as illustrated below.

- Part-worth utilities
- Attribute rankings
- Radar chart
- Attribute relative importances
- Rankings of entered concepts
- Market shares
- Market simulations (“What ifs?”)
- Rankings of all possible concepts
- Willingness-to-pay (WTP)
- Cluster (market segmentation) analysis

For simplicity, suppose there are just five participants in the survey (of course a real survey would probably involve 100s or 1000s of participants): Consumers ‘X’, ‘Y’, ‘Z’, ‘Paul’ and ‘Alfonse’, as in the tables below.

First, here are the part-worth utilities for each participant – in this example relating to attributes associated with flavoured milk drinks – as well as the usual summary statistics (median, mean, standard deviation).

The value for each level on an attribute represents the combined effect of the attribute’s relative importance (weight) and its degree of achievement as reflected by the level (for more information, see interpreting preference (utility) values).

As well as the part-worth utilities reported above, here are the ‘normalised’ attribute weights and scores – an alternative, though equivalent, representation of the mean utility values (second-last column above). This equivalence is easily confirmed by multiplying the weights and single attribute scores to reproduce the (mean) part-worth utilities above.
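This equivalence can be sketched in Python using the mean part-worth utilities from the flavoured milk example (the convention assumed here is that each attribute’s weight equals its top level’s mean utility, so the weights sum to 100):

```python
# Mean top-level part-worth utilities from the flavoured milk example;
# under the assumed normalisation, these are the attribute weights.
weights = {"Taste": 16.9, "Nutrition": 19.1, "Price": 20.7,
           "Shelf life": 15.3, "Brand image": 28.0}

# Normalised single-attribute scores (0-100) for the Price levels,
# derived from Price's mean part-worth utilities 0, 4.9, 14.3, 20.7.
price_utilities = [0.0, 4.9, 14.3, 20.7]
price_scores = [u / weights["Price"] * 100 for u in price_utilities]

# Multiplying weight x score reproduces the mean part-worth utilities.
reconstructed = [weights["Price"] * s / 100 for s in price_scores]
print([round(u, 1) for u in reconstructed])  # → [0.0, 4.9, 14.3, 20.7]
```

The round-trip confirms that weights-and-scores and part-worth utilities are just two views of the same information.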

Consistent with the part-worth utilities data above, here are the rankings of the attributes for each of five participants.

The data in the first chart can be visualised in several ways, including using a ‘radar’ chart.

This chart – also known as a ‘star’ or ‘spider web’ chart – indicates the strength of each of the five participants’ preferences for each attribute; each participant has a differently coloured ‘web’, and the further from the centre of the chart, the more important the attribute. The thick black line shows the mean values.

These ratios – sometimes known as ‘marginal rates of substitution’ (MRS) – capture the relative importance of the column attribute for the row attribute (based on the mean utilities).
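A minimal sketch of how such an MRS matrix is computed, using the mean top-level utilities from the flavoured milk example as the attribute weights:

```python
# Mean top-level utilities (assumed attribute weights) from the example.
weights = {"Taste": 16.9, "Nutrition": 19.1, "Price": 20.7,
           "Shelf life": 15.3, "Brand image": 28.0}

# MRS of the column attribute relative to the row attribute: column / row.
mrs = {row: {col: round(w_col / w_row, 2) for col, w_col in weights.items()}
       for row, w_row in weights.items()}

# e.g. Brand image is roughly 1.83x as important as Shelf life.
print(mrs["Shelf life"]["Brand image"])  # → 1.83
```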

Although part-worth utilities (as above) are interesting, there is also enormous power in applying each individual’s preferences to new product concepts and also to competitors’ offerings, in order to predict the likely market share or market shift that might occur.

Such analysis is useful for answering questions like, “What would it take to make Product A the market leader (or to, at least, increase its market share)?”

Here are 12 illustrative product concepts for flavoured milk drinks.

The five participants’ utilities can easily be applied to the 12 concepts by calculating a ‘total utility’ score for each concept – simply summing the values for each concept’s level on each attribute – and the concepts are then ranked for each participant by their total scores. (The linearity of the equation means that, by construction, interaction effects between the attributes are ruled out – i.e. the attributes are independent.)
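The additive scoring just described can be sketched in Python. The level names and utilities below are illustrative (drawn from the mean values in this example, with a reduced set of attributes), not a participant’s actual survey data:

```python
# Illustrative part-worth utilities: attribute -> level -> utiles.
utilities = {
    "taste": {"nothing special": 0.0, "quite good": 11.6, "delicious": 16.9},
    "price": {"$6": 0.0, "$5": 4.9, "$4": 14.3, "$3": 20.7},
    "brand": {"dull": 0.0, "ok": 13.4, "cool": 28.0},
}

# Hypothetical concepts, each a combination of one level per attribute.
concepts = {
    "Product A": {"taste": "quite good", "price": "$5", "brand": "dull"},
    "Product C": {"taste": "delicious", "price": "$4", "brand": "cool"},
}

def total_utility(concept):
    # Additive model: sum the part-worth utility of each chosen level.
    return sum(utilities[attr][level] for attr, level in concept.items())

# Rank the concepts from most to least preferred.
ranked = sorted(concepts, key=lambda c: total_utility(concepts[c]), reverse=True)
print(ranked[0])  # → Product C
print(round(total_utility(concepts["Product C"]), 1))  # 16.9 + 14.3 + 28.0 = 59.2
```

In a real analysis this scoring would be repeated with each individual participant’s utilities, giving one ranking of the concepts per participant.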

Thus, it can be seen below that 60% of participants in the survey (3 out of 5 participants) would have chosen (i.e. ‘bought’) Product C and 40% (2 out of 5) would have chosen Product G.

By contrast, just 20% of participants (1 participant) would rank Product A as their 3rd most-preferred product (and probably not buy it).

Of course, just five participants is insufficient to represent the market for flavoured milk drinks! More realistically, 500 – or 1000! – survey participants would be necessary, but hopefully you get the idea of how this analysis works. Note that how a sample is selected – e.g. randomly – is more important than just sample size.

Also, look at the table below to see the frequencies of ranks for each of the 12 concepts – where we can see that 3 of the 5 participants would rank Product A 4th and 1 participant each would rank it 3rd and 6th respectively.

The number in each cell is the number of participants – out of 5 in the survey – who would give the identified concept the identified rank.

If our objective is to answer a question like, “What would it take to make Product A the market leader?”, we can make predictions, based on the utilities from the survey, as to what would happen if Product A’s attributes were changed. (As mentioned before, bear in mind that just five participants is insufficient to properly simulate a market.)

Below is a comparison of the total utilities for Product A versus Product C (the current market leader) disaggregated across the five attributes (see below for colour coding).

Clearly, relative to Product C, Product A is deficient with respect to its brand image and it is more expensive (on the other hand, A is superior with respect to shelf life).

Other attribute fine-tunings are possible too; e.g. if lowering Product A’s price were infeasible, then improving its brand image **and** its nutrition would be sufficient to overtake Product C. This can be discerned from these Tornado Charts: ±1 level (one-way sensitivity analysis):

As product concepts are refined – e.g. improving Product A’s brand image and nutrition (as above) – we can see the impact this may have on the market. In this case Product A could be expected to take a 70% market share (based on these 12 concepts and the five participants’ preferences), at the expense of the market shares of Products C and G.

In addition to rankings of particular entered concepts (e.g. 12, as above), it’s possible to see rankings of all theoretical combinations of the attributes – in this example, 3 × 3 × 4 × 3 × 3 = 324 concepts; here are the first 20:

Based on the outputs above, the following analyses are easily performed using Excel or, for the cluster analysis, a statistics package (e.g. SPSS, Stata, MATLAB).

The usual way of calculating WTP is to work out the number of currency units (e.g. dollars) that each part-worth utility unit – often referred to as a ‘utile’ – is worth. It is then easy to convert all the non-monetary attributes – valued in terms of utiles (part-worth utilities) – into monetary equivalents, which can be interpreted as WTP.

Thus, for example, using the mean utilities (as reproduced below), a price fall from $6 to $3 (i.e. a $3 saving) corresponds to a utility gain of 20.7 – 0.0 = 20.7 utiles. Therefore, 1 utile is worth $3/20.7 ≈ 14.5 cents. Applying this ‘price’ of 14.5 cents per utile allows us to convert the part-worth utilities associated with the non-monetary attributes into WTPs.

| | Mean | WTP |
|---|---|---|
| **Taste** | | |
| Nothing special | 0% | |
| Quite good | 11.6% | $1.68 |
| Delicious | 16.9% | $2.44 |
| **Nutrition** | | |
| Fattening | 0% | |
| Non-fattening, but not nutritious | 6.5% | $0.94 |
| Non-fattening, and nutritious (e.g. calcium rich) | 19.1% | $2.76 |
| **Price (per 500 ml bottle)** | | |
| $6 | 0% | |
| $5 | 4.9% | |
| $4 | 14.3% | |
| $3 | 20.7% | |
| **Shelf life** | | |
| Short shelf life | 0% | |
| Medium shelf life | 7.0% | $1.02 |
| Long shelf life | 15.3% | $2.22 |
| **Brand image** | | |
| Dull (a bit embarrassing) | 0% | |
| OK (but not cool) | 13.4% | $1.94 |
| Cool | 28.0% | $4.06 |
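The utile-to-dollar arithmetic behind the WTP figures can be sketched as follows (any small differences from the table are rounding):

```python
# Price attribute: a fall from $6 (utility 0.0) to $3 (utility 20.7)
# means $3 buys 20.7 utiles.
dollars_per_utile = 3.0 / 20.7   # ~= 14.5 cents per utile

def wtp(utiles):
    """Convert a non-monetary part-worth utility into a dollar WTP."""
    return round(utiles * dollars_per_utile, 2)

print(wtp(11.6))  # 'Quite good' taste → 1.68
print(wtp(13.4))  # 'OK' brand image  → 1.94
```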

As mentioned earlier, a major strength of 1000minds is that part-worth utilities are generated for each *individual* decision-maker, in contrast to other methods that only produce *aggregate* data from the group of decision-makers.

Individual-level data enables cluster analysis to be performed (i.e. after exporting to Excel and then using a statistics package) in order to identify ‘clusters’ – or ‘market segments’ – of participants with similar preferences (as represented by their part-worth utilities).

The schematic below illustrates the main idea behind the ‘*k*-means clustering method’, which may be briefly explained as follows.

- Imagine there are just 2 attributes, represented by the *x* and *y* axes in the panels below; each point in the space corresponds to a participant’s part-worth utilities (on the *x* and *y* attributes).
- The *k*-means clustering algorithm starts by asking the analyst to set the number of potential clusters (*k* signifies the number of clusters); in the schematic there are 3 (i.e. *k* = 3).
- A starting point (*x*, *y* co-ordinates) is randomly chosen for each of the 3 yet-to-be-discovered clusters; see Panel A.
- Next, each individual in the space is assigned to whichever of the 3 starting points they are closest to; see Panel B.
- Then a new representative centre – i.e. the mean value – is calculated for each of the nascent clusters; see Panel C.
- And the process repeats: each individual is assigned to whichever of the 3 new centres they are closest to, and this keeps repeating until no further changes occur; see Panel D.
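The steps above can be sketched as a minimal hand-rolled *k*-means on made-up 2-D part-worth utilities, with fixed starting centres for reproducibility (in practice you would use a statistics package, e.g. scikit-learn):

```python
def kmeans(points, centres, iterations=10):
    """Minimal k-means sketch on 2-D points."""
    for _ in range(iterations):
        # Assign each participant to the nearest centre (Panel B).
        clusters = [[] for _ in centres]
        for x, y in points:
            distances = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centres]
            clusters[distances.index(min(distances))].append((x, y))
        # Recompute each centre as the mean of its cluster (Panel C).
        centres = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centres[i]
            for i, c in enumerate(clusters)
        ]
    return centres, clusters

# Two obvious groups of made-up participants, plus fixed starting centres.
points = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centres, clusters = kmeans(points, centres=[(0, 0), (10, 10)])
print([len(c) for c in clusters])  # → [3, 3]
```

With *k* = 2 and these starting points, the algorithm separates the two groups immediately; real data with more attributes (dimensions) and less obvious structure typically needs several iterations and multiple random restarts.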

Finally, having identified clusters of part-worth utilities (e.g. 3 clusters, as above), the usual next step is to test the extent to which each cluster is associated with observable socio-demographic characteristics (e.g. age, gender) or other consumer behaviours, in order to define targetable market segments.

Other methods are available – for a discussion of cluster analysis, read the Wikipedia article.

In conclusion, Conjoint Analysis is for discovering the relative importance to stakeholders – e.g. consumers or citizens – of the attributes underpinning a product or other alternative of interest.

Conjoint analysis (CA) – also known as Choice Modelling or Discrete Choice Experiments (DCE) – is widely used for marketing research and in the social sciences for finding out what people care about when making choices involving trade-offs.

Hopefully, the examples above have illuminated the richness of the outputs from a Conjoint Analysis survey.

To learn more about Conjoint Analysis, you might like to read the Wikipedia article.

Seminal articles about Conjoint Analysis in the marketing literature include:

- P Green, A Krieger & Y Wind (2001), “Thirty years of conjoint analysis: Reflections and prospects”, *Interfaces* 31, S56-S73.
- P Green & V Srinivasan (1990), “Conjoint analysis in marketing: New developments with implications for research and practice”, *Journal of Marketing* 54, 3-19.
- P Green & V Srinivasan (1978), “Conjoint analysis in consumer research: Issues and outlooks”, *Journal of Consumer Research* 5, 103-23.