
4.15 Metrics-Based Conjoint Modeling: Analyzing the Coefficients


[Figure: Model 1 coefficients (reference levels selected by Python)]

[Figure: Model 2 coefficients (reference levels manually selected)]

If our goal here were simply to identify the most popular bundle, we could skip the modeling process entirely and just rank the bundles by average rating score.
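That ranking step can be sketched in a few lines of pandas. The column names (`bundle_id`, `rating`) and the data here are illustrative, not from the actual survey:

```python
import pandas as pd

# Hypothetical survey responses: each row is one respondent's rating of a bundle.
ratings = pd.DataFrame({
    "bundle_id": [1, 1, 2, 2, 3, 3],
    "rating":    [6, 8, 4, 5, 9, 7],
})

# Rank the bundles by average rating, highest first.
bundle_rank = (ratings.groupby("bundle_id")["rating"]
                      .mean()
                      .sort_values(ascending=False))
print(bundle_rank)
```

This tells us which bundle won, but nothing about *why* it won; that is what the coefficients are for.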

By focusing instead on the model coefficients, we can gain insights into the preferences of the survey respondents.

As we do, keep in mind that for each attribute, one level is not shown along with these results.  That level is the “reference level.”  The other coefficient values are interpreted relative to it: a positive coefficient means that customers prefer that level over the reference level, and a negative coefficient means that customers prefer the reference level.  The negative coefficients here for opentop_Y and sway_Yes mean that ferris wheels with open tops, and ones that sway, are less attractive to respondents than ones with closed tops that do not sway.
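The distinction between the two models comes down to how the dummy variables are encoded. A minimal sketch with pandas, using illustrative level names, shows both approaches:

```python
import pandas as pd

# Hypothetical design columns; the level names are illustrative.
design = pd.DataFrame({
    "opentop": ["Y", "N", "Y", "N"],
    "sway":    ["Yes", "No", "No", "Yes"],
})

# Model 1 style: let the software pick the reference level.
# drop_first=True drops the alphabetically first level of each attribute,
# so "N" and "No" become the (hidden) reference levels.
auto = pd.get_dummies(design, drop_first=True)

# Model 2 style: choose the reference level manually, by ordering the
# categories so that the desired baseline comes first.  Here the manual
# choice happens to match the automatic one, but any level could be
# listed first to serve as the baseline.
design["opentop"] = pd.Categorical(design["opentop"], categories=["N", "Y"])
design["sway"] = pd.Categorical(design["sway"], categories=["No", "Yes"])
manual = pd.get_dummies(design, drop_first=True)

print(auto.columns.tolist())
```

Either way, the dropped level never gets its own coefficient; it is absorbed into the model's intercept, which is why every other coefficient reads as a comparison against it.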

The magnitude of the coefficients tells us about the strength of the respondents’ opinions.  Regardless of whether the reference levels were selected by Python or chosen manually, the substantive results are the same.  For instance, among the coefficients shown in both images above, the largest magnitude is the one for the red color option.  Why is red so popular?  Perhaps visitors to Lobster Land associate reddish tones with the park, and feel that the ferris wheel should be painted accordingly.  It is also possible that the color options evoke strong sentiment because the color of the ride is so easy to imagine and understand.  For more complicated attributes, it can be harder to form a strong opinion quickly.
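Ranking levels by strength of opinion means sorting on the absolute value of the coefficient, since a strongly negative coefficient is just as informative as a strongly positive one. A small sketch, with hypothetical coefficient values standing in for the fitted output:

```python
# Hypothetical coefficient values, for illustration only; the actual
# numbers would come from the fitted model output.
coefs = {
    "color_red":  0.90,
    "height_300": 0.60,
    "opentop_Y": -0.40,
    "sway_Yes":  -0.30,
}

# Rank levels by strength of opinion: the absolute value of the coefficient.
by_strength = sorted(coefs.items(), key=lambda kv: abs(kv[1]), reverse=True)
for level, c in by_strength:
    print(f"{level}: {c:+.2f}")
```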

With the totaltime variable, we can see that respondents exhibit a general preference for longer rides.  That is helpful to know, in and of itself, but let’s dig just a bit deeper here.  In Model 1, the incremental benefit that comes from going from 80 seconds to 240 seconds is 0.575, but the additional gain that comes from going from 240 seconds to 420 seconds is just 0.176.  It looks like there would be a big payoff in terms of extra consumer satisfaction by going to 240 seconds, but the incremental gain from increasing the time to 420 seconds may not be quite as significant.
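Because both longer-ride coefficients are measured against the same 80-second reference level, the incremental gains are just differences between adjacent coefficients. The 0.575 figure is the 240-second coefficient itself, and the quoted 0.176 gain implies a 420-second coefficient of roughly 0.751:

```python
# Coefficients relative to the 80-second reference level.  The 240-second
# value is quoted in the text; the 420-second value is implied by the
# stated incremental gain (0.575 + 0.176).
coef = {"totaltime_80": 0.0, "totaltime_240": 0.575, "totaltime_420": 0.751}

# Incremental gain from each step up in ride length.
gain_80_to_240 = coef["totaltime_240"] - coef["totaltime_80"]
gain_240_to_420 = coef["totaltime_420"] - coef["totaltime_240"]

print(round(gain_80_to_240, 3), round(gain_240_to_420, 3))
```

Laying the gains out this way makes the diminishing returns explicit: the second step up in ride length buys less than a third of the satisfaction gain of the first.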

It is also helpful to bear in mind that consumers typically answer surveys from their own perspective.  When answering the question about the ride’s length, we are most likely imagining ourselves being on the ride – but we’re not thinking about waiting in line!  From the perspective of operational efficiency, a 420-second ride could be a disaster if it creates a huge bottleneck of visitors waiting to try out the new ferris wheel.  

With the height variable, there seems to be a preference here for taller ferris wheels.  Does this mean that Lobster Land should rush out to start constructing a 300-foot ferris wheel?  Not necessarily.  

One issue with these options, right off the bat, is that it is hard for someone to accurately perceive a height of 100, 200, or 300 feet, without some nearby frame of reference.  When asked, we might just opt for the highest choice, perceiving it as being “best.”  If we were standing next to a 300-foot structure, however, our bold sentiment might diminish a bit.  Furthermore, there could be cost constraints, engineering constraints, and insurance-related reasons to hesitate before rushing out to build the tallest possible ride.  Similar issues could arise with paxpercar, a variable for which respondents seemed to tell us “the more, the merrier.”  

All that said, it does seem noteworthy that for each numeric feature (paxpercar, height, and totaltime), respondents opted for the “max” option.  In and of itself, that suggests that we may wish to consider re-constituting the survey with a wider range of possible options (assuming, of course, that such options are feasible).  

From the results here, we also don’t know exactly how representative the surveyed population sample is of the Lobster Land client base – are these people even likely to visit the park at all?  

Ultimately, we should use metrics-based conjoint analysis in a nuanced way.  Even when it sends us a clear message (as with the red color choice), we may wish to refine that message into more specific options (for instance, which shade of red is best?).  Sometimes it will deliver clear themes (the rejection of the open-top cars, and of the swaying cars, seems to point toward a preference for safety).  In other cases, the proper interpretation of its results is as much art as science.  While that may be frustrating in one sense, due to the lack of clear answers, it may be encouraging in another: its application and interpretation are limited only by the imagination and creativity of the person using it.