Wednesday, October 15, 2014

Beware Graphical Networks from Rating Scales without Concrete Referents

We think of latent variables as hidden causes for the correlations among observed measures and rely on factor analysis to reveal the underlying structure. In a previous post, I borrowed an alternative metaphor from the R package qgraph and produced the following correlation network. Instead of depression as a disease entity represented as a factor, this figure displays depression as a set of mutually reinforcing ratings located toward the bottom of the graph.


I selected the bfi dataset from the psych R package so that readers could reproduce the analysis and compare the factor structure with the correlation network. However, I was thinking in terms of actual behaviors and not agreement ratings for items from a personality inventory. This distinction was discussed in an earlier post introducing item response theory. The node "Mood Swings" should be measured by a series of concrete behaviors in actual situations. This is the goal of patient outcome measurement and the call for context-aware measurement. Moreover, one sees the same focus on behaviors or symptoms in the work of Borsboom and his associates, including the author of the R package qgraph that generated the above graphical network.

In an excellent tutorial on network analysis of personality data in R, Sacha Epskamp and others present another example along with all the necessary R code. Correlation networks are produced with qgraph, along with partial correlation and LASSO networks, the latter with the help of the R package parcor. This paper ("State of the aRt personality research") outlines all the steps needed to generate graphical models and to interpret the indices that describe the network structure. This is not social network analysis, for the nodes are variables and the links are different measures of relationship.
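For readers who want to experiment before downloading the supplementary materials, here is a minimal sketch using the bfi items from the psych package rather than the paper's data; current versions of qgraph estimate the regularized network directly with graph = "glasso", so parcor is not needed for this illustration:

library(psych)
library(qgraph)
items <- na.omit(bfi[, 1:25])     # the 25 personality items, complete cases only
corMat <- cor(items)
qgraph(corMat, graph = "cor", layout = "spring")                                # correlation network
qgraph(corMat, graph = "pcor", layout = "spring")                               # partial correlation network
qgraph(corMat, graph = "glasso", sampleSize = nrow(items), layout = "spring")   # regularized (LASSO) network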

The data comes from a personality inventory with a list of 60 statements and a five-point agreement scale. The scoring key lists the six constructs, abbreviated HEXACO, and their associated items. The first in the list is Sincerity, one of the 24 nodes in the network maps, measured by the following three statements:
  • I wouldn't use flattery to get a raise or promotion at work, even if I thought it would succeed.
  • If I want something from someone, I will laugh at that person's worst jokes. [scale reversed]
  • I wouldn't pretend to like someone just to get that person to do favors for me.
I understand that we share a common conceptual space embedded in our language in which the endorsement or rejection of these items might provide some information about self-presentation. Yet, I expect that someone who has never worked could answer the first question because it has nothing to do with actual experience. All that I am being asked is whether I view myself as the type of person depicted in the statement. Similarly, I can respond to the second statement even if I never laugh at anyone's bad jokes. In fact, I would answer the same regardless of any propensity to laugh or not laugh at others' jokes.

The HEXACO model of personality structure is but one of a number of different approaches based on the lexical hypothesis that personality gets coded in language. There is a meeting of the minds over the distinctions that are made and what it might mean to position ourselves at different locations within this landscape. In order to communicate with others, we must come to some agreement about the meanings of the statements used in personality inventories. It is the talk and not the behavior that is responsible for the factor structure or the positioning of nodes in the network.

Where are the feedback loops or mutually reinforcing nodes with such measures? It makes sense to talk about a network when the nodes are behaviors, as in the lower portion of our above network map. I get irritated, so I am more likely to get angry. In this agitated state I panic more easily and experience mood swings, all of which makes me feel blue. You can download the 60-item self-report form and decide for yourself if the statements are linked by anything more than a shared conception and way of talking about personality traits.

Thursday, October 2, 2014

Consumer Preference Driven by Benefits and Affordances, Yet Management Sees Only Products and Features

Return on Investment (ROI) is management's bottom line. Consequently, everything must be separated and assigned a row with associated costs and profits. Will we make more by adding another product to our line? Will we lose sales by limiting the features or services included with the product?

The assumption is that consumers see and value the same products and features that management lists as line items on their balance sheets. It simply makes data collection and analysis so easy that the most popular techniques never question this assumption. For example, in my last post about TURF Analysis, I discussed the ice cream flavors problem. How many and what flavors of ice cream should you offer given limited freezer space?

A typical data collection would present each flavor separately and ask about purchase intent, either a binary buy or no buy or an ordered rating scale that is split into a buy-or-not-buy dichotomy using a cutoff score. Even if we assume that our client only sells ice cream in grocery stores, we still do not know anything about the context triggering this purchase. Was it bought for an individual or household? Will adults or children or both be eating it for snacks or after dinner? How will the ice cream be served (e.g., cones, bowls, or with something else like pie or cake)?

Had we started with a list of usage occasions, we could have asked about flavor choices for each occasion. In addition, we could have obtained some proportional allocation of how much each occasion contributed to total ice cream consumption. Obviously, we have multiplied the number of observations from every respondent since we ask about flavor selection for every usage occasion. Much of the data matrix will be empty since individuals are likely to buy only a few flavors over a limited set of occasions.

The typical TURF Analysis, on the other hand, strips away context. By removing the "why" of the purchase, we induce a bias toward focusing on the flavor alone. Technically, this was the goal of the research design in the first place. Management knows the costs associated with offering each flavor; it needs to know the profit, but that is what it has failed to measure. In fact, it is unclear what is being measured. Does the respondent provide their own context by thinking of the most common purchase occasion, or do they report personal preferences as they might in any social gathering when asked about their favorite flavor of ice cream? Either way, we still cannot calculate profit, for that would require a weighted average of selections over purchase occasions with the weights reflecting volume.

Contextualized measurement yields high-dimensional sparse data that create problems for most optimization routines. Yet, we can analyze such data by searching for low-dimensional subspaces defined by benefits delivered and affordances provided. Purchases are made to deliver benefits. Flavors are but affordances. Someone in the household likes chocolate, so the ice cream must contain some minimal level of chocolate. Flavor has an underlying structure, and the substitution pattern reflects that structure. However, chocolate may not be desirable when the ice cream is served with cake or pie. Moreover, those "buy a second at a discount" sales change everything, as do special occasions when guests are invited and ice cream is served. Customers are likely to be acquired or lost at the margins, that is, in less common usage occasions where habit does not prevail. These will never be measured when we ask for preference "out of context" because they are simply not remembered without a specific purchase occasion probe.

Deconstructing Consumption

We start by identifying the situations where ice cream is the answer. Preference construction is triggered by situational need, and the consumer relies on situational constraints to assist in the purchase process. Situations tend to be separated by time and place (e.g., after dinner in the kitchen or dining area and late night snack in front of TV) and consequently can be modeled as additive effects. Each consumer can be profiled as some weighted combination of these recurring situations.

Moreover, we make sense of individual consumption by grouping together others displaying similar patterns. We can think of this as a type of collaborative filtering. Here again, we see additive effects where the total market can be decomposed into clusters of consumers with similar preferences. In order to capture such additive effects, I have suggested the use of nonnegative matrix factorization (NMF) in a previous post. The nonnegative restrictions help uncover additive effects, in this case, the additive effects of situations within consumers and the decomposition of the total market into additive consumer segments.

You can find the details covering how to use and interpret the R package NMF in a series of posts on this blog published in July, August and September 2014. R provides an easy-to-use interface to NMF, and the output is no more difficult to understand than that produced by factor and cluster analyses. In this post I have focused on one specific application in order to make explicit the correspondence between a matrix factorization and the decomposition of a product category into its components reflecting both situational variation and consumer heterogeneity.
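As a sketch of how such a decomposition might be set up, the following code factors a simulated consumers-by-(occasion, flavor) intensity matrix; the occasions, flavors, and scores are all placeholders invented for illustration, so no meaningful structure should be expected in the output:

library(NMF)
set.seed(123)
flavors <- paste0("F", 1:4)
occasions <- c("after.dinner", "snack", "guests")
grid <- expand.grid(flavor = flavors, occasion = occasions)
X <- matrix(rpois(200 * nrow(grid), lambda = 0.3), nrow = 200,
            dimnames = list(NULL, paste(grid$occasion, grid$flavor, sep = ".")))
X <- X[rowSums(X) > 0, ]                      # drop simulated respondents reporting no purchases
fit <- nmf(X, 3, method = "lee", nrun = 10)   # three additive occasion-flavor patterns
round(coef(fit), 2)                           # which occasion-flavor columns define each pattern
round(basis(fit)[1:5, ], 2)                   # each consumer as a nonnegative mix of those patterns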

Bradley Efron partitions the history of statistics into three centuries with each defined by the problems that occupied its attention. The 21st century focuses on large data sets and complex questions (e.g., gene expression or data mining). Such high-dimensional data present special problems that must be faced by both statistics and people engaging in everyday life. Modeling consumption from this new perspective, we hope to achieve some insight into the purchase process and measures that will reflect what the consumer will and will not buy when they actually go shopping.

Monday, September 29, 2014

TURF Analysis: A Bad Answer to the Wrong Question

Now that R has a package performing Total Unduplicated Reach and Frequency (TURF) Analysis, it might be a good time to issue a warning to all R users. DON'T DO IT!

The technique itself is straight out of media buying from the 1950s. Given n alternative advertising options (e.g., magazines), which subset of size k will reach the most readers and be seen most often? Unduplicated reach is the primary goal because we want everyone in the target audience to see the ad. In addition, it was believed that seeing the ad more than once would make it more effective (that is, until wearout), which is why frequency is a component. When TURF is used to create product lines (e.g., the flavors of ice cream to carry given limited freezer space), frequency tends to be downplayed and the focus placed on reaching the largest percentage of potential customers. All this seems simple enough until one looks carefully at the details, and then one realizes that we are interpreting random variation.
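Before turning to the R package, it may help to see the computation itself. The following sketch uses simulated 0/1 purchase intentions (not the turfR example data) and scores every triplet by its unduplicated reach, here unweighted for simplicity:

set.seed(1)
buy <- matrix(rbinom(180 * 10, 1, 0.25), nrow = 180, ncol = 10)   # invented purchase intentions
sets <- combn(10, 3)                                              # all choose(10, 3) = 120 triplets
reach <- apply(sets, 2, function(s) mean(rowSums(buy[, s, drop = FALSE]) > 0))
top <- order(reach, decreasing = TRUE)[1:5]
cbind(t(sets[, top]), reach = round(reach[top], 3))               # many triplets tie near the top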

The R package turfR includes an example showing how to use their turf() function by setting n to 10 and letting k range from 3 to 6.

library(turfR)                       # Total Unduplicated Reach and Frequency
data(turf_ex_data)                   # example data described below: 180 weighted respondents, 10 items
ex1 <- turf(turf_ex_data, 10, 3:6)   # evaluate all item subsets of size 3 through 6
ex1

This code produces a considerable amount of output. I will show only the 10 best triplets from the 120 possible sets of three that can be formed from 10 alternatives. The rchX column gives the weighted proportion of the 180 individuals in the dataset who would buy at least one of the items in the set, with the columns labeled 1 through 10 indicating which items are included. Thus, according to the first row, 99.9% would buy something if Items 8, 9, and 10 were offered for sale.

     combo     rchX     frqX  1  2  3  4  5  6  7  8  9 10
1      120 0.998673 2.448993  0  0  0  0  0  0  0  1  1  1
2      119 0.998673 2.431064  0  0  0  0  0  0  1  0  1  1
3       99 0.995773 1.984364  0  0  0  1  0  0  0  1  0  1
4      110 0.992894 2.185398  0  0  0  0  1  0  0  0  1  1
5       64 0.991567 1.898693  0  1  0  0  0  0  0  0  1  1
6      109 0.990983 2.106944  0  0  0  0  1  0  0  1  0  1
7       97 0.990850 1.966436  0  0  0  1  0  0  1  0  0  1
8      116 0.989552 2.341179  0  0  0  0  0  1  0  0  1  1
9       85 0.989552 2.042792  0  0  1  0  0  0  0  0  1  1
10      36 0.989552 1.800407  1  0  0  0  0  0  0  0  1  1

The sales pitch for TURF depends on showing only the "best" solution for each set size from 3 through 6. Once we look down the list, we find that there are lots of equally good combinations involving different products (e.g., the combination in the 7th position yields 99.1% reach with Products 4, 7 and 10). With a sample size of 180, I do not need to run a bootstrap to know that the drop from 99.9% to 99.1% reflects random variation or error.
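If you would rather see the sampling noise directly, a quick bootstrap on simulated data of the same size makes the point; the purchase probabilities below are invented and not taken from turf_ex_data:

set.seed(2)
buy <- matrix(rbinom(180 * 10, 1, 0.25), nrow = 180)             # invented 0/1 purchase intentions
reach_set <- function(idx) mean(rowSums(buy[idx, c(8, 9, 10)]) > 0)
boot <- replicate(2000, reach_set(sample(180, replace = TRUE)))  # resample the 180 respondents
quantile(boot, c(0.025, 0.975))                                  # an interval far wider than a 0.008 gap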

Of course, the data from turfR is simulated, but I have worked with many clients and many different datasets across a range of categories and I have never found anything but random differences among the top solutions. I have seen solutions where the top several hundred combinations cannot be distinguished based on reach, which is reasonable given that the number of combinations increases rapidly with n and k (e.g., the R function choose(30,5) indicates that there are 142,506 possible combinations of 30 things in sets of 5). You can find an example of what I see over and over again by visiting the TURF website for XLSTAT software.

Obviously, there is no single best item combination that dominates all others. It could have been otherwise. For example, it is possible that the market consists of distinct segments with each wanting one and only one item.

With no overlap in this Venn diagram, it is clear that vanilla is the best single item, followed by vanilla and chocolate as the best pair, and so on, had there been more flavors separated in this manner.

However, consumer segments are seldom defined by individual offerings in the market. You do not stop buying toothpaste because your brand has been discontinued. TURF asks the wrong question because consumer segmentation is not item-based.

As a quick example, we can think about credit card reward programs with their categories covering airlines, cash back, gas rebates, hotels, points, shopping and travel. Each category could contain multiple reward offers. A TURF analysis would seek the best individual rewards while ignoring the categories. Yet, comparison websites use categories to organize searches because consumer segments are structured around the benefits offered by each category.

The TURF Analysis procedure from XLSTAT allows you to download an Excel file with purchase intention ratings for 27 items from 185 respondents. A TURF analysis would require that we set a cutoff score to transform the 1-through-5 ratings into a 0/1 binary measure. I prefer to maintain the 5-point scale and treat purchase intent as an intensity score after subtracting one, so that the scale now ranges from 0=not at all to 4=quite sure. A nonnegative matrix factorization (NMF) reveals that the 27 items in the columns fall into 8 separable row categories, with red indicating a high probability of membership and yellow, marking values close to zero, showing the categories where a product does not belong.

The above heatmap displays the coefficients for each of the 27 products, as the original Excel file names them. Unfortunately, we have only the numbers and no description of the 27 products. Still, it is clear that interest has an underlying structure and that perhaps we ought to consider grouping the products based on shared features, benefits or usages. For example, what do Products 5, 6 and 17 clustered together at the end of this heatmap have in common? Understand, we are looking for stable effects that can be found in the data and in the market where purchases are actually made.
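For reference, the two codings discussed above differ by only a couple of lines of R; the cutoff of 4 (a top-two-box rule) is my own assumption and not a value used in the original analysis:

ratings <- read.csv("demoTurf.csv")[, -1]   # 185 respondents by 27 items, scored 1 to 5
binary <- 1 * (ratings >= 4)                # the 0/1 measure a TURF analysis would need
intensity <- ratings - 1                    # 0 = not at all ... 4 = quite sure, kept for the NMF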

The right question asks about consumer heterogeneity and whether it supports product differentiation. Different product offerings are needed only when the market contains segments seeking different benefits. Those advocating TURF analysis often use ice cream flavors as their example, as I did in the above Venn diagram. What if the benefit driving sales of less common flavors was not the flavor itself but the variety associated with a new flavor or a special occasion when one wants to deviate from the norm? A segmentation, whether NMF or another clustering procedure, would uncover a group interested in less typical flavors (probably many such flavors). This is what I found in the purchase histories of whiskey drinkers: a number of segments each buying one of the major brands, and a special-occasion or variety-seeking segment buying many niche brands. All of this is missed by a TURF analysis, which gives us instead a bad answer to the wrong question.

Appendix with R Code needed to generate the heatmap:

First, download the Excel file, convert it to csv format, and set the working directory to the location of the data file.

test<-read.csv("demoTurf.csv")                    # 185 respondents by 27 purchase intent ratings
library(NMF)
fit<-nmf(test[,-1]-1, 8, method="lee", nrun=20)   # drop the first column, rescale 1-5 to 0-4, extract 8 latent categories
coefmap(fit)                                      # heatmap of the coefficient (item) matrix


Saturday, September 27, 2014

Recognizing Patterns in the Purchase Process by Following the Pathways Marked By Others

Herbert Simon's "ant on the beach" does not search for food in a straight line because the environment is not uniform with pebbles, pools and rough terrain. At least the ant's decision making is confined to the 3-dimensional space defining the beach. Consumers, on the other hand, roam around a much higher dimensional space in their search for just the right product to buy.

Do you search online or shop retail? Do you go directly to the manufacturer's website, or do you seek out professional reviews or user ratings? Does YouTube or social media hold the key? Similar decisions must be made for physical searches of local retailers and superstores. Of course, embedded within each of these decision points are more choices concerning features, servicing and price.

Yet, we do not observe all possible paths in the consumer purchase journey. Like the terrain of the beach, the marketplace makes some types of searches easier than others. In addition, like the ant, the first consumers leave trails that later consumers can follow. This can be direct word of mouth or indirect effects such as internet searches where the websites shown first depend on the number of previous visits. But it can also be marketing messaging and expert reviews, that is, markers along the trail telling us what to look for and where to look. We are social creatures, and it is fascinating to see how quickly all the possible paths through the product offerings are narrowed down to several well-worn trails that we all follow. Culture impacts what and how we buy, and statistical modeling that incorporates what others are doing may be our best hope of discovering those pathways.

In order to capture everyone in the product market and all possible sources of information, we require a wide net with fine webbing. Our data matrix will contain heterogeneous rows of consumers with distinctive needs who are seeking very different benefits. Moreover, our columns must be equally diverse to span everywhere that a consumer can search for product information. As a result, we can expect our data matrix to be sparse for we have included many more columns of information sources than any one consumer would access.

To make sense of such a data matrix, we will require a statistical model or algorithm that reflects this construction process, by which I mean the social and cultural grouping of consumers who share a common understanding of what is important to know and where one should seek such information. For example, someone looking for a new credit card could search and apply solely online, but not every consumer will, for some do not shop on the internet or feel insecure without a physical building close to home. Those wanting to apply in person may wait for a credit card offer to be inserted in their monthly bank statement, or they may see an advertisement in the local newspaper.

Modeling the Joint Separation of Consumers and Their Information Sources

Nonnegative matrix factorization (NMF) decomposes the nonnegative data matrix into the product of two other nonnegative matrices, one for consumers and the other for information sources. The goal is dimension reduction. Before NMF, we needed all p columns of the data matrix to describe the consumer. Now, we can get by with only the r latent features, where r is much smaller than p. What are these latent features? They are defined in the same manner as the factors in factor analysis. Our second matrix from the nonnegative factorization contains coefficients that can be interpreted as one would factor loadings. We look for the information sources with the largest weights to name the latent feature.

Returning to our credit card example, the data matrix includes rows for consumers banking online and in person, plus columns for online search along with columns for direct mail and newspaper ads. Online banking customers use online information sources, while in-person banking customers can be found looking for information in a different cluster of columns. We have separation, with online rows and columns forming one block and in-person rows and columns coming together in a separate block.

The nonnegativity of the two product matrices enables such a "parts-based" representation with the simultaneous clustering of both rows and columns. We start with the observed data matrix. It is nonnegative, so that zero indicates none and a larger positive value suggests more of whatever is being measured. Counts or frequencies of occurrence would work. Actually, the data matrix can contain any intensity measure. Hopefully, you can visualize that the data matrix will be more sparse (more zeros) with greater separation between the row-column blocks, and in turn, this sparsity will be associated with corresponding sparsity in the two product matrices.

A toy example might help with this explanation.

     V1  V2  V3  V4
S1    6   3   0   0
S2    4   2   0   0
S3    2   1   0   0
S4    0   0   6   3
S5    0   0   4   2
S6    0   0   2   1

The above data matrix shows the intensity of search scores from 0 (no search) to 6 (intense search) for six consumers across four different information sources. What might have produced such a pattern? The following could be responsible:
  • Online sources in the first two columns with V1 more popular than V2,
  • Offline sources in the last two columns with V3 more popular than V4,
  • Online customers in the first three rows with individual search intensity S1 > S2 > S3, and
  • Offline customers in the last three rows with individual search intensity S4 > S5 > S6.
The pattern might seem familiar as row and column effects from an analysis of variance. The columns form a two-level repeated measures factor with V1 and V2 nested in the first level (online) and V3 and V4 in the second level (offline). Similarly, the rows fall into two levels of a between-subject factor with the first three rows nested in level one (online) and the last three rows in level two (offline). Biclustering algorithms approach the problem in this manner (e.g., the R package biclust). Matrix factorization achieves a similar outcome by factoring the data matrix into the product of two new matrices with one representing row effects and the other column effects.
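For those who would rather see the biclustering route, the sketch below applies the Bimax method from the biclust package to a binarized copy of the toy matrix V (which is constructed again in the appendix code at the end of this post); the choice of Bimax and its minimum block sizes are my own, for illustration only:

library(biclust)
W <- matrix(c(3, 2, 1, 0, 0, 0, 0, 0, 0, 3, 2, 1), nrow = 6)
H <- matrix(c(2, 0, 1, 0, 0, 2, 0, 1), nrow = 2)
V <- W %*% H
bc <- biclust((V > 0) * 1, method = BCBimax(), minr = 2, minc = 2, number = 2)
bc   # should recover the online (rows 1-3, columns 1-2) and offline (rows 4-6, columns 3-4) blocks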

The NMF R package decomposes the data matrix into the two components that are believed to have generated the data in the first place. In fact, I created the data matrix as a matrix product and then used NMF to retrieve the generating matrices. The R code is given at the end of this post. The matrices W and H, below, reflect the above four bullet points. When these two matrices are multiplied, their product W x H is the above data matrix (e.g., the first entry in the data matrix is 3x2 + 0x0 = 6).

W
     R1  R2
S1    3   0
S2    2   0
S3    1   0
S4    0   3
S5    0   2
S6    0   1

H
     V1  V2  V3  V4
R1    2   1   0   0
R2    0   0   2   1

As expected, when we run the nmf() function with rank r=2 on this data matrix, we get these two matrices back again with W as the basis and H as the coefficient matrix. Actually, because W and H are multiplied, we might find that every element in W is divided by 2 and every element in H is multiplied by 2, which would yield the same product. Looking at the weights in H, one concludes that R1 taps online information sources, leaving R2 as the offline latent feature. If you wished to standardize the weights, all the coefficients in a row could be transformed to range from 0 to 1 by dividing by the maximum value in that row.

Decompositions such as NMF are common in statistical modeling. Regression analysis in R using the lm() function is performed via a QR decomposition. The singular value decomposition (SVD) underlies much of principal component analysis. Nothing unusual here, except for the ability of NMF to thrive when the data are sparse.
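A few lines of R make the connection concrete; the columns pulled from mtcars are arbitrary, chosen only to provide something numeric to decompose:

X <- scale(mtcars[, c("mpg", "disp", "hp", "wt")])
qrX  <- qr(X)       # the QR factorization that lm() uses to solve least squares
svdX <- svd(X)      # singular value decomposition
pc   <- prcomp(X)   # principal components, computed from the SVD
all.equal(abs(svdX$v), abs(pc$rotation), check.attributes = FALSE)   # same loadings up to sign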

To be clear, sparsity is achieved when we ask about the details of consumer information search. Such details enable management to make precise changes in their marketing efforts. As important, detailed probes are more likely to retrieve episodic memories of specific experiences. It is better to ask about the details of price comparison (e.g., visit competitor website or side-by-side price comparison on Amazon or some similar site) than just inquire if they considered price during the purchase process.

Although we are not tracking ants, we have spread sensors out all over the beach, a wide network of fine mesh. Our beach, of course, is the high-dimensional space defined by all possible information sources. This space can be huge, over a billion combinations (2^30) when we have only 30 information sources measured as yes or no. Still, as long as consumers confine their searches to low-dimensional subspaces, the data matrix will have the sparsity needed by the decompositional algorithm. That is, NMF will be successful as long as consumers adopt one of several established search pathways clearly marked by repeated consumer usage and marketing signage.

R code to create the V=WH data matrix and run the NMF package:

W=matrix(c(3,2,1,0,0,0,0,0,0,3,2,1), nrow=6)   # 6 consumers by 2 latent features
H=matrix(c(2,0,1,0,0,2,0,1), nrow=2)           # 2 latent features by 4 information sources
V=W%*%H                                        # the observed data matrix
W; H; V

library(NMF)
fit<-nmf(V, 2, method="lee", nrun=20)          # rank-2 factorization, best of 20 random starts
fit
round(basis(fit),3)                            # recovered W (up to rescaling)
round(coef(fit))                               # recovered H (up to rescaling)
round(basis(fit)%*%coef(fit))                  # reproduces V
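The row-wise rescaling of the coefficient matrix described earlier can be appended to this code as follows:

Hfit <- coef(fit)
round(sweep(Hfit, 1, apply(Hfit, 1, max), "/"), 2)   # each row of coefficients rescaled to run from 0 to 1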


Monday, September 22, 2014

What is Cluster Analysis? A Projective Test

Supposedly, projective tests (e.g., the inkblots of psychoanalysis) contain sufficient ambiguity that "what you see" reveals some aspect of your thinking that has escaped your awareness. Although the following will provide no insight into your neurotic thoughts or feelings, it might help separate two different ways of performing and interpreting cluster analysis.

A light pollution map of the United States, a picture at night from a satellite orbiting the earth, is shown below.



Which of the following two representations more closely matches the way you think of this map?

Do you consider population density to be the mixture of distributions represented by the red spikes in the first option?




Or perhaps this mixture model is too passive for you, so that you prefer the air traffic representation in the second option showing separate airplane locations at some point in time.



The mclust package in R provides the more homeostatic first representation using density functions. Because mclust adjusts the shape of each normal distribution in the mixture, one can model the Northeast corridor from Boston to Philadelphia with a single cluster. Moreover, the documentation enables you to perform the analysis without excessive pain and to understand how finite mixture models work. If you need a video lecture on Gaussian mixtures, MathematicalMonk (aka Jeff Miller) on YouTube is the place to start.
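If you want to experiment, the following sketch fits a Gaussian mixture to simulated two-dimensional locations standing in for the map; the coordinates and the two concentrations are invented:

library(mclust)
set.seed(42)
locations <- rbind(cbind(rnorm(300, -74, 1.0), rnorm(300, 41, 0.6)),    # a dense "corridor"
                   cbind(rnorm(100, -118, 0.5), rnorm(100, 34, 0.5)))   # a second concentration
fit <- Mclust(locations)              # BIC chooses the number of components and their shapes
summary(fit)
plot(fit, what = "classification")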

On the other hand, if airplanes can be considered as messages passed between nodes with greater concentrations (i.e., cities with airports), then the R package performing affinity propagation, apcluster, offers the more "self-organizing" model shown in the second option, with many possible ways of defining similarity or affinity. Ease of use should not be a problem with a webinar, a comprehensive manual, and a link to the original Science article. However, the message-passing algorithm requires some work to comprehend in detail. Fortunately, one can run the analysis, interpret the output, and know enough not to make any serious mistakes without mastering all the computational intricacies.
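The affinity propagation counterpart works from a similarity matrix rather than raw coordinates; this sketch reuses the simulated locations from the mclust example above:

library(apcluster)
sim <- negDistMat(locations, r = 2)   # similarity = negative squared Euclidean distance
ap <- apcluster(sim)                  # exemplars emerge from the message-passing updates
ap                                    # number of clusters and their exemplars
plot(ap, locations)                   # clusters drawn over the two-dimensional points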

And the true representation is? As a marketer, I see it as a dynamic process with concentrations supported by the seaports, rivers, railroad tracks, roads, and airports that have served commerce over time. Population clusters continually evolve (e.g., imagine Las Vegas without air travel). They are not natural kinds revealed by carving nature at its joints. Diversity comes in many shapes and forms, each requiring its own model with its unique assumptions concerning the underlying structures. More importantly, cluster analysis serves many different purposes, with each setting its own criteria. Haven't we learned that one size does not fit all?