For Online Recommendations, One Size Doesn't Fit All

Online product recommendation systems have been around for almost as long as e-commerce. They all share the common goal of recommending items a particular person is most likely to be interested in at a given time. In aiming for this common goal, however, recommendation systems take a wide variety of different approaches. These approaches can be grouped into four main categories. Some systems use only one of these; others use combinations. The key techniques are:

  • Segmentation, which divides users into groups based on characteristics like age, gender, and geographic location;
  • Collaboration, which starts with an individual and attempts to locate others like them;
  • Personalization, which relies on a user’s prior actions to determine what they are likely to do next; and
  • Similarity, which starts with products, rather than users, and models relationships between them to drive recommendations.

The first technique, segmentation, also known as demographic or psychographic profiling, assumes that members of each group are very similar and recommends similar products to all of them. Snow blowers are almost certainly more interesting to middle-class residents of Chicago suburbs than to retirees in South Florida. That being said, segmentation is a blunt instrument. Not everyone is typical of their age, gender, or ZIP code. Online, the challenges can be even greater, as many users arrive anonymously.

Collaboration, also known as collaborative filtering or “wisdom of the crowds,” attempts to improve upon segmentation by using observed behavior, rather than personal characteristics, to determine which users are similar. In this approach, Joe might be recommended a snow blower because he bought a leaf blower in the fall and a lawn mower the previous summer, and many of the people who bought those two products also bought snow blowers. The key drawback of this approach is that it often finds correlations that are not driven by any identifiable underlying factor, which can lead to curious and confusing recommendations.
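The co-purchase logic described above can be sketched in a few lines. This is a minimal illustration under assumed data, not any vendor's actual algorithm; the users and products are hypothetical.

```python
from collections import Counter

# Hypothetical purchase histories.
purchases = {
    "joe":   {"leaf blower", "lawn mower"},
    "ann":   {"leaf blower", "lawn mower", "snow blower"},
    "bob":   {"leaf blower", "lawn mower", "snow blower"},
    "carol": {"garden hose", "sprinkler"},
}

def recommend(user, purchases, k=1):
    """Recommend items bought by users whose purchase histories overlap with `user`'s."""
    mine = purchases[user]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)
        if overlap:
            for item in theirs - mine:
                # Weight each candidate item by how similar the other shopper is.
                scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("joe", purchases))  # ['snow blower']
```

Joe gets a snow blower recommendation purely because shoppers with overlapping histories bought one — no demographic data is involved, which is both the strength and (when the overlap is coincidental) the weakness of the approach.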

Personalization, also known as one-to-one marketing or intent modeling, moves away from group behavior and focuses on individual behavior. If a user spends time browsing the technical specs of three different snow blowers, there is a very good chance they are interested in purchasing a snow blower, regardless of what they have purchased before. But relying on individual behavior alone, at the expense of the other techniques, can be unnecessarily limiting. Once the user has put the snow blower in their cart, they might be more than willing to add the latest “James Bond” DVD, because it is selling in large numbers among people who bought the same DVDs the user bought in the past.

Finally, we have similarity, which starts with products rather than users: it models relationships between products and recommends items similar to those a user has interacted with. This is particularly useful for new products. Say a band has just released an album. A retailer who hasn’t sold any copies of it yet can be reasonably confident that people who bought previous albums by that band will be interested. The weaknesses of similarity are that it assumes a site has reliable structured data on its products, which not all sites have, and that it assumes people are more interested in groups of similar items than in a diverse mix of items.
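The new-album example can be sketched as attribute-based similarity, assuming the site has structured data such as an artist field on each product; the catalog, item names, and attribute names below are all hypothetical.

```python
# Hypothetical catalog with structured attributes.
catalog = {
    "album_3":  {"type": "album", "artist": "The Examples"},
    "album_1":  {"type": "album", "artist": "The Examples"},
    "album_2":  {"type": "album", "artist": "The Examples"},
    "tshirt_9": {"type": "shirt", "artist": "The Examples"},
    "album_7":  {"type": "album", "artist": "Other Band"},
}

def similar_items(item, catalog):
    """Rank other items by how many structured attributes they share with `item`."""
    attrs = catalog[item]
    scored = []
    for other, other_attrs in catalog.items():
        if other == item:
            continue
        shared = sum(1 for key, val in attrs.items() if other_attrs.get(key) == val)
        if shared:
            scored.append((shared, other))
    scored.sort(reverse=True)  # most shared attributes first
    return [name for _, name in scored]

print(similar_items("album_3", catalog))
```

Because the score uses catalog attributes rather than sales history, a brand-new album by the same artist ranks highly even before anyone has bought it — which is exactly the cold-start advantage the paragraph above describes.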

As you can see, each of the four techniques outlined has inherent strengths and weaknesses. The good news is that choosing one in no way precludes using the others. In fact, if a site is using just one recommendation technique, it is almost certainly leaving sales on the table. Sites like Sears and Burton, which use more than one recommendation technique, have been able to boost sitewide sales by up to 30 percent over the long term and drive significant increases in user loyalty and repeat purchase rates. The question every site has to answer is how to combine the techniques to create the best overall experience. After serving more than 20 billion product recommendations, we at richrelevance have found that the best approach is to provide a wide variety of relevant recommendation types, and to continually test their performance in real time in order to optimize the choice of techniques across the various page types and contexts on each individual site.
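The continual real-time testing described above resembles a multi-armed bandit problem. One common (and here merely assumed — this is not richrelevance's disclosed system) way to frame it is an epsilon-greedy chooser that mostly serves whichever technique has the best observed click-through rate, while occasionally exploring the others.

```python
import random

class EpsilonGreedyChooser:
    """Pick among recommendation techniques, mostly exploiting the best one so far."""

    def __init__(self, techniques, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {t: 0 for t in techniques}
        self.clicks = {t: 0 for t in techniques}

    def _ctr(self, technique):
        shows = self.shows[technique]
        return self.clicks[technique] / shows if shows else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            # Explore: try a random technique to keep learning.
            return random.choice(list(self.shows))
        # Exploit: serve the technique with the highest observed click-through rate.
        return max(self.shows, key=self._ctr)

    def record(self, technique, clicked):
        """Update statistics after showing a recommendation of this type."""
        self.shows[technique] += 1
        if clicked:
            self.clicks[technique] += 1
```

In practice each page type and context would get its own chooser, so that, say, similarity might win on product pages while collaboration wins in the cart.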

Darren Vengroff is chief scientist at richrelevance. Previously, Darren was CTO and co-founder of Pelago, principal engineer at, and a vice president at Goldman Sachs.

10 Responses to “For Online Recommendations, One Size Doesn't Fit All”

  1. Darren – Thanks for the post. It’s a great read and highlights the importance of selecting the right logic to power recommendations.

    Todd – Your response to Patricia was a bit curt. We too evaluated several recommendations providers. After beginning with a list of 8 (ATG, Baynote, Omniture, etc.), we decided we wanted a provider that could incorporate a wide variety of recommendation strategies, and we also wanted a best-of-breed solution focused on personalization. With these criteria, we narrowed the list to RichRelevance, Certona, and iGoDigital. All three were very good companies, but we too chose another provider in the end. You have a good product, but you do have very capable competition, and it’s hard to believe that any company would bat 1.000 on proposals.

    Jason – Thanks again for your post. Very informative and well written.

  2. Thanks for this post. I was searching for some inspiration as I analyze the potential of recommendations for a

    Previously I worked with Figleaves to implement recommendations, with mixed results. Following on from Patricia Linford’s comments, I am interested in what metrics are used in the analysis behind such assertions, and how this is best measured when implemented?

  3. I find Patricia’s post very interesting. As chief customer officer, I’ve seen every deal that has passed through richrelevance, and I can say that every retailer who has tested our product suite against our competitors has gone on to select us as their partner. Also, an important point of clarification – we offer only pay-for-performance pricing and have never offered a rate anything like that described in Patricia’s comment.

  4. Patricia Linford

    “Sites like Sears and Burton, which use more than one recommendation technique, have been able to boost sitewide sales by up to 30 percent over the long term”.

    As part of the e-commerce team of a large retailer, we decided we wanted to implement automated recommendations on our site. We looked at a number of players, among them Rich Relevance. We didn’t select them, because we did some bucket testing to compare their solution with two other vendors’ and felt they were better at marketing than at ecommerce.

    If Mr. Vengroff’s assertion is correct (their solution increases sales by 30 percent), then why did they quote us a monthly price of $1,500? For a 30 percent increase in sales, we’d be happy to pay 10 or 100 times that price.

    My intention is not to criticize Rich Relevance (their impressive client base suggests there must be some substance behind their claims); I just want to make sure that other ecommerce managers think carefully when selecting a recommendations vendor and do not rely on promotional postings of this nature.

  5. amazingamazon

    That was incredibly informative. I finally understand what underlies these recommendation systems. And it makes perfect sense to use the best technique in each particular situation rather than always using the same approach. Why wouldn’t all of the recommendation providers do this?