The biases in online product reviews consumers and managers need to be aware of

Luca-Joel Schäfer and Frank Ohnesorge

Recent figures highlight how critical online product reviews are to consumer behaviour: 93% of consumers report that online reviews factor into their purchase decisions, and 91% trust online reviews as much as personal recommendations. Exposure to online product reviews is likely to grow even further as marketplaces increasingly shift online. Given this importance for consumers’ decision making, understanding how to extract information from online product reviews is crucial for consumers, platform managers and product managers.

A large part of the existing literature covers the influence of online product reviews on subsequent reviews and sales without explicitly examining biases. This lack of groundwork has produced several different approaches built on different variables and factors, which severely limits generalisability and makes estimates of the influence of electronic word of mouth and online product reviews highly sensitive to the chosen specification.

Combined, review valence, volume and variance describe review populations. The valence of a product review indicates whether it is positive, negative or neutral. While valence is the only one of these three metrics that can be assessed for an individual review, it is usually reported as the average of all posted ratings. Review volume is the aggregated number of product reviews; reviews for a given product are aggregated separately on each platform. Review variance describes the distribution of ratings and is represented by statistical variance or other measures of dispersion. There is an ongoing debate about which of these metrics exerts the largest influence on market outcomes.
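
As a minimal illustration, the three metrics can be computed directly from a list of star ratings. The sample data and the five-point scale below are assumptions made for the sketch, not figures taken from any study:

```python
from statistics import mean, pvariance

# Hypothetical ratings on a five-point scale.
ratings = [5, 5, 4, 5, 1, 5, 2, 5, 5, 1]

valence = mean(ratings)        # average posted rating
volume = len(ratings)          # aggregated number of reviews
variance = pvariance(ratings)  # dispersion of the posted ratings

print(f"valence={valence:.2f}, volume={volume}, variance={variance:.2f}")
```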

Broadly, biases in online product reviews divide into descriptive biases and explanatory biases, the latter forming arguments that potentially explain the prevalence of the descriptive biases.

Descriptive Biases

Schoenmueller, Netzer, and Stahl (2020, pp. 853–54) “define polarity as the proportion of reviews that are at the extremes of the scale […]” and claim that it “captures how extreme the distribution of reviews is”. Polarity bias, which is descriptive in nature, results in a J-shaped distribution of online product reviews, with most reviews accumulating at the far ends of the rating scale.

Not only do reviews amass at the far ends of the rating scale, they also concentrate mainly on the positive end. The ratio of positive to negative reviews describes this positivity bias. Even after introducing measures intended to debias ratings, online reviews overstate average ratings relative to the average customer experience. The widespread evidence for positivity bias highlights that valence alone is an unsuitable metric for comparing online product reviews.
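
A short sketch makes both polarity and positivity concrete. The J-shaped sample distribution and the cut-offs for “extreme” (1 or 5 stars) and for “positive” versus “negative” (4–5 versus 1–2 stars) are illustrative assumptions, not definitions taken from the cited work:

```python
from collections import Counter

# Hypothetical J-shaped distribution on a five-point scale:
# many 5s, a smaller spike of 1s, few mid-range ratings.
ratings = [5] * 60 + [4] * 10 + [3] * 5 + [2] * 5 + [1] * 20
counts = Counter(ratings)

# Polarity: proportion of reviews at the extremes of the scale.
polarity = (counts[1] + counts[5]) / len(ratings)

# Positivity: ratio of positive to negative reviews.
positivity = (counts[4] + counts[5]) / (counts[1] + counts[2])

print(f"polarity={polarity:.2f}, positive-to-negative ratio={positivity:.1f}")
```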

Temporal bias consists of two effects, the progression of time and increasing review volume, both of which influence review valence. Review valence appears to decrease over time. However, there is an ongoing debate about the generalisability of these findings and about whether the two effects push valence in different directions.

Explanatory Biases

A potential explanation for the descriptive biases is reviewer self-selection. Chen, Li, and Talluri (2021, p. 7473) quantify expected acquisition bias as “the difference between the expected rating of the customer and the true quality” and infer that niche products exhibit more substantial bias than popular ones. Simply put, acquisition bias arises because not all consumers purchase a product. Of these already self-selected purchasers, only a fraction submit a review. The result is underreporting bias, which ties in with polarity bias as described above: if reviewers submit more reviews, the chances increase that they report not only extreme opinions. Notably, attempts to shrink underreporting bias by soliciting reviews have only an aggregate effect, as the additional solicited reviews do not influence self-motivated ones.
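
The sketch below illustrates Chen, Li, and Talluri’s quantity under deliberately stylised assumptions of our own (normally distributed taste, purchase only above a cutoff, and consumers who know their match in advance; all names and parameters are hypothetical). Because non-purchasers never rate, the expected rating among purchasers sits above true quality:

```python
import random

random.seed(0)

TRUE_QUALITY = 3.0     # average experience across the whole population
PURCHASE_CUTOFF = 3.0  # only consumers expecting a good match buy

# Each consumer's experience: true quality plus idiosyncratic taste noise.
experiences = [TRUE_QUALITY + random.gauss(0, 1) for _ in range(100_000)]

# Self-selection: only purchasers (a good expected match) can leave a rating.
ratings = [min(5, max(1, round(e))) for e in experiences if e > PURCHASE_CUTOFF]

expected_rating = sum(ratings) / len(ratings)
acquisition_bias = expected_rating - TRUE_QUALITY  # Chen et al.'s (2021) quantity

print(f"expected rating among purchasers: {expected_rating:.2f}")
print(f"acquisition bias: {acquisition_bias:+.2f}")
```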

Other factors inducing bias in online reviews are differences in the respective rating environments. Because it deviates further from the (typically high) average rating, an additional negative review has a greater impact on valence than an additional positive review. This social influence bias causes existing reviews to shape subsequent reviews, so review populations deviate from the true underlying perception. The design of a rating platform (e.g., the design of the rating scale) and actions taken by the seller to influence ratings (e.g., managerial responses to reviews) introduce further bias into online product reviews.
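
A back-of-the-envelope calculation with purely illustrative numbers shows this asymmetry:

```python
ratings = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]  # hypothetical product averaging 4.7
base = sum(ratings) / len(ratings)

with_negative = (sum(ratings) + 1) / (len(ratings) + 1)  # one extra 1-star review
with_positive = (sum(ratings) + 5) / (len(ratings) + 1)  # one extra 5-star review

print(f"base {base:.2f}, after 1-star {with_negative:.2f}, after 5-star {with_positive:.2f}")
# base 4.70, after 1-star 4.36, after 5-star 4.73 -> the single negative review
# moves the average roughly ten times as far as the single positive one.
```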

Possible review fraud is another vital factor to consider when discussing bias in online product reviews. Researchers find evidence both for positive review manipulation and for negative manipulation of competitors’ reviews. Fake reviews exhibit more extremity bias than genuine ones. A crucial reason to engage in review fraud is the economic incentive created by competition or a poor reputation. While effective in the short term, sellers engaging in review fraud face long-term backlash in the form of lower valence once they stop purchasing fake reviews.

Implications

As the interests of platform and product managers do not always align, the implications for the two groups may diverge. Because ratings do not convey objective quality, managers should not rely solely on valence as their measure of online product reviews. Platform managers should display variance and volume-related information to increase the information content of the reviews they show. Displaying additional metrics may not be in the interest of product managers, who benefit from positivity bias because sales and electronic word of mouth are correlated. Nevertheless, product managers should incorporate these additional metrics into their evaluation of online product reviews, as the degree of variance yields important information for consumers.

Additionally, product managers should be aware that a declining trend in ratings does not automatically indicate a decline in true underlying product quality. To reduce self-selection biases, the prevailing view holds that platform managers should consider soliciting reviews by reminding or incentivising potential reviewers to submit one. Further, platforms should display information on silent transactions, as the decision not to post a review also carries informational value.

Product managers need to exercise caution when deciding whether to engage in managerial response and, if they do, should plan the approach carefully. Managers should customise responses only to negative reviews and wait for negative reviews to be “buried” in further positive reviews before responding. Product managers should not engage in review fraud: besides being highly unethical, the effects of purchasing promotional reviews are often short-lived and not self-sustaining. One feasible measure platform managers can implement to limit review fraud is introducing verified reviews, which indicate that the reviewer actually purchased the product.

Many of the findings described above are so far by-products of research on the effect of online product reviews on market outcomes or on specific actions such as managerial response to online reviews. There is therefore a general need for further systematic research on the biases contained in online product reviews. Beyond platform and product managers, another interesting angle is consumers themselves and how they perceive the various aspects of these ratings.

References
  • Chen, N., Li, A., & Talluri, K. (2021). Reviews and self-selection bias with operational implications. Management Science, 67(12), 7472–7492.
  • Schoenmueller, V., Netzer, O., & Stahl, F. (2020). The polarity of online reviews: Prevalence, drivers and implications. Journal of Marketing Research, 57(5), 853–877.
