
Department of Marketing & Entrepreneurship

Marketing Research Seminar Series

Note: Topics and Abstracts will be added to this page throughout the semester

10/28/2022
CBB 310, 11:30-1:00
Speaker: Wensi Zhang (USC)
Topic: Learning to Create on Content-Sharing Platforms
Abstract:

Uncertainty around content monetization poses a persistent challenge for independent creators, highlighting the importance of understanding and managing reward uncertainty in content creation. This paper addresses that challenge and studies how monetary rewards and reward fluctuations (risk) affect creators' content choices. We build and estimate a model featuring the "learning of learning": to resolve reward uncertainty, creators learn how their rewards evolve from their past creation and consumption experience, and use the expected reward and reward variance to make content choices. In the empirical context of a live-streaming platform, we quantify the impact of risk and find that, at the mean level, risk brings disutility amounting to about one-third of the average reward. Managerially, we propose an income-smoothing policy in the hope of helping creators cope with risk and encouraging content creation. Our counterfactual simulations reveal that, by collecting a moderate level of reward from each creation and fully subsidizing creators when their reward levels are low, the platform could gain additional revenue while promoting content creation. Interestingly, creators not only produce more under the policy but also tap into less popular content categories, thereby enhancing content diversity on the platform.

Faculty Host: Kitty Wang
10/24/2022
MH 140, 11:30-1:00
Speaker: Jangwon Choi (University of Michigan)
Topic: Wait For Free: A Consumption-Decelerating Promotion for Serialized Digital Media
Abstract:

Promotions for digital goods, such as free samples, have typically focused on enticing users to accelerate their consumption. Here, we investigate the implications of a novel consumption-decelerating type of promotion, "Wait For Free" (WFF), applied to serialized digital content: sequences of interconnected episodes monetized via episode-level paywalls. Specifically, customers can sample early episodes of promoted series for free, and can continue to do so either by waiting a pre-specified time or immediately by paying. Because it defies one of the traditional roles of promotions, consumption acceleration, such a policy may appear counterproductive. However, when applied to serialized content, some users may get "hooked" on the plotline, and those unwilling to wait may elect to pay in order to progress more quickly. We analyze episode-level viewership data from an international digital comics platform while accounting for the time trend and observed heterogeneity in promotional lifts, as well as the impact of promoting one series on others by constructing dyadic behavioral similarity measures between series. Using a combinatorial Genetic Algorithm, we efficiently search through potential promotion sets, finding that this seemingly counterintuitive type of promotion can in fact boost paid viewership at the platform level, net of cannibalization of unpromoted comics, when applied to an appropriate set of comics. We also discover that the genres and overall popularity within the solution set change considerably depending on the planning horizon. Finally, to understand the individual user-level impact of WFF, we estimate a Cox proportional hazards model to compare, pre- and post-promotion, the degree of within-series inertia in content consumption and across-series switching behavior, finding evidence of trial effects among users and conversion of some free users into paying ones.

Faculty Host: Ye Hu
10/21/2022
MH 140, 11:30-1:00
Speaker: Sherry He (UCLA)
Topic: Optimizing Rating Systems for Innovation
Abstract:

I study how rating system design affects innovation incentives. In settings where product quality cannot be observed prior to purchase, online ratings serve as a signal of product quality for consumers and affect demand. Owing to their impact on sales, ratings also motivate firms to innovate. If firms use displayed ratings to guide their investments in improving product quality, then platform rating aggregation policies can play a key role in increasing or decreasing firms' innovation incentives. I study the impact of online rating systems on innovation incentives and, more importantly, the implications of the design of the rating aggregation policy. After collecting a unique firm-level dataset from a mobile game app platform, I combine reduced-form analysis and a structural model to show how rating systems can be optimized for innovation. I show that innovation has a positive impact on all key rating system metrics and that a lower rating significantly increases innovation incentives. Building on this empirical evidence, I develop a dynamic structural model to represent firms' forward-looking behavior and estimate innovation costs. I then evaluate the impact of alternative rating aggregation policies on innovation incentives. The counterfactual analysis shows that placing greater weight on recent ratings can substantially increase the rate of innovation.

Faculty Host: Sam Hui
10/17/2022
MH 140, 11:30-1:00
Speaker: Mohsen Foroughifar (University of Toronto)
Topic: The Challenges of Deploying an Algorithmic Pricing Tool: Evidence from Airbnb
Abstract:

We study the deployment of an algorithmic pricing tool, Smart Pricing (SP), on Airbnb's platform. SP is a machine learning algorithm that uses past data to predict demand and employs proxies correlated with the host's marginal cost to set prices for listings. The success of such deployments depends both on the algorithm's performance and on how sellers use the tool in their business decisions. Our analyses suggest that adopting SP is associated with higher benefits for hosts who rarely change their prices than for those who flexibly adjusted their prices before adoption. However, hosts who rarely change their prices are, surprisingly, less likely to adopt SP. To understand how the platform can overcome this challenge, we propose and estimate a dynamic structural model in which hosts make adoption decisions based on their expectations of the algorithm's behavior. Our estimation results identify a gap between the actual performance of the SP algorithm and hosts' prior beliefs about it. Specifically, hosts with pessimistic prior beliefs about SP expect that they will need to manually correct algorithmic prices if they adopt it, and this belief is disproportionately stronger among hosts with higher adjustment costs, making SP adoption less attractive to them. Our counterfactual simulations indicate that the introduction of SP has had a small positive impact on average host profit and total platform revenue. However, this boost could be significantly larger if Airbnb helped hosts correct their beliefs about the SP algorithm. This highlights the need to properly communicate how the algorithm works and what its benefits are in order to successfully deploy a machine learning tool.
The counterfactual analyses also demonstrate that, since the platform does not fully capture the host's private marginal cost when training the algorithm, using the costs estimated from the structural model to re-train the algorithm can significantly increase the profitability of SP for both hosts and the platform. This suggests that combining the results of structural models and machine learning tools can help platforms design better algorithms.

Faculty Host: Sesh Tirunillai
10/14/2022
MH 140, 11:30-1:00
Speaker: Katie Mehr (University of Pennsylvania)
Topic: How Does Rating Specific Features of an Experience Alter Consumers' Overall Evaluation of That Experience?
Abstract:

How does the way companies elicit ratings from consumers affect the ratings they receive? In 11 pre-registered experiments, we find that consumers rate subpar experiences more positively overall when they are also asked to rate specific aspects of those experiences (e.g., a restaurant's food, service, and ambiance). Studies 1-4 established the basic effect across different scenarios and experiences. Study 5 found that the effect is limited to being asked to rate specific features of an experience, rather than providing open-ended comments about those features. Study 6 found that the effect holds even when consumers are told that they will rate specific aspects on a subsequent page. Studies 7-10 provided evidence that the effect does not emerge because rating positive aspects of a subpar experience reminds consumers that their experiences had some good features. Rather, it emerges because consumers give less weight to negative aspects of an experience in their overall evaluation when they are invited to rate those aspects directly. Lastly, Study 11 found that asking consumers to rate attributes of a subpar experience reduces the predictive validity of their overall rating. We discuss the implications of this work and reconcile it with conflicting findings in the literature.

Faculty Host: Melanie Rudd
9/30/2022
MH 140, 11:30-1:00
Speaker: Byung Lee (Columbia University)
Topic: The Replaced Self: Personalized Recommendations Can Undermine Preference Clarity
Abstract:

We explore an unintended consequence of using personalized recommendations, that is, recommendations targeted to an individual consumer (e.g., personalized music playlists). We conceptualize that personalized recommender systems are seen as having the ability to replace the self. Using these systems can therefore decrease people's preference clarity, defined as their certainty about their own preferences. For example, people may feel less certain about their own music preferences after listening to auto-generated personalized playlists. This reduced preference clarity, in turn, reduces consumers' willingness to generate word-of-mouth (WOM) about their consumption experiences, such as their intent to talk with others about the music they listened to, or to post social media content about their favorite musicians. Six studies, using correlational and experimental designs and conducted with consumers who actively use personalization services (in the fashion and music domains), support this theorization. We find that listening to one's own (vs. a matched other's) Spotify-generated music playlist increases satisfaction but decreases WOM due to changes in preference clarity. We end with a discussion of potential theoretical extensions of this novel finding, as well as its practical implications.

Faculty Host: Melanie Rudd