CASE #2
Perception vs. Reality: Unmasking Algorithmic Change Resistance
👤 My role: Senior UX Researcher (First UX Researcher at the company)
🏢 Company: Pikabu — a Russian UGC platform (an analog of Reddit) with 140+ million visits per month
📈 Key result: Enabling confident product decisions despite vocal opposition
🏷️ Tags: #ControlledExperiment, #QuantitativeResearch, #SurveyDesign, #DataDrivenDecisions, #CognitiveEffects
Situation
The product team was testing a new algorithm for ranking posts in users’ feeds. Initial behavioral metrics looked promising, so the rollout was gradually expanded to a larger percentage of users. As the new algorithm reached more people, however, a surge of user posts complaining about it appeared on the platform.
- User complaints about the new algorithm were increasing, despite positive behavioral metrics
- The objectivity of these complaints was questionable, as the most active users of Pikabu were historically resistant to change
- There was a discrepancy between user feedback and behavioral data, creating uncertainty about the algorithm’s actual performance
- The new algorithm reduced user control over content curation, a feature highly valued by Pikabu’s audience
Task
My task was to investigate the discrepancy between positive behavioral metrics and negative user feedback regarding the new feed ranking algorithm. I needed to:
- Determine if the new algorithm was actually causing user dissatisfaction or if other factors were at play
- Identify specific areas of the new algorithm that might need improvement
Action
To address these challenges, I designed a controlled survey study:
- Two groups were established: a control group (using the old algorithm) and an experimental group (using the new algorithm)
- Two basic hypotheses were tested:
  - User satisfaction (CSAT) would differ between the control and experimental groups
  - Areas for improvement would vary between the two groups due to the different algorithms
- Alchemer was used for survey distribution, with UTM tags to differentiate between groups
- Data analysis was conducted using R/RStudio (a minimal sketch of the group comparison follows this list)
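To make the analysis step concrete, here is a minimal R sketch of the kind of group comparison this setup implies, assuming an Alchemer CSV export with a `utm_content` column identifying the group and a 1–5 CSAT item. The file name, column names, and group labels are illustrative assumptions, not the actual pipeline.

```r
# Minimal sketch of the group comparison. Assumes an Alchemer CSV export
# with a utm_content column ("control" / "new_algo") and a 1-5 csat item;
# file and column names are hypothetical.
library(dplyr)

responses <- read.csv("survey_export.csv") |>
  mutate(group = factor(utm_content, levels = c("control", "new_algo")))

# Per-group response counts and CSAT summary
responses |>
  group_by(group) |>
  summarise(n = n(), mean_csat = mean(csat), sd_csat = sd(csat))

# CSAT is ordinal, so a rank-based Mann-Whitney U test is a safer
# default than a t-test for comparing the two groups
wilcox.test(csat ~ group, data = responses)
```

Under this setup, a non-significant test result is consistent with the “no difference in satisfaction” finding reported below.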
Result
The survey surfaced several important insights:
- User satisfaction did not differ meaningfully between the groups: both reported similar CSAT levels regardless of which algorithm they were actually served
- Users in the control group (still using the old algorithm) believed they were experiencing the new “smart feed” and complained about it (see the cross-tabulation sketch after this list)
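One way to make that misattribution visible is to cross-tabulate the group a respondent was actually in against the feed they believed they were using. This is a hedged sketch reusing the `responses` frame from the earlier example; `perceived_feed` is a hypothetical survey item, not necessarily a question asked in this exact form.

```r
# Sketch: cross-tabulate the actual group against which feed respondents
# *believed* they were using. "perceived_feed" is a hypothetical item
# with answers "old_feed" / "smart_feed".
tab <- table(actual    = responses$group,
             perceived = responses$perceived_feed)
print(tab)

# Row-wise shares: what fraction of control-group users thought they
# were already on the new "smart feed"
prop.table(tab, margin = 1)

# Is perception independent of the algorithm actually served?
chisq.test(tab)
```

A large “smart feed” share in the control row is exactly the perception-versus-reality gap described in the Reflection below.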
Reflection
- The pattern likely reflects a combination of the echo chamber effect and the placebo effect:
  - 💡 Echo chambers are environments in which users’ opinions, political leanings, or beliefs about a topic are reinforced through repeated interactions with peers or sources holding similar attitudes (“The echo chamber effect on social media”)
  - 💡 The placebo effect is a phenomenon in which a sham intervention improves a patient’s condition because of factors associated with the patient’s perception of the intervention (National Library of Medicine)
- Perception vs. Reality: The study revealed a significant disparity between user perception and actual algorithm performance. This highlights the importance of clear communication during feature rollouts
- Change Resistance: The results suggest that user complaints were more likely due to resistance to change rather than actual issues with the new algorithm
- Confirmation Bias: The fact that control-group users misattributed their issues to the new algorithm demonstrates the power of suggestion and the need for unbiased evaluation methods
Conclusion
This case study demonstrates the complexity of user satisfaction in the face of algorithmic changes. It highlights the importance of combining quantitative behavioral data with qualitative user feedback to gain a comprehensive understanding of user experience. The findings also underscore the value of controlled experiments in UX research, helping to separate actual issues from perceived problems and guiding more informed decision-making in product development.