Personalization and Twitter: How do content recommendations respond to ideological behavior?

Speaker: Benjamin Guinaudeau (University of Konstanz)

Wednesday, March 8, 14:00–14:45 (Irish time)

Please register here to receive the link and password for the online meeting, along with information on the room at UCD.

Abstract: Although social media emerged only recently, the accumulation of evidence undermining the ‘echo chamber’ hypothesis is striking. While self-selected exposure to congruent content (the echo chamber) has proven less salient than expected, the ideological bias induced by algorithmic selection (the filter bubble) has received less scrutiny in the literature. In this study, we propose a new experimental research design to investigate recommender systems. To avoid any behavioral confounder, we rely on automated agents, which ‘treat’ the algorithm with ideological and behavioral cues. For each agent, we compare the ideological slant of the recommended timeline with that of an artificially reconstructed chronological timeline and hence isolate the ideological bias of the recommender system. This allows us to investigate two main questions: (1) How much bias is induced by the recommender system? (2) What role do implicit and explicit cues play in triggering ideological recommendations?
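
To make the comparison concrete, the following minimal Python sketch shows how such a bias measure could be computed for a single agent; the Tweet structure, the binary ideology coding, and all names are illustrative assumptions rather than the study's actual implementation.

# Minimal sketch of the bias measure described above. The Tweet structure,
# the binary ideology coding, and the function names are illustrative
# assumptions, not the study's actual implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Tweet:
    ideology: str  # e.g. "left" or "right", as scored by some classifier

def cross_cutting_share(timeline: List[Tweet], agent_ideology: str) -> float:
    """Share of tweets whose slant differs from the agent's own ideology."""
    if not timeline:
        return 0.0
    opposed = sum(1 for tweet in timeline if tweet.ideology != agent_ideology)
    return opposed / len(timeline)

def algorithmic_bias(recommended: List[Tweet],
                     chronological: List[Tweet],
                     agent_ideology: str) -> float:
    """Difference in cross-cutting exposure between the two timelines.
    Negative values mean the recommender surfaces less opposing content
    than a chronological ordering of the same feed."""
    return (cross_cutting_share(recommended, agent_ideology)
            - cross_cutting_share(chronological, agent_ideology))

# Hypothetical example for one right-leaning agent; a value of -0.05
# would correspond to the roughly 5% reduction reported below.
recommended = [Tweet("right"), Tweet("right"), Tweet("left")]
chronological = [Tweet("right"), Tweet("left"), Tweet("left")]
print(algorithmic_bias(recommended, chronological, "right"))  # about -0.33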

The pre-registered experiment features 170 automated agents, which were active for three weeks before and three weeks after the 2020 American presidential election. We find that, after three weeks of delivering ideological cues (following accounts and interacting with content), the average algorithmic bias is about 5%. In other words, the timeline as structured by the algorithm contains 5% less cross-cutting content than the same timeline ordered chronologically. While the algorithm relies on both implicit and explicit cues to formulate recommendations, the effect of implicit cues is significantly stronger. This study is, to our knowledge, the first experimental assessment of the ideological bias induced by the recommender system of a major social media platform. Recommendations rely above all on behavioral cues unwittingly and passively shared by the user. As affective polarization becomes an ever greater contemporary challenge, our results raise important normative questions about the possibility of opting out of the ideological bias of recommender systems. In addition, they point to an urgent need for more transparency around recommendations: How are algorithms trained? What cues or features do they use? Against which biases have they been tested? In parallel, the results demonstrate the failure of ‘in-house bias correction’ and call for an external auditing framework that would facilitate this kind of research and crowd-source the scrutiny of recommender systems.

About the speaker: Benjamin Guinaudeau is a PhD student in Political Science at the University of Konstanz (Germany). His research lies at the intersection of Political Methodology (with a taste for textual data), Legislative Politics, and Online Political Behavior. In his dissertation, he develops new methodological tools to measure the ideological positions of MPs and to test spatial models of legislative politics.