Research Article: Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model

Date Published: March 5, 2019

Publisher: Public Library of Science

Author(s): Alina Sîrbu, Dino Pedreschi, Fosca Giannotti, János Kertész, Floriana Gargiulo.


The flow of information reaching us via online media platforms is optimized not for information content or relevance but for popularity and proximity to the target, typically in order to maximize platform usage. As a side effect, this introduces an algorithmic bias that is believed to enhance fragmentation and polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence to account for algorithmic bias and investigate its consequences. In the simplest version of the original model, pairs of discussion participants are chosen at random and their opinions move closer to each other if they are within a fixed tolerance level. We modify the selection rule for discussion partners: there is an enhanced probability of choosing individuals whose opinions are already close to each other, mimicking the behavior of online media that suggest interaction with similar peers. As a result we observe: a) an increased tendency towards opinion fragmentation, which emerges even in conditions where the original model would predict consensus; b) increased polarization of opinions; and c) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Fragmentation and polarization are augmented by a fragmented initial population.

Partial Text

Political polarization and opinion fragmentation is a widely observed and worsening negative trend in modern western societies [1–4], with such concomitants as “alternative realities”, “filter bubbles”, “echo chambers”, and “fake news”. Several causes have been identified (see, e.g., [5]), but there is increasing evidence that the new online media are among them [6–8]. It was earlier assumed that traditional mass media mostly influence the politically active elite of the society and only indirectly affect polarization of the entire population. The recent dramatic changes brought by the emergence of online media, the ubiquity of the Internet with all information within reach of a few clicks, and the general usage of online social networks have increased the number of communication channels through which political information can reach citizens. Somewhat counterintuitively, this has not led to more balanced information acquisition and a stronger tendency towards consensus; quite the contrary, as argued in [9]. One reason may be that the new media have enhanced the reachability of people, which can be used to transmit simplified political answers to complex questions and thus act toward polarization [10]. Moreover, in the new media the stream of news is organized not in a balanced way but by algorithms built to maximize platform usage. It is conjectured that this generates an “algorithmic bias”, which artificially creates opinion fragmentation and enhances polarization. This is an artefact of online platforms, also called “algorithmic segregation” [11].

The original bounded confidence model [30] considers a population of N individuals, where each individual i holds a continuous opinion xi ∈ [0, 1]. This opinion can be interpreted as the degree to which an individual agrees with a certain position. Individuals are connected by a complete social network and interact pairwise at discrete time steps. The interacting pair (i, j) is selected randomly from the population at each time point t. After interaction, the two opinions xi and xj may change, depending on a so-called bounded confidence parameter ε ∈ [0, 1]. This parameter can be seen as a measure of the open-mindedness of individuals in a population: it defines a threshold on the distance between the opinions of two individuals, beyond which communication between them is not possible due to conflicting views.
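The interaction rule just described can be sketched in a few lines. In this minimal sketch, a randomly chosen pair interacts, and if their opinions differ by less than ε they move toward each other at a rate μ; the symbol μ and its default value 0.5 (simple averaging) are implementation assumptions here, not taken from the excerpt above.

```python
import random

def bc_step(x, eps, mu=0.5):
    """One interaction of a bounded confidence model (sketch).

    x   : list of opinions, each in [0, 1]
    eps : confidence bound (tolerance) -- epsilon in the text
    mu  : convergence rate; mu = 0.5 makes the pair average their
          opinions (an assumption for this illustration)
    """
    i, j = random.sample(range(len(x)), 2)   # random interacting pair
    if abs(x[i] - x[j]) < eps:               # communication possible
        xi, xj = x[i], x[j]
        x[i] = xi + mu * (xj - xi)           # opinions move closer
        x[j] = xj + mu * (xi - xj)
    return x
```

With μ = 0.5 and a tolerance large enough to cover the pair's distance, one interaction brings both agents to their common average; if the distance exceeds ε, nothing changes.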

In order to understand how the introduction of algorithmic bias affects model behavior, we study the model under multiple criteria for various combinations of the parameters ε and γ. We are interested in whether the population converges to consensus or to multiple opinion clusters, whether polarization emerges, and how fast convergence occurs. We also consider the influence of population size on the observed behavior, for both the original and the extended model. Furthermore, the effect of a fragmented initial population is studied. For each analysis we repeat the simulations multiple times to account for the stochastic nature of the model, and report average values for each criterion above.
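One plausible way to implement the biased partner selection controlled by γ is to pick the first agent uniformly and choose the partner with probability decreasing in opinion distance, e.g. proportional to distance raised to the power −γ. This is a sketch under that assumption; the distance floor d_min, introduced to avoid division by zero for identical opinions, is likewise an implementation choice, not a detail from the excerpt.

```python
import random

def biased_pair(x, gamma, d_min=1e-4):
    """Pick an interaction pair under algorithmic bias (sketch).

    Agent i is chosen uniformly; partner j is chosen with probability
    proportional to |x[i] - x[j]| ** (-gamma), so agents with closer
    opinions are more likely to interact.  gamma = 0 recovers the
    unbiased, uniform partner choice of the original model.
    """
    n = len(x)
    i = random.randrange(n)
    others = [j for j in range(n) if j != i]
    # Floor the distance at d_min so identical opinions get a large
    # but finite weight (assumption to keep the weights well-defined).
    weights = [max(abs(x[i] - x[j]), d_min) ** (-gamma) for j in others]
    j = random.choices(others, weights=weights)[0]
    return i, j
```

A full simulation run would alternate this selection step with the bounded confidence update, sweeping ε and γ and averaging the resulting cluster counts and opinion distances over repeated runs.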

A model of algorithmic bias in the framework of bounded confidence was presented and its behavior analyzed. Algorithmic bias is a mechanism that encourages interaction among like-minded individuals, similar to patterns observed in real social network data. We found that, for this model, algorithmic bias hinders consensus and favors opinion fragmentation and polarization through different mechanisms. On the one hand, consensus is hindered by a very strong slowdown of convergence, so that even when a single cluster is asymptotically obtained, the time to reach it is so long that in practice consensus never appears. On the other hand, we observed fragmentation of the population as the bias grows stronger, with the number of clusters increasing compared to the original model. At the same time, the average opinion distance in the population also grew, indicating the emergence of polarization. A fragmented initial condition further enhances fragmentation and polarization, augmenting the effect of the algorithmic bias. Additionally, we observed that small populations may be less resilient to fragmentation and polarization, due to finite-size effects.



