Paper reading: Neutral bots probe political bias on social media

Link to the paper: Neutral bots probe political bias on social media, Nature Communications (2021), https://doi.org/10.1038/s41467-021-25738-6

Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots that start by following different news sources on Twitter, and track them to probe apparent biases in the platform's mechanisms and in its interactions with users. We found no strong or consistent evidence of political bias in the news feed. Nonetheless, the news and information to which US Twitter users are exposed depend strongly on the political leaning of their early connections. The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content, shifting their experience toward the political center. Partisan accounts, especially conservative ones, tend to gain more followers and to follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.

Background

Online social media can connect more people, more cheaply and quickly, than traditional media ever could. Because a large proportion of people regularly use social media to generate content, consume information, and interact with others [1], online platforms are also shaping user norms and behaviors.

Experiments have shown that simply changing the messages that appear on a social feed can influence a user's online expression and real-world behavior [2,3], and that social media users are sensitive to early social influence [4,5].

Research context: polarization; users tend to cluster around homogeneous topics, and recommendation algorithms tend to surface false information, which makes it harder to detect.

Meanwhile, discussions on social media tend to revolve around critical but controversial topics such as elections [6-8], vaccination [9], and climate change [10]. Polarization is often accompanied by the segregation of users who disagree into so-called echo chambers [11-16], homogeneous online communities associated with ideological radicalization and the spread of misinformation. Combating this undesirable phenomenon requires a deep understanding of its underlying causal mechanisms.

On the one hand, online vulnerability is associated with several socio-cognitive biases in humans, including the tendency to select information consistent with existing beliefs and to seek homogeneity in social relationships.

On the other hand, online platforms have their own algorithmic biases. For example, a ranking algorithm that favors popular and engaging content can create a vicious cycle that amplifies noise rather than quality. Exposure to engagement metrics may also increase the likelihood of being influenced by misinformation. As a more extreme example, recent research and media reports have shown that, regardless of the starting point, YouTube's recommendation system can lead to videos containing more misinformation or more extreme views.

Beyond the socio-cognitive biases of individual users and the algorithmic biases of technology platforms, our understanding of how collective interactions mediated by social media influence the worldview we acquire through the online information ecosystem is very limited. The main hurdle is the complexity of the system: not only do users exchange vast amounts of information with a large number of other people through many hidden mechanisms, but these interactions can be manipulated overtly or covertly by legitimate influencers as well as inauthentic adversaries who have an incentive to sway opinion or radicalize behavior [32]. There is evidence that malicious entities such as social bots and trolls have been used to spread misinformation and influence public opinion on critical issues [33-37].

The goal is to reveal the biases people are exposed to in news and information

In this study, our goal was to uncover the biases people are exposed to in the news and information of the social media ecosystem. We were particularly interested in clarifying the role of social media interactions in the polarization process and the formation of echo chambers. We therefore focus on American political discourse on Twitter, because of the platform's significant role in American politics and its strong polarization and echo chambers. Twitter forms a directed social network in which an edge from a friend node to a follower node indicates that content posted by that friend appears in the follower's news feed.

Our goal is to study ecosystem bias, which includes underlying platform bias and the net effects of interactions with users of the social network (organic or inorganic), as mediated by platform mechanisms and regulated by its policies. While we only attempt to separate platform effects from naturally occurring bias in the narrow context of feed curation, our investigation addresses the overall bias experienced by platform users. This requires removing the biases of individual users, which is a challenge with traditional observational methods: it is impossible to separate ecosystem effects from confounding factors that may affect the behavior of tracked human accounts, such as age, gender, race, ideology, and socioeconomic status.

Social media accounts controlled entirely by algorithms (called social bots)

We therefore turned to an approach that removes the need to control for such confounding factors, by utilizing social media accounts that mimic human users but are fully controlled by algorithms (so-called social bots). Here, we deploy social bots with neutral (unbiased) and random behavior as instruments to probe exposure bias on social media. We refer to our bots as "drifters" to distinguish their neutral behavior from that of other types of benign and malicious social bots on Twitter. All drifters share the same behavior model; the only difference is their initial friend.

In our experiments, each drifter was released into the wild after an initial action representing a single independent variable (the treatment). Note that although all drifters follow the same behavior model, their actions differ depending on their initial conditions. We expect that drifters that initially follow liberal accounts will be more likely to be exposed to liberal content, share some of it, be followed by liberal accounts, and so on. But these behaviors are driven by platform mechanisms and social interactions, not by political bias in the independent variable: the behavior model cannot distinguish between liberal, conservative, or any other type of content. The drifters' behavior is therefore part of the dependent variable (the outcome) measured in our experiment.

Research questions

This approach allows us to examine the combined biases stemming from Twitter's system design and algorithms, as well as from organic and inorganic social interactions between the drifters and other accounts. Our research questions are: (i) How do early actions on a social media platform affect an account's influence and its exposure to inauthentic accounts, political echo chambers, and misinformation?

(ii) Can any such differences be attributed to political bias in the platform's news feed? To answer these questions, we initialize drifters with news sources from across the political spectrum.

Five months later, we examined the content consumed and generated by the drifters and analyzed: (i) the characteristics of their friends and followers, including their liberal-conservative political alignment as inferred from shared links and hashtags;

(ii) automated activity, as measured by machine-learning methods;

(iii) exposure to information from low-credibility sources, as identified by news and fact-checking organizations.

We found that the political alignment of the initial friend has a significant impact on popularity, social network structure, exposure to bots and low-credibility sources, and the political alignment embodied in each drifter's behavior. However, we found no evidence that these results can be attributed to platform bias. Our study of political trends in the Twitter information ecosystem provides insights that could inform the public debate on how social media platforms shape people's exposure to political information.

Results

All drifters in our experiments followed the same behavior model, which was designed to be neutral and not necessarily realistic. Each drifter activates at random times to perform an action. Action types, such as tweeting, liking, and replying, are selected at random according to predefined probabilities. For each action, the model specifies how a random target is chosen, such as a tweet to retweet or a friend to unfollow. The time intervals between actions are drawn from a broad distribution to produce realistic bursty behavior; a minimal sketch of this action loop follows.
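To make the model concrete, here is a minimal Python sketch of such a content-blind action loop. The action set, the probabilities, and the log-normal interval distribution are illustrative assumptions, not the authors' actual parameters (those are given in the paper's Methods).

```python
import random
import numpy as np

# Illustrative action probabilities; the real values are in the paper's Methods.
ACTION_PROBS = {
    "tweet": 0.25,     # post a random item (e.g., from a trending topic)
    "retweet": 0.25,   # reshare a random tweet from the home timeline
    "like": 0.25,      # like a random tweet from the home timeline
    "reply": 0.15,     # reply to a random tweet from the home timeline
    "follow": 0.05,    # follow a random account encountered in the timeline
    "unfollow": 0.05,  # unfollow a randomly chosen friend
}

def choose_action(rng=random):
    """Pick an action type by its predefined probability, blind to content."""
    actions, weights = zip(*ACTION_PROBS.items())
    return rng.choices(actions, weights=weights, k=1)[0]

def next_wait_seconds(scale=3600.0, sigma=1.5):
    """Heavy-tailed (here log-normal, as an assumption) wait times yield
    realistic bursty activity: many short gaps, occasional long silences."""
    return float(np.random.lognormal(mean=np.log(scale), sigma=sigma))
```

Because neither the action choice nor the target choice ever inspects the content's politics, any partisan drift in the outcome must come from the platform and the surrounding users, not from the model.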

Implementation of the drifter bots

        See "Methods" for details. We develop 15 drifting robots, divide them into five groups, and initialize each drifting robot in the same group with the same initial friend. Each Twitter account used as a first friend popular news sources that aligned with left, center left, center, center right, or right of the US political spectrum (see Methods for details). We call drifters by the politics of their original friends. For example, a robot initialized with a center-left source is called a "C.left" drifter.

From deployment on July 10, 2019 to decommissioning on December 1, 2019, we monitored the drifters' behavior and collected data daily. Specifically, we measured: (1) the number of followers of each drifter, to compare their ability to gain influence; (2) each drifter's echo-chamber exposure; (3) the automated (bot) activity among each drifter's friends and followers; (4) the proportion of low-credibility information the drifters were exposed to; and (5) the political alignment of the content generated by the drifters and their friends, to probe political bias.

Influence. The number of followers can be used as a rough indicator of influence. To measure how political alignment affects influence dynamics, Figure 1 plots the average number of followers of the drifters in each group over time.

(Fig. 1: Follower growth. The x-axis shows the duration of the experiment in 2019; the y-axis shows the mean number of followers for the different drifter groups. Shaded bands represent ±1 standard error.)

To compare growth rates across groups, we considered consecutive observations of the follower count of each drifter and pooled them within each group (n = 387 for left, 373 for C. left, 389 for center, 387 for C. right, 386 for right). Two trends emerged from the tests (all tests in this and the following analyses are two-sided).

First, drifters with the most partisan sources as initial friends tend to attract more followers than drifters in the center (df = 774, t = 5.13, p < 0.001 for left vs. center; df = 773, t = 8.00, p < 0.001 for right vs. center). Second, drifters with a right-leaning initial source gained followers at a significantly higher rate than those with a left-leaning initial source (df = 771, t = 3.84, p < 0.001 for right vs. left).
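As a sketch of this kind of comparison: pooled follower-growth observations per group, compared with a two-sided t-test. The synthetic arrays below are placeholders for the real measurements; note that scipy's default pooled-variance test reproduces the reported degrees of freedom (e.g., 386 + 389 − 2 = 773 for right vs. center).

```python
import numpy as np
from scipy import stats

# Placeholder data standing in for the pooled growth observations;
# the real samples are daily follower-count differences per drifter.
rng = np.random.default_rng(0)
right_growth = rng.normal(loc=0.9, scale=1.0, size=386)
center_growth = rng.normal(loc=0.4, scale=1.0, size=389)

t, p = stats.ttest_ind(right_growth, center_growth)  # two-sided by default
df = len(right_growth) + len(center_growth) - 2      # df for the pooled-variance test
print(f"df={df}, t={t:.2f}, p={p:.3g}")
```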

Differences in influence among the drifters are affected not only by political alignment but also by other characteristics of their initial friends. To disentangle these factors, we measured the correlation between the number of a drifter's followers and two characteristics of its initial friend: its overall influence and its popularity among politically aligned accounts. While drifter influence was not affected by the overall influence of the initial friend, it was positively correlated with the friend's popularity among politically aligned accounts (see Supplementary Note). This is consistent with evidence that users with shared partisan leanings are more likely to form social ties [42], as we explore next.

Echo chambers.

We define echo chambers as dense, highly clustered social media communities that amplify exposure to homogeneous content. To investigate whether the drifter bots find themselves in such echo chambers, consider each drifter's ego network, that is, the network consisting of the drifter together with its friends and followers. We can use the density and transitivity of the ego network as proxies for the presence of an echo chamber. Density is the fraction of pairs of nodes that are connected in the network. Transitivity measures the fraction of connected node triples that close into triangles; high transitivity means that friends and followers are likely to also follow each other. See "Methods" for details, and the sketch below for how these two measures are computed.
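As an illustration, both proxies are one-liners in networkx. The toy undirected ego network below is made up for the example (the paper works with the directed friend/follower graph):

```python
import networkx as nx

# Toy ego network: the drifter plus a few friends/followers,
# some of whom also follow each other (closing triangles).
ego = nx.Graph()
ego.add_edges_from([
    ("drifter", "a"), ("drifter", "b"), ("drifter", "c"),
    ("a", "b"), ("b", "c"),
])

density = nx.density(ego)            # fraction of possible node pairs that are connected
transitivity = nx.transitivity(ego)  # fraction of connected triples that form triangles
print(f"density={density:.2f}, transitivity={transitivity:.2f}")
```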

Figure 2a, b show the average density and transitivity of the drifters' ego networks (see "Methods" for details). Since these two measures are correlated in ego networks, Figure 2c also plots the transitivity rescaled by that of shuffled random networks (see Methods); a sketch of this normalization follows.
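One common way to implement such a rescaling, shown here as an assumption about the exact procedure, is to divide the observed transitivity by the average transitivity of degree-preserving randomizations of the same network:

```python
import networkx as nx

def normalized_transitivity(ego, n_samples=20, seed=0):
    """Observed transitivity divided by the mean transitivity of
    degree-preserving shuffles of the same network (assumed procedure)."""
    observed = nx.transitivity(ego)
    baseline = []
    for i in range(n_samples):
        shuffled = ego.copy()
        # Rewire edges while preserving every node's degree.
        nx.double_edge_swap(
            shuffled,
            nswap=10 * ego.number_of_edges(),
            max_tries=1000 * ego.number_of_edges(),
            seed=seed + i,
        )
        baseline.append(nx.transitivity(shuffled))
    return observed / (sum(baseline) / n_samples)
```

A value well above 1 indicates more clustering than the degree sequence alone would produce, which is the echo-chamber signal the rescaled measure is meant to isolate.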

(Fig. 2: The echo-chamber structure around the drifters. a Density, b transitivity, and c normalized transitivity of the drifters' ego networks in the different groups; error bars represent the standard error (n = 3 drifters per group). d Ego networks of the drifters in the five groups; nodes represent accounts and edges represent friend/follower relationships; node size and color indicate degree (number of neighbors) and the political alignment of shared links, respectively; black nodes lack alignment scores because they shared no political content.)

(Fig. 3: Distribution of bot scores for the drifters' friends and followers. Bot scores are numbers between 0 and 1, with higher scores indicating likely automation. For each group, we consider the union of the drifters' friends and followers; bars represent mean values. For friends, n = 282 (left), 261 (C. left), 206 (center), 323 (C. right), 414 (right); for followers, n = 172 (left), 118 (C. left), 65 (center), 205 (C. right), 299 (right).)

The ego networks of right drifters are denser than those of center drifters (df = 4, t = −8.28, p = 0.001), while the density difference between center and left drifters is not significant (df = 4, t = −2.68, p = 0.055). The right-group networks are also more transitive than the center networks (df = 4, t = −9.31, p < 0.001), as are the left-group networks (df = 4, t = −3.53, p = 0.024). Even after accounting for density differences, the right groups remain more clustered than the center groups (df = 4, t = −8.96, p < 0.001), while the left groups do not differ significantly from the center (df = 4, t = −2.73, p = 0.053). Furthermore, the echo chamber is stronger for right drifters than for left drifters (df = 4, t = −3.84, p = 0.019 for density; df = 4, t = −3.02, p = 0.039 for transitivity). However, the left-right difference in normalized transitivity is not significant (df = 4, t = −0.60, p = 0.579), suggesting that the higher clustering on the right is explained by the density of social ties.

Automated activities.

Automated accounts known as social bots actively participated in online discussions around recent US elections [33,43,44]; the drifters were therefore expected to encounter bot accounts. We used the Botometer service [45,46] to collect bot scores for the drifters' friends and followers, and we report their distributions in Figure 3. Not surprisingly, drifters across the political spectrum were more likely to have bots among their followers than among their friends. Bots among friends reveal a deeper vulnerability of social media users, since followed accounts feed directly into what a user sees. We found that accounts followed by partisan drifters are more bot-like than those followed by centrist drifters (df = 618, t = −6.14, p < 0.001 for right vs. center; df = 486, t = −3.67, p < 0.001 for left vs. center). Comparing partisan and moderate groups, accounts followed by right drifters are more bot-like than those followed by C. right drifters (df = 735, t = −3.01, p = 0.003), while the difference on the liberal side is weaker (df = 541, t = −2.56, p = 0.011 for left vs. C. left). Across the two sides, accounts followed by right drifters are more bot-like than those followed by left drifters (df = 694, t = −2.33, p = 0.020).
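For reference, scoring an account with Botometer looks roughly like the sketch below, using the botometer Python client. The credential placeholders are hypothetical, and the exact result fields should be checked against the current botometer-python documentation; they are stated here as assumptions.

```python
import botometer

# Placeholder credentials; real keys come from RapidAPI and a Twitter app.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

result = bom.check_account("@example_account")
# Raw scores lie in [0, 1], matching the scale reported in Figure 3;
# the field path below follows the client's README and is an assumption.
print(result["raw_scores"]["universal"]["overall"])
```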
