
Nature News · Feb 18, 2026
Main

Social media platforms have fundamentally transformed human lives: a large and growing share of the global population connects with others, gets entertained and learns about the world through social media2. These platforms have also become increasingly important for political news consumption. A quarter of US adults report social media as their primary news source, and half say they at least sometimes get news from these platforms3. Typically, platforms use feed algorithms to select and order content in personalized feeds for each user4. Before algorithms were introduced, users saw a simple chronological feed that displayed posts from followed accounts, with the most recent posts appearing at the top.

Public intellectuals and scholars have raised concerns about the potential adverse effects of social media, particularly feed algorithms, on social cohesion, trust and democracy5,6,7,8. These concerns arise from the spread of misinformation9,10,11, the promotion of toxic and inflammatory content12,13,14 and the creation of 'filter bubbles' with increasingly polarized content15,16,17,18. There is substantial rigorous quantitative evidence that internet access and social media indeed have important negative effects19,20,21,22. Research on search engine rankings also shows that the order in which information is presented can influence user behaviour and political beliefs23. However, the previous literature on the effects of social media feed algorithms reports null political effects. A large study of Facebook and Instagram, conducted by academics in cooperation with Meta during the 2020 US election, found that experimentally replacing the algorithmically curated feed with a chronological feed did not lead to any detectable effects on users' polarization or political attitudes, despite causing a substantial change in political content and lowering user engagement with the platforms1. Similarly, studies on Google's search engine and YouTube algorithms found little evidence of filter bubbles24,25,26,27. Studies of Meta platforms linking content to user behaviour and attitudes also found no impact, despite prevalent like-minded content and amplified political news28,29,30.

Yet the fact that switching off a feed algorithm does not affect users' political attitudes does not mean that algorithms have no political impact. If the initial exposure to the algorithm has a persistent effect on political outcomes, switching off the algorithm might show no effects despite its importance. For instance, this could happen because people start following accounts suggested by the algorithm and continue following them when the algorithm is switched off. In addition, different platforms may have different effects, for instance, owing to different informational environments or the different objectives of their owners31,32,33,34.
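To make this persistence logic concrete, here is a toy simulation; it is our illustration with hypothetical numbers, not a model from the paper. An exposed user adopts a few algorithm-suggested accounts, so even after the algorithm is switched off their chronological feed differs from that of a never-exposed user, and a switch-off comparison can look like a null effect.

```python
# Toy illustration (ours, not the paper's model): how an algorithm's
# influence can persist after it is switched off, via the follow list.

def feed_slant(followed_slants):
    """Mean slant of a chronological feed (+1 = conservative, -1 = liberal):
    a chronological feed simply reflects whom the user follows."""
    return sum(followed_slants) / len(followed_slants)

def simulate_user(exposed_to_algorithm: bool) -> float:
    follows = [+1] * 10 + [-1] * 10   # balanced starting follow list
    if exposed_to_algorithm:
        # Hypothetical: during exposure, the user adopts five right-leaning
        # accounts suggested by the algorithm.
        follows += [+1] * 5
    # The algorithm is now OFF for everyone; only the follow list remains.
    return feed_slant(follows)

print(f"never exposed:      {simulate_user(False):+.2f}")   # +0.00
print(f"previously exposed: {simulate_user(True):+.2f}")    # +0.20
# Even with the algorithm off, the previously exposed user's chronological
# feed stays slanted, so 'switch-off' comparisons among already-exposed
# users can report null effects despite a real initial influence.
```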
We study the effects of X's feed algorithm and find that switching the algorithm on substantially shifts attitudes on policies and current political news towards more conservative opinions, without significantly affecting polarization or partisanship. We conducted a randomized experiment involving actual X users in the United States over a 7-week period in the summer of 2023.

Our study departs from the previous literature in several ways. First, we leverage the feature on X that allowed users in 2023 to choose between a chronological feed (the 'Following' tab) and an algorithmic feed (the 'For you' tab), in which content was both added (showing posts from accounts not followed by the user) and reordered (prioritizing some posts while hiding others) relative to the chronological feed setting (as confirmed by our data). This feature enabled us to conduct two experiments simultaneously: examining the effects of switching the feed algorithm on for users who previously used the chronological feed and of turning it off for those who were on the algorithmic feed before the experiment.

Second, this feature allowed us to conduct the experiment independently, without cooperation from X. Hence, we avoid potential concerns specific to studies conducted in partnership with platforms35,36. Recently, another independent study used a browser extension to show that re-ranking content within X's algorithmic feed, promoting or demoting posts expressing anti-democratic attitudes and partisan animosity, influenced affective polarization37. In contrast, we randomize users' exposure to X's algorithmic feed as designed and implemented by X or to a chronological feed. We then quantify which posts the algorithm promotes and demotes, estimate its causal effects on users' attitudes and behaviour, and leverage behavioural responses to provide evidence on the mechanism.

Finally, in addition to affective polarization and partisanship, we also study outcomes such as policy priorities and attitudes towards current political events, which may be less rigid and, therefore, more easily influenced by exposure to different social media content.

Our experiment took place more than 6 months after Elon Musk's acquisition of Twitter, a few months after the publication of the platform's source code and shortly after Linda Yaccarino assumed the role of CEO, yet about 1 year before Musk's public endorsement of Donald Trump in July 2024. An earlier study examined changes in the content of users' feeds when Twitter introduced the feed algorithm in 2016, well before Musk's takeover, and found that the algorithm already prioritized right-wing content38, despite the different platform ownership. In addition to content analysis, our study focuses on the effects on real users' behaviour and political attitudes38.

Design

We conducted an experiment with X users from July to September 2023. This research resulted from a collaboration of academics independent of X. We obtained ethical approval from the Ethics Committee of the University of St. Gallen, Switzerland (see Supplementary Information section 1.1 for a detailed discussion of ethical considerations and the measures we implemented to uphold ethical integrity). We pre-registered the experiment with the American Economic Association's registry for randomized controlled trials (AEARCTR-0011464).

The experiment had several phases: recruitment, a pre-treatment survey (collecting baseline characteristics), randomization into feed settings, a treatment phase (with participants using their assigned feed setting) and a post-treatment survey to gather self-reported outcomes. We also collected data on the content of users' feeds and on users' online behaviour. Participants were recruited through the survey company YouGov, drawing from US-based registered members of the YouGov panel. Participation was voluntary and compensated.
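As a concrete sketch of the assignment step, the snippet below randomizes participants into the two feed settings with equal probability (as described below) and records each person's pre-experiment setting, which the analysis later controls for. The data layout, field names and seed are our illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch of the randomization step (our assumptions, not the
# study's code): equal-probability assignment to one of two feed settings,
# keeping the pre-experiment setting for later use as a control variable.
import random

random.seed(2023)  # hypothetical seed, only to make the sketch reproducible

def assign_feeds(participants):
    """participants: list of dicts with an 'initial_feed' field
    ('algorithmic' or 'chronological'), as reported pre-treatment."""
    for p in participants:
        p["assigned_feed"] = random.choice(["algorithmic", "chronological"])
    return participants

# Hypothetical example: three survey respondents.
sample = [
    {"id": 1, "initial_feed": "algorithmic"},
    {"id": 2, "initial_feed": "chronological"},
    {"id": 3, "initial_feed": "algorithmic"},
]
for p in assign_feeds(sample):
    # Participants whose assigned feed differs from their initial one must
    # switch settings for the treatment phase (and are paid to stay on it).
    p["must_switch"] = p["assigned_feed"] != p["initial_feed"]
    print(p)
```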
YouGov contacted 13,265 participants to enter the pre-treatment survey. Of these, 3,434 were screened out because they were not active X users: by design, we admitted only participants who self-reported being active on X at least 'several times a month'. A total of 8,363 participants provided informed consent and 6,043 completed the entire pre-treatment survey. During the survey, each participant was randomly assigned to a feed setting, which they were paid to stay on until completing the post-treatment survey. Admission to the pre-treatment survey occurred on a rolling basis in July 2023. At the end of August 2023, all participants were invited to the post-treatment survey, which was completed by the third week of September. The duration of the treatment phase varied slightly among participants; the average time between entering and leaving the study was 7 weeks and 1 day (s.d. 3 days). During the treatment phase, participants received two reminders to stick to their assigned feed setting. A total of 4,965 participants completed the post-treatment survey, forming our main sample. Extended Data Fig. 1 presents the flowchart for the experiment, describing its structure and sample size at each stage. Supplementary Information sections 1.2–1.8 provide further details about the structure of the experiment, all collected data, compliance and attrition at each stage.

In the pre-treatment survey, we asked which feed setting participants used before the experiment. Initially, 76% of participants were using the algorithmic feed, the default on X, and 24% were using the chronological feed. For the duration of the study, participants were assigned randomly with equal probability to either the algorithmic or the chronological feed.

The pre-treatment survey included questions about X usage, including purpose and frequency, as well as questions on life satisfaction, partisanship and feeling thermometers for Democrats and Republicans, which we use to calculate affective polarization. YouGov provided data on participants' socio-economic backgrounds. All pre-treatment characteristics were fully balanced, except for a slight imbalance in the initial feed setting: the share of participants initially on the algorithmic feed was two percentage points higher among those randomly assigned to the algorithmic feed (77% versus 75%; two-sided t-test: P = 0.08; Supplementary Information section 1.7). In our analysis, we always control for the initial feed setting. Participants who had to switch their settings in either direction as a result of the experiment were more likely to drop out (22.7% versus 20.9%; two-sided t-test: P = 0.077). As demonstrated in Methods, selective attrition during the treatment phase does not drive our results.

In the post-treatment survey, we collected several groups of outcomes. First, we repeated the questions about X usage, partisanship, affective polarization and life satisfaction. Furthermore, we asked participants to rank policy areas by priority and to express their views on current political events, specifically their assessments of the criminal investigations into Donald Trump and the war in Ukraine, using a series of questions. For the baseline analysis, we aggregate
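One measurement detail above lends itself to a brief illustration: computing affective polarization from the two feeling thermometers. A common operationalization is the gap between the rating of the respondent's own party and the rating of the other party; this excerpt does not spell out the paper's exact formula, so the sketch below follows that standard convention and should be read as an assumption.

```python
# Sketch of a standard affective-polarization measure (our assumption of
# the usual convention; the paper's exact aggregation may differ): the gap
# between feeling-thermometer ratings (0-100) of in-party and out-party.

def affective_polarization(therm_dem: float, therm_rep: float,
                           partisanship: str) -> float:
    """Return the in-party rating minus the out-party rating.

    therm_dem, therm_rep: 0 (very cold) to 100 (very warm).
    partisanship: 'Democrat' or 'Republican' (how leaners and independents
    are handled here is a hypothetical choice, not taken from the paper).
    """
    if partisanship == "Democrat":
        return therm_dem - therm_rep
    if partisanship == "Republican":
        return therm_rep - therm_dem
    # Independents have no in-party; one common fallback is the absolute gap.
    return abs(therm_dem - therm_rep)

# Hypothetical respondent: a Democrat rating Democrats 85 and Republicans 20.
print(affective_polarization(85, 20, "Democrat"))  # 65.0
```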