TIKTOK. The TikTok app icon on a smartphone in this illustration taken October 27, 2025

Down-ranking polarizing content lowers emotional temperature on social media – research

2025/12/30 10:00
4 min read

Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To reach this finding, my colleagues and I developed a method that let us alter the ranking of people's feeds – something previously only the social media companies could do.

Reranking social media feeds to reduce exposure to posts expressing anti-democratic attitudes and partisan animosity affected people’s emotions and their views of people with opposing political views.

I’m a computer scientist who studies social computing, artificial intelligence and the web. Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time.

Drawing on social science theory, we used a large language model to identify posts likely to polarize people, such as those advocating political violence or calling for the imprisonment of members of the opposing party. These posts were not removed; they were simply ranked lower, requiring users to scroll further to see them. This reduced the number of those posts users saw.
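The core mechanic described above – flag, then demote rather than delete – can be sketched as a simple score offset. Everything here is illustrative: the field names, the penalty value, and the `polarizing` flag (which in the actual study would come from a large language model classifier, not a hard-coded boolean) are assumptions, not the researchers' implementation.

```python
def rerank(posts, penalty=100):
    """Downrank flagged posts without removing them.

    Each post keeps its original feed position (`rank`), but posts
    flagged as polarizing are offset by `penalty` positions, so users
    must scroll further to reach them. Nothing is filtered out.
    """
    return sorted(
        posts,
        key=lambda p: p["rank"] + (penalty if p["polarizing"] else 0),
    )

# Hypothetical three-post feed; the second post is the kind the study
# targeted (e.g. advocating political violence).
feed = [
    {"id": "a", "rank": 0, "polarizing": False},
    {"id": "b", "rank": 1, "polarizing": True},
    {"id": "c", "rank": 2, "polarizing": False},
]
print([p["id"] for p in rerank(feed)])  # ['a', 'c', 'b']
```

Note that the flagged post `b` ends up last but is still present – the intervention reduces exposure, not availability.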

We ran this experiment for 10 days in the weeks before the 2024 US presidential election. We found that reducing exposure to polarizing content measurably improved participants’ feelings toward people from the opposing party and reduced their negative emotions while scrolling their feed. Importantly, these effects were similar across political affiliations, suggesting that the intervention benefits users regardless of their political party.

Why it matters

A common misconception is that people must choose between two extremes: engagement-based algorithms or purely chronological feeds. In reality, there is a wide spectrum of intermediate approaches, each defined by what the feed is optimized to do.
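One way to picture that spectrum is as a weighted blend of the two signals. This is a minimal sketch, not any platform's actual ranking function; the field names, the hourly recency decay, and the weights are all assumptions chosen for illustration.

```python
def blended_score(post, now, w_engagement=0.5):
    """Score a post on the spectrum between engagement-based and
    chronological ranking.

    w_engagement = 1.0 -> pure engagement ranking
    w_engagement = 0.0 -> pure recency (chronological) ranking
    Intermediate values mix the two signals.
    """
    age_hours = (now - post["posted_at"]) / 3600.0
    recency = 1.0 / (1.0 + age_hours)  # decays toward 0 as the post ages
    return w_engagement * post["engagement"] + (1.0 - w_engagement) * recency

# Hypothetical posts: an older high-engagement post vs. a newer quiet one.
now = 7200  # seconds
old_viral = {"posted_at": 0, "engagement": 0.9}
new_quiet = {"posted_at": 3600, "engagement": 0.1}

# Pure chronological ranking favors the newer post...
print(blended_score(new_quiet, now, w_engagement=0.0)
      > blended_score(old_viral, now, w_engagement=0.0))  # True
# ...while pure engagement ranking favors the viral one.
print(blended_score(old_viral, now, w_engagement=1.0)
      > blended_score(new_quiet, now, w_engagement=1.0))  # True
```

Interventions like the one in the study sit in this intermediate space: the optimization target is adjusted rather than abandoned.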

Feed algorithms are typically optimized to capture your attention, and as a result, they have a significant impact on your attitudes, moods and perceptions of others. For this reason, there is an urgent need for frameworks that enable independent researchers to test new approaches under realistic conditions.


Our work offers a path forward, showing how researchers can study and prototype alternative algorithms at scale, and it demonstrates that, thanks to large language models, platforms finally have the technical means to detect polarizing content that can affect their users’ democratic attitudes.

What other research is being done in this field

Testing the impact of alternative feed algorithms on live platforms is difficult, and such studies have only recently increased in number.

For instance, a recent collaboration between academics and Meta found that changing the algorithmic feed to a chronological one was not sufficient to show an impact on polarization.

A related effort, the Prosocial Ranking Challenge led by researchers at the University of California, Berkeley, explores ranking alternatives across multiple platforms to promote beneficial social outcomes.

At the same time, the progress in large language model development enables richer ways to model how people think, feel and interact with others.

We are seeing growing interest in giving users more control, allowing people to decide what principles should guide what they see in their feeds – for example, the Alexandria library of pluralistic values and the Bonsai feed reranking system. Social media platforms, including Bluesky and X, are moving in this direction as well.

What’s next

This study represents our first step toward designing algorithms that are aware of their potential social impact. Many questions remain open.

We plan to investigate the long-term effects of these interventions and test new ranking objectives to address other risks to online well-being, such as mental health and life satisfaction. Future work will explore how to balance multiple goals, such as cultural context, personal values and user control, to create online spaces that better support healthy social and civic interaction. – Rappler.com

Tiziano Piccardi, Assistant Professor of Computer Science, Johns Hopkins University

This article originally appeared on The Conversation.

