On February 20, 2025, the US Federal Trade Commission (FTC) announced an inquiry into “how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.” In its request for public comment, the FTC further claims that platforms may do so using “opaque or unpredictable internal procedures,” with little explanation, notice, or opportunity for appeal.
Tech Policy Press has a long track record of publishing calls for greater transparency from technology platforms. In December 2024, for example, Sandra González-Bailón and David Lazer criticized the lack of accountability for how Meta specifically, and other platforms generally, act at critical moments. Taking inspiration from the late Supreme Court Justice Antonin Scalia, they argued that users are “entitled to understand what speech platforms make visible.” A month later, in a separate piece, they argued that the nature of social media is such that “companies are always choosing what people do and do not see.” Content moderation is the set of processes by which platforms make those choices.
However, President Donald Trump and other prominent Republicans frequently equate content moderation with censorship of conservative users. FTC Chairman Andrew Ferguson has himself compared content moderation to censorship on several occasions. This political context has led critics across the political spectrum to worry that the FTC’s inquiry will become a partisan effort to exert greater government control over platform trust and safety.
Science, as it turns out, has already examined whether social media content moderation unfairly penalizes conservative speech. We put the following questions to leading scholars of this issue:
- Do social media companies disproportionately moderate posts from one side of the political spectrum? If so, is this the result of bias or something else?
- Does social science show that one side of the political spectrum is unfairly penalized or rewarded by platform recommendation algorithms? If so, which side and why?
In total, we received fourteen replies. To summarize, the research indicates that to the extent conservatives experience content moderation more often, it is because they are more likely to share information from untrustworthy news sources, a pattern that holds even when the trustworthiness ratings come from other conservative users. On the second question, about algorithmic bias, the evidence largely suggests that conservative sources and accounts tend to receive more engagement, not less. This appears to reflect not platform bias but the nature of the right-wing news ecosystem and the valence of the content shared within it.