These findings stem from Twitter’s announcement in April that it would study the fairness of its algorithms and ascertain whether they cause “unintentional harms.” The company said at the time that it would examine the political leanings of the platform’s content recommendations. The new findings indicate that its algorithms amplify right-leaning content somewhat more than left-leaning content. The research focused on two questions: first, whether the algorithmic timeline amplified political content from elected officials, and second, whether some political groups received a greater boost for their content than others. Researchers analyzed tweets from elected officials and popular news outlets in seven countries: Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States.
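To make the idea of a “boost” concrete, here is a minimal, purely illustrative sketch of how one could quantify algorithmic amplification for a political group: compare how many impressions its tweets receive in the ranked (algorithmic) timeline against a reverse-chronological baseline. The function, data shapes, and sample numbers below are hypothetical and are not taken from Twitter’s paper; the study’s actual methodology is described in the research itself.

```python
from collections import defaultdict

def amplification_ratio(impressions, group_of, baseline="chronological"):
    """Ratio of impressions a group's tweets receive in the algorithmic
    timeline relative to a chronological baseline (hypothetical sketch).

    impressions: list of (tweet_author, timeline_type, impression_count)
    group_of:    dict mapping author -> political group, e.g. "left"/"right"
    """
    totals = defaultdict(lambda: defaultdict(int))
    for author, timeline, count in impressions:
        group = group_of.get(author)
        if group is not None:
            totals[group][timeline] += count

    ratios = {}
    for group, by_timeline in totals.items():
        base = by_timeline.get(baseline, 0)
        algo = by_timeline.get("algorithmic", 0)
        # A ratio above 1.0 means the group reaches a larger audience
        # under algorithmic ranking than under the chronological feed.
        ratios[group] = algo / base if base else float("nan")
    return ratios

# Hypothetical usage with made-up impression counts:
sample = [
    ("mp_a", "algorithmic", 1200), ("mp_a", "chronological", 800),
    ("mp_b", "algorithmic", 900),  ("mp_b", "chronological", 850),
]
groups = {"mp_a": "right", "mp_b": "left"}
print(amplification_ratio(sample, groups))
```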
The team says it will need to conduct more research to determine what’s causing the bias
“In 6 out of 7 countries, Tweets posted by political right elected officials are algorithmically amplified more than the political left. Right-leaning news outlets (defined by 3rd parties) see greater amplification compared to left-leaning,” said Rumman Chowdhury, Twitter’s Director of ML Ethics, Transparency & Accountability (META), in a tweet. Speaking to Protocol, Chowdhury said that the company currently can’t explain the phenomenon. The research points out that some political parties could be using “different strategies” on the platform, but the team acknowledges that it will need more research to understand the cause. In any case, the researchers clarify that the study “does not support the hypothesis that algorithmic personalization amplifies extreme ideologies more than mainstream political voices.” You can find the full research paper here.

Any tech research, particularly on AI or machine learning, is bound to cause some uproar. But this research does illustrate Twitter’s concern that bias in its algorithms could favor one side over the other. Twitter recently launched a bug bounty program to help detect bias on its platform, and the company also published its research on image-cropping algorithms. Meanwhile, Facebook is facing scrutiny over algorithms that reportedly fueled divisiveness, and a former employee turned whistleblower has urged that company to make its research more transparent.