How to get someone kicked off Instagram

The Insta-Mafia: petty criminals report users en masse for profit

Emma* is a Dutch teenager from a small town on the North Sea coast. Over the past two years she built up an Instagram presence that had brought her more than 20,000 followers by November 2020. Then, on November 18th, it was over.

We contacted Emma through Instagram to learn more about her experience of abuse on the platform. Like many other content creators, especially women, she is regularly reported on Facebook, Instagram's parent company. Once, Facebook deactivated her account with a message saying she was impersonating someone else, a claim Emma finds "stupid" given the number of selfies she posts. After speaking to a Facebook representative, she eventually got her account back. In total, she was reported "five or six times" within two years and always managed to recover her account. But not this time.

Petty criminals

Two days after we spoke to her, Emma's account was gone (it was still unavailable on January 29, 2021). We knew she was in danger because her Instagram username had appeared in a Telegram group we were monitoring. She had become the target of a group of petty criminals in the Middle East, thousands of miles from her home.

Behind the group are several dozen male teenagers who, judging by the Arabic they use, live mainly in Iraq and Saudi Arabia, and who take pride in pushing Instagram users off the platform. In the stories on their own Instagram accounts they brag about their successes, sometimes posting several such "achievements" a day.

Their method is simple. Using basic scripts available on GitHub, a code-sharing platform, they report Instagram users automatically. With multiple Instagram accounts at their disposal, a single criminal can report a user hundreds of times in just a few clicks. Most of the time, users are reported for spam, impersonation, or "suicide or self-harm", a reporting option Instagram introduced in 2016. After receiving several such reports, Facebook blocks the account.

A 16-year-old who goes by "Zen" and is involved in the scheme described the group's usual approach to AlgorithmWatch. Once an account is blocked, he contacts Facebook by email, claiming to be its rightful owner. In what appears to be an automated process, Facebook asks him to send a photo of himself holding a piece of paper with a handwritten 5-digit code. After receiving the photo, Facebook hands over control of the account, which is then reactivated. The hijacked account is resold for between $20 and $50, depending on its number of followers. According to Zen, buyers simply want more followers, but the accounts are probably resold again afterwards. Accounts with a few thousand followers can fetch up to $200 on platforms like SocialTradia.

Facebook did not answer our specific questions, but sent us this statement: "We do not allow people to abuse our reporting systems to harass others, and we have invested heavily in technology to detect accounts that make coordinated or automated reports. There will always be people trying to abuse our systems. We focus on staying one step ahead and preventing these activities as much as possible."

Chrome extension for bulk messaging

We examined the programs the criminals use. They require no special knowledge and can usually be operated from a smartphone. Automatically filing false reports against users on Facebook and Instagram is so easy that petty criminals all over the world do it.

Another group, operating from Pakistan, has built a Chrome extension that lets users be reported en masse on Facebook. A license for the tool costs $10. The unlicensed version gives access to a target list that is updated roughly twice a month. In total, we were able to identify around 400 different targets in forty countries.

It is unclear how the Pakistani group selects its targets. They primarily include supporters of Ahmadiyya Islam, a religious movement that arose in Punjab; Baloch and Pashtun separatists; and LGBTQ+ advocates, atheists, journalists and feminists, in Pakistan and beyond. The criminals usually report them for "fake account" or "religious hatred".

Mahmud*, an avowed atheist living in Belfast, appeared on one of the group's target lists. He does not know whether the group actually attacked him, as Facebook gives no such information to the people who are reported. He has, however, been the victim of mass reporting before: he says "hundreds" of his posts have been blocked over the past seven years.

Thousands of victims

Mass reporting is widespread. Given the dozens of code snippets available on GitHub and the thousands of YouTube tutorials, there are likely many more groups operating in the same way as the two we monitored.

We surveyed a non-representative sample of 75 Facebook users in the UK. 21 of them told us they had already been blocked or suspended by the platform. A 56-year-old woman from Scotland ran a group called Let Kashmir Decide. She said Facebook deactivated the group in September 2020 on the grounds that it "violated community standards", likely the result of a mass report.

People who are repeatedly bullied and mass-reported on Facebook may feel it takes too much energy to stay active on the social network. A 30-year-old Berliner told us she had given up social media after being repeatedly insulted and, among other things, wrongfully reported. Because much of public life has shifted to social media, especially during the Covid pandemic, staying off these platforms can become a serious disadvantage: some German hospitals, for example, have moved their public sessions to Instagram Live, making it impossible for non-users to attend.

The effects of mass reporting can be invisible yet powerful. According to some users, a report can lead to a so-called "shadow ban", meaning that one's posts become much less likely to appear in other users' newsfeeds or in search results. In one test, an Instagram user found that her posts no longer appeared in the platform's "Explore" tab after she was reported. An online-marketing expert told AlgorithmWatch that a Facebook representative had confirmed the "shadow banning" of frequently reported accounts.

Although mass reporting likely affects thousands of Europeans, the scale of the problem is smaller than that of other forms of online harassment, such as insults.

The DSA to the rescue

On December 15, the European Commission proposed new rules as part of the Digital Services Act (DSA) to remedy deficiencies in the user-reporting function. The bill must still pass through the European Parliament and the European Council before it becomes law.

The European Commission's proposal would force very large platforms such as Facebook and Instagram to be more transparent towards users whose content has been reported and removed. In particular, the platforms would have to set up complaint-handling procedures through which reported users can contest decisions.

Facebook has already introduced such a system. However, everyone we spoke to who had been the victim of mass reporting by criminal groups said that obtaining redress through this mechanism is illusory. Emma, the Dutch teenager, said she first had to switch her Instagram account to a "work" account before she could contact anyone at Facebook. Her contact at Facebook also turned out to be in France, even though she is from the Netherlands.

Trista Hendren, who lives in Norway, reports similarly arbitrary procedures. A publisher of feminist books, she has been targeted repeatedly over the past eight years, most recently by the Pakistani group we monitored. She said official Facebook procedures never resulted in any action, and that personal contact with Facebook employees was the only way to recover an account or post.

Free speech for the rich

The European Commission's Digital Services Act proposal would create additional structures to resolve such conflicts. It plans to set up out-of-court arbitration bodies to take on disputes that cannot be resolved through the platforms' internal mechanisms.

Josephine Ballon works as an attorney for HateAid, a nonprofit that helps victims of online hate speech. She told AlgorithmWatch that while independent arbitration boards are a good thing, the DSA says nothing about where they would be located. Victims might have to initiate legal proceedings outside their own country, where different legislation may apply. She believes the DSA does little to improve the ordeal that victims go through.

Money is another problem with the arbitration boards: users have to pay to use these private tribunals and are reimbursed only if the case is decided in their favor. Under such a system, "the fight for the right to freedom of expression could become a privilege for those who can afford it," said Tiemo Wölken, a Member of the European Parliament and rapporteur on the DSA for the Legal Affairs Committee.

The DSA would introduce provisions to prevent abuse of the reporting function on social platforms, such as that perpetrated by the petty criminal groups we monitored. However, this mechanism is unlikely to change much about the current situation. Any report allegedly made in good faith would continue to be considered valid.

In Mr Wölken's view, the DSA introduces urgently needed protective measures, but it provides no incentive for platforms to end their practice of "removing first and asking questions later".

* The names of the people we spoke to have been changed to protect their privacy.

The production of this research was supported by a grant from the IJ4EU fund. The International Press Institute (IPI), the European Journalism Center (EJC) and all other partners in the IJ4EU fund are not responsible for the published content or its use.


"Trustworthy AI" is not an appropriate framework


Continue reading