MMfD launches series of investigative reports highlighting lack of corporate accountability in Asia

EDITORIAL: Online violence against women, misinformation campaigns, and Big Tech accountability: The case of Asia

By Zebunnisa Burki


Social media and digital platforms served the essential task of connecting people and enabling the exchange of information. Who would have thought they would become not only the most important sites of activism, dissent and indie journalism, but also spaces for bullying, dangerous dis- and misinformation, and state suppression?

The darker side of Big Tech has revealed bias, discrimination, misogyny and blatant corporate greed — characterized by online violence and hate against women, and potent misinformation.

For this project, we aimed to highlight how the lack of corporate accountability has affected online gender-based violence and misinformation-based hate campaigns in different countries in Asia. From Pakistan to Turkey to Palestine to Egypt, reporters delved into the issue of Big Tech accountability in addressing and mitigating these challenges. By examining the impact of online violence and hate against women, as well as state-led misinformation campaigns that weaponize violence and hatred, our guest reporters sought to explore potential strategies and recommendations to foster a safer, more inclusive digital landscape in these Asian countries.

Political misinformation online

A report by Eric Jardine for the Centre for International Governance Innovation (CIGI) [‘Beware fake news’, 2019] says that “Influence operations, whether launched by governments or non-state actors, existed long before social media, but what is new about contemporary influence operations is their scale, severity and impact, all of which are likely to grow more pronounced as digital platforms extend their reach via the internet and become ever more central to our social, economic and political lives[1].”

And this is precisely what has been happening over the past few years in particular: state-led campaigns targeting dissenting voices in countries across the world. Our reports from Asia cite case studies that also show the role of social media platforms — Big Tech — in the spread and amplification of such misinformation.

Women online targets for abuse, hate, violence

From harassment to doxing to stalking, women users of social media platforms have increasingly had to face hate that has spilled over into their personal lives. The psychological and societal impact of such abuse should have been enough for tech companies to take action, but such action has been slow to come.

Taking an example from the Global North: in 2014, GamerGate, a campaign of organized online violence aimed at women in the video game industry, began with the harassment of indie game developer Zoe Quinn and gaming critic Anita Sarkeesian[2].

Post GamerGate, in 2021, social media platforms including Facebook, Google, Twitter and TikTok promised to put an end to online abuse against women. The pledge came after the World Wide Web Foundation published a letter, signed by over 200 influential women, regarding online violence and abuse against women[3].

Dangerous nexus

By examining the intricate nexus of Big Tech, online violence and hate against women, and state-led misinformation campaigns, this report aims to stimulate dialogue, foster awareness, and contribute to the collective effort of creating a more responsible and accountable digital ecosystem.

Unesco’s Internet for Trust

“The call is now coming loud and clear from all quarters. It is time to address one of the defining questions of our age, with implications for democracy and human rights worldwide: the challenge of how to support states in developing principles and rules for digital platforms so that they enable freedom of expression and promote the availability of accurate and reliable information[4].” This is how Audrey Azoulay, Unesco’s director-general, chose to explain today’s challenge for users of digital platforms, as they navigate a space that has become abusive, restrictive, and dangerous.

In February this year, Unesco hosted its ‘Internet for Trust’ conference to work out a set of global guidelines for the regulation of digital platforms, aimed at improving the reliability of information while protecting freedom of expression and human rights.

Guidelines for state, digital platforms and civil society

Among the guidelines offered by the conference, states are urged to: respect Article 19 of the International Covenant on Civil and Political Rights (ICCPR); ensure that restrictions imposed on digital platforms have a legitimate basis and are specific; avoid disproportionate measures taken on the pretext of combating disinformation; refrain from criminalising the staff of digital platforms; and push for media and information literacy regarding digital platforms[5].

Digital platforms are urged to respect human rights while moderating content; be transparent about how they operate; empower users to make informed decisions about the digital services they use; and be accountable to relevant stakeholders[6]. Organizations and civil society, for their part, are urged to help counter abusive behaviour online and challenge unnecessary internet regulations; identify patterns and causes of abusive behaviour; and support audits and assessments[7].

Maria Ressa has explained the challenge simply but effectively. She says: “When we focus only on content moderation – it’s like there’s a polluted river. We take a glass. We scoop out the water. We clean up the water, and dump it back in the river[8].” Ressa feels – as she put it at the Unesco conference – that instead we need to look at how these systems operate and make sure they adhere to our human rights.

What happens when being online is a threat?

The consequences of online attacks, abuse and targeted misinformation, combined with the inadequate response of social media companies, lead inevitably to an erosion of trust in online platforms and democratic processes.

This project’s reports are a glimpse into the rampant rise of misinformation in some countries in Asia — misinformation mainly run by powerful states with the resources not just to twist and misrepresent facts but also to clamp down on freedom of speech and expression, and to get tech companies to moderate content according to narratives set by states and governments.


The dispatch from Pakistan focuses on the online hatred faced by the trans community in the country. Writing the Pakistani report is journalist Amel Ghani who pegs her essay around the hate campaign run by religious political parties, the Jamaat-e-Islami (JI) and the Jamiat Ulema-e-Islam-Fazl (JUI-F) against Pakistan’s Transgender Act.

The parties have sought to create controversy over terminology – wanting there to be a differentiation between ‘trans’ and ‘intersex’ and seeking to do away with ‘trans’ altogether. The report cites the case studies of Shahzadi Rai and Mehrub Moiz Awan, trans activists who have had to face targeted attacks online as part of an organised hate campaign.

Amel reports that “an analysis of some of the more prominent hashtags using InVid’s Twitter Analysis Tool shows that there were over 7,000 tweets associated with the campaign against the trans community generated by a handful of accounts but which were amplified through retweets and likes. The 7,195 original tweets were retweeted over 88 thousand times and received over a hundred thousand likes.”

Big Tech has added to the problem, says the report. “Twitter has some very clear rules about hate speech online — some of these hashtags would qualify under those but they continue. Some of the main accounts, which put out the most number of tweets, are still active on Twitter. They have not been shut down.”

One of the hindrances facing countries in the Global South and digital rights activists there is that, more often than not, the local context has to be explained to tech companies, with language barriers an added factor. Digital rights activists tell Amel that Big Tech often fails to understand “how real the threat is to the individuals being targeted”.

Shahzadi Rai sums it up when talking to Amel: “I’ve had to go into therapy because I really did not know how to deal with the hate coming my way and am learning to put my phone away more often and not engage with everyone.”

Adding to Amel’s report, which features the challenges of the trans community in Pakistan, one can also cite the challenges faced by women journalists in Pakistan as they navigate social media sites. In 2020, more than 100 women journalists in Pakistan signed a petition, submitted to the government, demanding an end to online assaults.

According to the petition, “The online attacks are instigated by government officials and then amplified by a large number of Twitter accounts, which declare their affiliation to the ruling party… In what is certainly a well-defined and coordinated campaign, personal details of women journalists and analysts have been made public.”


Reporting from Turkey is Ceren Iskit, who paints a picture of “a country where polarization is getting sharper than ever, and which has one of the largest prisons in the world for journalists, students, and opinion leaders”.

At the heart of the matter, says the report, is the new disinformation bill, under which the public prosecutor can request an investigation, and a judge can order a suspension or ban of content or accounts on social media platforms. Ceren writes that cybercrimes in Turkey are regulated by law No. 5651 of 2007. The law, which has undergone significant changes since 2014, was expanded into the so-called ‘disinformation bill’ in October 2022.

Twitter’s transparency report for July to December 2021 notes a significant increase in content removal requests from Turkey, with the country ranking fourth among countries with the highest number of legal content removal demands by official authorities.

Citing various case studies, the Turkey report says that since the Gezi Park protests in 2013, with Twitter becoming a heavily used social media platform in the country, many dissenting political figures, journalists, and even artists and entertainers have been targeted by pro-state and pro-government trolling campaigns, including misinformation spread via pro-government media.

According to Reporters Sans Frontières (RSF), almost 90 per cent of Turkish media is under state control, with things having worsened over the past five years of Erdogan’s rule. This may be the primary reason many turn to independent media outlets or social media platforms for news that may show a different picture from what is relayed by state-controlled media groups.

In Turkey too, writes Ceren, these organized trolling campaigns do not remain confined to the online world. They can have dangerous consequences for their targets — for example, there have been instances of the disclosure of the addresses of those being targeted, mostly women.


Dalya Masri reports on the Palestinian battle with misinformation-based hate campaigns by the Israeli state. She leads with 2022’s starkest example of attacks on press freedom, and of the impunity surrounding social media coverage of those attacks: the day Shireen Abu Akleh, a Palestinian-American journalist and veteran Al Jazeera correspondent, was shot dead on camera by Israeli forces. She was merely doing her job: reporting the news.

Given the political situation in the Occupied Territories of Palestine, Palestinians both at home and abroad have relied on social media sites for news and information. But this has become increasingly difficult. The report says that users have complained how “content related to Palestine is removed, whether a Tweet or an Instagram story, and slapped with tags of ‘sensitive’ and ‘violent’. Sometimes the accounts are suspended altogether.”

Dalya writes that according to social media users, the “relationship between Big Tech corporate heads and the Israeli government is the reason behind this targeting.” This has led human rights groups to worry that any time Palestinians post news or content showing Israeli oppression (arrests, raids, home demolitions and worse), it can be termed ‘terrorism’ by Israel and then censored by Big Tech companies. The Israeli Cyber Unit has, in the past, been quite successful in getting Palestinian content removed from Meta’s platforms.

Instagram — owned by Meta — for instance, takes “its cues from its current head, Israeli businessman Adam Mosseri” writes Dalya. Things reached a point where Meta and Instagram employees even accused the platforms of an anti-Arab bias. And it doesn’t stop there, says the Palestinian report. Even WhatsApp — probably a near-indispensable platform in today’s world — has been accused of targeted censorship of journalists and Palestinian groups.

Dalya reports that in 2021 social media companies had admitted to “some takedowns and account blockages in the past”. Instagram had apologized for the fact that many accounts couldn’t post content related to Palestine on May 6, and had called it “a broader technical problem that affected posts from several countries about different global issues”.

This targeting does not stay online but spills over into real lives as well. The report notes that in January 2022, 7amleh released another report, ‘Hashtag Palestine’, which claimed that digital harassment transfers to real-life surveillance: “use of surveillance technologies significantly increased, evident in the proposal of an Israeli law to allow the use of facial-recognition cameras in public spaces.”

In sum, Dalya writes that Big Tech has failed to “monitor Israeli bots and Israeli-state sponsored mis- and dis-information”, resulting in consequences that range from social media account suspensions to arbitrary detentions and arrests.


Our Egyptian reporter Hossam Elsayed writes that Egyptian social media is “the most important means of expressing and defending freedom in Egypt” but that information suppression has meant that many Egyptians, “including women and communities with Gender and Sexual Diversities (GSD), have faced oppression.” The Egyptian report gives varying case studies illustrating this oppression, focusing on women bearing the brunt of being targeted online – and the spillover effects of this in their personal lives.

Hossam writes that “while women face violence and threats online, transpeople face most of the threats”. Privilege and intersectionality play a huge role in the discrimination and violence, says Malak Al-Kashif, Egyptian feminist and queer activist. The report says that transgender persons are attacked socially, institutionally and legally.

Per the report, “women are the most affected in conflict zones because without protection, they face all the risks.” And on top of that, women’s issues are always at the bottom of the priority list.

When it comes to Big Tech, Egypt faces much the same challenges Big Tech has posed for most countries — particularly non-Western countries of the Global South. Egyptian tech observers tell Hossam, for example, that “Facebook’s lack of application of social standards [can be attributed] to the poor mechanics of its application. Instead of monitoring Arabic content, the company appointed a company to monitor content as a subcontractor for informal work.”

Activists in Egypt have reported the lack of safe spaces — with even closed groups not safe, since no one knows when social media companies will change their policies. The report quotes one person saying: “We could wake up one day to find that Facebook has decided to reveal our gender identities just like when it recently decided to show user feedback on stories suddenly without user permission.”


The countries may change but the stories follow an all-too-familiar pattern: dissenting voices suppressed, misogynistic attacks, online violence with consequences in the non-virtual world, states arbitrarily ‘regulating’ content and indulging in misinformation, and Big Tech either ignoring the situation or being slow on the uptake.

The Unesco guidelines would be a good starting point for tech companies to engage in some self-accountability. Countries will need their governments to strengthen legal frameworks and put in place practical regulatory oversight mechanisms that do not impinge on human rights. Can we then dream of a future of responsible digital platforms that encourage transparency?


