Digital Accountability Symposium: Social Media Accountability or Privacy? A Socio-Political Perspective from India

[Shakuntala Banaji is an Associate Professor and Director of Graduate Studies in the Department of Media and Communications at the London School of Economics and Political Science. Ram Bhat is a PhD candidate in the same department and a co-founder of Maraa, a media and arts collective in India.]

In late 2018, WhatsApp awarded us one of 20 misinformation and social science research awards for an independent study of the different types of misinformation leading to mob lynching in India, the users of WhatsApp who pass on or flag disinformation and misinformation, and the ways in which citizens and experts imagine solutions to this problem. We have summarised our research elsewhere. In this post, we will focus specifically on the entanglement between issues of algorithmic and platform accountability, censorship, and privacy in the context of India. Currently, some of these issues are being heard in the Supreme Court of India, with important implications for users, industry and the role of misinformation-linked violence and hate speech in the future of democracy.

Social media accountability, along with privacy and free speech/censorship issues, has to be seen within broader socio-political contexts. Indian society suffers deep social fractures along caste, religious, gender and class lines. After the post-independence decades, when a broad socialist vision guided government planning and anti-discrimination laws began to threaten centuries of male and Brahmin privilege, a political backlash has ensured that discrimination and inequality are not only practised but upheld through various wings of the state. The privatisation of core public sector units and the gradual shift from an agricultural to a service-based economy since the late 1980s exacerbated pre-existing poverty, creating a slow-burn anger across swathes of the public, different to the anger felt by those who had seen their privileges potentially eroded through pro-poor and caste-reparative policies. The far-right Hindu majoritarian Bharatiya Janata Party (BJP), winner of the 2014 and 2019 elections, has harnessed, manipulated and mobilised both types of anger against vulnerable citizens – Dalit Bahujan (lower caste groups), Adivasis (indigenous groups), women, Muslims, Christians, other ethnic and linguistic minorities, and LGBT+ groups – leading to a spate of atrocities so deep and lasting that it can only be called a ‘culture’.

At the same time, since the early 2000s Internet connectivity has increased due to a confluence of factors, including but not limited to: telecom service providers competing and driving down tariffs; spectrum being awarded at a fixed, low cost; and a drop in the price of mobile phones. India has emerged as one of the largest markets for Facebook, Google, WhatsApp, and other social media applications. Even though internet penetration in India reaches only a relatively small percentage of its population, the country already represents a 400 million user base for WhatsApp, and that base promises to grow further.

In this context, social media platforms and cross-platform applications such as Facebook, Twitter, and TikTok are heavily used by far-right political trolls and millions of individuals who systematically engage in hate speech and disinformation. The tactics of these far-right trolls include: targeting and abusing specific citizens who are influential on social media – including pro-democratic or anti-misogyny activists, student leaders, journalists, academics, film personalities, and fact-checkers (especially through threats of sexual violence against women); the manipulation of trends and the deliberate misuse of hashtags; and mass complaints against rights-based and anti-discrimination social media users to get their posts taken down or accounts suspended.

This systematic, politically motivated abuse of social media takes place very openly. Nevertheless, neither social media companies nor government agencies have acted against the worst offenders (indeed, many of the offenders are close to the ruling party and even followed by ministers or the prime minister himself). Since little to no action is taken by social media companies even when the identities of the disseminators of hate speech and disinformation are clearly known, we very much doubt that removing encryption from some cross-platform applications (such as WhatsApp) or diluting user privacy in others would lead to positive changes. On the contrary, removing encryption would serve the interests of anti-democratic forces and adversely affect the valuable work of human rights activists and investigative journalists, whilst also invading ordinary users’ privacy and affecting numerous businesses that depend on encryption for security and privacy for their clients/users.

However, we also argue that social media companies must take stronger measures against hate speech and disinformation, since their technologies play a direct role in the loss of hundreds of lives from vulnerable groups and in the degradation of the quality of life of millions of women. Although some of these companies have been registering tremendous profits, they have given little back to the citizens who have contributed to their growth. Just as these companies have committed to actively fighting child pornography, they need to commit to actively fighting hate speech, incitement and violent misinformation against women and minoritised citizens.

Possible steps could include specific changes to the applications (e.g. restricting sharing to one group), cross-stakeholder collaboration (e.g. banning unauthorised versions of applications), making it easier to report hate speech, and taking more proactive punitive action against those found guilty of hate speech. Technology companies must also invest in long-term strategies in partnership with grassroots civil society organisations committed to constitutional and human rights values, including investment in fact-checking at the local level through diverse dissemination channels, investment in critical digital media literacy for both children and adults, and the creation of human and social infrastructures in countries with a large user base – for example, editorial teams of qualified and well-paid moderators, crisis managers, and personnel to liaise with vulnerable user groups as well as local law enforcement agencies.

Too often, the debate around privacy and security vs. accountability is framed as a binary choice. It is entirely possible for governments and social media companies with the political will to take a wide range of steps that effectively challenge the spread of violent disinformation and hate speech, without resorting to the removal of encryption from their services.

The detailed report on WhatsApp and misinformation linked to mob violence in India is available here. The authors may be contacted by email at S.Banaji@lse.ac.uk and R.Bhat2@lse.ac.uk.
