This article by Sanjana Hattotuwa, Special Advisor at ICT4Peace and ZHET, was originally published on the website of the Institute for Human Rights and Business (IHRB).
The world’s biggest social media platform, Facebook, recently unveiled its human rights policy. It is a step in the right direction. But declaring a policy is one thing; implementing it is quite another.
In a recent podcast with the Institute for Human Rights and Business, Facebook’s Director of Human Rights, Miranda Sissons, discussed these challenges. How the company changes its practices remains to be seen.
For well over a decade, Facebook and other social media platforms have too often served to normalise hate and violence, including through the algorithmic amplification of harmful content. Politicians and their powerful proxies are now far more adept at instrumentalising social media for partisan or parochial gain than civil society is at using it for rights-based advocacy. More fundamentally, human rights have been parenthetical, at best, within social media companies. In Facebook’s early days, for example, chief executive Mark Zuckerberg spoke in contemptuous terms of users who shared data on the platform. From those sexist, expletive-riddled beginnings, how much has Facebook really changed?
“Time will reveal how far Facebook prioritises principles over profit.”
With more users than the populations of India and China combined, Facebook is the internet in many regions outside the West. The company’s varied technologies allow for unprecedented contact between geographically proximate individuals and more distant communities. However, the resulting content does not automatically lead to healthy, democratic and rights-respecting conversations or outcomes.
In 2013, Zuckerberg was interested only in a very limited, self-serving vision of human rights, one that linked connectivity to Facebook’s market share. A few years later, in countries like India, Sri Lanka and Myanmar, the company unsurprisingly had to contend with the offline violence this approach fomented. Contemporary threats to human rights and fundamental freedoms are of course greater than the sum of Facebook’s frequent failures. However, the company’s various products and platforms are inextricably woven into governance, commerce, culture and society, making this new corporate human rights policy particularly significant.
I am cautiously optimistic that the new policy can lead to better results, but there are two dominant issues around Facebook’s approach to human rights that need to be addressed.
First, Facebook needs to own the significant harm and violence it contributed to when it did not have such a policy. That historical perspective matters. Steven Levy, author of Facebook: The Inside Story, said in a conversation with David Kirkpatrick, author of The Facebook Effect: The Inside Story of the Company that’s Connecting the World, that when executives warned Zuckerberg that specific changes might violate privacy or expose a vulnerability, Zuckerberg overruled them and proceeded anyway. Later, when Levy asked the company about the harms he had identified that could be attributed to Zuckerberg’s overruling of those warnings, the company told Levy his understanding was ‘accurate.’
“A close reading of the new human rights policy shows the problem persists.”
I researched and wrote about how Facebook’s platform contributed to ethno-political unrest, majoritarianism, religious extremism and violent nationalism as far back as 2013, and studied stark evidence of how the company’s lip service to rights was at odds with its corporate imperatives. Zuckerberg was interested only in growth, until much of what was experimented with or entrenched in Sri Lankan and other Global South markets metastasised at scale in the West, with greater velocity and ferocity. When releasing its independent Human Rights Impact Assessment (HRIA) on Sri Lanka in 2020, Facebook deplored the ‘misuse of our platform’ and apologised for ‘the very real human rights impacts that resulted’. It did not say whether it had done enough to stop the generation and spread of content inciting hate. Instead, the company placed the blame on those who misused the platform. It is as if there were, at the time, well-established governance, guidelines and guardrails against platform harms and abuse. Not so.
A close reading of the new human rights policy shows the problem persists. In Section 4, Facebook says it recognises that “human rights defenders are a high-risk user group”, but the same paragraph deflects and downplays the significant risks human rights defenders (HRDs) face on Facebook’s products and platforms. The policy explicitly names the company when there is a new promise or an anodyne statement of corporate due diligence. But when specific harms are noted, the blame shifts to ‘social media’ writ large. For example, in the paragraph about the risks HRDs face, Facebook notes that “on social media, these risks can include digital security risks; online attacks against individuals or groups; surveillance; and censorship demands from governments or their proxies” (emphasis added). Even in the most progressive document on human rights to emerge from the company, Facebook shies away from corporate responsibility for its significant contributions to offline and online violence.
“Facebook’s human rights commitments will be tested most not in the US or Europe but in significant markets like India and the Philippines with authoritarian leaders.”
This brings me to the second problem. When Facebook joined the Global Network Initiative (GNI) in 2013, the company called “advancing human rights, including freedom of expression and the right to communicate freely” core to its mission of “making the world more open and connected”. And yet, five years later, the HRIA on Sri Lanka found that platform harms extended to minority rights, women’s rights, child rights, the LGBTQI+ community and, most significantly, HRDs. The HRIA noted that while the platform is a powerful tool for activism, “aspects of its use present ongoing risks to human rights defenders who may face harassment and surveillance of their online activity, including in relation to Facebook.” The report concludes that this “includes online harassment of human rights defenders by other users, as well as a potential overreach by government agencies seeking to monitor defender activity online.” Contrast this direct attribution with the more evasive language employed in the new human rights policy, highlighted above. The assertion, which Facebook reiterates often, is that joining the GNI in 2013 showed the company’s avowed interest in, and investment in, human rights, including privacy. That claim must be scrutinised and challenged.
Despite these concerns, I am cautiously optimistic that Facebook may be turning the corner. The track record of human rights advocacy by Facebook’s Human Rights Director Miranda Sissons, along with colleagues such as Alex Warofka and Iain Levine, is unimpeachable. This moral leadership matters. The greatest challenge for Sissons and her team will be within Facebook itself: to expand and entrench a policy that applies to all personnel and every aspect of a company that has prioritised profit and growth over respect for fundamental rights and democracy, including the health and safety of its users.
“The company’s new human rights policy is a pathway that can lead to fundamental revision of the corporate ethos, and a better legacy than the one Zuckerberg is currently responsible for.”
Sissons is acutely aware of these challenges and, in the recent IHRB podcast, acknowledged that the company “will be judged based on our actions, not our words.” With so much of the human rights policy still to be fleshed out, including unprecedented offline commitments such as a fund to support HRDs, there are high expectations around implementation and meaningful engagement that risk outpacing complicated negotiations within the company and with senior management. For example, a recent report revealed how Facebook’s artificial intelligence algorithms amplify disinformation and misinformation in ways that no one, including those who created them, knows how to rein in. The new policy will have to address the resulting risks and harms, which are growing at pace, globally.
Finally, Facebook’s human rights commitments will be tested most not in the US or Europe but in significant markets like India and the Philippines with authoritarian leaders. In a nod to realpolitik, the policy acknowledges that local laws can conflict with the company’s human rights commitments, noting that “when faced with conflicts between such laws and our human rights commitments, we seek to honour the principles of internationally recognised human rights to the greatest extent possible.” Time will reveal how far Facebook prioritises principles over profit.
Ultimately, Sissons and her team have presented a bold new direction for Facebook. The company’s new human rights policy is a pathway that can lead to fundamental revision of the corporate ethos, and a better legacy than the one Zuckerberg is currently responsible for.