Removing Anonymity Online Would Risk The Most Vulnerable Users

Apr 7, 2021 | Privacy & Security

It’s a sad reality that Web 2.0 and social media have made it easier for online abuse and hate speech to flourish.  

In 2017, Pew Research revealed that 41 percent of Americans had been harassed online and 66 percent had witnessed abuse of others. Updated 2021 data show roughly the same rates of abuse but more severe abusive encounters, such as physical threats, stalking and sexual harassment. Significantly, 75 percent of the most recent encounters for those affected happened on social media platforms.

We all know online abuse can be incredibly damaging, and Pew puts it like this: “In its milder forms, [online abuse] creates a layer of negativity that people must sift through as they navigate their daily routines online. At its most severe, it can compromise users’ privacy, force them to choose when and where to participate online, or even pose a threat to their physical safety.”

For years, solutions to this growing problem have largely centered on ridding the internet of anonymity—forcing people to reveal their real names and identities when they engage online. But as Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, recently pointed out on Malwarebytes’ Lock and Code podcast, removing anonymity unacceptably risks exposing the world’s most vulnerable populations. And that’s where we start to see that anonymity isn’t simply a shield for malicious actors (a common charge); it’s also an essential protection for those who need it most.

Ms Galperin says: “Being able to speak anonymously is extremely important for people in vulnerable populations, for people with unpopular opinions, for minorities, for women, for anybody whose identity is non-standard.” Included in that, she says, are survivors of domestic and sexual abuse, whistleblowers, and human rights activists working against oppressive governments. For these people, anonymity can be essential because, she says, “the systems are not there to protect vulnerable communities.” 

Ms Galperin points out anonymous speech and pseudonyms are “an extremely important tool for people who want to speak truth to power.” Journalists can do more dangerous reporting; whistleblowers can lift the lid on corporate malfeasance and so on. For domestic abuse survivors, anonymity allows vital sharing of experiences. “Often speech is the only outlet survivors have because often abuse is difficult to prove … and accusations often come a long time after the abuse. Getting legal action or even action by police on sexual abuse is incredibly difficult.” She says this action often doesn’t result in consequences for the abuser that are proportional to what they’ve done, “so all survivors can do is tell their stories.” 

If we remove anonymity online, we remove that vital outlet. “Not only will they not be able to tell their stories, they also won’t be able to continue hiding from their abusers. One major concern is ‘how am I going to continue my life online without continuing to be harassed by [my] abuser’? Starting new social media, getting a new phone … having a pseudonym, hiding … are all important to these communities,” Ms Galperin says. 

Ms Galperin says using real identities online won’t make the internet a safer place for anyone. She argues many people happily post harassing content under their own names all the time, so revealing their identity wouldn’t deter their behavior. Further, users need to be able to compartmentalize their personal and professional lives so they can express themselves safely in different forums. Ms Galperin says: “We don’t have one single incorporated identity. We’re often different people to co-workers, school friends, church etc. That’s fine. But jumbling all those identities together [to use a single, real online identity] can result in really negative consequences.”

Another long-mooted solution to online abuse is to abandon Section 230 of the Communications Decency Act (CDA 230), which protects companies that publish or republish third-party content and, as Ms Galperin points out, “has enabled social media platforms and our current media environment to thrive.”

CDA 230 says “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). This means any platform that allows users to post content online (e.g. Facebook, Instagram, TikTok) is not legally responsible for that content. Platforms don’t have to take down defamatory, abusive or inflammatory content (except where content is copyrighted) and they can’t be sued for what others say on their platforms.

“People see [CDA 230] as problematic, because they think [removing] it would rein in Big Tech and social media companies … but it is not a special law just for [social media] companies. These protections are essential to everyone and to competitors to Facebook, Twitter etc. Getting rid of it is not going to help.”

Instead, Ms Galperin points out that “issues with CDA 230 are really issues with the First Amendment.” When you engage on a platform, it’s not your free speech rights that apply, it’s the platform’s. This means the platform owners can say what they want and remove whatever content they want. Abolishing CDA 230 wouldn’t change that, because platform owners could still exercise their rights under the First Amendment.

“Most of our speech online now takes place on platforms controlled by someone else. Companies are entirely within their free speech rights to edit, ban, leave out whatever they want. Users’ rights are largely limited.”  

In a nutshell, if we change the laws and make companies, particularly Big Tech, liable for their users’ content, they could become “extremely cautious and hyper moderated”, which again removes one of the few safe outlets vulnerable people have for expressing their experiences.

“Survivors need to publish their story on someone else’s platform. It’s often the only option survivors have for anything even vaguely resembling justice … Then if there’s no CDA 230 … now the survivor is completely silenced,” Ms Galperin said. 

As Lock and Code host David Ruiz asked: if using real names and changing the law won’t work, what will? What would make the internet a safer space for survivors?

Ms Galperin says it’s about makers of tools and platforms taking responsibility for their products’ effects on vulnerable users.  

“Primarily the onus for making safe platforms is on the makers of platforms and the onus for safer tools and IoT objects is on their makers … I encourage them to think about how their tool will be used for harassment, how this tool will be used by a domestic abuser, to think about empowering the user to get away from someone they used to trust who also used to use this tool with them.”  

Importantly, she points to messaging: “One of the most important things is if you have a way of sending messages through your platform please make it possible for people not to use their real names. Ideally make it possible so they don’t have to hand over their phone number … and to block other users or mute certain keywords. If you give the power to the user, they can decide what is harassment and what is abuse and it really takes the onus off the makers of the platform to be judge, jury and executioner for every communication that somebody has online.” 
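To make that design advice concrete, here is a minimal, hypothetical sketch of user-controlled filtering (the interfaces and names below are ours for illustration, not from the podcast or any particular platform): each user keeps their own block list and muted keywords, and the client decides whether to display an incoming message.

```typescript
// Hypothetical sketch: the user, not the platform, decides which senders
// are blocked and which keywords are muted.

interface UserFilterSettings {
  blockedSenders: Set<string>;   // pseudonymous sender IDs the user has blocked
  mutedKeywords: string[];       // words or phrases the user never wants to see
}

interface IncomingMessage {
  senderId: string;              // a pseudonym, not a real name or phone number
  body: string;
}

type FilterResult = "show" | "hide-blocked" | "hide-muted";

function filterMessage(msg: IncomingMessage, settings: UserFilterSettings): FilterResult {
  // Blocked senders are hidden outright.
  if (settings.blockedSenders.has(msg.senderId)) {
    return "hide-blocked";
  }
  // Case-insensitive check against the user's own muted-keyword list.
  const lowerBody = msg.body.toLowerCase();
  if (settings.mutedKeywords.some(kw => lowerBody.includes(kw.toLowerCase()))) {
    return "hide-muted";
  }
  return "show";
}

// Example usage
const settings: UserFilterSettings = {
  blockedSenders: new Set(["troll-42"]),
  mutedKeywords: ["slur-example"],
};

console.log(filterMessage({ senderId: "friend-7", body: "See you tonight!" }, settings)); // "show"
console.log(filterMessage({ senderId: "troll-42", body: "hi" }, settings));               // "hide-blocked"
```

Because the filtering decision sits with the user’s own settings, the platform never has to adjudicate every message, which is exactly the shift in onus Ms Galperin describes.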

We’re already seeing a lot of positive movement toward making the internet safer for users. Ms Galperin points to organizations like EFF and Malwarebytes, and we’d add Anonyome Labs to the list.

At Anonyome Labs, our solutions help mitigate tech-facilitated violence and abuse. We empower people to determine what information they share, and how, when and with whom they share it. We provide survivors, and those at risk of harassment, violence and abuse, with real and effective tools for protecting their personal information.

In May 2020, we proudly became a partner of the Coalition Against Stalkerware, a global working group uniting advocacy groups, software developers, security firms, victims and survivors in the fight to protect consumers against stalkerware and to eliminate abusive technology and software.

We are also growing our own safety initiative, known as Sudo Safe. Through Sudo Safe, we partner with organizations that promote safe use of the internet, as well as organizations that support at-risk people who may have a specific need for privacy and security in their lives. Our goal is to support at-risk users and to be a privacy resource for the organizations supporting them. Sudo Safe encourages the use of our privacy and cyber safety app, MySudo, to help people keep their personally identifiable information private.

We’re meeting the real need for users not to have to expose their real name or phone number online. MySudo is the world’s only all-in-one privacy solution with strong privacy and security built in. Find out more. You can listen to the Lock and Code podcast episode with Eva Galperin here.

Photo by Keagan Henman on Unsplash
