October 13, 2015

Social Media Abuse Stories to Shrivel Your Soul

Recently in "Research Request: Seeking Facebook or Other 'Real Name' Identity Policy Abuse Stories" -- https://lauren.vortex.com/archive/001131.html -- I requested that readers send me examples of social media abuses that have targeted themselves or persons they know, with an emphasis on "identity" issues such as those triggered by Facebook's "real name" policies.

These are continuing to pour in -- and please keep sending them -- but I wanted to provide a quick interim report.

Executive summary: Awful. Sickening. I knew some of these would be bad, but many are far worse than I had anticipated anyone would be willing to send me. It seems very likely -- though obviously I couldn't swear to this under oath -- that these abuses have resulted in both suicides and homicides.

And if we as an industry don't get a handle on these issues, we ultimately risk draconian government crackdowns that will simply enable more government censorship and create even more problems.

Here are some of the more obvious observations I can derive from the messages I'm being sent (not in any particular order for now):

There is no longer any realistic dividing line between the online and offline worlds. Abuse taking place online can quickly spill offline, affecting targeted persons' physical lives directly and devastatingly.

Most forms of social media abuse are interconnected. That is, we cannot realistically demarcate between "identity policy" abuses (e.g., Facebook's "real name" requirements), and other forms of social media abuse (such as comment trolling, Gamergate, and far more).

Women are disproportionately targeted by social media abuse (as a male I find this fact to be personally offensive), but many men are attacked as well.

A lack of realistically useful and advanced moderation and abuse reporting/flagging tools, and/or insufficient surfacing of these tools to users, combined in many cases with "lackadaisical" (that's the most polite term I can use) attention to the reports that do come in, exacerbates existing problems.

Social media systems with strict "real name" requirements are especially problematic and can be extremely dangerous. This particularly relates to the 800-pound gorilla of Facebook in this context (Google+ wisely dropped its real name requirements quite a ways back).

Facebook's "real name" identity policies have been effectively "weaponized" by abusers. Many FB users who are already targeted and marginalized in their offline lives (domestic violence victims, LGBT persons, racial and religious minorities, and so many more) still need to use FB to stay in contact, but (in an attempt to protect themselves) use "real appearing" pseudonyms instead of their real names. If one of their antagonists discovers their FB identity, it is not uncommon for the abuser to report the victim to FB (as a twisted form of "revenge," for example) in an attempt to expose them online and offline, and to destroy their ability to be safely online.

Social media firms' reactions to flagging and abuse complaints -- particularly in the case of Facebook -- can be erratic and seemingly arbitrary. A complaint targeting an innocent person might trigger an account suspension, while one targeting a genuine abuser may be ignored. Innocent parties may be required by FB to jump through a series of humiliating and embarrassing hoops to try to regain access, including persons whose protective pseudonyms have been exposed and persons whose actual, real names have been falsely flagged as fakes. In some cases, Facebook actually suggests to affected users that they go to court and change their names legally to match FB's rules!

Governments in general (which tend to see censorship as a solution rather than the problem it actually is), and law enforcement in particular, usually make these matters worse, not better. The police tend to be clueless at best, and are often openly antagonistic ("stop wasting our time"). Victims of bullying and of online threats to their offline lives who go to the police are usually informed that there's nothing to be done to help them, or are told to just "stop using the Internet" as a proposed (inane) solution.

We could go on with this list, but I'm sure you get the idea.

I'm forced to add that not all of the reaction to my research request on these topics has been positive. I've received some responses that attempt to minimize the entire controversy. They've told me I'm wasting my time. They've suggested that in a relative sense "so few" people are actually victimized by these problems (compared with the billions using these systems) that it would be ridiculous for the companies involved to make significant changes just to cater to a small group of actual victims and a much larger group of supposed malcontents.

I can't emphasize strongly enough how categorically I reject that entire line of reasoning.

The inherent suggestion that the abysmal status quo could somehow be excused because "relatively" few persons might be affected (and that still means vast numbers of warm bodies at these scales) is entirely unacceptable, untenable, and unethical.

It's true that we can't put precise numbers on the victims. After all, most of these vulnerable persons are already trying to protect themselves from exposure, being forced into essentially a "shadow" universe of social media identities. And we'd expect that most would also be understandably unwilling to discuss their situations with a stranger such as myself.

But many have been so willing, and I thank them for their trust. And I believe we can safely extrapolate to the reality that there are one hell of a lot of people being victimized by these issues.

And in fact, the numbers shouldn't really matter at all. How many deaths, or lives otherwise ruined, attributable in significant part to social media abuses are tolerable? I would assert that the answer, in an ethical sense at least, is zero.

Does this mean we can quickly solve all these problems? Is there a magic wand?

Of course not. But that doesn't mean we shouldn't try. And remember, once politicians get their claws into these controversies, you can bet that the kinds of "solutions" they push will aim to further their agendas more than anything else.

These are problems we must ourselves work toward eliminating.

Obviously, educational outreach must be a major part of this effort, especially to law enforcement and other government agencies.

But we also need to have a much better handle on these situations as an industry, because the problems are ultimately not isolated to single firms.

There need to be individuals and teams within the involved firms who not only are working internally on these issues, but who also participate broadly in related public communications efforts. These companies need to work together toward understanding the impacts of their ecosystems in these contexts -- a formal or informal industry consortium to specifically further such interactions would seem a useful concept for consideration.

Most of all, it's crucial that we as individuals -- not just those of us who have built and used the Internet for many years, but also users who have so far only barely gotten their feet wet on the Web -- recognize that it is intolerable for the Net to be turned into a tool for the destruction of lives. It's up to us to pave the path toward changes that will truly help the Net to flourish for the good of our societies, rather than allowing the Net (and ourselves) to be shackled by politically shortsighted restrictions.

Take care, all.

--Lauren--
I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Twitter: @laurenweinstein
Google+: Lauren Weinstein