The conventional wisdom is that, when it comes to social platforms, open is good and closed is bad.
Facebook used to be slammed as a walled garden. But after Google+ came out in 2011 as a social site with public posts that you could link to and find from ordinary search, Facebook followed suit, and Facebook-as-a-walled-garden was no more. Now Facebook is mostly open (albeit with a flawed real-names policy and proprietary formats like Facebook Instant Articles).
The reason we need to bring back the wall around our social gardens is as simple as it is obvious: harassment is ruining the Internet.
Twitter has a harassment problem. And so does Periscope, the live-streaming site owned by Twitter.
Most abuse on Periscope comes in the form of comments. A typical scenario is when a woman or girl is live-streaming -- say, expressing a political opinion about the upcoming U.S. presidential election -- and abusive commenters make requests of her as if she were on an adult chat site. Other categories of abuse are the usual suspects: racism, shaming, mockery, threats and so on.
Periscope last week enabled an innovative anti-harassment process called "flash juries."
During a live stream, anyone can report a comment as abusive. A report of abuse triggers a process whereby a few random viewers of the stream are selected to be part of a "jury" that votes on the report. If a majority say it's abusive, the person who posted that comment gets banned from commenting for a minute. If the same user makes another comment deemed abusive by another flash jury, that user is blocked from further commenting during that stream.
Nice, but not perfect. Periscope streaming audiences can be very small, so a group of trolls can easily overwhelm the comments and dominate the voting, casting "no" votes when asked if their own comments are abusive and possibly even reporting and voting on non-abusive comments by regular viewers.
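For the technically inclined, here's a rough sketch in Python of the flash-jury logic described above. Periscope hasn't published how its version works, so the jury size, the one-minute mute and the way jurors cast votes are my own illustrative assumptions -- but even this sketch shows why a small, troll-heavy audience can game the outcome.

```python
import random
import time

class FlashJuryModerator:
    """Minimal sketch of the flash-jury process described above.
    Jury size, vote threshold and penalties are assumptions,
    not Periscope's published parameters."""

    JURY_SIZE = 5
    MUTE_SECONDS = 60

    def __init__(self):
        self.offenses = {}      # commenter -> number of upheld reports
        self.muted_until = {}   # commenter -> time when a one-minute mute expires
        self.blocked = set()    # commenters banned for the rest of the stream

    def can_comment(self, commenter):
        if commenter in self.blocked:
            return False
        return time.time() >= self.muted_until.get(commenter, 0)

    def report(self, commenter, comment, viewers):
        """A viewer flagged `comment` as abusive; poll a random jury of viewers.
        Because jurors come from the current audience, a troll-heavy audience
        can simply vote 'no' on reports about its own comments."""
        if commenter in self.blocked:
            return "already blocked"
        jury = random.sample(viewers, min(self.JURY_SIZE, len(viewers)))
        votes = [juror.is_abusive(comment) for juror in jury]  # True = abusive
        if sum(votes) <= len(votes) // 2:
            return "not abusive"                # no majority, nothing happens
        self.offenses[commenter] = self.offenses.get(commenter, 0) + 1
        if self.offenses[commenter] == 1:
            self.muted_until[commenter] = time.time() + self.MUTE_SECONDS
            return "muted for one minute"
        self.blocked.add(commenter)             # second upheld report
        return "blocked for this stream"
```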
What's most interesting about flash juries is that it's the first time Twitter has allowed users to take direct action against abuse.
Former Twitter CEO Dick Costolo famously admitted that, "We suck at dealing with abuse and trolls." And that’s still true.
Twitter is one of the smallest social networks.
While Facebook has more than 1.65 billion monthly active users, Twitter has only 310 million.
Twitter has only about 140 million daily users, while Facebook has more than a billion. That now puts Twitter behind even Snapchat in daily usage -- and behind four Facebook social properties (Facebook itself, WhatsApp, Messenger and Instagram).
Twitter is tiny, but it has the biggest harassment problem.
In fact, Twitter is a dream site for misogynists who want to silence women. In recent months, rich, famous or otherwise powerful women have been silenced by harassment that's so intolerable they're driven off the social network. These include Girls creator Lena Dunham, U.K. members of Parliament Jess Phillips and Nadine Dorries, Great British Bake Off presenter Sue Perkins, the singer Halsey and others.
One writer likens Twitter to a park filled with perverts and bats.
Yes, men quit too, including comedians Stephen Fry, Louis C.K. and Jeff Garlin. So did director Joss Whedon and U.K. MP Andrew Percy.
A recent study found misogyny rampant on Twitter, with a surprising 50% of the misogynistic tweets posted by women.
Twitter's low user numbers hide its cultural influence. It’s the clear favorite social network among journalists, celebrities, politicians and other influencers.
Yet nobody seems to understand why Twitter harassment is so intractable, so I'll try to explain it very clearly.
Twitter has one unique feature that makes it friendly to trolls, haters, misogynists and abusers: You can't delete other people's comments.
On Facebook and Google+, for example, you create a space for a conversation in the form of a post. People comment. If someone harasses you in a comment, you delete that comment and block the user from commenting on any of your posts in the future.
Twitter is the only major social network that doesn't allow you to delete the comments of other users.
Instead, you have to use Twitter's reporting tools to ask Twitter to delete comments or accounts. Twitter decides.
In some cases, Twitter deletes a comment or profile based on a report. But in many other cases, including death threats, sexual harassment and identity theft, Twitter doesn't.
Trolls learn to phrase threats so that they don't technically violate Twitter's terms. Instead of telling a woman, "I'm going to kill you," an abuser might say "You need to be killed," which is just as threatening to the victim but which Twitter may not deem a violation of its terms of use.
On all other major social networks, the acceptability of any given comment is determined by the user who posted the item eliciting the comments. So if I believe someone has abused me in a comment on one of my Facebook posts, I delete that comment.
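If you sketched that owner-moderated model in code -- purely illustratively; these class and method names are mine, not Facebook's or Google's -- it would look something like this:

```python
class Post:
    """Toy model of the owner-moderated conversation space described above.
    Illustrative only -- not any platform's real data model or API."""

    def __init__(self, owner):
        self.owner = owner
        self.comments = []    # list of (author, text) pairs
        self.blocked = set()  # users the owner has banned from this space

    def add_comment(self, author, text):
        if author in self.blocked:
            return False      # a blocked user simply can't participate
        self.comments.append((author, text))
        return True

    def delete_comment(self, acting_user, index):
        # The post's owner alone decides what's acceptable here --
        # no report queue, no company sitting in judgment.
        if acting_user != self.owner:
            raise PermissionError("only the owner moderates this space")
        del self.comments[index]

    def block_user(self, acting_user, author):
        if acting_user != self.owner:
            raise PermissionError("only the owner moderates this space")
        self.blocked.add(author)
```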
Also: Blocking doesn't do much on Twitter. It keeps you and the harasser from following each other, and it prevents the other user from directly mentioning you. But that's it. That user can continue to harass you, but you won't see it. And it's possible for the user to continue to "follow" you by logging out of Twitter.
When you block on Facebook or Google+, the troll leaves the room.
When you block on Twitter, YOU leave the room.
There's just one problem: As with Twitter, when a blocked user logs out of Facebook or Google+, that user can still see your public posts.
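To make the difference concrete, here's a hedged sketch of the two block semantics. The function names are mine, not anything from Twitter's or Facebook's actual APIs:

```python
def comments_visible_on_twitter(comments, viewer, blocklists):
    """Twitter-style block: comments by people you've blocked are hidden
    from you -- and only from you. Logged-out visitors see everything."""
    if viewer is None:                       # logged out: blocks don't apply
        return comments
    blocked = blocklists.get(viewer, set())  # people this viewer has blocked
    return [(author, text) for author, text in comments if author not in blocked]

def comments_visible_owner_moderated(comments, owner_blocklist):
    """Owner-moderated block (Facebook/Google+ style): on your own post,
    a blocked user's comments are gone for every viewer, signed in or not."""
    return [(author, text) for author, text in comments
            if author not in owner_blocklist]
```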
Harassment is a big problem.
In a 2014 survey of Internet users by the Pew Research Center, 8% of the respondents said they had been physically threatened, 8% reported being stalked, 7% said they had been "harassed for a sustained period," and 6% reported being sexually harassed. Overall, some 40% of U.S. Internet users reported being harassed online.
Those are significant numbers. They mean that, within the U.S. alone, more than 22 million people have been physically threatened online (based on an estimate of the total U.S. internet population).
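The arithmetic is straightforward. Assuming roughly 280 million U.S. internet users -- my own round estimate, not a figure from the Pew report -- it works out like this:

```python
# Back-of-the-envelope check of the 22 million figure.
# ASSUMPTION: roughly 280 million U.S. internet users at the time of the survey.
us_internet_users = 280_000_000
physically_threatened = 0.08 * us_internet_users  # Pew: 8% physically threatened
harassed_overall = 0.40 * us_internet_users       # Pew: ~40% harassed in some form

print(f"Physically threatened online: {physically_threatened:,.0f}")  # ~22,400,000
print(f"Harassed online in some form: {harassed_overall:,.0f}")       # ~112,000,000
```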
Everybody seems to have a different solution to the problem of online harassment.
The European Union sees the solution in an online "code of conduct" that spells out what Facebook, Twitter, YouTube and Microsoft should do to fight online racism and xenophobia in Europe. The "code" is weak and not legally binding, and it essentially asks the companies to respond to most reports of harassment coming from European law enforcement.
Advocacy groups in France are threatening a lawsuit against Twitter, Facebook and Google over those companies' refusals to delete posts that the advocates say violate French law against hate speech.
The Cyberbullying Research Center publishes a list of places for people to report harassment.
Lady Gaga even launched a #HackHarassment campaign that seeks to apply public pressure on tech companies to do a better job of fighting harassment. That approach probably makes sense, given the Association for Progressive Communications' observation that Silicon Valley companies have a "reluctance to engage directly with technology-related violence against women until it becomes a public relations issue."
All of these so-called solutions share the same fatal flaw: They depend on the social networking companies to identify, judge, track down and deal with every single social media message deemed harassment.
One problem is that people disagree about what constitutes harassment. Another is that there are too many messages to sort through (Facebook alone would have to process billions of posts each day), and the subtleties of language and human relations lie far beyond the scope of what algorithms can deal with.
We can't wait for every single Internet user to become virtuous.
We can't wait for Internet companies to become omniscient and omnipotent.
The solution has already been revealed by the voluntary actions of millions of Internet users. The biggest trend in social media is the rise of messaging apps in place of social networks. What's that all about?
A messaging app can be viewed as a private social network. You're not socially networking with the world (trolls and all). You're socially networking exclusively with invited guests who are signed in to the service.
In other words, messaging services are walled gardens. That's why people like them.
Social sites, including and especially Twitter, need to build in tools that let you post semi-publicly, ban harassers and rule over your own private social network like an autocrat -- tools that give you the power to delete other users' comments for any reason you choose, and to truly block other users from the conversation spaces you create.
We need nothing less than a complete rethinking of what a social network is. Instead of thinking of social posts as available to absolutely everyone with an Internet connection, we need to be able to post publicly to everyone -- minus the people we've disinvited and minus the people who are not signed in to the service.
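In code, the visibility rule for that model is almost trivial. This sketch is purely illustrative -- no current platform works exactly this way:

```python
def can_view(viewer, author_blocklist):
    """The walled-garden rule: a 'public' post is visible to any signed-in
    member of the service except the people its author has disinvited.
    Anonymous visitors -- viewer is None -- see nothing at all."""
    return viewer is not None and viewer not in author_blocklist

def can_comment(viewer, author_blocklist):
    # Same rule for speaking as for seeing: if the author has disinvited you,
    # or you're not signed in, you're not in the room.
    return can_view(viewer, author_blocklist)
```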
That model looks less like Twitter and more like Snapchat, less like Facebook and more like WhatsApp. That model is a walled garden.
Social sites like Facebook, Google+ and Twitter have great features the messaging apps don't have. Not all of us want to join the exodus to the messaging world.
When the conventional wisdom about walled-garden social networks was formed, the social Internet was a very different place. Social sites were more exclusive. Conversation was more civil. But the growing harassment problem has gotten so bad that it's ruining the experience of using a social network.
It's time to bring back the walled garden.