Identity, Anonymity, and Controlling Trolls

Multiple personalities

Flame wars and jihadist rants and generally worthless behavior in the comments: that’s the problem I’m trying to solve here.

And I’m trying to do it while preserving anonymity. Internet conversation can get nasty when the participants are anonymous, which has led to proposals to tie all online identities to “real” identities. This is the wrong solution to the troll problem, because it destroys privacy in a serious way. I want to build discussion systems that allow anonymous comments, yet remain orderly, civil, and enlightening. I think this can be done with filtering systems based on reputation.

Reputation is a thing that sticks to an identity. Historically most people had only one identity, closely tied to their physical presence. But now, online, every one of us has multiple identities: think of how many user names and logins you have. There’s some consolidation going on in the increasing acceptance of Google, Twitter, and Facebook logins across the web, and this is mostly a good thing. But I don’t think we want to aim for a world where each person has only one online identity. Multiple identities are good and useful.

Multiple identities are closely related to anonymity. Anonymity doesn’t mean having no identity; it means not being able to tie one of my identities to the others. I want to be very careful about who gets to tie the different parts of me together. I’m going to give two arguments for this, which I’ll call the “does your mother know” and “totalitarian state” arguments. They’re both really important, and I’d be really sad if we lost anonymity in either case. And after I’ve convinced you that we need anonymity, I’ll talk about how we get people to behave even if they don’t leave a name.

Keeping the different facets of ourselves apart is the essence of privacy. We’ve always been different people in different contexts, but this was only possible because we could expect that word of what we did with our friends last night would not get back to our mother. This expectation depends upon the ability to separate our actions in different contexts: your mom or your boss may know that someone in the community is going on a bender/having kinky sex/voting Republican, but they don’t know it’s you. The ability to have different identities in different contexts is intricately tied to privacy, and in my mind no different from setting a post to “friends only” or denying Amazon.com the details of your personal life. Although the boundaries around what is “personal” are surely changing, if you really think we’re heading toward a world where everybody knows everything about everyone, you’re mad. For one thing, secrets are immensely valuable to the business world.

And then there’s China. I live right next door to the most invasive regime in the world. The Chinese government, along with certain others such as South Korea’s, is trying very hard to tie online and corporeal identities together by instituting real-name policies. This makes enforcement of legal and social norms easier. Which is great until you disagree. Every damn blog comment everywhere is traceable to you. Every Wikipedia edit. Everything. China is trying as hard as it can to make opposing speech literally impossible. This is not theoretical. As of last week, you can’t send dirty words through SMS.

When the digital panopticon is a real possibility, I think that the ability to speak without censure is vital to the balance of power in all sectors. Anonymity is important to a very wide range of interests, as the diversity of the Tor project shows us. Tor is a tool and a network for anonymity online, and it is sponsored by everyone from rights activist groups to the US Department of Defense to journalists and spies. Anonymity is very, very useful, and is deeply tied to the human right of privacy.

Right, but… how do we get sociopaths to play nice in the comments section if they can say anything they want without repercussions?

The general answer is that we encourage social behavior online in exactly the way we encourage it offline: social norms and peer pressure. We can build social tools into our online systems, just like we already do. A simple example is the “flag this” link on many commenting systems. Let’s teach people to click it when they mean “this is a useless post by a troll.” Collaborative moderation systems — such as “rate this post” features of all kinds — work similarly.
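To make that concrete, here’s a minimal sketch of threshold-based flagging in Python. The class, the names, and the threshold value are my own assumptions for illustration, not any particular site’s implementation:

```python
# Hypothetical sketch: hide a comment once enough distinct users flag it.
FLAG_THRESHOLD = 5  # assumed value; real systems tune this carefully


class Comment:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.flaggers = set()  # distinct users only, so one troll can't mass-flag

    def flag(self, flagging_user):
        self.flaggers.add(flagging_user)

    @property
    def hidden(self):
        return len(self.flaggers) >= FLAG_THRESHOLD
```

Counting distinct flaggers rather than raw clicks is the important design choice: it turns the “flag this” link into a crude vote, which is exactly the collaborative-moderation idea.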

Collaborative moderation is a really big, important topic, and I’ll write more about it later. There are voting systems of all kinds, and the details matter: compare Slashdot, Digg, and Reddit. But all of these systems rate comments, not users, and I think this makes them weaker than they could be at suppressing trolls and spam. Identities matter, because identities have reputations.

Reputation is an expectation about how an identity will behave. It is built up over time. Crucially, a throw-away “anonymous” identity doesn’t have it. That’s why systems based on reputation in various forms work to produce social behavior. There are “currency” systems like StackOverflow’s karma, where one user can give another credit for answering a question. There are voting systems such as the Huffington Post’s “I’m a fan of (comment poster),” which are designed to identify trustworthy users. Even Twitter Lists are a form of reputation system, where one user can choose to continuously rebroadcast someone else’s tweets.
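To illustrate the difference, here’s a sketch of a currency-style system where votes land on comments but the credit accrues to the identity behind them. The function and the scoring numbers are hypothetical, mine rather than StackOverflow’s actual rules:

```python
from collections import defaultdict

# identity -> accumulated score; a fresh or throwaway identity starts at zero
reputation = defaultdict(int)


def record_vote(author, up=True):
    """Credit or debit an identity when one of its comments gets voted on."""
    reputation[author] += 1 if up else -1


# Usage: three upvotes and one downvote on alice's comments
for up in (True, True, True, False):
    record_vote("alice", up)
print(reputation["alice"])    # 2
print(reputation["anon_42"])  # 0 -- no history, no reputation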

And in the context of online discussion, you use reputation to direct attention.

That’s what filtering is: directing attention. And this is how you deal with trolls without restricting freedom of speech: you build collaborative filters based on reputation. Reputation is powerful precisely because it predicts behavior. New or “anonymous” identities would have no reputation and thus command little attention (at least until they said a few interesting things), while repeat offenders would sink to the bottom. Trolls would still exist, but they simply wouldn’t be heard.
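Here’s what such a filter could look like, as a sketch. The ranking rule is an assumption of mine, not a description of any deployed system:

```python
# Hypothetical sketch: direct attention by ranking comments on author reputation.
def rank_comments(comments, reputation):
    """comments: list of (author, text) pairs; reputation: identity -> score."""
    # Unknown identities default to 0: little attention, but never silenced.
    return sorted(comments, key=lambda c: reputation.get(c[0], 0), reverse=True)


comments = [
    ("anon_8675309", "first!!1!"),
    ("alice", "Here's a source that actually answers the question..."),
]
for author, text in rank_comments(comments, {"alice": 42, "troll_bob": -17}):
    print(author, ":", text)
# alice's comment surfaces; the throwaway identity sinks, but is still readable.
```

Note that the filter orders, it doesn’t delete: a troll’s comment is still there for anyone who scrolls down, it just doesn’t command attention.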

NB, none of this requires tying online identities to corporeal people. Rather than being frightened of anonymity and multiple identities, I think we need to embrace them. We need to trust that we can evolve the right mixes of software and norms so that collaboration overwhelms vandalism, just as Wikipedia did. This field is mostly unexplored. We need to learn how identity relates to trust and reputation and action. And we need to think of social software as architecture, a space that shapes and channels the behavior of the people in it.

Simply trying to make it impossible to do anything bad will destroy much that is great about the internet. And it lacks imagination.