What I learned at Build Peace, the first conference for technology and conflict resolution

The organizers of Build Peace tell me it was the first conference specifically on peace and technology, and they should know. I don’t know the peace building field very well, but I could see that some of its leading lights were in attendance. I learned quite a bit, and I am very glad I went.

I have to start by saying I don’t think “technology for peace” is a sure win. My understanding is that peace building is incredibly difficult work, and rarely truly successful, and I don’t see why technology necessarily changes that. Yet I am also a technologist and I presented some of my own data-driven peace work at the conference. Clearly I believe it might be good for something.

There is a great need for conversations between capable conflict resolution workers and thoughtful technologists — hence this conference. Here are some of the things I think I learned.

 

Try existing social networking platforms first 

In the five-minute Ignite talks, I watched speaker after speaker present their work on “online discussion platforms,” “spaces for dialog,” and “peaceful online interaction.” Increasingly, I was bothered by a simple question: what do existing social media platforms lack for peace-building uses?

On the assumption that cross-cultural dialogue is key to peace (more on that below), the Internet seems to hold infinite potential, if we can just get people talking to each other the right way. This simple logic drives the explosion of online experiments. Which is great. But I rarely heard anyone talking about what makes one platform better than another — and if we don’t know what a peaceful platform should look like, why not just use Facebook?

This is what The Peace Factory does. The concept began with a Facebook page called Israel Loves Iran and quickly spawned other “X loves Y” pages which have reached millions of people. It progressed to the Friend Me 4 PEACE program which encourages people to friend someone from “the other side.”

Founder Ronny Edry described the logic:

People ask me, “Why would I ‘friend’ someone from Ramallah? What would I say?” Nothing. But you’ll see their stupid selfies in your feed.

Will selfies bring peace? I don’t know. They do seem humanizing, which is probably important. Also there is a natural escalation channel on Facebook, towards greater interaction and engagement. But what I really like about this work is that the experiment is cheap and easy to replicate.

It has become a staple of the crisis mapping community that crowdsourced crisis response must rely on already-deployed technology, not on crisis-specific apps. No one is going to install your app when the network is down and they can’t find their family. Similarly, do you really want to be in the position of convincing people involved in a civil war that they should switch social networks? My sense is, let’s find out where Facebook etc. fall short as a peace platform, before we go attempting to build an alternative — and get masses of people to use it, which is even harder than building it!

 

Do No Harm

One of the most significant things I learned about is the existence of a Do No Harm movement within peace and conflict work. This seems like a basic principle for anyone working in a dangerous area, but its explicit articulation is surprisingly recent. Multiple people referred me to Mary B. Anderson’s 1999 book, Do No Harm, which has spawned a sub-field both academic and practical.

I haven’t read the book, so I can’t claim to understand the details. But the powerful idea that well-intentioned peace builders might make matters worse will stay with me.

 

Online interaction done right

Waidehi Gilbert-Gokhale of Soliya gave one of the most impressive presentations at the conference. Like a lot of other projects, Soliya aims to build peace through online discussion. Unlike a lot of other projects, Soliya can articulate why conversation alone is not enough. In Gilbert-Gokhale’s words: “unmoderated chat polarizes.” She was referencing a wide body of work showing that bringing people with conflicting opinions together to talk can actually reinforce pre-existing divisive beliefs rather than moderate them.

 

Soliya sees their online cross-cultural interactions as a new form of “exchange” program and even calls their new platform Exchange 2.0. Online interactions typically take place in a school setting, which gives teachers the chance to moderate and guide the discussion.

Most interestingly, Soliya seems serious about knowing whether any of this works — aka evaluation. Gilbert-Gokhale said,

The biggest thing we have to do is run control groups. Without that we have no validity to our findings.

And I love her for saying that. To me, this emphasis on evaluation seems way ahead of everyone else doing dialog programs — even though Soliya’s evaluations to date don’t seem to include a control group. Soliya has also produced a lengthy 2009 report “covering the past 60 years of research into the impact of media on attitudes and behavior.” Certainly worth checking out!

That report also includes some very interesting neuro-imaging studies of conflict by Emile Bruneau of MIT, who also spoke at the conference. Bruneau has shown that our brains react differently when considering the suffering of members of an in-group versus an out-group. This is remarkable; however, I have not included the pretty brain scan images, because I know that brain scan images are very persuasive, whereas this work is very young. It’s a promising line of research, but it has not been reproduced by other researchers and it’s not clear how you might use it in the field. Evaluation of peace work is never simple.

 

Measurement and evaluation are key

I suspect that most peace building efforts don’t end up helping very much, and all the experienced peace workers I’ve spoken to agree. If this seems harsh, consider that there are good reasons to believe that much international aid is ineffective, and quite plausibly that a wide range of non-profit work in general is ineffective. Preventing or resolving violent conflict is probably even harder than those things.

There seems to be very little solid evidence that conflict resolution work does any good at all — certainly nothing up to the standards of a controlled study, because you can’t really do a controlled study in conflict areas. You go in and try to stop the violence because not attempting to stop it would be unethical (assuming, of course, you Do No Harm). Then the violence diminishes or it does not. But there is no counterfactual to compare against. That is, we don’t know what would have happened had we done nothing.

The session on measurement and evaluation was my favorite, and well-attended too, though much of the younger, hipper set seemed to be elsewhere. That saddens me. If we can’t figure out what works and what doesn’t, we have nothing at all. If we can figure out how to do good evaluation, then we can learn.

I came away with several big ideas from the evaluation working group.

First, control groups might be nice, but qualitative explanations count! Say you held a bunch of mediation sessions between community leaders in different communities. Then the conflict seemed to settle down. You theorize that it was your work organizing these meetings that caused things to get better. Are you right?

Sometimes people who practice ethnography and other qualitative research methods get into arguments with data people about what can be learned from only one specific case, only one historical experience. I experienced this at the conference in the conversations around metrics and data. Personally, I believe that the well-developed theory of causation says you can’t know the magnitude of a causal effect unless you have lots of cases, divided between treatment and control groups. But obviously being in the time and place of a conflict and trying to shape it can teach you something deep about what happened. The question is, what?

I learned during this session that there’s a whole body of knowledge about this kind of single-case causation analysis under the name of process tracing. For example, you need to test your proposed explanations against historical facts, and certain types of tests provide more evidence than others, and in fact there’s a whole theory of case study selection and inference. See also analysis of competing hypotheses, developed in intelligence work, which I now see as closely related. Process tracing won’t get you results equivalent to a large number of controlled cases, but you can get immensely valuable knowledge anyway, and in certain ways it will even be better than statistical analysis.

But suppose you really need the kind of evidence that only a controlled study can provide, such as estimates of the magnitude of your effect (and estimates of the uncertainty in those estimates, which can be just as important). I learned that there are several different controlled designs that might work in peace building. Instead of comparing against doing nothing, you could compare against doing something else. You can do the same thing in different places (say, different villages) at different times, and look for a time correlation, which is called a stepped wedge design. Or you run the program only in places where some metric of need is above a certain threshold, which is called a regression discontinuity design.
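To make the regression discontinuity idea concrete, here is a minimal simulated sketch in Python. It is my own illustration, not something presented at the conference, and every number in it is made up: a hypothetical program runs only in villages whose “need score” is above a cutoff, and the program’s effect shows up as a jump in outcomes right at that cutoff.

```python
# Toy regression discontinuity sketch: treatment is assigned purely by a
# need-score threshold, and the effect is the jump in outcomes at the cutoff.
# All data here is simulated; nothing is from a real program.
import numpy as np

rng = np.random.default_rng(0)

n = 2000
cutoff = 50.0
need = rng.uniform(0, 100, n)          # hypothetical need score per village
treated = need >= cutoff               # program runs only above the threshold

true_effect = 5.0                      # assumed effect, for the simulation only
outcome = 0.3 * need + true_effect * treated + rng.normal(0, 3, n)

# Local linear fit on each side of the cutoff, using only nearby villages.
bandwidth = 10.0
left = (need >= cutoff - bandwidth) & (need < cutoff)
right = (need >= cutoff) & (need < cutoff + bandwidth)
left_fit = np.polyfit(need[left], outcome[left], 1)
right_fit = np.polyfit(need[right], outcome[right], 1)

# The estimated effect is the gap between the two fitted lines at the cutoff.
estimate = np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)
print(f"estimated effect at cutoff: {estimate:.2f} (true value {true_effect})")
```

Real evaluations are messier than this, of course; the point is only that the assignment threshold itself creates a usable comparison.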

Effectiveness is important for many reasons, one of which is that there are many more things that could be done than there is funding to do them. So someone has to make hard choices, and effectiveness has to be a key factor.

But the biggest idea I took from the measurement workshop is that even this seemingly airtight logic is suspect, because peace programs so very often end up doing something completely different from what they set out to do! The reality of conflict ensures that. It does no good to charge in with an evaluation strategy that measures the wrong thing… and maybe you can only know if you’re measuring the right thing after you’ve started the project.

In other words, learning on the ground might (and probably should) convince you that your goal should change. I was delighted to discover that this idea of questioning your goals as you move toward them has a name: double-loop learning.

 

Open and closed, the crowd and the authorities

There was a fundamental tension at the conference between open and closed approaches to peace. If you like, there were two narratives about how projects were constructed. Some speakers presented explicitly open projects (“Peace is everyone’s business: Mass SMS to prevent violence”). Other projects involved a small group of outsiders working with existing authorities (“Elections data for the people in transitioning MENA countries”). And more than a few projects have chosen to keep their data completely private, to prevent their human sources from coming to harm.

Is the future of technology-enabled peace building open or closed? I think both. There is great potential for open, flattened, peer-to-peer projects because ultimately it is people who must be at peace, not their governments. But not all processes can include all people, for some very good reasons. Even a “consensus” process almost always has to exclude someone, either for logistical reasons or to deal with spoilers. Quinn Norton’s scathing dissection of Occupy Wall Street’s General Assembly is a beautiful example of the failure of an open system.

Because the [General Assembly] had no way to reject force, over time it fell to force. Proposals won by intimidation; bullies carried the day. What began as a way to let people reform and remake themselves had no mechanism for dealing with them when they didn’t. It had no way to deal with parasites and predators.

Of course I’m not arguing that we are currently at ideal levels of openness, either for peace building projects or anything else. Just that the ideal is some careful hybrid.

 

Visualizing Polarization

I have done a little bit of work on data-driven ways to understand conflict, which stems from my interest in visualizing communities. It’s possible to see the political divisions of the U.S. population in many different types of data: political book sales, who talks to whom on Twitter, geographical voting patterns, and more. My own contribution to this is an interactive visualization of the gun control debate on Twitter from one week in February 2013, published in The Atlantic. In that visualization, which shows “people who tweeted this link also tweeted that link,” you can clearly see that there are two poles of thought on the matter, led by (for this particular week of Twitter conversation) the White House on one side and The Blaze on the other.
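Mechanically, the graph behind that kind of picture is simple to build. Here is a toy sketch in Python with invented accounts and links (the real dataset involved far more volume, plus filtering and layout work): links become nodes, and two links get an edge whenever the same account tweeted both.

```python
# Toy sketch of a link co-occurrence graph. All users and URLs below are
# invented; real input would be (user, link) pairs extracted from tweets.
from collections import defaultdict
from itertools import combinations

tweets = [
    ("alice", "whitehouse.gov/gun-plan"),
    ("alice", "example.org/poll"),
    ("bob",   "theblaze.com/gun-story"),
    ("bob",   "example.org/poll"),
    ("carol", "whitehouse.gov/gun-plan"),
    ("carol", "example.org/poll"),
]

# Collect the set of links each user tweeted.
links_by_user = defaultdict(set)
for user, link in tweets:
    links_by_user[user].add(link)

# Edge weight = number of users who tweeted both links.
edge_weights = defaultdict(int)
for links in links_by_user.values():
    for a, b in combinations(sorted(links), 2):
        edge_weights[(a, b)] += 1

for (a, b), weight in sorted(edge_weights.items(), key=lambda kv: -kv[1]):
    print(f"{a} <-> {b}: co-tweeted by {weight} users")
```

A force-directed layout of a graph like this is typically what makes the two poles pop out visually.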

Here are the slides I showed at the beginning of the session, which became a very lively group discussion (yay!).

Polarization, Build Peace 2014 (click to see my slides for this session)

It’s striking to me that conflict dynamics show up so clearly in big data visualization… but I’m really not sure how helpful that is, if your concern is peace. Yes, plots like these could help in conflict analysis, but anyone who’s actually paying attention to a conflict already knows who the sides are. A more interesting possibility is a time-based analysis where you animate these association patterns through time, to see if anything is changing. This type of network analysis could also be used just as marketers use it, to identify influential people and groups for the purposes of media planning.

Various people including myself suggested that maybe peace builders should look at these networks to find people who bridge between the sides. But Ethan Zuckerman made a very interesting counter-suggestion: maybe we need to look outside of the conflict divisions entirely, to find completely unrelated identities that many people can agree on. He pointed to the Harry Potter Alliance, which was founded to address the conflict in Sudan.
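Setting Zuckerman’s counter-suggestion aside for a moment, the mechanical version of the bridge-finding idea is easy enough to sketch. This assumes networkx and a tiny invented graph shaped like the one above; in practice you would build it from real co-tweet or retweet data. Nodes that sit on many of the shortest paths between the two poles have high betweenness centrality, which makes them candidate bridges.

```python
# Sketch of finding bridging nodes via betweenness centrality.
# The graph below is tiny and invented, purely for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("whitehouse.gov/gun-plan", "link_a"),
    ("whitehouse.gov/gun-plan", "link_b"),
    ("theblaze.com/gun-story", "link_c"),
    ("theblaze.com/gun-story", "link_d"),
    ("link_b", "shared-poll"),
    ("link_c", "shared-poll"),
])

# Betweenness counts how often a node lies on shortest paths between others.
bridging = nx.betweenness_centrality(G)
print(max(bridging, key=bridging.get))  # "shared-poll" bridges the poles here
```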

Location data might help us to find bridging spaces, literally spaces in the physical world. Mobile phone companies have location history for each subscriber, so it should be possible to figure out a) which “side” each person is on, from where they travel and whom they associate with, and b) where people from the two sides come together. What if we discover that otherwise mortal enemies drink in the same bar every Friday night? Is that useful? Does it violate privacy in that creepy location-data way if we just know the name of the bar, not the names of the people who go there? I have no answers.
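As a thought experiment only, and with entirely fabricated records (the privacy questions above are real), part b) reduces to a simple grouping exercise: look for place-and-time buckets where people labeled with different sides show up together.

```python
# Toy sketch: find places and times where people from both "sides" co-occur.
# Every record below is fabricated; doing this with real location data would
# raise the serious privacy questions discussed above.
from collections import defaultdict

visits = [
    ("person_1", "side_a", "cafe_meridian", "friday_21h"),
    ("person_2", "side_b", "cafe_meridian", "friday_21h"),
    ("person_3", "side_a", "north_market",  "saturday_10h"),
    ("person_4", "side_b", "bus_station",   "friday_21h"),
]

# Which sides were present at each (place, time slot)?
sides_present = defaultdict(set)
for person, side, place, timeslot in visits:
    sides_present[(place, timeslot)].add(side)

bridging_places = [key for key, sides in sides_present.items() if len(sides) > 1]
print(bridging_places)  # [('cafe_meridian', 'friday_21h')]
```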

I am awed by the potential for data in peace work, but I am also very cautious. Technologists tend to get lost in technological fantasies, while peace workers may not have the technical imagination to see what is possible. If my experience bringing computer science to journalism is at all typical, then we have a lot of work to do to bridge this gap.

 

A young field with a lot to learn

I’ve been hanging around journalism innovation and crisis mapping and ICT4D and related things for six years now, which is (surprisingly?) long enough to see several generations of projects come and go. I feel like peace technology is currently making some of the classic mistakes: people are building things without considering what already exists, technologists with no knowledge of peace building are going to suck at understanding user needs, and there’s not really any talk of tech project sustainability.

But I am also elated. This conference was a unique confluence of enthusiasm, expertise, and experiments. It has made me optimistic that if there is a role for technology and technologists in peace building, we will find it. It will probably take a few more years for all of it to settle down into useful practices. I certainly came away with some things to try — and I’d go on that data scientist conflict zone exchange program in a heartbeat. Or at least back to Build Peace next year.

 

Questions about the NYPD I cannot answer

Recently, the NYPD started a Twitter hashtag campaign, #myNYPD, and it backfired.

Several of my friends — actual, real life good friends — shared this story on Facebook in a, let’s say, somewhat triumphant mood. And I wasn’t sure what to think. This is what I wrote.

I’m having trouble understanding what all this signifies. Here’s what I come up with that I am sure about:

  • my friends do not like cops
  • clearly there are other people who do not like cops
  • people who do not like cops are either more common on Twitter or more vocal than those who like them
  • the NYPD sure have beaten up a lot of people

But, these are the questions I remain unable to answer:

  • I think we probably want a police force that engages with people on social media. How should they have engaged?
  • Were any of these beatings “proportionate”? This is horrible language, I know, but give it a pass for a moment.
  • Is any beating ever proportionate? How could we even know the answer to this in principle, let alone in specific cases?
  • What is the overall record of the NYPD? Is this a question that even has meaning given the multidimensional nature of the problem? Can the answer be anything other than “terrible” if there are incidents like these?
  • What would I do if I were king of the NYPD?
  • Will my friends perceive this post as “defending the cops”? Will there be social sanctions of some sort for expressing these ideas? Is my echo chamber just as pernicious as the echo chambers of those who belong to my perceived “other”?

– Yours in sadness and inquiry.

The post has not received any “likes.”