Introducing the CJ Workbench

Some of you may have heard about my new data journalism project — The Computational Journalism Workbench. This is an integrated platform for data journalism, combining scraping, analysis, and visualization in one easy tool. It works by assembling simple modules into a “workflow,” a repeatable, sharable, automatically updating pipeline that produces a publishable chart or a live API endpoint.

I demonstrated a prototype at the NICAR conference — and it’s now in beta (contact me for an invite).

I’ll be working on CJ Workbench for at least the next few years. My previous large data journalism project, the Overview document mining system, remains under active development.

Defense Against the Dark Arts: Networked Propaganda and Counter-Propaganda

In honor of MisinfoCon this weekend, it’s time for a brain dump on propaganda — that is, getting large numbers of people to believe something for political gain. Many of my journalist and technologist colleagues have started to think about propaganda in the wake of the US election, and related issues like “fake news” and organized trolling. My goal here is to connect this new wave of enthusiasm to history and research.

This post is about persuasion. I’m not going to spend much time on the ethics of these techniques, and even less on the question of who is actually right on any particular point. That’s for another conversation. Instead, I want to talk about what works. All of these methods are just tools, and some are more just than others. Think of this as Defense Against the Dark Arts.

Let’s start with the nation states. Modern intelligence services have been involved in propaganda for a very long time and they have many names for it: information warfare, political influence operations, disinformation, psyops. Whatever you want to call it, it pays to study the masters.


Russia: You don’t need to be true or consistent

Russia has a long history of organized disinformation, and their methods have evolved for the Internet era. The modern strategy has been dubbed “the firehose of falsehood” by RAND scholar Christopher Paul.

His recent report discusses this technique of pushing out diverse messages on a huge number of different channels, everything from obvious state sources like Russia Today to carefully obscured leaks of hacked material — leaks which are tailored to appeal to sympathetic journalists.

The experimental psychology literature suggests that, all other things being equal, messages received in greater volume and from more sources will be more persuasive. Quantity does indeed have a quality all its own. High volume can deliver other benefits that are relevant in the Russian propaganda context. First, high volume can consume the attention and other available bandwidth of potential audiences, drowning out competing messages. Second, high volume can overwhelm competing messages in a flood of disagreement. Third, multiple channels increase the chances that target audiences are exposed to the message. Fourth, receiving a message via multiple modes and from multiple sources increases the message’s perceived credibility, especially if a disseminating source is one with which an audience member identifies.

And as you might expect, there is a certain amount of outright fabrication — often mixed with the truth:

Contemporary Russian propaganda makes little or no commitment to the truth. This is not to say that all of it is false. Quite the contrary: It often contains a significant fraction of the truth. Sometimes, however, events reported in Russian propaganda are wholly manufactured, like the 2014 social media campaign to create panic about an explosion and chemical plume in St. Mary’s Parish, Louisiana, that never happened. Russian propaganda has relied on manufactured evidence—often photographic. … In addition to manufacturing information, Russian propagandists often manufacture sources.

But for me, the most surprising conclusion of this work is that a source can still be credible even if it repeatedly and blatantly contradicts itself:

Potential losses in credibility due to inconsistency are potentially offset by synergies with other characteristics of contemporary propaganda. As noted earlier in the discussion of multiple channels, the presentation of multiple arguments by multiple sources is more persuasive than either the presentation of multiple arguments by one source or the presentation of one argument by multiple sources. These losses can also be offset by peripheral cues that enforce perceptions of credibility, trustworthiness, or legitimacy. Even if a channel or individual propagandist changes accounts of events from one day to the next, viewers are likely to evaluate the credibility of the new account without giving too much weight to the prior, “mistaken” account, provided that there are peripheral cues suggesting the source is credible.

Orwell was right: “We have always been at war with Eastasia” really does work, if there are enough people repeating it.

Paul suggests that the counter-strategy is not to try to refute the message, but to reach the target audience first with an alternative. Fact checking, which is really after-the-fact-checking, may not be the most effective plan.  He suggests instead that we “forewarn audiences of misinformation, or merely reach them first with the truth, rather than retracting or refuting false ‘facts.'” In this light, Facebook’s plan to show the fact check along with the article seems like a much better strategy than sending someone a fact checking link when they repeat a falsehood.

He also suggests that we “focus on guiding the propaganda’s target audience in more productive directions.” Which is exactly what China does.


China: Don’t argue, distract and disrupt

China is famous for its highly developed network censorship, from the Great Firewall to its carefully policed social media. The role of the government “public opinion guides,” China’s millions of paid commenters, has been murkier — until now.

The Atlantic has a readable summary of recent research by Gary King, Jennifer Pan, and Margaret E. Roberts. They started with thousands of leaked Chinese government emails where commenters report on their work, which became the raw data for an accurate predictive model of which posts are government PR. A surprising twist: nearly 60% of paid commenters will just tell you they’re posting for the government when you ask them, which allowed these scholars to verify their country-wide model. But the core of the analysis is what these posters were doing.

From the paper:

We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime.

And here’s the breakdown of what these posters were doing. “Cheerleading” dominates for every sample of government accounts. Arguments are rare.

[Chart: breakdown of post categories across samples of government accounts]

Note that this is only one half of the Chinese media control strategy. There is still massive censorship of political expression, especially of any post relating to organized protest, which is empirically good at toppling governments.

All of this without ever getting into an argument. This suggests that there is actually no need to engage the critics/trolls to get your message out (though it might still be worthwhile to distract and monitor them.) Just communicate positive messages to the masses while you quietly disable your detractors. A counter-strategy, if you are facing this type of opponent, is organized, visible resistance. Get into the streets and make it impossible to talk about something else — though note that recent experiments suggest that violent or extreme protest tactics will backfire.

But China has a tightly controlled media and the greatest censorship regime the world has ever seen. If you’re operating in a relatively free media environment, you have to manipulate the press instead.


Milo: Attention by any means necessary

The most insightful thing I have ever read about the wonder that was Milo Yiannopoulos comes from the man who wrote a book on manipulating the media, documenting the strategies he devised to market people like Tucker Max. Ryan Holiday writes,

We encouraged protests at colleges by sending outraged emails to various activist groups and clubs on campuses where the movie was being screened. We sent fake tips to Gawker, which dutifully ate them up. We created a boycott group on Facebook that acquired thousands of members. We made deliberately offensive ads and ran them on websites where they would be written about by controversy-loving reporters. After I began vandalizing some of our own billboards in Los Angeles, the trend spread across the country, with parties of feminists roving the streets of New York to deface them (with the Village Voice in tow).

But my favorite was the campaign in Chicago—the only major city where we could afford transit advertising. After placing a series of offensive ads on buses and the metro, from my office I alternated between calling in angry complaints to the Chicago CTA and sending angry emails to city officials with reporters cc’d, until ‘under pressure,’ they announced that they would be banning our advertisements and returning our money. Then we put out a press release denouncing this cowardly decision.

I’ve never seen so much publicity. It was madness.

. . .

The key tactic of alternative or provocative figures is to leverage the size and platform of their “not-audience” (i.e. their haters in the mainstream) to attract attention and build an actual audience. Let’s say 9 out of 10 people who hear something Milo says will find it repulsive and juvenile. Because of that response rate, it’s going to be hard for someone like Milo to market himself through traditional channels. His potential audience is too spread out, and doesn’t have that much in common. He can’t advertise, he can’t find them one by one. It’s just not going to scale.

But let’s say he can acquire massive amounts of negative publicity by pissing off people in the media? Well now all of a sudden someone is absorbing the cost of this inefficient form of marketing for him.

(Emphasis mine.)  That one’s adversaries should be denied attention is not a new idea. Indeed, this is central to the “no-platforming” tactic. But no-platforming plays right into an outrage-based strategy if it results in additional attention (see also the Streisand effect). Worse, all the incentives for media makers are wrong. It’s going to be very hard for journalists and other media figures to wean themselves off of outrage, because strong emotional reactions get people to share information (1, 2, 3, etc.) and information sharing has become the basis of distribution, which is the basis of revenue. We are in dire need of new business models for news.

But this breakdown of the mechanics of outrage marketing does suggest a counter-strategy: before you get mad, or report on someone getting mad, do your homework. Holiday called to complain about his own content, put out false press releases, etc. A smart journalist might be able to uncover this deception. In a propaganda war, all journalists should be investigative journalists.

Attention is the currency of networked propaganda. Attention is the key. Be very careful who you give it to, and understand how your own emotions and incentives can be exploited.


But even if you’ve uncovered a deception, it’s not enough to say that someone else is lying. You have to tell a different story.


Debunking doesn’t work: provide an alternative narrative

Telling people that something they’ve heard is wrong may be one of the most pointless things you can do. A long series of experiments shows that it rarely changes belief. Brendan Nyhan is one of the main scholars here, with a series of papers on political misinformation. This is about human psychology; we simply don’t process information rationally, but instead employ a variety of heuristics and cognitive shortcuts (not necessarily maladaptive in general) that can be exploited. The classic experiment goes like this:

Participants in a study within this paradigm are told that there was a fire in a warehouse and that there were flammable chemicals in the warehouse that were improperly stored. When hearing these pieces of information in succession, people typically make a causal link between the two facts and infer that the fire was caused in some way by the flammable chemicals. Some subjects are then told that there were no flammable chemicals in the warehouse. Subjects who have received this corrective information may correctly answer that there were no flammable chemicals in the warehouse and separately incorrectly answer that flammable chemicals caused the fire. This seeming contradiction can be explained by the fact that people update the factual information about the presence of flammable chemicals without also updating the causal inferences that followed from the incorrect information they initially received.

Worse, repeating a lie in the process of refuting it may actually reinforce it! The counter strategy is to replace one narrative with another. Affirm, don’t deny:

Which of these headlines strikes you as the most persuasive:

“I am not a Muslim, Obama says.”

“I am a Christian, Obama says.”

The first headline is a direct and unequivocal denial of a piece of misinformation that’s had a frustratingly long life. It’s Obama directly addressing the falsehood.

The second option takes a different approach by affirming Obama’s true religion, rather than denying the incorrect one. He’s asserting, not correcting.

Which one is better at convincing people of Obama’s religion? According to recent research into political misinformation, it’s likely the latter.


The role of intelligence: Action not reaction

Let’s return to China for a moment. Here’s a chart, from the paper above, on the number of government social media postings over time:

[Chart: volume of government social media posts over time]

Posts spiked around political events (CCP Congress) and emergencies that the government would rather citizens not talk about, such as riots and a rail explosion. This “cheerleading” propaganda wasn’t simply a regular diet of good news, but a precisely controlled strategy designed to drown out undesirable narratives.

One of the problems of a free press is that “the media” is a herd of cats. There really is no central authority — independence and diversity, huzzah! Similarly, distributed protest movements like Anonymous can be very effective for certain types of activities. But even Anonymous had central figures planning operations.

The most successful propagandists, like the most successful protest movements, are very organized. (Lost in the current “diversity of tactics” rhetoric is the historical fact that key battles in the civil rights movement were carefully planned.) Organization and planning requires intelligence. You have to know who your adversaries are and what they are doing. Intelligence involves basic steps like:

  • Pay attention to the details of every encounter. Who wrote that story or posted that comment?
  • Research the actors and their networks. Who are they connected to? What communication channels do they use to coordinate? Who directs operations?
  • Real-time monitoring. When a misinformation campaign begins, you need to get to your audience before they do (with something more than just a debunk, as above.)

Although there may be useful technological approaches to tracing networks, there is no magic here; anyone can keep a spreadsheet of actors, you can do real-time monitoring with little more than Tweetdeck, and investigative journalists already know how to investigate. But centralization may be important. The Russian approach of “many messages, many channels” suggests that an open, diverse network can succeed at individual propaganda actions, and I bet it would succeed at counter-propaganda actions too. But intelligence is different, and it’s an unanswered question whether the messy collection of journalists, NGOs, universities, and activists in a free society can do effective counter-propaganda intelligence, or even agree sufficiently on what that would be. I don’t think a distributed approach will work here; someone needs to own the database and run the show.

Update: The East StratCom Task Force seems to be exactly this sort of centralized actor for the EU.

But one way or another, you have to know what your propagandist adversary is doing, in detail and in real time. If you don’t have that critical function taken care of, you’re going to be forever reactive, which means you’re probably going to lose.


PS: Up your security game

Hacking and leaking — which is one of the more effective ways to dox someone —  has become a propaganda tactic. If you don’t want to be on the wrong end of this, I recommend immediately doing the following easy things:

  • Enable 2-step logins on your email and other important accounts.
  • Learn to recognize phishing.

I suspect this would prevent 70%-90% of hacking and doxxing attempts. It would have saved John Podesta. Here’s lots more on easy ways to protect yourself.

Stay safe out there, and good luck.

What do Journalists do with Documents?

Many people have realized that natural language processing (NLP) techniques could be extraordinarily helpful to journalists who need to deal with large volumes of documents or other text data. But although there have been many experiments and much speculation, almost no one has built NLP tools that journalists actually use. In part, this is because computer scientists haven’t had a good description of the problems journalists actually face. This talk and paper, presented at the Computation + Journalism Symposium, are one attempt to remedy that. (Talk slides here.)

This all comes out of my experience both building and using Overview, an open source document mining system built specifically for investigative journalists. The paper summarizes every story completed with Overview, and also discusses the five cases I know where journalists used custom NLP code to get the story done.

Stories done with Overview

The talk is more focused on the lessons learned — all the things I wish I had known when I started writing NLP code for journalism six years ago. I recommend six research themes for computer scientists who want to help journalists:

Robust import. Preparing documents for analysis is a much bigger problem than is generally appreciated. Even structured data like email is often delivered on paper.

Robust analysis. Journalists routinely deal with unbelievably dirty documents. OCR error confounds classic algorithms. Shorthand and jargon break dictionaries and parsers.

Search, not exploration. Reporters are usually looking for something, but it may not be something that is easy to express in a keyword search. The ultimate example is “corruption,” which you can’t just type into a search box.

Quantitative summaries. Journalists have long produced stories by counting the number of documents of a certain type. How can we make this easy, flexible, and accurate?

Interactive methods. Even with NLP, document-based reporting requires extensive human reading. How do we best integrate machine and human intelligence in an interactive loop?

Clarity and Accuracy. Journalists are accountable to the public for their results. They must be able to explain how they got their answer, and how they know the answer is right.

I am currently compiling test sets of real-world documents that journalists have encountered, to help researchers who want to work on these problems. Contact me if you’re interested! I’d also like to take this opportunity to point out that Overview has an analysis plugin API, so if you’re doing work that you want journalists to use, this is one easy way to get a UI around it, and get it shipping with a widely-used tool.

The Dark Clouds of Financial Cryptography

I feel we’re on the precipice of some delightfully weird and possibly very alarming developments at the intersection of code and money.  There is something deep in the rules that is getting rewritten, only we can’t quite see how yet. I’ve had this feeling before, as a self-described Cypherpunk in the 1990s. We knew or hoped that encrypted communication would change global politics, but we didn’t quite know how yet. And then Wikileaks happened. As Bruce Sterling wrote at the time,

At last — at long last — the homemade nitroglycerin in the old cypherpunks blast shack has gone off.

That was exactly how I felt when that first SIGACT dump hit the net; by then I was a newly hired editor at the Associated Press. Now I’m studying finance, and I can’t shake the feeling that cryptocurrencies — and their abstracted cousins, “smart contracts” and other computational financial instruments — are another explosion of weirdness waiting to happen.

I’m hardly alone in this. Lots of technologists think the “block chain” pioneered by bitcoin is going to be consequential. But I think they think this for the wrong reasons. Bitcoin itself is never going to replace our current system of money transfer and clearing; it’s much slower than existing payment systems, often more expensive, uses far too much energy, and doesn’t scale well. Rather, bitcoin is just a taste, a hint: it shows that we can mix computers and money in surprising and consequential ways. And there are more ominous portents, such as contracts that are actually code and the very first “distributed autonomous organizations.” But we’ll get to that.

What is clear is that we are turning capitalism into code — trading systems, economic policy, financial instruments, even money itself — and this is going to change a lot of things.

The question I always come to is this: what do we want our money to do? Code is also policy, because it constrains what people can and cannot do, and monetary code is economic policy. But code is not all powerful, which is where the bitcoin techno-libertarian ethos goes wrong. What I’ve learned since my Cypherpunk days is that we need to decide now what happens when the code fails, because eventually there will be a situation that will have to be resolved by law and politics. We should design for this rather than trying to avoid it. And this time around, there’s an even weirder twist: when we start describing financial contracts in code, we lose the ability to predict what they’ll do!

What does bitcoin get you anyway?

We’ve had electronic cash since well before I was born. We use it every day: bank balances, credit cards, and all the rest. Here are some things that these systems do:

  • Transfer money without anything physical changing hands.
  • Security. It’s not possible to take back spent money unilaterally, or spend the same money twice.
  • Controlled money supply. You can’t mint your own.

These are the new features that bitcoin adds:

  • Pseudo-anonymity. Parties to a transaction are identified only by a public key.
  • A public ledger of all transactions that provides cryptographic proof of ownership. This is the “block chain.”
  • Decentralization. The security of the system does not depend on the honesty of any single authority, but only honest action by the majority of nodes.
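The public-ledger idea can be sketched in a few lines of Python. This is a toy, not the real data structure — actual bitcoin blocks carry timestamps, a Merkle root of transactions, and a proof-of-work nonce — but it shows how hash-chaining makes the history tamper-evident:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's full contents, including the previous block's hash,
    # so altering any past transaction changes every hash after it.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transactions})

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

# Tampering with an old transaction breaks the link to every later block.
tampered = [dict(b) for b in chain]
tampered[0]["tx"] = [{"from": "alice", "to": "mallory", "amount": 5}]
print(block_hash(tampered[0]) == tampered[1]["prev"])  # prints False
```

Anyone holding a copy of the chain can detect a rewritten transaction, because every block commits to the hash of the one before it.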

This is private, secure, global digital money without governments or banks, and you never need to trust in the honesty and competence of any one institution! It’s a really neat trick, exactly the sort of magic that first drew me to cryptography. You can learn how the trick is done in places like here, but part of the trick is that it’s not just cryptography. It’s also a clever alignment of incentives. It’s financial cryptography.

Here’s the core innovation: it is possible to use your computer to “mine” bitcoin — that is, create new money for yourself — and this mining operation simultaneously maintains the integrity of the global distributed ledger. This is a profound thing. It means that the distributed network operators (bitcoin “nodes”) get paid, and indeed bitcoin miners collectively made something like six billion dollars in the last year. It also means that control of the money supply is distributed, which makes it very unlike central bank money. This is the first place that the politics gets weird.

Control of the money supply

If you’re a government, you want to control your money. Traditionally, central banks do this to balance economic outcomes like unemployment and inflation. They’ve also created money for more drastic and often disreputable purposes like funding wars, inflating away debts or influencing the balance of trade. There’s a lot of destructive stuff you can do with the power to print money, which is one reason why states guard their monopoly closely. There are laws against counterfeiting.

But in the bitcoin scheme of things, money is created by anyone who can solve a specific type of computational puzzle. More specifically, you have to invert a hash function, a problem that can only really be solved by brute-force guessing — a massive amount of guessing, something like 400 years on a standard PC. In other words, you pay for freshly minted bitcoins with computer time, an extremely capital- and energy-intensive process. This in no way challenges the primacy of accumulated wealth; it’s a fundamentally conservative amendment to capitalism.
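That guessing game can be written down in a few lines of Python. This is an illustration only — real bitcoin applies double SHA-256 to an 80-byte block header against a vastly harder target — but the structure, guessing nonces until the hash falls below a threshold, is the same:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until SHA-256(header + nonce) falls below a
    target with `difficulty_bits` leading zero bits. There is no shortcut:
    every guess is independent, so expected work doubles per extra bit."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"toy block header", 16)  # ~65,000 guesses on average; fast here
```

At 16 difficulty bits this runs in well under a second; each additional bit doubles the expected work, which is how the cost of minting money is made arbitrarily steep.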

The whole point of this is to put an inviolate limit on how fast new coin can be created. You need a lot of resources to mine a little bitcoin — just like mining gold. In fact, the protocol automatically adjusts the difficulty of the hash problem so that new coins always get created at about the same rate, which means blocks are added to the chain at a constant rate, about ten minutes per block, no matter how many computers people throw at the problem, or how fast our computers get. And today, the bitcoin mining industry uses data centers full of custom chips that collectively dwarf the largest supercomputers. All doing essentially nothing except being expensive, which turns out to be a foolproof method of trust-less distributed control.
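The retargeting rule itself is simple enough to sketch. The version below is a simplified assumption-laden toy: real bitcoin encodes the target in a compact “bits” field and measures elapsed time from block timestamps, but the core logic — scale the target by the ratio of actual to expected time, clamped to a factor of four per adjustment — is as stated:

```python
# Simplified sketch of bitcoin's per-2016-block difficulty retargeting.
BLOCKS_PER_RETARGET = 2016
SECONDS_PER_BLOCK = 600  # the ten-minute design goal
EXPECTED_SECONDS = BLOCKS_PER_RETARGET * SECONDS_PER_BLOCK

def retarget(old_target: int, actual_seconds: int) -> int:
    # If the last 2016 blocks arrived faster than expected, hash power grew,
    # so shrink the target (a smaller target is a harder puzzle); if they
    # arrived slower, enlarge it. The change is clamped to 4x either way.
    ratio = max(0.25, min(4.0, actual_seconds / EXPECTED_SECONDS))
    return int(old_target * ratio)
```

So a mining arms race buys no extra coins: doubling the world’s hash power just halves the target at the next adjustment, and block production settles back to one every ten minutes.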

Sometimes a little trust gets you a lot, like a stable money supply without using as much electricity as a small country. Using trust as a design element is a hard concept for hard-core cryptographers, whose protocols are suspicious by design. But of course finance has always run on trust; there would be no credit without it, and there’s no credit in bitcoin either. No one ever has a negative balance, or even a balance at all, just ownership of tokens.

For the moment, mining is a profitable business and both the quantity of mining and the price of bitcoin are increasing. Which is good news for bitcoin users, because the work of mining is also the work of keeping the transaction processing network running; that’s why anyone bothers to process your bitcoin payments. That’s the sort of incentive engineering you get to do when money and code mix.

So the whole system is releasing new coins at a more or less constant rate, no one can speed it up no matter how much they spend, and it would be impossible to stop it without shutting computers down all over the world. Given bitcoin’s libertarian leanings, perhaps it’s not surprising that this is very much in line with Milton Friedman’s theory that a steady percentage increase in the money supply is the best policy. Then again, a constant coin mining rate also means a constant transaction processing rate, so perhaps this is merely a convenient choice for a payments system.

Either way, this arrangement is economic policy written in code. If the whole world ran on bitcoin, there would be no way to manage recessions — for example, to increase employment — by adjusting the money supply.

The geopolitics of the block chain

All cryptography has politics, and bitcoin is no exception. It appeals particularly to a certain sort of techno-libertarian: why should the banks say what is money and when we can trade it? Why should they make all of our financial privacy decisions for us? And why should we have to trust any one person with our money?

But then again, these may not be particularly motivating problems for most people. Although many merchants will now accept bitcoin for a wide variety of goods and services, like cash it’s well suited to shady deals — especially given its global and anonymous nature. It’s impossible to say for sure, because that’s what anonymity is, but gray markets are likely the predominant use. We do know that 70% of global trading volume and more than 50% of mining occur in China. This may be nothing more than peer-to-peer commerce, or it may indicate that bitcoin is at the center of a Renminbi-denominated hawala network of underground money transfers.

China, and other governments, have unsurprisingly taken steps to discourage bitcoin use, and bitcoin is now restricted or officially banned in many countries. There are potentially good reasons for this, such as the ability to prevent terrorist financing and money laundering, just as the international financial system has implemented progressively tighter controls. There are also potentially bad reasons to restrict bitcoin, depending on your politics: state mismanagement of the money supply, protections for incumbent banks, pernicious regulation of capital flows, or authoritarian surveillance of commerce.

But if you have an uncensored internet connection and the right software, no one can stop you from trading in bitcoin. Once again, the network proves to be a great equalizer between citizens and states. The Cypherpunks understood very early on that encrypted communication enabled uncensorable distributed coordination, and that this would challenge the power of states. But cryptographic money promises something even more revolutionary: state-free trade, economically significant transfers of cold hard currency. It’s a much bigger hammer.

What we didn’t think carefully enough about, back then, was who would be using these tools. Encrypted communication has supported the toppling of autocratic regimes, but it also supports terrorism. Bitcoin miners, too, might have diverse goals.

The cryptographic consensus algorithm currently in use by every bitcoin node dictates that the majority defines which transactions get added to the global ledger and hence validated. Which means that Chinese miners now effectively control bitcoin. In principle, all Chinese operators could collude to allow double spending of their coins. Or they could “hard fork” the protocol at any moment simply by adopting a new standard. Everyone else would have to go along, or their existing bitcoins would be worthless.

Thus the bitcoin protocol is already the subject of international diplomacy, as when American entrepreneurs visited China to lobby for capacity-enhancing changes (they failed). Running a specific version of the bitcoin software and maintaining a specific version of the ledger data is in effect a vote — which hasn’t prevented vigorous arguments and campaigns about how those votes should be cast.

Meanwhile the “bitcoin core” developers also have substantial but not absolute influence, as they maintain the standard open source implementation of the protocol. They can’t make anyone go along with changes, but it sure could be inconvenient if you didn’t want to. And what happens if they don’t all agree?

From cryptocurrency to crypto-contracts

The cryptographic innovations of bitcoin are public and easily copied, and profitable if you get in first on a successful new currency. So naturally there has been a dizzying array of “altcoin” implementations with varying degrees of adoption and stability. The most interesting altcoins add new features, such as extended capacity or new transaction types.

But there’s one altcoin that does something truly new and interesting: Ethereum allows software-defined transactions. That is: a transaction can contain code which executes to determine who gets paid what, or more generally to perform any computation and store the results in the public ledger (block chain). The Ethereum Foundation, a Swiss non-profit, says that Ethereum is a “decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.”

This is science-fiction stuff. First computational contracts, then AI lawyers, all executing on the open source Justice operating system… We’re not quite there yet, but Ethereum is the proof of concept. Like bitcoin, it extrapolates legitimately interesting technical innovation into a soaring anti-authoritarian dream. A “smart contract” is a financial contract defined by code. You cryptographically sign onto it, perhaps by making a payment, and the contract then executes on the network, does its calculations, and ultimately makes payouts. As long as the majority of the Ethereum network is operating honestly, you get paid exactly what the code says you will get paid; neither the seller nor anyone else can alter the terms after the fact. No courts are needed to enforce the terms, no intermediaries are involved, no trust is required.

If bitcoin is state-free money, smart contracts are state-free financial instruments that are fully transparent and make fraud impossible. Except no, of course not, just like encryption didn’t make anonymity easy, and for the same reason: there are systems outside the computer.

Cryptocurrencies are made of people

In one sense Ethereum’s libertarian promise is all true: just as bitcoin nodes validate transactions by consensus, all Ethereum nodes collectively enforce the code-as-contract guarantee. In another sense it’s completely bogus: the computers are still controlled by people, as a significant hack demonstrated.

The story goes like this: the Ethereum community crowd-funded an initial investment of $150 million to seed a “distributed autonomous organization,” or DAO. This was one of the core visions that excited Ethereum proponents, and a DAO is probably the most cyberpunk entity of all time. As CoinDesk described it:

It’s likely best to think of The DAO — which is sometimes also referred to as The DAO Hub — as a tightly packed collection of smart contracts written on the Ethereum blockchain.

Taken collectively, the smart contracts amount to a series of by-laws and other founding documents that determine how its constituency — anyone around the world who has bought DAO tokens with ethers — votes on decisions, allocates resources and in theory, creates a wide-range of possible returns.

Unlike a traditional company that has a designated managerial structure, The DAO operates through collective voting, and is owned by everyone who’s purchased a DAO token. On top of that structure is a group of so-called Curators that can be elected or removed by DAO token holders.

Got that? It was a fund where the choice of investments, the election of officers, and other such matters were all done by submitting votes as transactions on the Ethereum block chain, to be interpreted by code previously placed there. And just to make it even more like a William Gibson novel, nobody knew exactly who created this entity! Of course there was an Ethereum address attached to the code that created the DAO, but Ethereum addresses are anonymous.

And then it was hacked. Maybe. Depending on your point of view. What actually happened is that someone found and exploited a subtle bug in the DAO’s code and caused it to pay out the equivalent of $60 million to their Ethereum account.

Software is a subtle thing, and it’s extremely difficult to write bug-free code on the first go. It’s even harder to write code that will stand up against a malicious attacker who stands to make a life-altering amount of money if they break your system. In the event, the bug involved a problem with re-entrancy in the payout function, for which the proposed solution was to guard your disbursement code with mutexes. If you’re not a computer scientist, this is just technical detail. If you are a computer scientist, your skin is crawling. Long experience has shown that correct reasoning about these types of problems is nearly impossible for mere mortals. If smart contracts require super-human intelligence to prevent fraud, we’re in trouble.
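For the technically curious, here is a minimal sketch of the re-entrancy pattern, a toy model of my own in Python rather than the Solidity the real contract was written in. The fatal flaw is the ordering: the fund pays out before updating its books, so a malicious recipient can call back in and drain funds deposited by others.

```python
class VulnerableFund:
    """Toy model of a re-entrancy bug: funds are sent out *before* the
    caller's balance is zeroed."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.vault = sum(balances.values())   # total funds actually held

    def withdraw(self, who, receive_callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.vault >= amount:
            self.vault -= amount
            receive_callback(amount)          # hands control to the caller...
            self.balances[who] = 0            # ...and only then updates its books


def drain(fund, who):
    """Re-enter withdraw() from inside the payout callback until the
    vault is empty."""
    stolen = []

    def on_receive(amount):
        stolen.append(amount)
        if fund.vault >= amount:              # keep going while funds remain
            fund.withdraw(who, on_receive)

    fund.withdraw(who, on_receive)
    return sum(stolen)


fund = VulnerableFund({"attacker": 10, "honest": 90})
loot = drain(fund, "attacker")   # the attacker deposited 10, but takes everything
```

The guard-your-disbursement-code fix amounts to updating the balance before paying out (or blocking re-entry entirely), which is exactly the ordering this toy gets wrong.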

What happened next is even more interesting. The human-language contract explaining the terms of buying into the DAO explicitly stated that the code was the final legal word. So perhaps the DAO code was buggy, but caveat emptor, the hacker played by the rules of the game and got paid. Maybe they weren’t even a hacker, but merely a savvy investor who understood a little-known clause in the contract. If the code is law, then whatever the code allows is legal by definition. Ultimately, the morality of this move is a question outside of the code itself. And that’s the problem.

The majority of people involved in Ethereum felt that investors should get their money back. But a sizable minority disagreed — they truly believed in this “code as law” model. And so both the community and the block chain split: there was a hard fork. To this day, there are two parallel Ethereum ledgers. In one world, the DAO hack was reversed. In the other universe, now called Ethereum Classic, the hacker got to keep their money.

Real financial contracts have lawyers and courts and precedents that provide a procedure for resolving disputes, and give investors reasonable safeguards. Discarding those institutional frameworks has a cost.

Turing’s Demons

Computer code can do arbitrarily subtle and weird things. This is a deep property of computation that was understood even before the first electronic computers were built. Once a programming system reaches a minimal threshold of complexity, known as Turing completeness, there is a set of inter-related theorems that say, basically, you will never be able to tell what a program does without actually running it. It follows that it will always be possible to hide something malicious in financial code. If code is law, you’re going to get scammed. Legally.
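A toy illustration of my own (hypothetical, in Python rather than a contract language): the malicious branch below is in plain sight, but deciding whether any input ever triggers it amounts to inverting a cryptographic hash, which no amount of inspection will do.

```python
import hashlib

def payout_amount(base_amount, memo):
    """A hypothetical contract clause with a hidden jackpot. The scam
    branch is visible, but determining whether any `memo` ever reaches it
    means inverting SHA-256: inspection and static analysis won't tell you."""
    digest = hashlib.sha256(memo.encode()).hexdigest()
    if digest.startswith("0000000000"):   # only the author knows a trigger, if one exists
        return base_amount * 1000         # the hidden scam payout
    return base_amount

# Ordinary inputs behave honestly -- which is exactly the problem.
amount = payout_amount(5, "routine invoice #1234")
```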

One solution is to avoid general purpose code. There are strong parallels to recent computer security research that argues that all user inputs to a system should be limited to very restricted, non-programmable languages. It’s just impossible to secure arbitrary code. And indeed, Wall Street already has purpose-specific languages for specifying financial contracts (such as derivatives) without invoking the disturbing power of general computation.

Turing completeness is a gateway to the dark powers. It freaks me out to imagine traders submitting contracts written in code to an exchange. It’s already tricky to untangle the web of counter-parties, derivatives, and legal triggers that can lead to cascading crashes in the financial system — just wait until we throw code in there.

But we’re not going to be able to avoid all code. Even if traders aren’t allowed to use it to create new contracts, we need it for infrastructure. Every stock market, every financial exchange has code, and algorithmic trading is an entire industry. An increasing fraction of global transactions are handled by computers without any human review or intervention. This has led to weird behavior such as flash crashes — which are still unexplained, even in very simple, usually very stable markets like US Treasury securities. There isn’t even a single master audit record of every trade made, and there won’t be for years.

It gets even weirder when you add incentivized humans to the mix: financial players are going to exploit every edge case they can find. There’s a passage from Michael Lewis’ Flash Boys which describes the difficulty in setting up an exchange that can’t be gamed:

Creating a new stock exchange is a bit like creating a casino: Its creator needs to ensure that the casino cannot in some way be exploitable by the patrons. Or, at worst, he needs to know exactly how his system might be exploited, so that he might monitor the exploitation— as a casino monitors card counting at the blackjack tables. “You are designing a system,” said Puz, “and you don’t want the system to be gameable.” … From the point of view of the most sophisticated traders, the stock market wasn’t a mechanism for channeling capital to productive enterprise but a puzzle to be solved.

The designers of this new exchange, now known as IEX, spent months studying every type of order that could be submitted to other stock markets, and how these were exploited to rip off productive investors to the benefit of high-frequency traders (HFT). This is of course a moral judgment, and a judgment about what type of investor to privilege — and there are massive ongoing arguments about whether the current market structure that allows HFT is “fair.” But even if you know what you want your code to do, there’s no guarantee you’re going to get it. IEX found it incredibly difficult to avoid loopholes that could advantage high-frequency traders.

We are now, today, in our lifetimes, undertaking the process of turning capitalism into code. The code running our markets determines, literally, what is possible and who gets paid. Already, the cutting-edge of finance is basically nothing like “investing” as we usually think of it. It’s far more like hacking: find the properties of a complex system that get you the most money. Anything the exchange lets you do is legal, more or less. There are laws against market “manipulation” such as spoofing, but these terms are poorly defined ideas of fairness that don’t have simple technical definitions. Anyway, that’s only a problem if lawyers and regulators get involved. The code allows it.

I want our financial markets to be stable, transparent and fair. I want them to reward something other than the clever manipulation of an abstract system. And so I would argue for extreme simplicity in our electronic markets. Even very simple rules can spiral into complex consequences. Chess has more complex rules than Go, but it took computers 20 years longer to beat humans at Go. Recent game-theoretic work on algorithmic trading suggests that it’s going to be very hard to stabilize even very simple programs interacting with each other and with greedy humans.

Politics always wins in the end

Bitcoin and Ethereum are a kind of counter-power to established systems of money and finance. In the sense that many things are wrong with the current system and the powers-that-be are very hard to challenge, this is exciting. But the mistake is to think that code is enough. Wikileaks was premised on using cryptographic anonymization to protect their sources, but then Manning confessed to a freelance journalist. And all the encryption in the world could not protect Snowden from the NSA’s long reach; that required the cooperation of the Russian government.

The modern, automated stock market has already been gamed. In April 2016, an individual from Pakistan uploaded a fake document to the SEC’s EDGAR website, where public companies post their legally-required disclosures. Automated bots read the document and immediately traded on the false information, moving the stock price. The “hacker” made $425,000 in a matter of minutes, before anyone realized what was going on.

The Pakistan case was straightforward fraud, but it’s only going to get weirder. I have the Cypherpunk premonition again. Crypto-contracts shift the balance of power, and once again, a small group of people — this time, financial cryptographers — is playing with home-made nitroglycerine. Eventually it will blow up. Eventually, someone will do something with global consequences. Maybe they’ll make off with billions; maybe they’ll crash the economy of an entire country, or the world.

This will start a really big fight. And the lesson I’ve learned is that the code, while powerful, never has the last word. Eventually, there will have to be a legal and political settlement about what code the global financial markets should run on.  Ultimately, the code runs on people, not the other way around.

Yet code still has enormous influence. Financial technologists are now engaged in writing the code that will determine the future shape of the economy. Code is like architecture: it’s a built environment that determines where you can and cannot go. The code that the markets run on implicitly determines our economic policy. It sets the shape of financial hacking. It very literally decides who gets what.

So what economic policy do we want our code to embody? And given the complexity of computation, how can we be sure that this is what our code actually does? The answer is that we probably can’t, and the only solution is to get clear about our goals, and the legal and political mechanisms for resolving our arguments, before we inevitably discover that our software allows something we never intended.

I can do no better than to end with a quote from security researcher Eleanor Saitta:

Repeat after me: all technical problems of sufficient scope or impact are actually political problems first.


The Origin of Banking

There is a just-so story that explains the existence of money. Before money, the story goes, we all had to barter for the goods we wanted. If I wanted wheat and had chickens, I needed to find someone who wanted chickens and had extra wheat. Money solves this “double coincidence” problem by letting me sell my chickens to buy your wheat. If we didn’t have money we’d invent it immediately.

The problem with this simple story is that it may not match history. There has never been a pure barter economy, according to anthropologists. Pre-money economies were organized in a variety of other ways, including central planning, informal gift economies, and IOUs denominated in cows.

So it’s with both delight and skepticism that I read the chapters from Hicks’ A Market Theory of Money for Prof. Mehrling’s Understanding Global Money course. Delight, because Sir John Hicks was a major figure in 20th Century economics who eventually won a Nobel, and here at last is a straightforward story that explains why we have banks at all. Skepticism, because it’s not clear to me that this account is historically grounded – or that we can understand what a modern bank does, or should do, on the basis of historical parable.

With that cautionary note, here’s Hicks’ story of banking. He begins in a world where money is already the usual form of payment, and breaks down a transaction into three pieces:

  1. Buyer and seller reach an agreement on what is to be sold at what price
  2. Seller delivers the goods
  3. Buyer delivers the cash

Step 1 has to come first, but payment and delivery may come in any order at any time after that, depending on the agreement that the parties made. The gap between contracting and payment is credit. Credit is a very old idea, and central to modern economies. Hicks argues that “payment on the spot” is actually the uncommon case, at least for orders over a certain size:

I may pay spot for a newspaper as I walk along the street, but I may also give an order to a newsagent to deliver a copy to my house each morning. I should not then pay for each issue as I received it; I should wait until the end of the month when he sent in his bill. … It is probably true that only for small transactions – small that is, from the point of view of one or other of the parties concerned – that the spot method of payment is ordinarily preferred. People are not, and never have been, in the habit of carrying about them a sufficient quantity of coin or notes to pay for a house or pay for furnishing it.

The key observation is that credit is typical, not extraordinary. Any time we pay a bill – whether at a restaurant or with a credit card – we have been extended credit.

In the gap between contracting and payment there is debt, and debt is measured in money (at least on the buyer’s side; for the seller, debt is measured in goods or services owed.) There has long been argument over what exactly money is, or more usefully what it does, but in a credit-based theory of the economy it has two clear roles:

We seem thus to be left with two distinguishing functions of money: standard of value and medium of payment. Are they independent, or does one imply the other? It is not easy to see that there can be payment, of a debt expressed in money, unless money as a standard has already been implied in the debt that is to be paid. So money as a means of payment implies money as a standard. But could a debt expressed in money be discharged other than in money? Surely it could.

It could for instance be set off against another debt, the debt from A to B being cancelled against a debt from B to A.

This is Hicks’ entry into the concept of an IOU, which seems to be fundamental to modern finance – perhaps the fundamental idea, the notion underlying every financial instrument of every kind. Yes, you can pay money to settle a debt, but you can also cancel one debt against another, netting the debts. This means that a debt owed to you has monetary value! From there, it’s a small step to the idea that a third party debt can be used as a form of payment. Suppose B owes A a debt, and C owes B a debt of a different amount.

A is then asked to accept part payment in the form of a debt from C to B, which is to offset the balance of debt between A and B, a balance we take to be in favour of A. But A can hardly be expected to consent to such an arrangement unless he considers that C is to be trusted. So there is a question of trust, or confidence, as soon as a third party is brought in.

This short paragraph states a pattern that has been at the core of trade for centuries, and is at the core of finance today: the transferability of debts made possible by the assurance of good credit. This was a common pattern in the trade fairs of Renaissance Italy, where merchants would meet to settle tangled webs of IOUs with each other and with the banks. It happens today when a bank B lends money to A to buy a house, creating a mortgage debt from A to the bank, then sells the right to collect that debt to another bank C. For this to happen, B has to guarantee to C that A is creditworthy enough to repay.  It’s less obvious, but equally applicable, when A pays B by check. B doesn’t have “money” when they have the check, but a promise from A’s bank to pay. But we’re not there yet. Here’s how Hicks builds up to tradable debt:

The quality of debt from a particular trader depends on his reputation: it will regularly be assessed more highly by those who are in the habit of dealing with him, and know that he is accustomed to keeping his promises, than by those who do not have the advantage of this information.

Thus the value of a debt is sensitive to information. It’s not clear to me whether anyone would have used this language in, say, Renaissance Italy. Hicks, writing in 1989, would have been influenced by recent, eventually Nobel-winning work on information in economics. The information view of value explains how formerly solid debt-based assets – for example, mortgage-backed securities – can evaporate almost instantly if there is a credible threat of non-payment. Although debt is tradable like money in the good times, in bad times everyone wants hard currency, not promises to pay currency.

Hicks says that this information problem – really a reputation problem – leads to the creation of a market for guarantees that a bill will be paid.

Thus we may think of each trader as having a circle of traders around him, who have a high degree of confidence in him, so that they are ready to accept his promises at full face value or near it; there is no obstacle to offsetting of debts within that circle from lack of confidence in promises being performed. If he wants to make purchases outside his circle he will not be so well placed. Circles however may overlap: though C is outside A’s circle, he may be within the circle of D, who himself is inside the circle of A. Then though A would not accept a debt from C if offered directly, he may be brought to accept it if it is guaranteed by D, whom he knows. D is then performing a service to A, for which he may be expected to charge.

This is a market for acceptances of bills of exchange, a financial instrument that seems to be uncommon today, but was quite common in 19th century merchant banking, as described by Bagehot. From the merchant’s point of view, the objective of all of this is to get paid sooner. Suppose you sell goods to B in exchange for a bill with a specified due date, say 60 days from now. That’s a debt, and one way to use it is just to wait 60 days until B pays cash. Or you could trade that debt immediately with any person C who thinks B trustworthy – perhaps for cash, perhaps to settle a debt with C, perhaps to make a purchase. If you wanted to trade with someone who didn’t know B, you could first buy an acceptance and then trade the bill together with the acceptance – the debt together with its guarantee.

An acceptance is basically insurance: a guarantee that a bill will be paid, in fact the promise to pay the bill if the debtor defaults. The guarantor is willing to do this because they know, or have some way to evaluate, the reputation of the debtor, and because they charge a fee for the service. The equivalent modern instrument would be something like a “credit default swap.”
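Priced as simple insurance, the fee for an acceptance would be at least the expected loss from default plus a margin. This is a toy expected-value rule of my own, not how real acceptances or credit default swaps are quoted:

```python
def acceptance_fee(face_value, default_probability, margin=0.10):
    """Minimum fee a guarantor should charge for accepting (guaranteeing)
    a bill: the expected loss if the debtor defaults, plus a profit
    margin. A toy expected-value rule, not real market pricing."""
    expected_loss = face_value * default_probability
    return expected_loss * (1 + margin)

# Guaranteeing a 1000 bill from a debtor with a 2% chance of default:
fee = acceptance_fee(1000, 0.02)   # about 22
```

The guarantor can only quote such a fee because they know the debtor's reputation, which is the informational service being sold.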

Notice that we don’t have banks yet. Instead, this story tells of the creation of the first markets for debt, facilitated by intermediaries who guarantee repayment. And here’s where I start to hesitate. It’s a nice story, certainly an account of how this could have happened. But I’m not enough of a historian to know if this is how it did happen. Like barter, this could be an attractive myth. Still, it’s a useful account of a problem to be solved – how do I trade my debt assets outside the small circle of people who know the debtor personally? For if debt is tradable, then I suddenly have a lot more capital at my disposal: not just the cash I’m currently holding, but all of the cash that is owed me.

According to Hicks, the next step in the story of banking is the creation of two special kinds of intermediaries. The first sells guarantees (acceptances) on debt. They either know many debtors well, like a credit rating agency, or they know where to buy acceptances from someone who does know. The other kind of intermediary pays cash for debts along with their acceptances, at a discount from the face value of the debt, that is, a fee.

Until that point, the principal reason why the market value of one bill should differ from another is the difference in reliability; but bills, between which no difference in reliability is perceived, may still differ in maturity. A trader who is in need of cash needs it now, not (say) six months hence. So there is a discount on a prime bill which is a pure matter of time preference – a pure rate of interest.

Here Hicks is distinguishing between what would today be called credit risk, the risk of non-repayment which is solved by acceptances, and “time preference,” that is, the advantage of having cash now instead of later, which costs interest. These two components of price (and more besides such as liquidity risk) are implicit any time debt is sold. Hicks believes that separating these out is a necessary step to explaining how banking arose:

The trouble is that the establishment of a competitive market for simple lending is not at all a simple matter. The lender is paying spot, for a promise the execution of which is, by definition, in the future. Some degree of confidence in the borrower’s creditworthiness – not just his intention to pay but his ability to pay, as it will be in the future – is thus essential to it. There cannot be a competitive market for loans without some of this assurance.

It’s a fine argument, and indeed there is always both credit risk and time preference in lending (or buying debt for cash, which is nearly the same thing.) But I’m not convinced this is a historical account. Did the biblical money lenders really distinguish between credit risk and time preference when they set their interest rates? I suspect the answer is no. Yet surely there were reasons that the first money lenders came into existence – that is, motivating problems that offer hints as to what banks do. It may be possible to offer an account of the creation of banking that is simpler, more motivating, and more historical than Hicks’ story of acceptances and discounts.
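The time-preference discount Hicks describes is just present-value arithmetic. Here is a sketch, using simple interest over a 360-day year as money markets conventionally do (the convention is my assumption, not Hicks'):

```python
def discounted_price(face_value, annual_rate, days_to_maturity):
    """Today's cash price of a reliable ("prime") bill: the face value
    discounted at a pure rate of interest for the time left to run.
    Uses simple interest over a 360-day year, a common money-market
    convention."""
    t = days_to_maturity / 360
    return face_value / (1 + annual_rate * t)

# A prime bill for 1000 due in six months, discounted at 5% per year:
price = discounted_price(1000, 0.05, 180)   # a bit over 975
```

The gap between 975 and 1000 is pure time preference: the cost of having the cash now rather than in six months.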

From a network of discounters for bills – intermediaries who are willing to lend cash against a guaranteed debt – we finally come to banks proper. Hicks explains the origin of banking by asking how trade volume could ever increase when there’s a fixed supply of cash among merchants:

What then is to happen if trade expands, so that more bills are drawn, and more come in to be discounted? Where is the extra cash that is needed to come from? Any one of the dealers could get more cash by getting other dealers to discount bills that he holds. But the whole body of dealers could not get more that way. They must get cash from outside the market. They themselves must become borrowers.

The solution was to combine this business with another sort of business, which in the days of metallic money we know to have already made its appearance.

This other business is goldsmiths, or perhaps moneychangers, both of which would store a customer’s coins in their vault. There has long been a need for secure money storage, something better than cash under the mattress. The innovation of modern banking is to recognize that not every customer will withdraw all their coins all at once.

Then, once that happens, there will be a clear incentive to bring together the two activities – lending to the market, and ‘borrowing’ as a custodian from the general public – for the second provides the funds which in the first are needed. At that point the combined concern will indeed have been becoming a bank.

This, then, is the essence of banking: take deposits, make loans. Crucially, a loan is in fact borrowing from the depositors and lending at a different time scale (maturity transformation). The debtor pays back the loan at a later date, or perhaps little by little as with a mortgage, but the depositors can demand the whole of their account as cash at any moment. It is only by hoping that not everyone wants their cash back all at the same time that a bank can exist. A bank which did not borrow from its depositors would be incapable of extending credit, at least beyond the capital that its owners are putting in.

This is the “fractional reserve” system which has existed for centuries. Banks are by law allowed to lend at most some large fraction of their deposits, hedging against many depositors asking for their money back all at once (though note, today banks can always borrow more reserves from the central bank, so reserve requirements don’t really constrain lending.)
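The textbook arithmetic of that "large fraction" is the deposit multiplier: each round of lending gets redeposited somewhere and partially lent again. A sketch, with the caveat that this mechanical picture is a simplification of how modern banks actually operate:

```python
def deposit_expansion(initial_deposit, reserve_ratio, rounds=50):
    """Textbook deposit-multiplier arithmetic: each bank keeps
    `reserve_ratio` of what's deposited as reserves and lends the rest,
    which gets redeposited and partially lent again. Total deposits
    approach initial_deposit / reserve_ratio."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)
    return total

total = deposit_expansion(100, 0.10)   # approaches 100 / 0.10 = 1000
```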

On this account, the central features of a bank are:

  • Taking deposits, which can be withdrawn at any time
  • Making loans, which are repaid slowly

In other words: short-term borrowing to finance long term lending. There’s nothing surprising in this description, and it captures the inherent risk-taking in banking, since it may happen that everyone wants their deposits back as cash all at once. Today, US banks are insured up to $250,000 per account by the FDIC, which simultaneously pays out if needed and makes payout less necessary since the guarantee makes bank runs less likely.

But something key is missing: the banks’ central role in the payment system allows them to create high quality tradeable debt — that is, bank deposits. For most people, bank balances are money, therefore a bank can create money. To explain how, Hicks examines the evolution of a bank’s debt to its depositors.

It would however always have happened that when cash was deposited in the bank, some form of receipt would be given by the bank. If the receipt were made transferrable, it could itself be used in payment of debt, and that should be safer [than moving cash around physically.]

Hicks uses this idea of a “receipt” to trace the development of the bank check, and from there the modern reality that bank deposits are money as far as you and I are concerned. Consider how one person pays another through the banking system:

It would at first be necessary for the payer to give an order to his bank, then to notify the payee that he had done so, then for the payee to collect from the bank. Later it was discovered that so much correspondence was not needed. A single document, sent by debtor to creditor, instructing the creditor to collect from the bank, would suffice. It would be the bank’s business to inform the creditor whether or not the instruction was accepted, whether (that is) the debtor had enough in his account in the bank to be able to pay.

The key point is that with a check – or with any of our modern means of electronic payment – no cash is ever withdrawn from the bank! We have simply rewritten the amounts owed by the banks to each customer. That is, if I pay you, my bank owes me less cash and your bank owes you more. It would be no problem if in fact my cash had been loaned out to someone else during this whole time.
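The mechanics reduce to rewriting ledger entries. A toy model of my own, with hypothetical names and no interbank settlement (which in reality happens between the banks' own accounts):

```python
class ToyBank:
    """A bank as nothing but a table of debts to its customers."""
    def __init__(self, deposits):
        self.deposits = dict(deposits)

def pay_by_check(payer_bank, payer, payee_bank, payee, amount):
    """No cash moves: each bank simply rewrites what it owes whom."""
    if payer_bank.deposits.get(payer, 0) < amount:
        raise ValueError("insufficient funds: the check bounces")
    payer_bank.deposits[payer] -= amount                 # my bank owes me less
    payee_bank.deposits[payee] = payee_bank.deposits.get(payee, 0) + amount

bank_a = ToyBank({"me": 500})
bank_b = ToyBank({"you": 200})
pay_by_check(bank_a, "me", bank_b, "you", 150)
```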

In fact, banks don’t really loan cash either.

When the bank makes a loan it hands over money, getting a statement of debt (bill, bond, or other security) in return. The money might be taken from cash which the bank had been holding, and in the early days of banking that may have often happened. But it could be all the same to the borrower if what he received was a withdrawable deposit in the bank itself. The bank deposit is money from his point of view, so from his point of view there is nothing special about this transaction. But from the bank’s point of view, it has acquired the security without giving up any cash; the counterpart, in its balance-sheet, is an increase in its liabilities. … But from the point of view of the rest of the economy, the bank has ‘created’ money. This is not to be denied.

Bank deposits are not cash. They are debt to customers. But we are happy to have bank deposits because banks have become the way in which we pay each other. When someone owes us money, we are satisfied with “money in the bank” rather than cash in hand. This all goes back to the tradability of debt that Hicks started with: banks create debt of high quality, that is, debt which can be traded from hand to hand nearly as well as cash, or perhaps better. In that very real sense, banks create money.
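Hicks' balance-sheet point, that a loan creates a deposit without any cash leaving the bank, can be sketched in a few lines (a toy of my own):

```python
class Bank:
    """Toy balance sheet: cash and loans are assets, deposits are
    liabilities to customers."""
    def __init__(self, cash):
        self.cash = cash        # asset
        self.loans = 0          # asset: securities acquired
        self.deposits = {}      # liabilities to customers

    def make_loan(self, borrower, amount):
        # No cash leaves the bank: it books the loan as an asset and
        # credits the borrower's account, a matching liability.
        self.loans += amount
        self.deposits[borrower] = self.deposits.get(borrower, 0) + amount

bank = Bank(cash=1000)
bank.make_loan("borrower", 500)
# The borrower now has 500 of spendable "money in the bank",
# yet the bank's cash holdings are untouched.
```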

Hicks has tried to explain how this combination of features we call banking arose. The story he gives is a progression from debt trading, to acceptances, to loaning cash against bills, to merging with the money storage industry, to a central role in the payment system, to creating money by lending some large fraction of customer deposits. I am not convinced that banks actually arose in this sequence. However, it does highlight some of the needs and problems that led to the creation of banking. Namely: consumers need a place to store money, businesses need credit, and everyone needs a payment system.

But the advantage of Hicks’ telling is that it highlights the central role of credit/debt. As soon as debt can be freely traded to a third party, it is a kind of money. It is this trust that allows a bank to create money. How much money a bank should create is a different question, one Hicks’ parable cannot answer. Rather, I find this pattern of tradable debt useful for thinking about all the different forms that banking takes, including the many financial institutions that don’t call themselves banks.


What Data Can’t Tell Us About Buying Politicians

Corruption in the classic sense is when a politician sells their influence. Quid pro quo, pay to play, or just an old fashioned bribe — whatever you want to call it, this is the smoking gun that every political journalist is trying to find. Recently, data journalists have begun to look for influence peddling using statistical techniques. This is promising, but the data has to be just right, and it’s really hard to turn it into proof.

To illustrate the problems, let’s look at a failure.

On August 23, the Associated Press released a bombshell of a story implying that Clinton was selling access to the US government in exchange for donations to her foundation. I’m impressed by the AP’s initiative in using primary documents to look into a serious question of political ethics. But this is not a good story. It’s already been criticized in various ways. It’s the statistics I want to talk about here — which are, in a word, wrong. (And perhaps the AP now agrees: they changed the headline and deleted the tweet.) Here’s the lede:

At least 85 of 154 people from private interests who met or had phone conversations scheduled with Clinton while she led the State Department donated to her family charity or pledged commitments to its international programs, according to a review of State Department calendars

There’s no question this has the appearance of something fishy. In that sense alone, it’s probably newsworthy. But the deeper question is not about the appearance, but whether there were in fact behind the scenes deals greased by money, and I think that this statistic is not nearly as strong as it seems. It’s fine to report something that looks bad, but I think news organizations also need to clearly explain when the evidence is limited — or maybe not make an ambiguous statistic the third word in the story.

So here, in detail, are the limitations of this type of data and analysis. The first problem is that these 154 are a limited subset of the more than 1700 people she met with. It only counts private citizens, not government representatives, and this material only covers “about half of her four-year tenure.” So this isn’t really a good sample.

But even if the AP had access to Clinton’s complete calendar, counting the number of Clinton Foundation donors still wouldn’t tell us much. There would still be no way to know if donors had any advantage over non-donors. If “pay to play” means anything, it must surely mean that you get something for paying that you wouldn’t otherwise get. In this case, that “something” is a meeting with the Secretary of State.

The simplest way to approach the question of advantage is to use a risk ratio, which is normally used to compare things like the risk of dying of cancer if you are and aren’t a smoker, or of getting shot by police if you’re black vs. white. Here, we’ll compare the probability that you’ll get a meeting if you are a donor to the probability that you’ll get a meeting if you aren’t a donor. The formula looks like this:

risk ratio = P(meeting | donor) / P(meeting | non-donor)

This summarizes the advantage of paying in terms of increasing your chances of getting a meeting. If 100 people paid and 50 got a meeting, while 1000 people didn’t pay and 500 of those still got a meeting, then the risk ratio is (50/100) / (500/1000) = 1, and paying doesn’t help you get a meeting.
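In code the calculation is trivial. Here is a minimal sketch, my own illustration using the hypothetical numbers from the example above:

```python
def risk_ratio(exposed_hits, exposed_total, unexposed_hits, unexposed_total):
    """P(outcome | exposed) divided by P(outcome | unexposed).
    A value of 1 means the 'exposure' (here, donating) confers no advantage."""
    return (exposed_hits / exposed_total) / (unexposed_hits / unexposed_total)

# The hypothetical numbers from the example: 50 of 100 donors got meetings,
# and 500 of 1000 non-donors did too.
print(risk_ratio(50, 100, 500, 1000))  # → 1.0: paying gave no advantage
```

Note that the function needs all four numbers, which is exactly where the AP’s data falls short.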

The problem with the AP’s story is that there was no way for them to compute a risk ratio from meeting records. Clinton met with 85 people who donated to her foundation, and 154 − 85 = 69 who did not. This gives us:

risk ratio = (85 / ?) / (69 / ?)
We’re still missing two numbers! We can’t compute the advantage of paying because we don’t know how many people wanted a meeting, whether they paid, and whether or not they got a meeting. In other words, we need to know who got turned down for a meeting. The calendars and schedules that reporters can get don’t have that information and never will.

Can we conclude anything at all from the AP’s data? Not much. We can say only a few fairly obvious things. If many more than 85 donors wanted a meeting, then the numerator gets small and there appears to be less advantage. On the other hand, if many more than 69 non-donors wanted a meeting, the denominator gets small and it looks worse for Clinton.

We might be able to get some idea of who got turned down by looking at the Clinton Foundation contributors list. That page lists 4277 donors who gave at least $10,000. (Far more gave less, but you have to figure that a meeting costs at least some minimum amount.) Reading through the list of donors, almost all of them are private citizens, not governments. If we imagine that any substantial number of those 4277 donors hoped for a meeting with Clinton, the 85 private donors who did meet with her are at most 2% of those who tried to get a meeting. The numerator in the relative risk formula is small. The denominator might be even smaller if many thousands of people tried to get a meeting using exactly the same channels as the donors but ¯\_(ツ)_/¯ we’ll never know.
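Using the figures above (4,277 listed donors, 85 of whom got meetings), the back-of-the-envelope bound looks like this. It is a sketch that assumes every listed donor wanted a meeting, which of course we don’t know:

```python
donors = 4277        # Clinton Foundation donors listed as giving $10,000+
donor_meetings = 85  # donors who actually got a meeting

# If every listed donor had hoped for a meeting, the donors' success rate
# is at most:
rate = donor_meetings / donors
print(f"{rate:.1%}")  # → 2.0%
```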

In other words, there is no way of finding evidence of “pay to play” by looking only at who got to “play,” without also looking at who got turned down.

The inability to calculate a risk ratio is a problem with many of the types of data that journalists use, though not all. Imagine looking for oil industry influence in a politician’s voting records. If you have good campaign finance data you know how much the oil companies donated to each politician. You also know how each politician voted on bills that affect the industry, so you know when oil money both did and didn’t get results. Meeting records are not like this, because they don’t record the names of the people who wanted to meet with a politician but didn’t.

Then there’s the problem of proving cause. Even when you can compute a relative risk, and the data suggests that more donors than non-donors got a meeting, corruption only happened if the payment caused the meetings. There are all sorts of possible confounding variables that will cause the risk ratio to overestimate the causal effect, that is, overestimate what money buys you. What sort of factors would cause someone both to meet with Clinton and donate to the Clinton Foundation, which does mostly global health work? All sorts of high-level folks might have business on both fronts. For example, there are plenty of people working in global health at the international level, coordinating with governments and so on.


Of course, people working together without the influence of money between them can still be doing terrible things! That is a different type of crime though. It’s not the pervasive money-as-influence-in-politics story that data journalists might hope to find statistically, and that’s the kind of story the AP was after.

Unfortunately, most people don’t think about the influence of money in this way. They only see evidence of an association between money and outcomes, without thinking about 1) those who wanted something and never got it, and 2) factors that would align two people without one paying the other, like shared goals. It’s all guilt by association.

In short, political science is hard and we can’t conclude very much from looking at meetings and donors! Yet I suspect it will still be quite difficult for many people to accept that the AP story is largely irrelevant to the question of whether Clinton was selling access. It is the association that seems suspicious to us, not the relative advantage. Suppose we know that half of the people who got promoted brought a bottle of wine to the boss’s garden party. That means nothing if half of the company brought a bottle of wine to the garden party. But suppose instead that half of the people who got promoted slept with the boss. Now that seems like an open and shut case of “pay to play,” no? Not if the boss also slept with half of the rest of the employees. While that would be wildly inappropriate, it’s not trading favors.

It seems that our perception of the association between acts and outcomes depends far more on our judgment of the act than on whether it actually confers an advantage. Yet “advantage” is the whole idea of quid pro quo.

Which is not to say that Clinton wasn’t influenced by donations to her foundation. Who can say that it was never a factor? In fact she wouldn’t even need to give actual advantage to donors. Just the appearance, promise, or hope of advantage might be enough to shake people down, and that could be called corruption too. All I’m saying here is that we’re not going to be able to see statistical evidence of pay-to-play in meeting records.

We can, however, look in the data for specific leads about specific fishy transactions. To the AP’s credit much of the long story was exactly that, though having a meeting about helping a Nobel Peace Prize winner keep his job at the head of a non-profit microfinance bank may not feel like much of a smoking gun.

The AP, being the AP, was extremely careful not to make factually incorrect statements. It’s merely the totality of the piece that implies malfeasance. Or not. Let the readers make up their own minds, as editors love to say. I find this a monumental cop out, because the process of inferring corruption from the data is subtle! Readers will not be equipped to do that, so if we are using data as evidence we have to interpret it for them. The story could have, and in my opinion should have, explained the limitations of the data much more carefully. The statistics are at best ambiguous, and at worst suggest that donors got no special treatment (if you compare to the total number of donors, as above.) The numbers should never have been in the lede, much less the headline.

But then, would there have been a story? Should the AP have run a story saying “here are some of the people Clinton met with who are also donors”? That’s not nearly as interesting a story — and that is its own kind of media bias. The tendency is toward stronger results, even sensational results. Or toward no story at all, if not enough scandal can be found, which is straight-up publication bias.

The broader point for data journalists is that it is extremely difficult to prove corruption, in the sense of quid pro quo, just by counting who got what. To start with, we also need data on who wanted something but didn’t get it, which is often not recorded. Then we need an argument that there are no important confounders, nothing that is making two people work together without one paying the other (of course they could still be co-conspirators doing something terrible, but that would be a different type of crime.) The AP counted only those who got meetings and didn’t even touch on non-corrupt reasons for the correlation, so the numbers in the story — the headline numbers — mean essentially nothing, despite the unsavory association.


Sometimes an algorithm really is (politically) unbiased

Facebook just announced that they will remove humans from the production process for their “trending” news section (the list of stories on the right of the page, not the main news feed.) Previously, they’ve had to defend themselves from accusations of liberal bias in how these stories were selected. Removing humans is designed, in part, to address these concerns.

The reaction among technologically literate press and scholars (e.g. here, here, and here) has been skeptical. They point out that algorithms are not unbiased; they are created by humans and operate on human data.

I have to disagree with my colleagues here. I think this change does, or could, remove an important type of bias: a preference along the US liberal-conservative axis. Further, relying on algorithmic processes rather than human processes leads to a sort of procedural fairness. You know that every story is going to be considered for inclusion in the “trending” box in exactly the same way. (Actually, I don’t believe that Facebook’s trending topics were ever politically biased — the evidence was always thin — but this is as much about appearance and legitimacy as any actual wrongdoing.)

Of course algorithms are not at all “unbiased.” I’ve been one of many voices saying this for a long time. I’ve written about the impossibility of creating an objective news filtering algorithm. I teach the students in my computational journalism class how to create such algorithms, and we talk about this a lot. Algorithmic techniques can be biased in all sorts of ways: they can be discriminatory because of the data they use for reference, they can harm minorities due to fundamental statistical problems,  and they can replicate the biased ways that humans use language.

And yet, removing humans really can remove an important potential source of bias. The key is recognizing what type of bias Facebook’s critics are concerned about.

There are many ways to design a “trending topics” algorithm. You can just report which stories are most popular. But this might hide important news behind a wall of Kim Kardashian, so most trending algorithms also include a “velocity” component that responds to how fast a story is growing (e.g. Twitter.) Facebook’s trending topics are also location-specific and personalized. None of this is “objective.” These are choices about what it is important to see, just as an editor makes choices. And perhaps Facebook is making choices that make them the most money, rather than the supposedly neutral and public-service oriented choices of an editor, and that’s a type of bias too. It’s also true that algorithmic systems can be gamed by groups of users working together (which is either a feature or a bug, depending on what you feel deserves coverage.) Users can even work together to suppress topics entirely.
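As a concrete sketch of what a popularity-plus-velocity score might look like — entirely my own illustration, since Facebook’s actual algorithm is not public:

```python
def trend_score(shares_now, shares_previous, velocity_weight=2.0):
    """Score a story by current popularity plus how fast it is growing.
    The weight is arbitrary here; a real system would tune it."""
    velocity = shares_now - shares_previous
    return shares_now + velocity_weight * velocity

# Two hypothetical stories: (shares this hour, shares last hour)
stories = {
    "celebrity": (10000, 9900),  # huge audience, but barely growing
    "breaking":  (4000, 500),    # smaller audience, exploding
}
ranked = sorted(stories, key=lambda s: trend_score(*stories[s]), reverse=True)
print(ranked)  # → ['breaking', 'celebrity']: velocity beats raw popularity
```

Even in this toy version, the weight given to velocity is an editorial choice: it decides how much “growing fast” counts relative to “already big.”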

But none of this is left-right political bias, and that’s the kind of bias that everyone has been talking about. I can’t see anything in the design of these types of trend-spotting algorithms that would make them more favorable to one political orientation or another.

This doesn’t mean the results of the algorithm — the trending news stories themselves — are going to be politically neutral. The data that the algorithms operate on might be biased, and probably will be. Facebook monitors the articles that are being shared on their platform, and there is no guarantee that a) news sources produce and promote content in some “neutral” way and b) the users that share them are unbiased. If it turns out that more Facebook users are liberal, or liberal Facebook users are more active, then liberal-friendly articles will be more popular by definition.

However, this is a bias of the users, not Facebook itself. Every social software platform operates under a set of rules that are effectively a constitution. They define what can be said and how power is distributed. And some platform constitutions are more democratic than others: the administrators have power or the users have power in varying degrees over various things. Facebook has previously made other changes to reduce human judgment; this can be seen as a continual process of devolving control to the users, although it’s probably more to do with reducing costs through automation.

By removing humans entirely from the trending topics, Facebook is saying that the trending algorithm itself — which is very likely neutral with regard to the liberal/conservative axis — is the governing law of the system. The algorithm may not be “objective” in any deep way, but it is democratic in a certain sense. We can expect the trending stories to mirror the political preferences of the users, rather than the political preferences of Facebook employees. This is exactly what both Facebook and its critics want.

Personally, I think that humans plus machines are a terrific way to decide what is newsworthy. The former trends curators did important editorial work highlighting stories even when they weren’t popular: “People stopped caring about Syria … if it wasn’t trending on Facebook, it would make Facebook look bad.” This is exactly what a human editor should be doing. But Facebook just doesn’t want to be in this business.

Startups vs. Systems: Why Doing Good with Tech is Hard

It’s not easy to make social change with technology. There’s excitement around bringing “innovation” to social problems, which usually means bringing in ideas from the technology industry. But societies are more than software, and social enterprise doesn’t have the same economics as startups.

I knew all this going into my summer fellowship at Blue Ridge Labs, but my experience has given me a clearer idea of why. These are the themes that kept coming up for me after two months working with 16 other fellows on the problem of access to justice (A2J) for low-income New Yorkers.

You have to engage the incumbents

The culture of tech startups is not well adapted to taking on big systems. Startups have traditionally tried to enter the wide open spaces created by the new possibilities of technology, or to use a technical advantage to bypass incumbents. They generally try to avoid engaging with major institutions, yet institutional reform is a key part of the “structural change” that so many of us want.

Uber does an end-run around the taxi system, but you can’t simply do an end run around the court system, the state Bar, or the local police.

Instead, tech startups who want to address social issues will need to work within very complex legacy systems. The first task is learning what’s already there. An issue like housing or immigration has a complex arrangement of parts around it: institutions, funding, practices, laws, incentives, and above all the people who work within the system.

Our first month was dedicated to a series of week-long “deep dives” into different areas. I think everyone agreed that there was no way that you could get deeply into a major social issue in a week. Longtime civic hacker Joshua Tauberer says that “until you’ve worked 5–10 years in government or advocacy, you can’t see what needs change.”

Technologists understand that mastering programming takes 10 years, so they should imagine that grappling with social issues also takes years, not months. I’ve worked on technology-enabled social efforts before (mostly around investigative journalism) but I’ve never worked on access to justice, which makes me a complete beginner in the space. After two months of hard work, I can make a very rough sketch of the ecosystem, and I might be able to list the major issues. I can barely see the outlines of what it is that I don’t know.

I don’t find any of this discouraging. If these problems were easy, they would have been solved already. There are people who have been working on them their whole lives. While fresh minds always have fresh insights, there’s also the real possibility that my best idea is ridiculously off the mark.

This doesn’t mean that you or I shouldn’t attempt a startup that aims to change a complex system. It just means we need someone on the team who really, really understands how to work within that system as it stands today, whether that’s a founder or merely a devoted advisor. This is where Blue Ridge Labs shines as an incubator: by virtue of being embedded in the Robin Hood Foundation, and because the fellowship included subject matter experts, we had phenomenal access to the players in this space. You say you want to talk with the woman who runs the A2J program at the New York State court system? How about Tuesday?

The complexity and inertia of the systems we are trying to change is a huge challenge, but it can also be an advantage. Startups traditionally run screaming from heavily regulated areas with entrenched incumbents, but during my research I ran into one founder who asks, “Why start a company in a regulated industry?” For him, “the answer is three-fold: 1) solving real problems, 2) solving hard problems, and 3) unlocking huge opportunities. A heavily regulated market is a clear signal for all three.”

If you are able to tame the systemic complexity in a given area, you will find yourself standing on the good side of a huge barrier to entry. “No one else wants to touch this” can be a very real competitive advantage.

Have you met the people you’re trying to help?

User-centered design. Build with, not for. Community engagement. Don’t solve other people’s problems. There are many ways to talk about the idea of collaborating with the people you are trying to help, but they all boil down to contact.

If you want to work on poverty, at some point you have to have a conversation with someone who is poor.

Really, you need lots of deep conversations, and I had perhaps a dozen during my time at Blue Ridge. One of the big successes of the fellowship program is the Design Insight Group, essentially a database of people who have the types of problems we’re trying to solve. We met people in many different contexts, such as interviews, focus groups, and site visits. It was an absolutely essential part of the work, as user contact always is. Even so, it was sometimes uncomfortable for me. What do I say to a mother who has just told me about getting thrown out onto the street with her 4-year-old son because she couldn’t afford rent? That sort of thing will probably never happen to me or my friends – which is precisely the point of talking to her.

These experiences made me realize how little my life crosses class boundaries. I have close friends of every race and gender identity, and from many different countries too, but I don’t really have low income friends. Fundamentally, I don’t understand poverty because I have very little occasion to talk to poor people.

And I suspect I’m not the only one with this blind spot. For whatever reason, the progressive politics of the moment center on discrimination around race and gender. Those are worthy problems, but ending discrimination will not end poverty (just ask poor white men.) Just as knowing a gay person makes straight people much more likely to support gay marriage, I fear that the problems of low income people will not get the attention they deserve until those who speak loudest spend more time with those who make less. Class segregation seems every bit as pernicious to me as racial segregation, and it’s getting stronger as inequality rises. This doesn’t even require any personal prejudice; the housing market efficiently sorts people of different incomes into different neighborhoods.

Blue Ridge Labs mediated my contact with people outside of my class, and meeting them was the highlight of the experience for me. Context matters hugely for honest conversations: I can’t simply ask someone about their credit card debt at a party. I can, and did, ask them during a private and anonymous interview, in a situation where they are paid for their time.

Which doesn’t mean I always knew how to ask. Different groups came up with wonderful tools for learning from the people they talked to. Some teams asked people to use cards with titles like “received document” and “court appearance” to create the story of their legal journey. Another team intentionally spoke Spanish to a bewildered tester, so they could try out a translation product idea. I love these different interaction strategies, and we need more.

Even so, there were questions I didn’t get answered. Where are the boundaries of what it’s reasonable to ask? One team I was on was not comfortable asking what price someone would have paid to solve their legal issue, but another team had no problem asking people to price their potential product. While we should always strive to make people comfortable when they’re talking to us, I don’t think we can or should protect people from all difficult conversations, if we believe that the conversation might lead to crucial insights that help others. This is a subtle issue of respect and ethics, and I could have used more guidance.

What’s a social enterprise anyway?

Zappos is famous for its approach to social responsibility, but any shoe company makes something that improves the lives of millions of people. In that sense almost any successful company might be a social enterprise, which seems to make the term meaningless.

So here’s my definition: a social enterprise devotes itself to a mission, even if that mission isn’t the most profitable. “Mission over margin,” as one startup in incubation at Blue Ridge Labs put it.

Not every socially transformative idea is going to be wildly profitable. There’s every reason to believe that many worthwhile social enterprises aren’t going to be profitable at all, at least not through typical market strategies. If your customers can’t cover your costs in the long run, you will need funding elsewhere. The options boil down to various kinds of internal subsidy (e.g. Google’s 20% time), a complementary product (e.g. journalism and advertising), and philanthropy in one form or another.

This raises the whole for-profit vs. non-profit issue. My sense is that this distinction is widely misunderstood. Contrary to widespread misconception, non-profits can charge money for services. Nor is there a definitive moral difference; in my work as a journalist I have seen plenty of scammy non-profits, and a solid number of commendable capitalists too. As one editor put it to me, “non-profit is just a tax status.” However, our user interviews revealed that “non-profit” can be hugely important for communication: it signals that the organization is mission-driven, and – rightly or wrongly – people generally trust non-profits more.

The more fundamental point is your sustainability plan, and the mix of market and subsidized revenue you plan to tap. You also need to decide how to measure the success of your mission. This is where metrics come in.

Impact metrics are not universally loved. We spoke to one city-funded credit counselor who asked, “Why does my client’s credit score need to improve by 35 points before I can count them as someone I helped? Doesn’t a 34-point increase also move the needle?” It’s an important question, but I don’t see this sort of arbitrariness as a problem with the idea of metrics in general.

You get to choose what you count as impact. Or perhaps your funders choose – whether your funders are social impact investors or straight philanthropists — but I would hope that funders will take you seriously when you tell them why you should count one thing and not another. But even the wisest metrics will not capture everything you care about. I prefer to think about evaluation rather than metrics. Ultimately, any social enterprise has to ask itself is this working? Counting something is a great way to compare alternatives, but only if you’re counting something that’s worth basing decisions on.

In short, non-profits and for-profits are both compromised, but in different ways: a non-profit might depend on arbitrary metrics, but a for-profit faces continual pressure to turn toward whatever grows the business fastest. The useful distinction is not the legal status or even where the money comes from, but what your definition of success is and how that influences your choices.

Impact metrics can also tell you where opportunity lies. Perhaps a social entrepreneur should be thinking about the number of people they might be able to help, and what that help is worth to those people. The Effective Altruism movement suggests that philanthropy should focus on doing the most good for a given amount of money. It’s an appealing moral idea, and it focuses attention on the key concept of efficiency. Unfortunately this principle gets alarmingly complicated in practice – what is “good” and can it really be measured? Still, there is something attractive about sizing opportunities numerically.

For example, one team of Fellows investigated tools to help tenants organize to fight landlords who intend to illegally evict everyone so they can raise the rent. It’s a neat idea, but I found myself wondering how often this problem occurs. Suppose New York landlords wrongfully evict the tenants of 100 buildings every year, meaning perhaps a few thousand people would benefit from organizing. Is this a large number of people? Or is it a small number in a city of 8 million? It’s less than 1/10 of one percent. One of the best responses to this question is to ask what other problems you could address with your time and funding, and how many people might be helped in each case.

Even if you do succeed as a social entrepreneur, you might not know it. It can take a long time for impact to become clear. In the fall of 2014, I scraped nearly two million Missouri court records for ProPublica, to help answer the question: who filed the most wage garnishment cases? The answer turned out to be a non-profit hospital. Reporter Paul Kiel visited the hospital and the patients in Missouri and wrote the story. Time passed and I moved on to another job. This was just one story of many. Then there was a congressional inquiry, and nearly two years after my work on this project, the hospital stopped suing so many people.

But not every win has a straight line between the work and the outcome, and there usually isn’t a follow up story reporting it. The experience makes me wonder how much good I may have done that I will never know about. Something between lots and none at all – and maybe I’ve even harmed some people along the way. I’ve written about these difficulties before.

Imagine if a CEO only got intermittent, unreliable glimpses into revenue. That’s often the situation the mission-driven entrepreneur is in when they try to evaluate the success of their work. And yet, glimpses are better than no information at all – there’s no excuse for not trying to know our impact.

Stubborn optimism

I’ve moved on from low-income access to justice work at Blue Ridge Labs, but I have high hopes for my fellow Fellows who are starting three exciting new projects. I believe tech has a very important role to play in addressing social problems. Obviously I do, or I wouldn’t be in the business of making software for investigative journalism.

But it seems we’re still thinking about the possible strategies in very limited terms. We’re imagining something that looks like a traditional VC-funded tech startup, or perhaps something that looks like a community-supported open-source tool. The reality of successful projects is going to be a lot more complicated. Critical institutional systems are resistant to startups, the economics of social change may look nothing like the economics of venture capital, and the people who can build technology might not even know any of the people who are supposed to benefit from it.


Words and numbers in journalism: How to tell when your story needs data

Update: A more recent version of this material appears in my book, The Curious Journalist’s Guide To Data.

I’m not convinced that journalists are always aware when they should be thinking about numbers. Usually, by training and habit, they are thinking about words. But there are deep relationships between words and numbers in our everyday language, if you stop to think about them.

A quantity is an amount, something that can be compared, measured or counted — in short, a number. It’s an ancient idea, so ancient that it is deeply embedded in every human language. Words like “less” and “every” are obviously quantitative, but so are more complex concepts like “trend” and “significant.” Quantitative thinking starts with recognizing when someone is talking about quantities.

Consider this sentence from the article Anti-Intellectualism is Killing America which appeared in Psychology Today:

In a country where a sitting congressman told a crowd that evolution and the Big Bang are “lies straight from the pit of hell,” where the chairman of a Senate environmental panel brought a snowball into the chamber as evidence that climate change is a hoax, where almost one in three citizens can’t name the vice president, it is beyond dispute that critical thinking has been abandoned as a cultural value.

This is pure cultural critique, and it can be interpreted many different ways. To start with, I don’t know of standard and precise meanings for “critical thinking” and “cultural value.” We could also read this paragraph as a rant, an exaggeration for effect, or an account of the author’s personal experience. Maybe it’s art. But journalism is traditionally understood as “non-fiction,” and there is an empirical and quantitative claim at the heart of this language.

“Critical thinking has been abandoned as a cultural value” is an empirical statement because it speaks about something that is happening in the world with observable consequences. It is, in principle, a statement that can be tested against history. This gives us a basis for saying whether it’s true or false.

It’s quantitative because the word “abandoned” speaks about comparing amounts at two different times: something that we never had cannot be abandoned. At each point in time we need to decide whether or not “critical thinking” is a “cultural value.” This is in principle a yes or no question. A more realistic answer might involve shades of gray based on the number of people and institutions who are embodying the value of critical thinking, or perhaps how many acts of critical thinking are occurring. Of course “critical thinking” is not an easy thing to pin down, but if we choose any definition at all we are literally deciding which things “count” as critical thinking.

One way or another, testing this claim demands that we count something at two different points in time, and look for a big drop in the number. Compare this with the evidence provided:

  • a sitting congressman told a crowd that evolution and the Big Bang are “lies straight from the pit of hell”
  • the chairman of a Senate environmental panel brought a snowball into the chamber as evidence that climate change is a hoax
  • almost one in three citizens can’t name the vice president

The first two pieces of evidence seem to me more anti-science than anti-critical thinking, but let’s suppose our definitions allow it. The real problem is that these are anecdotes – which is just a judgmental word for “examples.” Anecdotes make poor evidence when it’s just as easy to come up with examples on the other side. Yeah, someone brought a snowball into Congress to argue against climate change, but also the EPA decided to start regulating carbon dioxide as a pollutant. The issue is one of generalization: we can’t draw conclusions about the state of an entire culture from just a few specific examples. Generalization is tricky at the best of times, but it’s much easier when you can count or measure the entirety of something. Instead we have only scattered facts, and no information about whether these cases are representative of the whole.

Or, as in historian G. Kitson Clark’s famous advice about generalization:

Do not guess; try to count. And if you cannot count, admit that you are guessing.

The fact that “one in three citizens can’t name the vice president” is closer to the sort of evidence we need. Let’s leave aside, for a moment, whether being able to name the vice president is really a good indication that “critical thinking” is a “cultural value.” This statement is still stronger than the first two examples because it generalizes in a way that individual examples cannot: it makes a claim about all U.S. citizens. It doesn’t matter how many people I can name who know who the vice president is, because we know (by counting) that there are 100 million who cannot. But this still only addresses one point in time. Were things better before? Was there any point in history where more than two thirds of the population could name the vice president? We don’t know.
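That “100 million” figure is simple back-of-envelope arithmetic. Here’s a quick sketch; the population number is an assumption for illustration (roughly the U.S. population at the time), not a figure taken from the survey or the article:

```python
# Back-of-envelope check of the "100 million" claim.
# Assumption: roughly 320 million U.S. citizens (an approximation
# for illustration, not a figure from the survey itself).
population = 320_000_000
share_who_cannot = 1 / 3  # "almost one in three citizens"

cannot_name_vp = population * share_who_cannot
print(f"about {cannot_name_vp / 1e6:.0f} million")  # about 107 million
```

Since the survey said “almost” one in three, a round figure of 100 million is consistent with this estimate.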

In short, the evidence in this paragraph is fundamentally not the right type. The word “abandoned” has embedded quantitative concepts that are not being properly handled. We need something tested or measured or counted across the entire culture at two different points in time, and we don’t have that.

Very many words have quantitative aspects. Words like “all,” “every,” “none,” and “some” are so explicitly quantitative that they’re called “quantifiers” in mathematics. Comparisons like “more” and “fewer” are explicitly about counting, but much richer words like “better” and “worse” also require counting or measuring at least two things. There are words that compare different points in time, like “trend,” “progress,” and “abandoned.” There are words that imply magnitudes, such as “few,” “gargantuan,” and “scant.” A series of Greek philosophers, long before Christ, showed that the logic of “if,” “then,” “and,” “or,” and “not” could be captured symbolically. To be sure, all of these words have meanings and resonances far beyond the mathematical. But they lose their central meaning if the quantitative core is ignored.
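The claim that these words have a formal, symbolic core is easy to see in any programming language, where the quantifiers and connectives are built in. A minimal Python sketch; the list of answers is invented for illustration:

```python
# Each entry records whether one (hypothetical) citizen could
# name the vice president.
could_name_vp = [True, True, False, True]

# "Every citizen could"  -> the quantifier "all"
print(all(could_name_vp))      # False
# "Some citizen could"   -> the quantifier "some"
print(any(could_name_vp))      # True
# "No citizen could"     -> "none" is "not any"
print(not any(could_name_vp))  # False
```

The same words that structure an English sentence ("all," "some," "none," "not") map directly onto operations a machine can evaluate, which is exactly what makes the underlying claims testable.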

The relation between words and numbers is of fundamental importance in journalism. It tells you when you need to get quantitative. It’s essential for planning data journalism work and for communicating the results. It’s the heart of the data journalist’s job, really. The first step is to become aware of when quantitative concepts are being used in everyday language.