The state of The State of the Union coverage, online

The State of the Union is a big pre-planned event, so it’s a great place to showcase new approaches and techniques. What do digital news organizations do when they go all out? Here’s my roundup of Tuesday night’s online coverage.

Live coverage

The Huffington Post, the New York Times, the Wall Street Journal, ABC, CNN, Mashable, and many others, including even Mother Jones, had live web video. But you can get live video on television, so perhaps the digitally native form of the live blog is more interesting. This can include commentary from multiple reporters, reactions from social media, link round-ups, etc. The New York Times, the Boston Globe, the Wall Street Journal, CNN, MSNBC, and many others had a live blog. The Huffington Post’s effort was particularly comprehensive, continuing well into Wednesday afternoon.

Multi-format, socially-aware live coverage is now standard, and by my reckoning makes television look meagre. But the experience is not really available on tablet and mobile yet. For example, almost all of the live video feeds were in Flash and therefore unavailable on Apple devices, as CNET reports.

As for tools, there was some use of CoveritLive, but most live blogs seemed to run on nondescript custom software.

Visualizations

Lots of visualization love this year. But visualizations take time to create, so most of them were rooted in previously available SOTU information. The Wall Street Journal did an interactive topic and keyword breakdown of Obama’s addresses to Congress since 2009, which moved about an hour after Tuesday’s speech concluded.

The New York Times had a snazzy graphic comparing the topics of 75 years of SOTU addresses by tracking the rates of certain carefully chosen words. Rollovers reveal individual counts, but it’s mostly a static graphic.

The Guardian Data Blog took a similar historical approach, with Wordles for SOTU speeches from Obama and seven other presidents back to Washington. It’s a huge image, clearly intended for big print pages. Being the Data Blog, they also put the word frequencies for these speeches into a downloadable spreadsheet.

A shout-out to my AP colleagues for all their hard work on our SOTU interactive, which included the video, a fact-checked transcript, and an animated visualization of Twitter responses before, during, and after the State of the Union.

But it’s not clear what, if anything, we can actually learn from such visualizations. In terms of solid journalism content, possibly the best visualization came not from a news organization but from Nick Diakopoulos and co. at Rutgers University. Their Vox Civitas tool does filtering, search, and visualization of over 100,000 tweets captured during the address.

I find this interface a little too complex for general audience consumption — definitely a power user’s tool. But the algorithms are second to none. For example, Vox Civitas compares tweets to the text of the speech within the previous two minutes to detect “relevance,” and the automated keyword extraction — you can see the keywords at the bottom of the interface above — is based on tf-idf and seems to choose really interesting and relevant words. The interactive graph of keyword frequency over time clearly shows the sort of information that I had hoped to reveal with the AP’s visualization.
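To make the keyword-extraction idea concrete, here is a minimal tf-idf sketch. This is a generic illustration, not Vox Civitas’s actual implementation; the toy tweets and the naive whitespace tokenizer are my own assumptions.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Score each word in each document by tf-idf: term frequency times
    log inverse document frequency. Words that are common in one
    document but rare across the collection score highest."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # document frequency: in how many documents does each word appear?
    df = Counter(word for doc in tokenized for word in set(doc))
    keywords = []
    for doc in tokenized:
        tf = Counter(doc)
        scores = {w: count * math.log(n / df[w]) for w, count in tf.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        keywords.append(ranked[:top_k])
    return keywords

tweets = [
    "jobs jobs stimulus economy",
    "economy deficit taxes",
    "salmon joke regulation",
]
print(tfidf_keywords(tweets, top_k=1))  # top-scoring word per tweet
```

Because “economy” appears in two of the three tweets, its idf weight is low, so distinctive words like “jobs” win out. A production system would add tokenization, stopword handling, and smoothing, but the ranking intuition is the same.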

Fact Checking

A number of organizations did real-time or near real-time fact checking, as Yahoo reports. The Sunlight Foundation used its Sunlight Live system for real-time fact checks and commentary. This platform, incorporating live video, social media monitoring, and other components, is expected to be released as an open-source web app for other news organizations by mid-2011.

The Associated Press published a long fact check piece (also integrated into the AP interactive), ABC had their own story, and CNN took a stab at it.

But the heaviest hitter was Politifact, who had a number of fact check rulings within hours and several more by Wednesday evening. These are together in a nice summary article, but as is their custom the individual fact checks are extensively documented and linked to primary sources.

Audience engagement

Pretty much every news organization had some SOTU action on social media, though with varying degrees of aggressiveness and creativity. Some of the more interesting efforts involved soliciting audience responses of a specific kind. NPR asked people to describe their reaction to the State of the Union in three words. This was promoted aggressively on Twitter and Facebook. They also asked for political affiliation, and split the 4,000 responses into Democratic and Republican word clouds.

Apparently, Obama’s salmon joke went down well. The Wall Street Journal went live Tuesday morning with “The State of the Union is…” asking viewers to leave a one word answer. This was also promoted on Twitter. Their results were presented in the same interactive, as a popularity-sorted list.
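Under the hood, a word cloud like NPR’s is just a word-frequency tally split by some attribute. A minimal sketch, with hypothetical sample responses of my own invention:

```python
from collections import Counter

def reaction_clouds(responses):
    """Tally word frequencies per political affiliation.
    `responses` is a list of (affiliation, reaction_text) pairs;
    the returned Counters are the raw input for a word cloud."""
    clouds = {}
    for affiliation, text in responses:
        words = (w.strip(".,!?").lower() for w in text.split())
        clouds.setdefault(affiliation, Counter()).update(words)
    return clouds

sample = [
    ("Democrat", "hopeful salmon inspiring"),
    ("Republican", "spend spend spend"),
    ("Democrat", "salmon joke funny"),
]
clouds = reaction_clouds(sample)
print(clouds["Democrat"].most_common(1))  # [('salmon', 2)]
```

The word cloud renderer then sizes each word in proportion to its count; everything interesting happens in this counting step.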

Aside from this type of interactive, we saw lots of aggressive social media engagement in general. The more social-media savvy organizations were all over this, promoting their upcoming coverage and responding to their audiences. As usual, the Huffington Post was pretty seriously tweeting the event, posting about updates to their live blog and continuing well into Wednesday morning. Perhaps inspired by NPR, they encouraged people to tweet their #3wordreaction to the speech. They also collected and highlighted reaction from teachers, Sarah Palin, etc.

But as an AP colleague of mine asked, engagement to what end? Getting people’s attention is great, but then how do we, as journalists, focus that attention in a way that makes people think or act?

The White House

No online media roundup of the SOTU would be complete without a discussion of the White House’s own efforts, including web and mobile app presences. Fortunately, Nieman Journalism Lab has done this for us. Here I’ll just add that the White House livestreamed a Q&A session in front of an audience immediately after the speech, in which the White House Office of Public Engagement’s Kal Penn (aka Kumar) read questions from social media. Then Obama himself did an interview Thursday afternoon in which he answered questions submitted as videos on YouTube.

What is news when the audience is editor?

This is a paper I wrote in December 2009. I’ve decided to post it now, partially because it contains a previously unreported 30-day content comparison of Digg versus the New York Times. Looking back on this work, I think that its greatest weakness is an under-appreciation of the importance of production processes in determining what gets reported and how. In other words, I believe now that the intense pressure of daily deadlines shapes the news far more than external influences such as political and commercial pressures — at least in countries where the press is relatively free. Also available as a PDF.

Abstract
There are now several websites which allow users to assemble news content from around the internet by means of voting systems. The result is a new kind of front page that directly reflects what the audience believes to be salient, as opposed to what the editorial staff of a newsroom believes the audience should know. Content analyses of such sites show that they have little overlap with mainstream media agendas (5% in a previous study). In fact, many of the items selected by users would not traditionally be considered “news” at all. This paper examines the shift from editor to audience agendas in the context of previous theories of news production, discusses existing content analysis work on the subject, and reports on a new 30-day study of Digg.com versus NYTimes.com.

Introduction
No news organization can cover everything. Traditionally, it is ultimately the editor of a news publication who decides what is newsworthy: what stories reporters will follow, and what stories will be published. It has been considered part of the value of a news organization to determine what its audiences need to know about.

It’s never been entirely clear how professional journalists decide which events are worth reporting, out of all the events taking place in the world. Neither has it been obvious how editorial choices relate to the audience’s personal judgments about what is important, but such questions were largely theoretical before the advent of the web. “I own a newspaper, you do not” was always the implicit end to discussions about who got to decide what was news.

Today, publishing is near-free and the news package has been disaggregated. An online audience member can select single stories that interest them, without reading or even really being aware of the traditional news package. Alongside this disaggregation we find a new class of online applications that re-aggregate content from multiple sources. Readers vote on pages from across the web, and the top-rated items are displayed on the aggregator’s home page.

News consumers are literally tearing the world’s newspapers apart and re-assembling them to fit their own agendas, including lots of content not traditionally considered news at all.

This paper examines what we can learn about the online audience’s judgment not only of what is important but of what is news at all, and how it differs from that of traditional newsrooms. I review previous work on “news values” and “news agenda” in professional journalism, look at measurements of what audiences view online, and report on my own 30-day quantitative study of Digg as compared to the New York Times.

Features of the audience-generated agenda
Continue reading What is news when the audience is editor?

By the numbers, American journalism failed to inform voters

A recent study by World Public Opinion.org shows that the majority of the American population believed false things about basic national issues, right before the 2010 mid-term elections. I don’t know how to interpret this as anything other than a catastrophic failure of American journalism, in its most fundamental, clichéd, “inform the public” role.

The most damning section of the report (PDF) is titled “Evidence of Misinformation Among Voters.”

The poll found strong evidence that voters were substantially misinformed on many of the issues prominent in the election campaign, including the stimulus legislation, the healthcare reform law, TARP, the state of the economy, climate change, campaign contributions by the US Chamber of Commerce and President Obama’s birthplace. In particular, voters had perceptions about the expert opinion of economists and other scientists that were quite different from actual expert opinion.

This study also found that Fox viewers were significantly more misinformed than average on many issues, which is mostly how this survey was covered in the blogosphere and mainstream news outlets. I think this Fox thing is a terrible diversion from the core problem: the American press did not succeed in informing the public. Not even right before an election, not even on the narrow set of issues that, by survey, voters cared to base their votes on.

The travesty here is that the relevant facts were instantly available from primary sources, such as the Congressional Budget Office and the Intergovernmental Panel on Climate Change. I interpret this failure in the following way: for many kinds of issues, the web makes it easy to find true information. But it doesn’t solve the problem of making people go look. That, perhaps, is a key role for modern journalism. Unfortunately, modern American journalism seems to be very bad at it. I imagine the same problem exists in the journalism of many other countries.

What the study actually says
The study compares what voters think experts believe with what those experts actually believe. This is a bit tricky, and the study isn’t saying that the experts are necessarily right, but we’ll get to that. First, some example findings:

  • 68% of voters thought that “most economists” believe that the stimulus package “saved or created a few jobs” and 20% thought most economists believe that the stimulus caused job losses, whereas only 8% correctly said that most economists think it “saved or created several million jobs.” (The Congressional Budget Office estimates that the stimulus saved several million jobs, as do 75% of economists interviewed by the Wall Street Journal.)
  • 53% of voters thought that economists believe that Obama’s health care reform plan will increase the deficit, while 29% said that economists were evenly divided on this issue. Only 13% said correctly that a majority of economists think that health care reform will not increase the deficit. (The Congressional Budget Office estimates a net reduction in deficits of $143 billion over 2010-2019, and Boards of Trustees of the Medicare Fund also believe that the Affordable Care act will “postpone the exhaustion of … trust fund assets.”)
  • 12% of voters thought that “most scientists believe” that climate change is not occurring, while 33% thought scientists were evenly divided on the issue. That’s 45% with an incorrect perception, as opposed to the 54% who said, correctly, that most scientists think climate change is occurring. (Aside from the IPCC reports and virtually every governmental study of the issue worldwide, an April 2010 survey of climate scientists showed that 97% believe that human-caused climate change is occurring.)

A fussy but necessary digression: all of this rests on the reliability of the WorldPublicOpinion.org survey results. The survey was conducted by Knowledge Networks, Inc. using an online response panel randomly selected from the US population. Those without internet access were apparently provided it for free. I have been unable to find any serious independent evaluation of Knowledge Networks’ methodology, but their many research papers on sample design certainly talk the talk. All of the basic sampling errors, such as self-selection and language bias (what about Hispanics?) are at least addressed on paper. The margin of error is reported as 3.9%.

So let’s take these survey results as accurate, for the moment. This means that the majority of the American public had an incorrect conception of expert opinion on the issues that they voted on. That’s a mouthful. It’s not the same as “believed false things,” and in fact asking “what do you think experts believe” deliberately dodges the tricky question of what is true. If there is some misperception of expert belief, then in the strictest terms the public is misinformed. The study addresses this point as follows:

In most cases we inquired about respondents’ views of expert opinion, as well as the respondents’ own views. While one may argue that a respondent who had a belief that is at odds with expert opinion is misinformed, in designing this study we took the position that some respondents may have had correct information about prevailing expert opinion but nonetheless came to a contrary conclusion, and thus should not be regarded as ‘misinformed.’

So this study does not say “the American public are wrong about the economy and climate change.” It says that they haven’t really looked into it. I’m all for questioning authority’s claim to truth — anyone who follows my work knows that I’m generally a fan of Wikipedia, for example — but I believe we must take lifelong study and rigorous methodology seriously. To put it another way: voting contrary to the opinions of economists may be a fine thing, but voting without any awareness of their work is just silly. Yet that seems to be exactly what happened in the last election.

The role of the press, then and now
Of course, voting is hard and stuff is complex, which is why we rely on the media to break it all down for us. The sad part is that economics and climate change are familiar ground for journalists. It’s not like the facts of these issues were not published in mainstream news outlets. For that matter, journalists were not even necessary here. Any citizen with a web browser could have found out exactly what the Affordable Care Act was predicted to do to the deficit. The Congressional Budget Office published their report and then blogged about it in plain language.

Maybe publishing the truth was never enough. Maybe journalism never actually “informed the public,” but merely created conditions where the curious could get themselves informed by diligently reading the news. But on big issues like whether a piece of national legislation will affect the deficit, we no longer need professionals to enable this kind of self-motivated discovery. The sources go direct in such cases, as the Congressional Budget Office did. And do we really expect that the social media sphere — that’s all of us — will remain silent about the next big global warming study? We’re all going to use Facebook etc. to share links to the next IPCC report when it comes out.

If the problem of having access to true information about these sorts of “votable issues” is solved by the web, what isn’t solved by the web is getting every voter to go look at least once. That might be a job for informed professionals at the helm of big media channels. This is a big responsibility for a news organization to try to take, but I don’t see how it’s anything but the corollary to the responsibility to only publish true information. Presumably some of that information is important enough to know, so consumers would probably appreciate the idea that your mission is to ensure they are informed.

I suspect that paper-based habits are holding journalism back here. There is a deeply ingrained newsroom emphasis on reporting only what’s “new.” A budget report only gets to be news once, even if what it says is relevant for years. But there are no “editions” online; the same headline can float on the hot topics list for as long as it’s relevant. There is even more reason to keep directing attention to an issue if people are actively discussing it, if it is greatly polarized, or if there’s a lot of spin around it (see: the rise of fact-check journalism). In any case, journalists have long been good at keeping an issue in the news, by advancing the story daily in one way or another. But first they have to know what the public doesn’t know.

So the burning question that the World Public Opinion study leaves me with is just this: why wasn’t it a news organization that commissioned this survey?

See also: Does journalism work?

Does journalism work?

How do we know that the work that journalists do accomplishes anything at all? And what does journalism do, exactly, beyond vague statements like “supports democracy” and trivial ones like “gives me movie reviews”?

I made this image a couple months ago to introduce the question at a conference. A reporter researches and writes a story. The first arrow represents the process that gets that story published. We understand that process quite well, and the internet makes publishing really cheap and easy. Then there’s a process that takes published, accurate information and turns it into truth and justice for all. That’s the part that’s fuzzy. In fact I don’t think we understand it at all. I call this “the last mile problem” in journalism — how does journalism actually reach people?

Journalists occasionally claim a scalp, such as by embarrassing a politician enough to force them to resign, or focussing attention on some issue long enough to get legislation passed. Journalism also theoretically informs citizens so they can vote responsibly, in the elections which happen every few years. As I’ve argued before, these are weak levers by which to shift society. I’m less interested in what journalism does in extraordinary times, and more interested in how the journalist’s work improves the day-to-day operation of a society, and the experiences of the people living in it.

It’s possible that much of the journalism we have is effective. Maybe the mere existence of consistent reporting on the machinations of the powerful keeps them in line, and we’ll only know what journalism really gave us when it disappears and civilization collapses into a mire of secrecy and corruption. Or maybe that’s already happened. How would we know? How can we tell whether journalism, as a local or a global endeavor, is doing better this year than last?

Other fields have goals
I like to hang around the international development community, and those people have real problems. People working in public health are charged with improving access to clean water or preventing the spread of HIV. Others try to get more girls into school, or to raise entire communities out of poverty.

There are lots of ways to attack such complex social problems. An NGO or a foundation or a UN organ could lobby local politicians, produce research reports, provide services directly to affected populations, or launch a public awareness campaign. The way in which an organization proposes to have an effect is called their “theory of change.” This is a term I hear frequently at gatherings of development workers, and from the staff of NGOs and international organizations. Such organizations must continually develop and articulate their theory of change in order to secure philanthropic funding.

Journalism has no theory of change — at least not at the level of practice.

I’ve taken to asking editors, “what do you want your work to change in society?” The answer is generally along the lines of, “we aren’t here to change things. We are only here to publish information.” I don’t think that’s an acceptable answer. Journalism without effect does not deserve the special place in democracy that it tries to claim.

The question of “what change should journalism produce” is hard because it is unavoidably a normative question, a question about how journalists envision a “better” world. At the moment, the field of professional journalism is mired in intense confusion about its role and the meaning of classic standards such as “objectivity.” This has obscured discussion of the field’s goals at a moment of great transition brought on by new communications technology, precisely the time when clarity is most needed.

It’s telling that discussions of journalism’s fundamentals frequently harken back to the great debate of Lippmann vs. Dewey. That happened in the 1920s. This was not only before live television and before the internet, it was before bastions of modern reasoning such as statistical inference, the study of cognitive biases, and the social construction of knowledge were fully developed. Other fields have done much better in adapting to the philosophical and technological revolutions of the last century.

Medicine in general and public health in particular have become relentlessly evidence-based. It’s no longer enough to run anti-smoking ads; we now require those responsible for public health to show that their preferred method of behavior modification actually reduces disease. Meanwhile, marketers have rallied around the idea that the purpose of their work is to get targeted individuals to do something, whether that’s purchasing a product or voting for a particular candidate. That may not be an appropriate goal for non-advocacy journalism, but marketing and public relations researchers have made very careful studies of communication, recall, and belief.

Similar concerns over how messages are received arise in many fields, from crisis communications to public diplomacy. But not in journalism. If journalism does not change action it must change minds, but the tools and language of belief change seem to be entirely missing from the profession.

Journalism as surveillance of ignorance
It used to be the job of an editor to decide what to publish. Maybe it is now the job of an editor to decide what needs to be known. These are not at all the same thing. They used to be, when nothing could be done with a story after the ink hit paper. The internet allows so much more — promotion within specific communities, feedback on readership and reception, conversation as opposed to oratory. And potentially, cheap techniques to determine what people already believe.

We should expect that users will largely be choosing for themselves what to read and view. That’s reality, and that’s fine, and systems that make it easy to satisfy curiosity are systems that will make us smarter (even though we’ll mostly use them for entertainment.) But I believe there will still be an identifiable set of common content, the few things that the public — or some targeted fraction of it — absolutely has to know to participate meaningfully in the civic issues of the day. This is more or less what editors put on the front page today. But rather than the headlines reflecting the most important events, perhaps they should reflect the most pernicious misconceptions. Good journalists already have some sense of this, and every so often we learn of an alarming gap in public knowledge. A majority of Americans believed for years that Saddam Hussein was linked to 9/11, for example. Today, most Americans don’t know what’s actually in Obama’s new health care laws. (I apologize again to my international readers for the US-centric examples; I’d love to hear of similarly woeful tales from other countries.)

Combatting ignorance is harder than publishing. It’s my best guess for the second, mysterious arrow in the diagram above. Fortunately we also have new tools. We have reams and reams of data that people voluntarily put online, the “data exhaust” of entire societies. We also have old-fashioned public opinion polls, and their lightweight cousin online polls (though self-selection bias may render online surveys useless for all but the most casual work.) Somewhere in all this data and all this communication, it must be possible to figure out what it is that people actually believe — and where those beliefs are factually wrong in an uncomplicated way, precisely the way that an editor would say “that’s not true, we can’t print it.”

There are many possibilities for understanding the beliefs of an audience. I am particularly intrigued by opinion mapping, deliberative polling, and the attempts of UN Global Pulse to create data-driven societal monitoring systems. It may actually be possible to cheaply measure the state of public knowledge, which would also give us concrete metrics for improvement. We need new ways of thinking about the surveillance of ignorance, and we need software to implement them. But more than anything else, we need journalists attuned to what it is that people don’t know. Good journalists already are; they can see what is missing from discussion — whether that’s a question that no one has answered or a challenge to a prevalent belief — and do the hard work of adding it.

This effort applies at all scales. Each journalist has an audience or audiences, their communities of concern. Each could track what their audience already knows and believes. The job of the journalist, so conceived, is not merely to report the happenings, but to ensure that the audience is aware of and understands the most crucial of them. That won’t be easy. Aside from the challenges of determining what an audience already knows, people don’t like to be told they’re uninformed or wrong. This is why I believe a journalist needs to learn everything there is to know about public communication, borrowing and adapting from marketing experts and public health planners. Genuine honesty and humility seem to me the ethical core, and newsroom transparency is a critical check on this power.

Of course, decisions would have to be made about what are misconceptions and which of them are important enough to combat. Decisions have to be made already about what to cover and promote with limited resources, and these hard choices are the iceberg that sinks any hope of a truly “impartial” journalism. It’s a reality that the profession has to deal with every day, and I wish we would get on with the work of crafting and communicating our normative stance, rather than insisting that “objectivity” means we don’t have one. (Even Wikipedia explains its norms in great detail.) I’d like to start with a list of things that journalists wish were better known. Be honest. I know you’ve already thought about this.

But if we can get over that hurdle — if we can admit that journalism needs concrete goals — then we stand a chance of doing better journalism, and knowing when we’re doing it. For me, the insane possibility of new communications technology carries with it the obligation to do better than we ever have before.

UPDATE: As if on cue, a major study was released four days after I published this, showing that a majority of American voters were misinformed about the issues they voted on in the recent mid-term elections. I discuss what that means here.

A full-text visualization of the Iraq War Logs

Update (Apr 2012): the exploratory work described in this post has since blossomed into the Overview Project, an open-source large document set visualization tool for investigative journalists and other curious people, and we’ve now completed several stories with this technique. If you’d like to apply this type of visualization to your own documents, give Overview a try!

Last month, my colleague Julian Burgess and I took a shot at peering into the Iraq War Logs by visualizing them in bulk, as opposed to using keyword searches in an attempt to figure out which of the 391,832 SIGACT reports we should be reading. Other people have created visualizations of this unique document set, such as plots of the incident locations on a map of Iraq, and graphs of monthly casualties. We wanted to go a step further, by designing a visualization based on the richest part of each report: the free text summary, where a real human describes what happened, in jargon-inflected English.

Also, we wanted to investigate more general visualization techniques. At the Associated Press we get huge document dumps on a weekly or sometimes daily basis. It’s not unusual to get 10,000 pages from a FOIA request — emails, court records, meeting minutes, and many other types of documents, most of which don’t have latitude and longitude that can be plotted on a map. And all of us are increasingly flooded by large document sets released under government transparency initiatives. Such huge files are far too large to read, so they’re only as useful as our tools to access them. But how do you visualize a random bunch of documents?

We’ve found at least one technique that yields interesting results, a graph visualization where each document is a node, and edges between them are weighted using cosine similarity on TF-IDF vectors. I’ll explain exactly what that is and how to interpret it in a moment. But first, the journalism. We learned some things about the Iraq war. That’s one sense in which our experiment was a success; the other valuable lesson is that there are a boatload of research-grade visual analytics techniques just waiting to be applied to journalism.
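In outline, the edge-weighting scheme can be sketched as follows. This is a toy illustration under my own assumptions (naive whitespace tokenization, invented three-report corpus, arbitrary similarity threshold), not the actual pipeline we ran on the War Logs:

```python
import math
from collections import Counter

def similarity_edges(docs, threshold=0.1):
    """Build graph edges between documents whose TF-IDF vectors
    have cosine similarity above `threshold`."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # document frequency of each word across the collection
    df = Counter(w for doc in tokenized for w in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vec = {w: c * math.log(n / df[w]) for w, c in tf.items()}
        # normalize to unit length so the dot product below is cosine similarity
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
        vectors.append({w: v / norm for w, v in vec.items()})
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            sim = sum(v * vectors[j].get(w, 0.0)
                      for w, v in vectors[i].items())
            if sim > threshold:
                edges.append((i, j, sim))
    return edges

reports = ["ied attack convoy", "ied attack patrol", "market sermon mosque"]
print(similarity_edges(reports))  # one edge, linking the two IED reports
```

Feeding the resulting weighted edges to a force-directed layout pulls similar reports together, which is what produces the clusters in the picture below. Note the all-pairs comparison is O(n²); doing this for 11,616 reports takes more engineering than this sketch suggests.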

click for super hi-res version

Interpreting the Iraq War, December 2006
This is a picture of the 11,616 SIGACT (“significant action”) reports from December 2006, the bloodiest month of the war. Each report is a dot. Each dot is labelled by the three most “characteristic” words in that report. Documents that are “similar” have edges drawn between them. The location of the dot is abstract, and has nothing to do with geography. Instead, dots with edges between them are pulled closer together. This produces a series of clusters, which are labelled by the words that are most “characteristic” of the reports in that cluster. I’ll explain precisely what “similar” and “characteristic” mean later, but that’s the intuition.

Continue reading A full-text visualization of the Iraq War Logs

Overview: a tool for exploring large document sets

The Associated Press, my employer, has submitted the following proposal to the Knight News Challenge, reproduced here because the original on Knight’s site eats the paragraph breaks and doesn’t display the full-res image.

click for larger image, or awesome hi-res version

Describe your project:
Overview will be a tool for finding stories in large document sets, stories that might otherwise be missed. We will use the AP’s newsroom as a real-world environment to design and build a system to help journalists clean, visualize, and explore very large document sets — tens or hundreds of thousands of pages. We want to build a tool to answer the question, “what’s in there?”

Continue reading Overview: a tool for exploring large document sets

ClimateRapidResponse.org aims to connect journalists to real climate scientists, on deadline

From climaterapidresponse.org:

We have assembled a group of leading scientists to improve communication on the issue of climate change. Our group is committed to providing rapid, high-quality information to media and government. Our members have expertise in virtually all areas of climate science and they are available to share their current understanding. Questions and requests can be submitted below.

Sounds great to me, but I was curious about who was behind the project, and who these “scientists” I might talk to actually were. I submitted this query via their form, and got the following email back 20 minutes later. (Dr. Weymann kindly gave me permission to publish it; the added links are mine.)

Continue reading ClimateRapidResponse.org aims to connect journalists to real climate scientists, on deadline

How do we know that short stories do better online?

Yesterday I had an interesting Twitter conversation with Alexis Madrigal, now Technology Editor at The Atlantic, after he posted the following:

@loisbeckett @mat for us, there is a positive correlation between word counts and pageviews. I.e. More words = more views

I asked him if he could say more, and he did:

Continue reading How do we know that short stories do better online?

What’s the point of social news?

According to Facebook, social news seems to be mostly about knowing what all my friends are reading. I’m not so sure. But I think there really is something to the idea of “social news” for journalism, and for journalism product design.

I take “social” to mean “interacting with other people.” That’s a fundamental technical possibility of digital media, as basic to the internet as moving pictures are to television. I’m not sure that anyone really knows yet what to do with that possibility, but happily there are already at least two very well-developed uses. Maybe social news isn’t about “friends” at all, but about filtering and news-gathering.

Twitter is really a filter
I get most of my news, both general and special-interest, from Twitter. I rarely go to the home page of a news site, or use a news app. It’s not the tweets themselves that are informative, but the links within them to articles posted elsewhere. I follow a large set of people with varied interests, and some of them work for news organizations, but most do not. My Twitter feed is faster, more diverse, and available across more platforms (all of them) than any one news organization’s output.

This doesn’t mean that Twitter is a perfect news delivery system, but to me it’s proven better than just about anything else at getting me the news mix that I want, and keeping me interested in the world at large. (Admittedly, I follow people I’ve met in other countries, so yeah, travel is way better than Twitter for that.) I am not alone in this opinion. The structure of follower relationships among Twitter users suggests that it’s more of a news network than a social network.

The usefulness of Twitter for news has a lot to do with certain basic design choices. First, a tweet is really as short as you can get and still communicate a complete concept, so it’s basically an extended headline. Second, Twitter differs from Facebook in that relationships can be unidirectional: I don’t need anyone’s permission to follow them, and they may not know or care that I do. Following someone on Twitter also differs from following a blog via RSS because most tweets refer to someone else’s work through a link — Twitter is more about re-publishing than publishing. Retweets also include the name of the original tweeter, which enables discovery of interesting new curators.

Filtering is much more valuable than it used to be, in this era of information overload, and these properties make Twitter an excellent filtering system. There are several news products based almost entirely on displaying links tweeted by the people you follow, such as The Twitter Tim.es and Flipboard. The medium that Twitter invented — global public short messaging with links — has already been endlessly replicated and will be with us forever.

There is a sense in which news organizations have always seen filtering as a big part of their value. One of the duties of the professional editor is to decide what you need to see. But at least one thing has upset that model irretrievably: the internet is not a broadcast medium. While each person reads an identical copy of the Times and watches an identical CNN broadcast, there’s no reason my internet has to look the same as your internet. A small team of human editors can’t personalize the headlines for every reader, so that leaves algorithmic filtering, such as Google News’ personalization features, or social filtering, such as Twitter.

The point is, there’s probably something to learn from how Twitter uses social relationships to route information. As the Nieman Journalism Lab said: “social news isn’t about the people you know so much as the people with whom you share interests.” To put this in terms of the product I wish I had: when I use your news product, I want to be able to follow the recommended reading of other members of the audience, if they so allow. Also, can I follow a particular reporter? And does your product integrate with the other methods I already use for getting information, so I don’t have to choose?

Social networks are great for reporting
Audience-journalist collaboration, blah blah blah. If the idea that professionals are no longer the only players in news is new to you, see blogging and Wikipedia. But a news organization probably has to look at this from a different angle. For me, the core idea of social news-gathering is that the audience is, or could be, an extension of the news organization’s source network.

Hopefully, a newsroom knows about interesting developments before anyone else, and then verifies and publicizes them, but that’s getting near impossible when anyone can publish, and when virality can amplify primary sources without the involvement of a media organization. We don’t yet know very much about collective news-gathering, but there are promising directions. It seems there are two broad categories of breaking news: public events that anyone could have witnessed, and private events initially known only to privileged observers.

Social media is now routinely used to augment reporting of public events. There are entire units in news organizations dedicated to getting stories from the audience, often under the awkward rubric of “user-generated content.” But why sift for events online when you can give your audience the tools to give you the story directly? Right now if I see a plane land in a river, I tweet it. Wouldn’t a news organization prefer that I send my eye-witness photo to the UGC editor instead? To this end, several mobile news apps include the ability to submit pictures. CNN’s iReport app and website are probably the best developed of these. Ideally, I could send that breaking news tweet to the newsroom and to my friends at the same time, within the same application.

Fast reporting of private events has always depended on having the right sources. A well-established source may call the reporter or send an email when something newsworthy happens. Someone with a much looser connection to the organization may not, and perhaps this is an opportunity for social news tools. When someone knows something — or can talk about something — you want them to contact the newsroom first. The potential of this weak-tie news sourcing approach hasn’t really been studied, to my knowledge, but I imagine that it would require, at minimum, a trusted brand, an easily-reachable editorial staff, and frictionless communication tools. If it’s easier just to tweet or blog the news, the source will.

There are several other good examples of social news-gathering, on the theme of asking your audience for help. Crowdsourcing is usually thought of as the recruitment of many unspecialized helpers, as the Guardian did with its MP expenses project. But the Guardian also reached out to its audience to find that one specialist attorney who could unravel the mystery of Tony Blair’s tax returns. Hopefully the specialists a newsroom needs to consult are already among the audience, and they will see the call for experts when a reporter sends one out. For that matter, a smart and engaged audience can correct you quickly when you are wrong. Nothing says “we care about accuracy” like a fact check box on every story.

But is it journalism?
Yes, absolutely. The job of journalism is to collect accurate information on an ongoing basis and ensure that the audience for each story learns about that story. Any way you can deliver that service is fair game. People depend on each other for the news all the time, so journalists better get in those conversations.

Designing journalism to be used

There are lots of reasons people might want to follow the news, but to me, journalism’s core mission is to facilitate agency. I don’t think current news products are very good at this.

Journalism, capital J, is supposed to be about ideals such as “democracy” and “the public interest.” It’s probably important to be an informed voter, but this is a very shallow theory of why journalism is desirable. Most of what we see around us isn’t built on votes. It’s built on people imagining that some part of the world should be some other way, and then doing what it takes to accomplish that. Democracy is fine, but a real civic culture is far more participatory and empowering than elections. This requires not just information, but information tools.

Newspaper stories online and streaming video on a tablet are not those tools. They are transplantations of what was possible with paper and television. Much more is now possible, and I’m going to try to sketch the outlines of how newsroom products might better support the people who are actually changing the world.

What’s a journalism “product”?

Continue reading Designing journalism to be used