An in-depth analysis of how media organisations and individuals are dealing with user-generated content online, presented in the paper “Newsroom Curators and Independent Storytellers: Content Curation as a New Form of Journalism”, written for the Reuters Institute for the Study of Journalism.
After the explosion of the so-called Web 2.0, the huge success of social networks like Facebook and Twitter, the spread of smartphones and the increasing importance of mobile connections, what has changed is the amount of information available to journalists. The content – text, pictures and videos – generated by users has become so widespread that it is theoretically possible to cover an event from afar, with nothing more than a computing device and an Internet connection, some Twitter lists and a careful look at the right YouTube videos or Facebook pages. And in some cases, when access to the actual scene is too dangerous or technically challenging, that may also be the only solution.
Whether we choose to call it “information overload”, as Alvin Toffler did in 1970, or “filter failure”, following Clay Shirky’s more recent definition, the problem of how to find relevant information among the thousands of tiny pieces of content published online every day should not be underestimated.
Journalists have always had to face the problem of how to verify a given piece of information or distinguish reliable from unreliable sources, but the emergence of the Net and the explosion in the consumption of news on the Internet have brought new challenges, making it even more difficult than before to distinguish between truth, half-truth and falsehood, and raising new questions as well. For instance: how do we give due credit to the original uploader of footage, and protect their rights, when the video is embedded by a news organisation in its pages? How do we draw the fine line between quotation and plagiarism, in an age in which many outlets use the same multimedia content, found somewhere on the Internet, sometimes without even citing the source?
That’s one of the reasons why a new professional figure is increasingly gaining importance: the content curator, a term used to describe someone who selects the best information found online according to its quality and relevance, aggregates it with links to the original sources, and provides context and analysis. The curator doesn’t have to be a journalist: he or she may just as well be a blogger or a tweeter. But since many of the skills required of a good curator are the same ones needed by a good reporter, journalists are perhaps best suited to play this role, and many of them have already begun experimenting with new forms of storytelling based on content curation.
This is nothing new in its essence: journalists have been doing it for a while, although usually splitting the tasks among different professionals. The reporter discovers and selects sources and provides a first draft of content; the editor assembles, and sometimes integrates and shapes, that content; the verification part of the process can be done by the reporter, by the editor, or by both, according to the procedures of the newsroom, or on a case-by-case basis.
Thanks to the rise of new curation tools and platforms, this kind of work can also be performed by single professionals, whom we might call “independent” curators. They might be freelancers or amateurs, or they might work for a news organisation, albeit with such a level of independence and visibility that their job becomes a one-man show.
By collecting pictures, videos, links and other user-generated content posted online, and by tweeting them in sequence, or combining them into a more sophisticated chronological narrative on platforms such as Storify or Tumblr, they are often able to provide a perspective different from that of mainstream media: a complementary view in some cases, an alternative one in others.
When seen through the lens of traditional journalistic values, the problem with this kind of curation is that there is no longer a pretense of detachment and neutrality in the telling. Objectivity, defined as being equidistant from the main actors of a story, is no longer a goal. Or, better said, objectivity is now considered equivalent to “transparency”.
In the paper “Newsroom curators and independent storytellers: content curation as a new form of journalism”, which I wrote for the Reuters Institute for the Study of Journalism in Oxford, I deal in depth with the process of collecting, verifying and using UGC during two events that challenged traditional methods of reporting. The first is the England riots of 2011, the biggest unrest in the UK since the 1980s, which took place on a scale that would have required an army of journalists to cover with the usual strategies; the second is Occupy Wall Street, the spontaneous protest movement of American citizens worried and angered by the constant growth of inequality in their country.
For each of these events, I examine the online coverage given to them by “independent storytellers” and by some well-known newspapers and broadcasters: the Guardian and the BBC for the riots, and the Washington Post for the Occupy Wall Street protests. I focus on some episodes that marked the protests, which became iconic in people’s perception of them, or represented a turning point for those involved.
In the case of Occupy Wall Street, for many observers the police raid and clearing of New York’s Zuccotti Park during the night of 15 November 2011 represented a turning point: a blow from which the movement struggled to recover. At the same time, for all its importance, the event was difficult for mainstream reporters to cover, as the police evicted them from the park together with the protesters, even though the journalists showed their credentials and press cards.
That’s why, during this period, curation platforms like Storify “saved the news” (as the website ReadWriteWeb put it) by allowing freelancers, professionals and journalism students to reconstruct what was happening through the pictures and videos uploaded by protesters using their smartphones.
Mainstream media took advantage of the wealth of user-generated content available, showing footage and photos in their coverage and sometimes embedding pre-curated stories on their pages.
Of course, this increased participation and the more significant role played by user-generated content in the making of news bring both drawbacks and opportunities. For news organisations, one of the dangers is becoming lazy: assuming that what you find and hear on social media is the real and only “voice of the crowd”, without taking the time to analyse other possible points of view in more depth. Resources are scarce, and this could be a real temptation; but not everybody is on the Internet, especially in developing countries, and even those who are should not be assumed willing to testify online about what’s happening around them. Does silence mean that nothing important is happening? Maybe so, or maybe people are simply too afraid, or censored. Mainstream media should not give up on checking where the truth lies.
As for independent storytellers, they could play a double role in the future: supplementing, replacing or somehow complementing the coverage done by mainstream media, while at the same time being the first to experiment with new formats and new models of journalism. But with some caveats. First of all, they might not possess the verification skills now present in the main news organisations (although those skills do not save the latter from making mistakes either), so they could unwittingly spread false information. Or they could do so deliberately, exploiting the new possibilities offered by curation tools to turn them into propaganda platforms.
There is also another fundamental issue that is often overlooked: content preservation. Journalists tend to assume that articles are eternal, as long as they are stored in the paper’s archive (both online and offline). But when it comes to online coverage, as newsrooms rely more and more on user-generated content, the risk that all or part of this material will disappear after a while has to be taken into account.
Amateur footage uploaded to YouTube and then embedded in a live blog or a story by an editor may disappear because of copyright infringement or because it has been removed by the uploader; the same fate may befall tweets or Facebook posts. This could make “cold case” analysis of UGC more difficult, such as the analysis the Guardian carried out with its Reading the Riots project.
What’s more, as an increasing share of breaking-news coverage comes through live blogs, and a large part of live blogs is composed of UGC, what will happen if, because of copyright claims, users closing their accounts or other causes, a significant amount of this material is no longer available? The reader trying to retrace how a major story developed could find voids where content used to be, with now-meaningless text saying things like “as this video shot by the protesters shows”, where in fact only a “not found” message is visible. It would also be difficult to correct old mistakes and offer new interpretations of past events if the “evidence” (i.e. the UGC) on which the reports of the time were based had disappeared.